gnoviCon 2018: The Ethics of Artificial Intelligence

At the 2018 gnoviCon, the second panel presented a discussion on the ethics of artificial intelligence (AI). Moderated by Dr. Meg Jones, Assistant Professor in Georgetown’s Communication, Culture & Technology (CCT) program, the panel featured four experts on artificial intelligence from diverse backgrounds: David Robinson, Managing Director and Co-founder of Upturn; Elana Zeide, Visiting Assistant Professor at Seton Hall University’s School of Law; Leslie Harris, Adjunct Professor in Georgetown University’s CCT program and Senior Fellow at the Future of Privacy Forum; and Amanda Levendowski, a fellow with NYU’s Technology Law & Policy Clinic and Information Law Institute. All four panelists engaged the audience in a captivating conversation about AI’s tendency to miss the contextual nuances needed to make ethically sound decisions.

Robinson started by describing AI as technology that performs tasks that once required a human. He gave the example of using AI for risk assessment after an arrest: do we send a defendant home or to jail? Robinson discussed the ramifications of automated risk assessment, stating that the biases embedded in these technologies often yield unfair results: “For example, you are more likely to get arrested for drugs if you are in certain communities. But when we feed it into the computer we act as if it means the same thing for an inner-city person to get arrested and a suburb person.” He closed by urging that people in positions of authority feel empowered to ask whether such tools actually reveal what the data says, rather than simply taking comfort in deferring to them.

The next speaker, Elana Zeide, discussed the use of automated technologies in education. She described how AI is used to “optimize learning experiences for individuals,” citing benefits such as cost efficiency and speed. However, while we may assume automated technologies are neutral and prevent discrimination, Zeide warned that they can carry embedded biases that allow only certain populations to succeed: “If you use predictive models, then they promote the status quo. Without some other added value, you risk having that prediction replicate. Who is most likely to succeed as a physics major? Based on history, probably a white guy. So if you use AI scores to determine who to admit, that’s a problem with closing discrimination gaps.”
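Zeide’s point about prediction replicating the status quo can be made concrete with a small, purely hypothetical sketch: a model trained on historically skewed outcomes will tend to reproduce that skew in its scores. The data, features, and numbers below are invented for illustration and do not come from the panel.

```python
# Hypothetical illustration of Zeide's point: a predictive model trained on
# historically skewed outcomes reproduces the skew. All data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 1: a test score; feature 2: membership in a historically favored group.
group = rng.integers(0, 2, size=n)      # 1 = historically favored group
score = rng.normal(70, 10, size=n)

# Historical "success" labels are skewed: members of the favored group were
# admitted and supported more often, so they succeeded more often at equal scores.
success = (score + 15 * group + rng.normal(0, 10, size=n)) > 75

model = LogisticRegression().fit(np.column_stack([score, group]), success)

# The model now predicts a higher "likelihood of success" for the favored
# group even when the test scores are identical.
for g in (0, 1):
    p = model.predict_proba([[70, g]])[0, 1]
    print(f"group={g}, score=70 -> predicted success probability {p:.2f}")
```

Because group membership correlated with historical success, the model treats the group itself as predictive, which is exactly the status-quo replication Zeide describes.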

Harris addressed AI as a tool that algorithmically regulates speech and the dissemination of information on the internet. She questioned the ethical responsibilities of platforms such as Facebook, stating: “60% of Facebook users have no idea that they are getting content from an algorithm, and 65% of Americans say that they get news from Facebook. So the algorithmic curation of information is being decided by Facebook and shaping our understanding of the world.” Harris argued that this curation of content “[robs] us of autonomy, making us susceptible to fake news.”

The algorithmic management of content extends into the regulation of hate speech, which, she noted, ignores important contextual nuances. Her discussion of Google’s efforts to detect toxic speech revealed how problematic this can be. Harris offered two examples that the system weighs almost equally in toxicity: a statement such as “I f-ing love you man, happy birthday” receives a ranking very similar to “I hate to be black in Trump’s America.” Harris candidly suggested that the white American male serves as the guiding standard for what counts as appropriate content. She revealed that studies show biases on the basis of gender and race, harkening back to Zeide’s example of how AI determines who is more likely to succeed in higher education. She ended by advising the audience to continue these important conversations about the ethics of artificial intelligence.
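The scoring Harris described is consistent with Google’s Perspective API, though the panel did not name the tool, so that attribution is an assumption on my part. A minimal sketch of querying such a toxicity scorer might look like the following; the API key is a placeholder, and the endpoint and request shape follow Perspective’s published v1alpha1 format.

```python
# Minimal sketch of scoring comments with Google's Perspective API (assumed
# here to be the toxicity scorer Harris referenced). Requires a real API key;
# "YOUR_API_KEY" is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text: str) -> float:
    """Return the summary TOXICITY score (0.0 to 1.0) for a comment."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

for comment in ("I f-ing love you man, happy birthday",
                "I hate to be black in Trump's America"):
    print(f"{toxicity(comment):.2f}  {comment}")
```

Harris’s point is that two such comments can come back with nearly identical scores even though only one is plausibly hateful, because the model has learned skewed associations from its training data.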

The final speaker, Amanda Levendowski, spoke about the role of law in unethical AI. She got to the root of the problem, stating that the engineers building these systems are not concerned with the ethics of what they develop, nor are the “lawyers that litigate them.” Given that this is the case, she ended on a thought-provoking note, asking whether AI can ever be ethical.

Ridding AI of bias appears to be a nearly impossible task. Given this reality, what can we do to keep our own judgments from being swayed by automated algorithms? Jones asserted that we must engage in “deep seated ethical conversations” and spread awareness of the ethical ramifications of relying on AI.
