Let’s Not Talk About the Apocalypse


What comes to mind when you think about artificial intelligence? Somewhere in there is bound to be an impression from Hollywood or something related to the singularity—that hypothesized point in the future when robots become superintelligent and far surpass humans. Maybe Black Mirror or sex robots or driverless cars or job-stealing robots come to mind (McKinnon 2018).

At times there’s a disconnect between the ideas that dominate the public sphere and what those working more closely with the technology say. While you do hear prominent technologists warn of the threats posed by AI (Calo 2017), there is also a swell of news and research being pumped out about practical, life-improving advances.

AI is about much more than robot doom, and it’s time to take a closer look at the narratives and language we use to talk about it.

Such a critical approach is necessary not least because the technology can have important consequences for people’s everyday lives. Google’s chief scientist of AI and machine learning, Fei-Fei Li, says that’s exactly the point: “A.I. is made by humans, intended to behave by humans and, ultimately, to impact human lives and human society” (New York Times 2018). And increasingly, computer programs that could be called AI are making decisions, at times consequential ones, about people. This technology has the potential to improve sentencing and parole decisions in criminal justice. It can help make hiring processes more efficient, advance precision medicine and cancer research (Executive Office 2016), aid in detecting kidney injuries (King 2018), and more.

But these systems also operate in ways that are problematic. You might’ve seen headlines about this too: “Congress is worried about AI bias and diversity” (Gershgorn 2018) or “Facial Recognition Is Accurate, if You’re a White Guy” (Lohr 2018) or “Forget Killer Robots—Bias Is the Real AI Danger” (Knight 2017). Such systems have been wrong about health risks (Caruana et al. 2015). Others have been shown to discriminate based on gender, sexual orientation, and race (Datta et al. 2018).

There is a building sense that this is a problem, and that more needs to be done to fix it and to make sure human values stay in the loop. Some say it’s necessary to educate the public so people can evaluate automated systems on their own, or to ensure that humans know what their rights and recourse options are (Citron and Pasquale 2014; Sandvig et al. 2014; Burrell 2016; Zarsky 2016). Others focus on trust, arguing that people need to know enough about these systems to trust them before they’ll be comfortable using them (Ribeiro et al. 2016). And the European Union’s General Data Protection Regulation, which goes into effect in May 2018, gives individuals a right to obtain an explanation for automated decisions that are made about them (Selbst and Powles 2017).

But in the United States especially, these efforts run up against ingrained narratives. As law and emerging technology expert Ryan Calo (2017) puts it: “Policymakers . . . may have a role in educating citizens about the technology and its potential effects. Here they face an uphill battle, at least in the United States, due to decades of books, films, television shows, and even plays that depict AI as a threatening substitute for people.”

A small way to start making progress on this front is to look more closely at the language used and the stories told to the public. The stories we tell reflect a culture’s values and its conceptions of what is possible. They’re central to how we make meaning of and navigate the world around us. And a central part of those narratives is agency—often human beings operating as actors (Patterson and Monroe 1998).

But today, it isn’t just people who have agency in popular narratives. As new technologies increasingly perform tasks that were once considered the special domain of humans—such as finding patterns in large amounts of data—they take on lives of their own, and at times threaten our conceptions of humanity.

This happens in small, pervasive ways, not just in large-scale science fiction. Take a recent New York Times headline: “Why We May Soon Be Living in Alexa’s World,” referring to Amazon’s brand of digital assistant (Manjoo 2018). It starts with the line, “My wife and I were just settling into bed one night when Alexa, the other woman in my life, decided to make herself heard.” The article goes on to discuss the relatively dry topic of Amazon’s business strategy (which is perhaps why a catchy headline and opening were needed), and it’s entertaining, as many of these kinds of stories are.

Still, language like this reinforces basic patterns of thought—namely, that the collective “we” will soon be in thrall to intelligent machines, if we aren’t already. When machines like Alexa are described as “living” in a particular place or being “the other woman” (as the article does) or “thinking” something, the agency of those “AIs” is bolstered.

Black-and-white narratives at the extremes may also negatively impact how we develop technology. Determinist stories discount the fact that the development of any kind of computing technology depends in no small part on the social forces acting on it (Levy and Murnane 2013), including industry, governments, and the public. And as Calo argues, “devoting disproportionate attention and resources to the AI apocalypse has the potential to distract policymakers from addressing AI’s more immediate harms and challenges and could discourage investment in research on AI’s present social impacts.”

All of this is playing out against a backdrop of anxiety. For instance, a recent Pew poll indicated that 72 percent of U.S. adults worry about a “future where robots and computers can do many human jobs,” and 67 percent worry about the “development of algorithms that can evaluate and hire job candidates” (Smith and Anderson 2017).

As discussions about AI increase, it would be wise to keep a focus on questions of power and access—who benefits from the doom and opacity, who is hurt by it, and what is best for society. The language we use and the stories we tell play an important part in that.


Works Cited

Ananny, Mike, and Kate Crawford. 2016. “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media & Society, December, 1461444816676645. https://doi.org/10.1177/1461444816676645.

Burrell, Jenna. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 2053951715622512. https://doi.org/10.1177/2053951715622512.

Calo, Ryan. 2017. “Artificial Intelligence Policy: A Primer and Roadmap.” SSRN Scholarly Paper ID 3015350. Rochester, NY: Social Science Research Network. https://papers.ssrn.com/abstract=3015350.

Caruana, Rich, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noémie Elhadad. 2015. “Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission.” In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721–30. http://people.dbmi.columbia.edu/noemie/papers/15kdd.pdf.

Citron, Danielle Keats, and Frank A. Pasquale. 2014. “The Scored Society: Due Process for Automated Predictions.” Washington Law Review 89 (1). https://digital.law.washington.edu/dspace-law/bitstream/handle/1773.1/1318/89WLR0001.pdf.

Datta, Amit, Anupam Datta, Jael Makagon, Deirdre K. Mulligan, and Michael Carl Tschantz. 2018. “Discrimination in Online Advertising: A Multidisciplinary Inquiry.” In Conference on Fairness, Accountability and Transparency, 20–34. http://proceedings.mlr.press/v81/datta18a.html.

Executive Office of the President, National Science and Technology Council Committee on Technology. 2016. “Preparing for the Future of Artificial Intelligence.” October. https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.

Gershgorn, Dave. 2018. “Congress Is Worried about AI Bias and Diversity.” Quartz. February 15. https://qz.com/1208581/diversity-and-bias-in-ai-has-reached-us-congress/.

Heilbroner, Robert L. 1967. “Do Machines Make History?” Technology and Culture 8 (3): 335–45.

King, Dominic. 2018. “Researching Patient Deterioration with the US Department of Veterans Affairs.” DeepMind. February 22. https://deepmind.com/blog/research-department-veterans-affairs/.

Knight, Will. 2017. “Forget Killer Robots—Bias Is the Real AI Danger.” MIT Technology Review. October 3. https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/.

Levy, Frank, and Richard J. Murnane. 2013. “Dancing with Robots: Humans Skills for Computerized Work.” Third Way. http://content.thirdway.org/publications/714/Dancing-With-Robots.pdf.

Lohr, Steve. 2018. “Facial Recognition Is Accurate, If You’re a White Guy.” The New York Times. February 9, sec. Technology. https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html.

Manjoo, Farhad. 2018. “Why We May Soon Be Living in Alexa’s World.” The New York Times, February 21, 2018, sec. Technology. https://www.nytimes.com/2018/02/21/technology/amazon-alexa-world.html.

McKinnon, Alex. 2018. “AIs Have Replaced Aliens as Our Greatest World-Destroying Fear.” Quartz. February 8. https://qz.com/1201846/ais-have-replaced-aliens-as-our-greatest-world-destroying-fear/.

The New York Times. 2018. “How Artificial Intelligence Is Edging Its Way Into Our Lives.” February 12. https://www.nytimes.com/2018/02/12/technology/artificial-intelligence-new-work-summit.html.

Patterson, Molly, and Kristen Renwick Monroe. 1998. “Narrative in Political Science.” Annual Review of Political Science 1: 315–31.

Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” ArXiv. February. http://arxiv.org/abs/1602.04938.

Sandvig, Christian, Kevin Hamilton, Karrie Karahalios, and Cédric Langbort. 2014. “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms.” Paper presented at “Data and Discrimination: Converting Critical Concerns into Productive Inquiry,” a preconference of the 64th Annual Meeting of the International Communication Association, May 22, 2014.

Selbst, Andrew D., and Julia Powles. 2017. “Meaningful Information and the Right to Explanation.” International Data Privacy Law 7 (4): 233–42. https://doi.org/10.1093/idpl/ipx022.

Smith, Aaron, and Monica Anderson. 2017. “Automation in Everyday Life.” Pew Research Center: Internet, Science & Tech. October 4, 2017. http://www.pewinternet.org/2017/10/04/automation-in-everyday-life/.

Zarsky, Tal. 2016. “The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making.” Science, Technology, & Human Values 41 (1): 118–32. https://doi.org/10.1177/0162243915605575.
