Extremist Speech and the Limits of Natural Language Processing

Every so often, a story breaks claiming that artificial intelligence can now top humans in reading tests. A recent example came when Microsoft and Alibaba both said their AI had beaten humans on a reading comprehension test.

Turns out that the test was quite constrained and far from what a human actually does when interpreting words on a page (Simonite 2018). That is a much harder nut to crack. The thorny problem of extremism illustrates that well.

Even when you’re having a face-to-face conversation with people, it’s difficult enough to figure out what they’re thinking about saying and doing. Trying to do that across thousands of miles, using only the words that they typed on a Facebook page, say, is even harder. Now imagine trying to build a computer program that can find those typed words amid all the others floating around online and use them to come to conclusions about someone’s beliefs and intentions.

That’s in part what happens when natural language processing (NLP) techniques are used to identify and remove online extremist content (broadly defined).

Linguistics meets artificial intelligence in the field of NLP, also called computational linguistics. Work in this area uses computer code to model the natural languages that humans grow up speaking so machines can interpret them (Russell and Norvig 2009; Jurafsky and Martin 2009).

Many common NLP tools are blunt instruments when used for detecting extremist content online, not least because the sticky question of intent is important but difficult to discern from text alone.

A lot of offline research into extremist behavior has focused on intent—trying to determine what markers indicate to the outside world that someone is going to move from radical thought to violent action. In the offline world, too, the human drivers of these behaviors are neither fully understood nor easy to define. One confounding factor is context—someone’s historical experiences and environmental factors, for instance (Ali Saifudeen 2016).

Risk assessments attempt to put a finer point on this, describing what types of behaviors indicate an intent to commit a violent extremist act. Indicators of violent extremist leanings include semantic content, or the use of language to demonstrate extremist views or allegiance to a terrorist group (Pressman and Ivan 2016).

Yet, the existence of risk assessment tools does not mean they are effective, widely applicable, or objective. Using them requires subjective expert knowledge (Shortland 2016). The behaviors under the microscope stretch across online and offline worlds, and that interaction is difficult to disentangle (Shortland 2016, Hills et al. 2015, Weimann 2004, Ali Saifudeen 2016). In the online space, extremist groups such as the Islamic State are using “deceptive technologies,” including bots and spam services (Berger and Morgan 2015). The anonymity afforded by the internet allows people to appear either more or less radical than they really are. Plus, they may not even be conducting their most relevant conversations online at all (Shortland 2016).

In other words, online language that appears to indicate intent may not, in fact, be a good indicator of intent (Shortland 2016). Things get even more complicated when you mix in NLP.

Many of the techniques currently being explored by academics working on extremism are far from able to capture intent. Natural languages are modeled in programming languages, and the resulting programs, or parsers, trade in firm rules and algorithms that do not handle uncertainty well (Hausser 2014, Russell and Norvig 2009). But natural languages operate in uncertain situations. There is no one, true meaning for a sentence—humans determine meaning by drawing inferences based on the incomplete information they have available (Russell and Norvig 2009). Programs can incorporate probability to help deal with uncertainty and take a step closer to human reasoning (Russell and Norvig 2009), but the two are not the same.
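
To make that concrete, here is a minimal sketch in Python of how a program can attach probabilities to labels rather than apply hard rules: a bare-bones Naive Bayes calculation over a few toy sentences. The example sentences, labels, and categories are invented purely for illustration and are not drawn from any of the systems cited here.

    from collections import Counter
    import math

    # Toy, hand-labeled examples; invented for illustration only.
    examples = [
        ("join the fight for the cause", "flagged"),
        ("the movement calls its followers to act", "flagged"),
        ("join us for the charity run this weekend", "benign"),
        ("the team calls a meeting to plan the event", "benign"),
    ]

    # Count how often each word appears under each label.
    word_counts = {"flagged": Counter(), "benign": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(text.split())

    def log_probability(text, label):
        """Log P(label) plus the sum of log P(word | label), with add-one smoothing."""
        vocab = {w for counts in word_counts.values() for w in counts}
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        return score

    # The output is a relative likelihood under a crude model of word
    # frequencies, not a reading of what the author intended.
    for label in ("flagged", "benign"):
        print(label, round(log_probability("join the movement", label), 2))

Even with probabilities attached, the result only ranks labels under a simple model of word frequencies; it says nothing about what the author actually meant.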

Much of the work related to extremism and NLP relies on extracting information from text and classifying it. For example, algorithms identify different parts of speech—such as proper names—in a text and then compare those words to a list of proper names already identified by researchers as markers of extremist speech (Yang Hui 2016). The program classifies the text as extremist or not based on the results of that comparison. Stylometric analysis aimed at uncovering similarities between authors’ writing styles can also be automated and used to classify content (Johansson et al. 2016, Dahlin et al. 2012). Another example that uses similar classification techniques is sentiment analysis, which focuses on identifying emotions. That moves closer to intent but is still far from it (Johansson et al. 2016).
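
As a rough, hypothetical illustration of the word-list approach described above, the sketch below pulls candidate names out of a text and checks them against a researcher-supplied list. The marker list, the capitalization heuristic standing in for real named-entity recognition, and the threshold are all invented for the example.

    # Hypothetical marker list; in practice such lists come from domain experts.
    MARKER_NAMES = {"Example Group", "Example Leader"}

    def extract_candidate_names(text):
        """Crude stand-in for named-entity recognition: runs of capitalized words."""
        names, current = [], []
        for token in text.split():
            if token[:1].isupper():
                current.append(token.strip(".,"))
            elif current:
                names.append(" ".join(current))
                current = []
        if current:
            names.append(" ".join(current))
        return names

    def classify(text, threshold=1):
        """Label a text 'extremist' if enough marker names appear, else 'other'."""
        hits = sum(1 for name in extract_candidate_names(text) if name in MARKER_NAMES)
        return "extremist" if hits >= threshold else "other"

    print(classify("A post praising Example Leader and the Example Group."))  # extremist
    print(classify("A post about a local football match."))                   # other

Even in this toy form the weakness is plain: the classification hinges entirely on the quality of the list and ignores why the names appear at all.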

To have a chance of working, these tools require significant knowledge about what counts as a sign of violent extremist leanings (Berger and Morgan 2015). The models can become very complex very quickly, requiring either significant simplifications or large amounts of processing power to be useful at scale. Variation in vocabulary across texts presents another problem, and the data that exist to train algorithms are largely limited to the English language (Johansson et al. 2016), for the time being at least. The list goes on, and solutions to these problems are still being worked out (Johansson et al. 2016, Yang Hui 2016, Chambers and Jurafsky 2011).

The still-developing field of computational pragmatics could eventually help here. Pragmatics is concerned with getting at the intentions of the author (or speaker) (Jurafsky and Martin 2009), and computational pragmatics seeks to find ways to automate more of humans’ reasoning abilities. Yet determining intent is about more than linguistic markers online. Humans rely on both linguistic and non-linguistic characteristics, including context and historical experience, to make inferences (Bunt et al. 2014, Bunt and Black 2000).

NLP, then, is only one part of any effort to identify extremist content online, and potentially not the most effective part. Researchers should perhaps focus on building models that augment NLP techniques applied to online data with other tools, such as social network analysis, image recognition, geolocation, events such as account closures (Shortland 2016), and the mining of information gathered from offline behaviors over time. At the level of NLP technique, a shift toward unsupervised efforts that target the “abstract universals” of language rather than working with predefined word lists may be warranted as these tools become more widely accessible (Bunt et al. 2014). However, access to data with which to develop algorithms remains a problem for those outside of large corporations.
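
As a purely illustrative sketch of what combining signals might look like, the snippet below weights a hypothetical text score against equally hypothetical network, image, and account-history signals. Every feature name and weight is invented; the point is only that the language score becomes one input among several rather than the decision itself.

    # Hypothetical per-account signals, each already scaled to the range 0-1
    # by upstream components (none of which are specified here).
    signals = {
        "text_score": 0.7,      # output of an NLP classifier over the account's posts
        "network_score": 0.4,   # e.g., proximity to known accounts in a social graph
        "image_score": 0.2,     # e.g., matches from image recognition
        "account_events": 0.5,  # e.g., history of closures and re-registrations
    }

    # Invented weights; in a real system these would be learned or set by analysts.
    weights = {
        "text_score": 0.3,
        "network_score": 0.3,
        "image_score": 0.2,
        "account_events": 0.2,
    }

    combined = sum(weights[name] * value for name, value in signals.items())
    print(f"combined score: {combined:.2f}")  # a prompt for human review, not a verdict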

More fundamentally, drawing conclusions based on uncertain information is an intimate but not-well-understood part of the human cognitive experience. In building these kinds of algorithmic systems, we run up against boundaries not just in understanding what combination of factors drives someone to extremes and what the markers of that behavior are, but also in knowing how our own minds and brains operate.


Works Cited

This article is based on a longer research paper. Find the full reference list here.

Ali Saifudeen, Omer. 2016. “Getting Out of the Armchair: Potential Tipping Points for Online Radicalisation.” In Combating Violent Extremism and Radicalization in the Digital Era, edited by Majeed Khader et al., 129–148. Hershey, PA: IGI Global.

Berger, J.M., and Jonathon Morgan. 2015. “The ISIS Twitter Census: Defining and Describing the Population of ISIS Supporters on Twitter.” Brookings Institution. March 5, 2015. https://www.brookings.edu/research/the-isis-twitter-census-defining-and-describing-the-population-of-isis-supporters-on-twitter/.

Bunt, Harry, Johan Bos, and Stephen Pulman, eds. 2014. Computing Meaning. Volume 4. Dordrecht, Netherlands: Springer Science+Business Media.

Bunt, Harry, and William Black, eds. 2000. Abduction, Belief and Context in Dialogue: Studies in Computational Pragmatics. Amsterdam and Philadelphia: John Benjamins Publishing Company.

Chambers, Nathanael, and Dan Jurafsky. 2011. “Template-Based Information Extraction without the Templates.” Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (2011): 976–986.

Dahlin, Johan, Fredrik Johansson, Lisa Kaati, Christian Martenson, and Pontus Svenson. 2012. “Combining Entity Matching Techniques for Detecting Extremist Behavior on Discussion Boards.” Advances in Social Networks Analysis and Mining (2012). https://www.foi.se/download/18.3bca00611589ae798781ab/1480076518941/FOI-S–4092–SE.pdf.

Hausser, Roland. 2014. Foundations of Computational Linguistics: Human-Computer Communication in Natural Language. 3rd Edition. Berlin: Springer-Verlag.

Hills, Stefanie, Tom Jackson, and Martin Sykora. 2015. “Open-Source Intelligence Monitoring for the Detection of Domestic Terrorist Activity: Exploring Inexplicit Linguistic Cues to Threat and Persuasion for Natural Language Processing.” Paper presented at the European Conference on Cyber Warfare and Security, July 2015. https://search.proquest.com/openview/04d112da1a825844fbfda45aae56f2b8/1?pq-origsite=gscholar&cbl=396497.

Johansson, Fredrik, Lisa Kaati, and Magnus Sahlgren. 2016. “Detecting Linguistic Markers of Violent Extremism in Online Environments.” In Combating Violent Extremism and Radicalization in the Digital Era, edited by Majeed Khader et al., 374–390. Hershey, PA: IGI Global.

Jurafsky, Daniel, and James H. Martin. 2009. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. 2nd Edition. Upper Saddle River, NJ: Pearson Education, Inc.

Pressman, D. Elaine, and Cristina Ivan. 2016. “Internet Use and Violent Extremism: A Cyber-VERA Risk Assessment Protocol.” In Combating Violent Extremism and Radicalization in the Digital Era, edited by Majeed Khader et al., 391–409. Hershey, PA: IGI Global.

Russell, Stuart, and Peter Norvig. 2009. Artificial Intelligence: A Modern Approach. 3rd Edition. Upper Saddle River, NJ: Pearson.

Shortland, Neil D. 2016. “‘On the Internet, Nobody Knows You’re a Dog’: the Online Risk Assessment of Violent Extremists.” In Combating Violent Extremism and Radicalization in the Digital Era, edited by Majeed Khader et al., 349–373. Hershey, PA: IGI Global.

Simonite, Tom. 2018. “AI Beat Humans at Reading! Maybe Not.” WIRED, January 18, 2018. https://www.wired.com/story/ai-beat-humans-at-reading-maybe-not/.

Weimann, Gabriel. 2004. “How Modern Terrorism Uses the Internet.” United States Institute of Peace. March 13, 2004. https://www.usip.org/publications/2004/03/wwwterrornet-how-modern-terrorism-uses-internet.

Yang Hui, Jennifer. 2016. “Social Media Analytics for Intelligence and Countering Violent Extremism.” In Combating Violent Extremism and Radicalization in the Digital Era, edited by Majeed Khader et al., 328–348. Hershey, PA: IGI Global.
