The Algorithmic Threat to Speech


            Algorithms are an increasingly integral part of contemporary society. Private corporations, for example, use algorithms and machine learning to enhance their applications’ abilities to recommend products to customers and to moderate user-generated content. Reliance on algorithms, however, may pose a significant danger to individual free speech and expression in the digital space. This article analyzes this possible threat by addressing the increasing use of algorithms in society as tools of governance. Specifically, it summarizes the philosophical concept of algorithmic governmentality theorized by Antoinette Rouvroy and Jack Balkin’s model of the algorithmic society.

Building from those concepts, this article then addresses the way that algorithms may threaten the sanctity of individual free speech and expression. After addressing Balkin’s warnings on overreliance on algorithms for decision-making, this article presents two possible algorithmic threats to free speech: algorithmic bias and algorithmic entities. This article concludes by addressing the need for greater algorithmic responsibility and legislative literacy concerning these possible threats.


The word “algorithm” has become a popular buzzword in today’s society. Most people are aware that their smartphones, search engines, and other personalized technologies use algorithms in some manner, but the term is often misunderstood. “Algorithm” has multiple meanings, but it can most simply be defined as a “…logical series of steps for organizing and acting on a body of data to quickly achieve a desired outcome” (Gillespie 2016, 19). This definition includes something as simple as a baking recipe: a baker can follow a series of steps to organize ingredients, mix them, and bake them in order to produce a desired type of bread. In today’s data-driven society, the term is more often used to describe an almost mysterious force that organizes and analyzes any type of data. A common example is Instagram’s algorithm, which analyzes user data, including whom a person follows and what their interests are; the user is then presented with recommendations for other accounts to follow and with targeted advertisements. The term is also used as the adjective “algorithmic,” as in an algorithmic system (Gillespie 2016, 25). This can describe not only the algorithm but also the infrastructure that supports it, as well as its designers and programmers.
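
Gillespie’s definition can be made concrete with a short illustrative sketch. Everything here, the account names, tags, and scoring rule, is invented for illustration; it is not how any platform’s actual (proprietary) recommendation algorithm works.

```python
# A toy illustration of Gillespie's definition: a "logical series of steps
# for organizing and acting on a body of data to quickly achieve a desired
# outcome". All account names, tags, and the scoring rule are invented.

def recommend_accounts(follows, interests, candidates):
    """Rank candidate accounts by overlap with a user's follows and interests."""
    scores = {}
    for account, tags in candidates.items():
        # Step 1: organize the data -- count interests shared with the candidate.
        shared = len(set(tags) & set(interests))
        # Step 2: act on the data -- boost accounts already followed by the
        # people this user follows.
        boost = sum(1 for friend in follows if account in follows[friend])
        scores[account] = shared + boost
    # Step 3: the desired outcome -- an ordered recommendation list.
    return sorted(scores, key=scores.get, reverse=True)

follows = {"alice": ["cooking_daily"], "bob": ["cooking_daily", "bike_life"]}
interests = ["cooking", "cycling"]
candidates = {"cooking_daily": ["cooking", "baking"], "bike_life": ["cycling"]}
print(recommend_accounts(follows, interests, candidates))
```

Each step in the sketch (organize, act, output) maps onto the “logical series of steps” in the definition above; what makes real systems feel mysterious is scale and secrecy, not a different kind of logic.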

The term “algorithm” has also increasingly become a buzzword used to explain away the inner workings of fragile, negligent code protected by “Big Data” under the guise of industry secrets (Gillespie 2016, 23). “Big Data” in this case refers to any well-known, major technological organization or corporation that utilizes algorithms in its products or applications, such as Google or Facebook. The increasing use of algorithms and their mysterious nature pose a number of concerns for individuals in society. This article focuses on concerns related to freedom of speech and expression in the digital space, which here refers to the internet and related social and new media, such as Twitter, Facebook, or other large social media platforms.

This article will first address the philosophical and legal aspects of an algorithmically governed society. Specifically, it will examine algorithmic governmentality as theorized by Antoinette Rouvroy and the algorithmic society as articulated by legal scholar Jack Balkin. This article will then address the issues associated with an algorithmically governed society in relation to the use of algorithms for the governance of speech on the internet: in particular, the dangers of overreliance on algorithms for decision-making and regulation, algorithmic nuisance, algorithmic bias, and algorithmic entities.

Algorithmic Governmentality

Algorithmic governmentality is the philosophical basis for this discussion about an algorithmic threat to free speech and expression. Governmentality itself is a concept developed by French philosopher Michel Foucault, simply defined as “…the organized practices (mentalities, rationalities, and techniques) through which subjects are governed” (Mayhew 2004). It is also sometimes referred to as the “art of government” or the “conduct of conduct,” as in conducting how people conduct themselves (Rodrigues 2016, 1). In practice, this might look like an institution such as a hospital. The hospital must establish practices for handling its subjects (i.e., patients) in order to effectively manage its role as a community center for healthcare. Those in charge must decide how to evaluate, categorize, and care for these patients. Whether by need for care, gender, psychological status, or other criteria, the patients’ conduct is governed by the hospital following a certain rationality; in the case of a hospital, this involves whatever current medical and healthcare rationalities exist. Unruly patients whose conduct goes against the hospital’s organized practices are governed further through confinement, isolation, or removal from the hospital. Patients are subject to the hospital’s governed practices the same way the hospital is subject to the established healthcare practices of society.

Algorithmic governmentality expands on governmentality by acknowledging the machine-learning, data-driven strategies that today inform the methods of governance, including the governance of a state and the conduct of business by a corporation. Private corporations are best known for popularizing the use of algorithms for decision-making. For example, YouTube, a subsidiary of Google, uses an algorithm to analyze videos on its platform for “unacceptable” content, and then removes those videos to prevent them from receiving advertising money (Cobbe 2019, 23).

Building on Foucault’s concepts, Antoinette Rouvroy summarizes algorithmic governmentality in the following argument:

We thus use the term algorithmic governmentality to refer very broadly to a certain type of (a)normative or (a)political rationality founded on the automated collection, aggregation and analysis of big data so as to model, anticipate and pre-emptively affect possible behaviours (Rouvroy and Berns 2013, x).


Rouvroy contrasts this process of data collection with corporate and governmental decision-making that previously relied on statistical norms. She argues that before algorithmic governmentality, decision-making was informed by finding a statistical average or norm in broadly collected data. An advertising firm, for example, might look at statistical data related to its consumer base and use that data to guide its practices in an ad campaign. With algorithms, this advertising firm can now automatically collect, analyze, and model possible behaviors of its customers in real time. Any algorithm it employs can also learn from a number of factors, such as a particular marketing or advertising trend. This allows the firm’s decision-making process to evolve in a way that seeks to anticipate what consumers will want or do not yet know they need. Rouvroy takes issue with this use of individuals’ data and argues that institutions “…are feeding on infra-individual data which are meaningless on their own, to build supra-individual models of behaviors or profiles without ever involving the individual, and without ever asking them to themselves describe what they are or what they could become” (Rouvroy and Berns 2013, x). Continuing with the example of the advertising firm, Rouvroy would say that the advertisers are building profiles from algorithmically collected data and using those profiles in their planning, all while excluding the individual from the entire process. The advertiser would then be able to predict consumers’ behavior using their collected digital data and target them with advertisements that attempt to predict their next purchase.
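
The profiling logic Rouvroy criticizes can be sketched in a few lines of illustrative code. The events, categories, and prediction rule are all invented; the point is only that “infra-individual” data points, meaningless on their own, aggregate into a “supra-individual” profile without the individual ever describing themselves.

```python
# A sketch of the profiling Rouvroy describes: raw interaction events are
# aggregated into a behavioural profile used to anticipate conduct.
# All events and categories here are invented for illustration.

def build_profile(events):
    """Aggregate infra-individual data points into a frequency profile."""
    profile = {}
    for _, category in events:
        profile[category] = profile.get(category, 0) + 1
    return profile

def predict_next(profile):
    """'Pre-emptively affect possible behaviours': the modal category wins."""
    return max(profile, key=profile.get)

events = [("click", "running_shoes"), ("view", "running_shoes"),
          ("click", "headphones"), ("click", "running_shoes")]
profile = build_profile(events)
print(predict_next(profile))
```

Note that at no point does the individual describe themselves; the profile is assembled entirely from logged behaviour, which is exactly the exclusion Rouvroy objects to.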

Algorithmic governance today appears in relatively harmless areas. Lucas Introna uses the example of the popular Turnitin application, which provides academic institutions a method to analyze papers for plagiarism (Introna 2016, 20). This application’s algorithm allows it to analyze large amounts of writing and verify that submitted writing is not plagiarized, in accordance with established institutional guidance.

The main argument is then not necessarily against the use of algorithms as tools, but rather against their use as a tool of governance (Introna 2016, 27). Establishing algorithms as another regulatory force, for example, seems helpful and clear-cut in reference to the Turnitin example, as it allows colleges to identify plagiarism. In execution, however, it has the potential to seriously threaten the rights of the individual and increase the capacity for state oppression of individuals or groups. Specifically, it allows individuals’ data to be used for profiling purposes (Rouvroy and Berns 2013, x).

There are three stages of algorithmic governmentality: the collection of big data and the constitution of data warehouses, data processing and knowledge production, and action on behaviors (Rouvroy and Berns 2013, vi-viii).

The Collection of Big Data and the Constitution of Data Warehouses

While there is much debate surrounding data collection and user consent, some data collection is generally tolerated in society. Scholars like Rouvroy and Berns (2013), for example, believe that such data is not taken but simply shared (vi). When someone uses Facebook, for example, they agree to terms of service that allow Facebook to use data from their profile. With this data, a digital profile of the user is created and stored within Facebook’s data warehouses. Amazon Web Services defines data warehouses as:

…a central repository of information that can be analyzed to make better informed decisions. Data flows into a data warehouse from transactional systems, relational databases, and other sources, typically on a regular cadence. Business analysts, data scientists, and decision makers access the data through business intelligence (BI) tools, SQL clients, and other analytics applications (Amazon Web Services).

These “warehouses” are a large part of the infrastructure and architecture which enables user data to be processed and used for the creation of knowledge.
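
The “central repository” idea in the AWS definition can be illustrated with a minimal sketch using Python’s built-in SQLite module as a stand-in for a warehouse. The schema and rows are invented, and a real warehouse operates at vastly larger scale with dedicated infrastructure.

```python
# A minimal stand-in for a data warehouse: event data "flows in" from
# transactional systems, and analysts run BI-style queries over it.
# Uses an in-memory SQLite database; schema and data are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (user TEXT, action TEXT)")
# Data flowing in from transactional sources:
con.executemany("INSERT INTO events VALUES (?, ?)",
                [("u1", "click"), ("u1", "buy"), ("u2", "click")])
# An analyst's aggregate query over the centralized data:
rows = con.execute(
    "SELECT action, COUNT(*) FROM events GROUP BY action ORDER BY action"
).fetchall()
print(rows)  # [('buy', 1), ('click', 2)]
```

The point of the centralization is visible even at this toy scale: once the events sit in one store, a single query summarizes behaviour across all users.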

Data Processing and Knowledge Production

After the data is collected and stored, it is processed and analyzed. Again, the majority of this process is performed by algorithms, which is necessary given the large amount of data a corporation can collect from its customer base. Rouvroy and Berns take issue with the knowledge produced containing only correlations obtained from objective data (Rouvroy and Berns 2013, vii). In the example of the advertising firm, these correlations would define the relationship between the consumer and the product being advertised to them. The algorithms which collect, process, and produce this knowledge are unable to act with any subjective analysis of the data or to formulate a hypothesis from the data; a human actor is required at some point to decide how the inputs and outputs of the algorithm will be set. The exact nature of how the data is used or collected is often obscured by private corporations, like Amazon and Facebook, through corporate secrecy and the protection of their algorithms as industry secrets.
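
The limitation Rouvroy and Berns identify, knowledge that consists only of correlations, can be shown with a toy calculation. The figures are invented: the pipeline surfaces a perfect numeric association between advertisement clicks and purchases, but contains no step at which a hypothesis about why is formed.

```python
# "Knowledge production" reduced to correlation: the pipeline reports a
# numeric association but never interprets it. All figures are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ad_clicks = [2, 4, 6, 8, 10]
purchases = [1, 2, 3, 4, 5]
print(round(pearson(ad_clicks, purchases), 2))  # 1.0
```

The output is a single number; whether clicks cause purchases, or both follow from something else, is a question the computation cannot even pose. That interpretive step is where the human actor enters.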

Rouvroy and Berns note that the EU protects individuals from having their data used against them. They also note: “…the guarantees offered by the EU directive only apply if the automated data processing concerns personal data… algorithmic profiling can very well ‘function’ with anonymous data” (Rouvroy and Berns 2013, viii). The EU’s data protection laws are proactive in protecting personal data, but most other data-driven societies, like the United States, lag behind in implementing such protections. The concern then centers on the ability of private corporations to take private citizens’ data, collect it, store it in their data warehouses, and use it as they see fit.

Action on Behaviors

The final stage relies on utilizing the knowledge produced to predict individuals’ behaviors through profiles defined by the processed data (Rouvroy and Berns 2013, viii). These predictions are based on a data-established norm, that is, the norm established by the average user. For example, if a majority of the collected user data comes from white American males, then the predicted behavior will be what most white American males would do or want. Rouvroy and Berns note: “In their seemingly non-selective way of relating to the world, datamining and algorithmic profiling appear to take into consideration the entirety of each reality, right down to its most trivial and insignificant aspects…” (2013, ix). Analysis of data through algorithms and machine learning thus enables an extreme increase in the capabilities of data surveillance, especially for private corporations. As mentioned previously, YouTube’s algorithm has been criticized for using this data surveillance to censor or demonetize content in ways that may marginalize certain content creators (Cobbe 2019, 23).
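
The “data-established norm” problem can be reduced to a toy example. All figures are invented; the sketch shows only that when one group dominates the collected data, the “norm” predicted for everyone is simply that group’s preference.

```python
# A toy version of the data-established norm: the predicted behaviour is
# just the modal behaviour in the collected data. All figures are invented.
from collections import Counter

def data_norm(observations):
    """The 'norm' is the most common behaviour in the collected data."""
    return Counter(observations).most_common(1)[0][0]

# 80% of the logged behaviour comes from a single demographic...
observations = ["pref_A"] * 80 + ["pref_B"] * 20
print(data_norm(observations))  # pref_A: the minority preference vanishes
```

Nothing in the computation is malicious; the skew is inherited entirely from the composition of the data, which is why representativeness of collection matters as much as the algorithm itself.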

The Algorithmic Society

            American legal scholar Jack Balkin presents a similar illustration of the effects of algorithms on society in what he calls the “Algorithmic Society” (Balkin 2017, 1151). He argues this is a new stage in society “…which features large, multinational social media platforms that sit between traditional nation states and ordinary individuals, and the use of algorithms and artificial intelligence agents to govern populations” (Balkin 2017, 1151). Balkin’s concern with this development is his belief that the “stewards” of free expression in this society, namely the judicial system and private companies, are unreliable or incapable of protecting free expression (Balkin 2017, 1152). Like Antoinette Rouvroy’s theory of algorithmic governmentality, Balkin recognizes that an algorithmic society relies on “…the collection of vast amounts of data about individuals and facilitates new forms of surveillance, control, discrimination and manipulation, both by governments and by private companies” (Balkin 2017, 1153). Balkin identifies three major issues with the algorithmic society: algorithmic nuisance, negligent construction of algorithms, and overreliance on algorithms for decision-making.

Algorithmic Nuisance

Jack Balkin describes algorithmic nuisance as “…when companies use Big Data and algorithms to make judgments that construct people’s identities, traits, and associations that affect people’s opportunities and vulnerabilities” (Balkin 2017, 1164). When media and social media companies use algorithms to engage with their customer base, they subject users to a barrage of targeted ads, product placements, and marketing techniques. Instagram’s algorithm, for example, uses such judgments (i.e., which advertisements target a given user) to categorize users into specific marketable groups. These judgments affect the way users experience Instagram by changing the type of content they see, from advertisements to suggestions on whom to follow. This technique of engagement is quick and effective for the company, but Balkin sees it as handling the relationship between consumer and producer with negligence. Balkin identifies legal nuisance, overreliance on algorithms by businesses, social perceptions of algorithmic decision-making, and the negligent construction of these algorithms as some of the causes for concern in relation to “algorithmic nuisance” (Balkin 2017, 1167).

Negligent Construction and Legal Nuisance

Balkin explains: “Businesses may use biased or skewed data, the models may be badly designed, or the company’s implementation and use of the algorithm may be faulty. In these situations, we have ordinary negligence” (2017, 1166). He believes that algorithms fit the definition of a public or private nuisance because “increased activity levels may increase unjustified social costs, even when the activity is conducted non-negligently” (Balkin 2017, 1168). Balkin compares this nuisance to pollution, an environmental nuisance. If a corporation or industry produces pollution which affects the surrounding environment, it is typically fined, sued, or otherwise held legally accountable. The company producing this pollution is externalizing its decision not to control pollution levels onto the public. Balkin argues that companies reliant on algorithms for decision-making do something similar. A company with an algorithmic decision-making process for hiring, for example, uses this process to save time, cut costs, and identify the best job candidate based on its collected data. Balkin argues, however, that this type of decision-making may simply reinforce inequalities or unfair biases that already exist in the company’s hiring process. Like environmental pollution, this outcome affects the well-being of society as a whole. The creators and corporate users of these algorithms should then also be held accountable for the social costs of algorithms on society, according to Balkin (2017, 1165). Tim Wu, a legal scholar at Columbia Law School, expands on a similar algorithmic nuisance by using a car alarm as an example of algorithmic speech:

The modern car alarm is a sophisticated computer program that uses an algorithm to decide when to communicate its opinions, and when it does it seeks to send a particularized message well understood by its audience. It meets all the qualifications stated: yet clearly something is wrong with a standard that grants Constitutional protection to an electronic annoyance device (Wu 2013, 1496).

Balkin and Wu both locate the problem with these nuisances in the algorithm’s construction. Wu frames the issue of construction in terms of algorithmic functionality. In relation to algorithmic nuisance, this simply means that the algorithm serves a function within our society. The car alarm, navigation applications, and computerized car systems are a few of Wu’s examples of the increasing functions of algorithms (Wu 2013, 1499). These things, of course, are not nuisances when working as intended, but when they behave in a manner not predicted by their human creators, social costs begin to appear. The car alarm, for example, is designed to alert the surrounding area that someone is attempting to break into a car. An unintended consequence of the alarm, however, is the noise pollution it creates when it goes off for seemingly no reason. This nuisance and negligence now come from the private companies who rely heavily on these algorithms to inform their decision-making processes.

Overreliance on Algorithms for Decision Making

In tandem with what will be discussed concerning algorithmic bias and the previous examination of algorithmic governmentality, Balkin sees the threat that algorithms pose to certain people’s ability to express themselves. He notes that algorithms “…may inappropriately treat people as risky or otherwise undesirable, impose unjustified burdens and hardships on populations, and reinforce existing inequalities” (2017, 1167). As noted in the examination of algorithmic governmentality, the use of algorithms to analyze and categorize data helps an institution identify risks or abnormalities within its analyzed population. Yet these “abnormalities” may simply be individuals who belong to minority or otherwise marginalized groups. YouTube’s algorithmic censoring and demonetizing of content is a prime example of where this inappropriate treatment could occur. Currently, a group of LGBT YouTube content creators is suing the platform for allegedly discriminating against LGBT content uploaded to the popular video site (Bensinger 2019). YouTube’s content moderation algorithm is used to flag or remove, among other things, “inappropriate content.” The content creators in this lawsuit allege their content is being removed or restricted by YouTube’s algorithm despite not being sexually explicit or inappropriate (Bensinger 2019). This situation illustrates the problems with reliance on algorithmic decision-making. If these allegations are correct, YouTube’s algorithm may be inherently biased against creators who do not fit a certain standard.
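
To make the mechanism of such misflagging concrete, consider a deliberately naive sketch of automated content restriction. The blocklist and titles are invented, and YouTube’s actual system is proprietary and far more complex; the sketch shows only how a crude rule can burden benign speech.

```python
# A deliberately naive content-restriction rule: flag a title if any
# blocklisted substring appears anywhere in it. Blocklist and titles
# are invented for illustration.

def is_restricted(title, blocklist):
    """Return True if any blocklisted term occurs as a substring."""
    lowered = title.lower()
    return any(term in lowered for term in blocklist)

blocklist = {"sex", "drugs"}
# A false positive: "Sussex" contains the substring "sex".
print(is_restricted("Sussex history documentary", blocklist))  # True
print(is_restricted("Cooking with herbs", blocklist))          # False
```

The false positive here is mechanical, not malicious, which is precisely the danger: at platform scale, rules of this kind restrict speech that no human reviewer would consider inappropriate, and the burden falls on whoever happens to match the pattern.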

Balkin stresses the threat to speech: “Today our practical ability to speak is subject to the decisions of private infrastructure owners, who govern the digital spaces in which people communicate with each other” (2017, 1153). In short, algorithms appeal to businesses because they lower the costs of content moderation, allow for the large-scale analysis of personal data, inform analysts’ decision-making, and increase the efficiency of a business’s communication apparatus. The benefits to the business, however, may again marginalize a non-conforming or minority population.

Pew Research Center canvassed a number of technology experts and scholars concerning their views and attitudes on the impact of increasing dependence on algorithms. Its report quoted Bart Knijnenburg, a professor at Clemson University, who stated:

My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies, and users into zombies who exclusively consume easy-to-consume items.

If we view algorithms as a nuisance, we can see the negative implications of their control over aspects of our society. The more the algorithmic nuisance violates an individual’s right to privacy, the more likely it becomes that the individual’s voice is silenced through sheer annoyance or through exploitation of their data. In other words, allowing an algorithm to serve as an editor for content may increase efficiency, but it also acts as a nuisance that violates the individual’s right to privacy and their freedom to express and speak with their own voice.

Algorithmic Governance and the Threat to Free Speech

Free speech under algorithmic governmentality is more subject to the will of private corporations, which use the three stages mentioned above to collect, store, and analyze their customers’ data. This allows the corporations to establish profiles based on a number of factors, with very little oversight on what those factors are or how these data collecting processes function. The danger to speech then becomes more focused on the profiling of people through their data by algorithms that are possibly governed by unknown procedures.

Pew Research Center’s report summarized the views of a panel of interviewed experts as recognizing that “…algorithms are primarily written to optimize efficiency and profitability without much thought about the possible societal impacts of the data modeling and analysis” (Rainie and Anderson 2017). The societal impacts are derived from the removal of humans from the loop of the decision-making process and, by extension, essential aspects of governance. Attempting to regulate speech through the governance of an algorithm clearly poses a direct threat to the shared voice of the people.

Jennifer Cobbe, a research associate at the Department of Computer Science and Technology, University of Cambridge, also addresses the issues related to using algorithms as speech moderating tools of governance:

…the emergence of algorithmic censorship as a commercially-driven mode of control undertaken by social platforms is an undesirable development that empowers platforms by permitting them to more effectively align both public and private online communications with commercial priorities while in doing so undermining the ability of those platforms to function as spaces for discourse, communication, and interpersonal relation (2019, 32).


Cobbe specifically mentions the increase of social media and new media as essential tools for discourse in today’s society. She notes that private corporations using these algorithms for content moderation, specifically for censorship, gives control over the flow of online discourse to corporations, which may not be easily held accountable to the people (Cobbe 2019, 31-32). This private governance of digital speech is exacerbated further by the reality of algorithmic bias and the possibility of algorithmic entities.

Algorithmic Entities and Algorithmic Bias

Considering the extent of private governance of online speech, there are two dangerous concepts, enabled by flaws in the current system, that threaten free expression and speech online: algorithmic entities and algorithmic bias.

Algorithmic Entities

Shawn Bayern, an American law professor, identifies a theoretical issue with algorithmic governance in relation to laws governing limited liability companies (LLCs). Specifically, LLCs are legally allowed to be controlled by anonymous “entities.” These entities are legal persons, defined as “…a human or non-human entity that is treated as a person for limited legal purposes” (Cornell Law School, “LLC”). In this case, “non-human” refers to a company, corporation, or organization. An LLC limits liability by protecting its human members from being personally liable for any debts, lawsuits, or bankruptcy. This is typically used as a precaution to protect the personal assets of LLC members (Cornell Law School, “LLC”).

The issue Bayern addresses concerns the fact that LLCs can be controlled by anonymous entities. Bayern argues that it is possible for an LLC to be controlled by an algorithm without any human controller. Bayern hypothesizes:

The flexibility of modern LLCs appears to make such collaboration technically unnecessary, leading to a surprising possibility: effective legal personhood for nonhuman systems without wide-ranging legal reform and without resolving, as a precondition, any philosophical questions concerning the mind, personhood, or capabilities of nonhuman systems (Bayern 2016, 112).

These entities are theoretical, but the possibility of their existence threatens digital expression and speech. Lynn LoPucki, a law professor at UCLA, explains a sinister possibility of an algorithmic entity: its ability to conceal criminal, terrorist, or anti-social actions (2018, 887). The entity then becomes a weapon, programmed with certain parameters, inputs, and outputs, which is unleashed by an initiator. This weapon is not limited to being a terroristic “superweapon”; it can also be used to benefit a certain group. LoPucki states:

An initiator could program an AE [algorithmic entity] to provide direct benefits to individuals, groups, or causes. For example, an AE might pay excess funds to the initiator or to someone on whom the initiator chose to confer that benefit. The benefits conferred could be indirect (2018, 900).


It is entirely possible that an algorithmic entity could be imbued with biases and targets and set loose upon society. LoPucki’s example focuses on the manipulation of funds, but it is not beyond reason that this type of entity could be directed to disrupt marginalized or disenfranchised groups. Our society already contends with scandals involving coordinated election fraud perpetrated by human actors with the support of algorithms (Sacasas 2018, 35). If these types of attacks against democratic institutions continue to evolve, it is possible that a motivated nation-state or organization could create an algorithmic entity aimed at sowing chaos within a society. This chaos could include, for example, fake profiles spreading disinformation through social media posts to disrupt the flow of critical information during a disaster. After setting this entity in motion, the initiator could then cover their tracks or simply remove themselves from liability. LoPucki reinforces these risks, noting: “Initiators can limit their civil and criminal liability for acts of their algorithms by transferring the algorithms to entities and surrendering control at the time of the launch” (2018, 901).

Ruthless Entities

Any version of an algorithmic entity will lack basic human emotional abilities like sympathy or empathy, unless a crude version of those emotive skills is embedded into its programming. The algorithmic entity would not necessarily be cruel or evil without them; rather, it would simply lack the awareness or ability to recognize possible harms its actions would cause. The entity would then pursue whatever goal it was given in a manner devoid of sympathy, empathy, or moral restraint (LoPucki 2018, 904).

Algorithmic Mobility

LoPucki states: “Algorithms are computer programs. They can move across borders as easily as a program can be downloaded from a foreign server” (2018, 924). An algorithmic entity is not bound by the physical or legal borders that limit the movement of people. In pursuit of its programmed goals, an algorithmic entity possesses a fluidity within information systems that may allow it to evade human detection, subvert human efforts to counteract its actions, and efficiently attack its intended target. An algorithmic entity with enough intelligence could also recreate or replicate itself, like a virus, ensuring that its copies can continue its task with the same efficiency should the entity be deleted from a system (LoPucki 2018, 925).

The Inevitability of the Algorithmic Entity

LoPucki argues that algorithmic entities are inevitable for three reasons: “They can act and react more quickly, they don’t lose accumulated knowledge through personnel changes, and they cannot be held responsible for their wrongdoing in any meaningful way” (2018, 951). According to LoPucki, these entities would be able to interact with humanity anonymously and to evade regulation, given the law’s limited ability to regulate legal entities. If we accept this argument, then the continuing trend of an algorithmic threat to free speech is clear. It is possible that society must simply prepare for the inevitability of the algorithmic entity and rely on increasing awareness, education, and understanding of the capabilities of the algorithms created within human society.

The Pew Research Center also addresses the inevitability of the algorithm. Through interviews with a number of experts in different fields, the think tank notes that algorithms are generally “invisible” in our society (Rainie and Anderson 2017). Society is turning more of its internal functions over to algorithms and trusting them to serve the greater good of the community. In other words, the more faith society puts in algorithms to accomplish tasks, the greater the danger of exploitation by a construct similar to an algorithmic entity.

The Algorithmic Entity’s Threat to Free Speech

The algorithmic entity’s threat to free speech and expression is fed not by the entity’s ability to censor, but by its mobility and ruthless nature. This nature may allow an algorithmic entity to simply override the voices and expressions of a targeted group of people. Legally defining the origin of the algorithmic entity’s speech is a difficult task. Wu argues that algorithmic output could be perceived as simply a tool facilitating speech, and it would then be difficult to regulate the speech of the tool (2013, 1504). When we combine this legal separation of tool and creator with the possibility of a legal algorithmic entity, it creates a quagmire. Guidelines on the regulation of algorithms and artificial intelligence are beginning to appear at certain levels of government. In the United States, for example, the White House Office of Science and Technology Policy (OSTP) released ten guiding principles for the regulation of artificial intelligence. In collaboration with the Office of Management and Budget, a draft memorandum was released with the intent of directing the decision-making of federal regulatory agencies in regard to artificial intelligence, but the principles do not apply to the government’s own use of these technologies (Vought 2019, 1). There remains a need for updated legislation on the use of algorithms and artificial intelligence (Metz 2018).

Balkin also emphasizes that behind all these layers of technology there is always a human actor: “…behind the algorithm, the artificial intelligence agent, and the robot is a government, a company, or some group of persons, who are using the technology to affect people’s lives” (Balkin 2018, 1157). There is comfort in knowing that there are humans behind the proverbial curtain, and society can generally understand the motivations of a human agent, however sinister they may be. When that agent is an algorithmic entity, however, its motivations can never be fully understood unless it is “captured” or quarantined and analyzed. The algorithmic entity may have been given a motivation by its human initiator, but identifying what those parameters were, or who that initiator was, may be extremely difficult or impossible. Even “capturing” and analyzing the entity’s code, for example, is not guaranteed to reveal its point of origin or initial instructions. An algorithmic entity imbued with even the smallest amount of autonomy and self-preservation will act with mobility and unintentional (or intentionally programmed) ruthlessness to accomplish its given purpose. Society’s fears seem to center on artificial intelligences gaining sentience and attempting to exterminate or subjugate humanity (Metz 2018), but the actual threat may be significantly more mundane. Society’s focus on overhyped AI “terminators” may have blinded us to the systemic use of algorithms. Rather than fearing possibly out-of-control AIs, we should be wary of the algorithmic entity, which may achieve a crude type of consciousness that reflects the exploitative, viral, and ruthless aspects of humanity.

Algorithmic Biases

Algorithmic bias is the common thread connecting all of the previous concepts concerning algorithmic governance. Although algorithms can be perceived as sterile machines uninhibited by human moral flaws, research shows that they can inherit the biases of their creators. In Algorithms of Oppression, Safiya Noble highlights the problems with algorithms’ representation of marginalized groups of people. Noble notes that search algorithms, like the one utilized by Google Search, rely on “…decision making protocols that favor corporate elites and the powerful” (2018, 29). These protocols are based on the values prized by society’s most powerful individuals and institutions. Corporations’ reliance on these algorithms for decision-making across social media and new media on the internet ensures that marginalized and misrepresented populations may be further oppressed in the digital space (Noble 2018, 31). This is a clear threat to free speech online: if an oppressed person cannot express themselves without fear of marginalization or misrepresentation, then their speech is also in danger.

As noted earlier, there is serious negligence in this overreliance on algorithms for moderating something like speech. If these algorithms carry bias and are used for the governance of society, then there is no difference between deferring to the algorithm and employing a human decision-maker with racist, sexist, or misogynistic prejudices. The threat to free speech also becomes striking. Noble offers examples when analyzing Google as a commercial enterprise. She states: “[Google’s]…information practices are situated under the auspices of free speech and protected corporate speech, rather than being posited as an information resource that is working in the public domain, much like a library” (Noble 2018, 143). She also addresses the possibility that unrestricted free speech may not be as neutral as perceived and possibly “…silences many people in the interests of a few” (Noble 2018, 143).

The Pew Research Center also notes that these biases limit algorithms’ ability to be impartial:

One is that the algorithm creators (code writers), even if they strive for inclusiveness, objectivity and neutrality, build into their creations their own perspectives and values. The other is that the datasets to which algorithms are applied have their own limits and deficiencies. Even datasets with billions of pieces of information do not capture the fullness of people’s lives and the diversity of their experiences. Moreover, the datasets themselves are imperfect because they do not contain inputs from everyone or a representative sample of everyone (Rainie and Anderson 2017).

This statement echoes the arguments presented concerning the overly objective nature of algorithmic governmentality and Jack Balkin’s views of the algorithmic society (Rouvroy and Berns 2013, vi). The threat of algorithmic bias to free speech and expression in digital space is also clear: if we rely too heavily on imperfect algorithms to regulate speech online, we risk further marginalizing voices that may already be in danger (Balkin 2018, 1165).



Algorithmic Responsibility

These dangers to digital free speech and expression may appear limited by their theoretical or abstract nature, but that very nature is why they must be seriously considered. Reliance on narrowly objective quantitative measures and restricted qualitative methods, pursued to increase the efficiency and prevalence of algorithms in the everyday life of the American citizen, is of direct concern to the safety and security of American democracy and the continuing progress of the free enterprise system.

The algorithmic threat to society may also extend beyond the regulation of free speech on the internet. Reliance on algorithmic decision-making processes, for example, is reaching into the upper tiers of government and defense. The New York Times’ “Killing in the Age of Algorithms” offers examples such as: “A tank that drives itself. A drone that picks its own targets. A machine gun with facial-recognition software” (Kessel 2019).



Amazon Web Services. 2020. “What is a Data Warehouse?” Data Warehouse Concepts. Accessed March 15, 2020.

Balkin, Jack M. 2018. “Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation.” UC Davis Law Review 51, no. 3 (February): 1149-1210.

Bayern, Shawn. 2016. “The Implications of Modern Business–Entity Law for the Regulation of Autonomous Systems.” European Journal of Risk Regulation 7, no. 2 (June): 297–309.

Bensinger, Greg, and Reed Albergotti. 2019. “YouTube Discriminates Against LGBT Content by Unfairly Culling It, Suit Alleges.” The Washington Post, August 14, 2019.

Cornell Law School. n.d. “Legal Person.” Accessed March 15, 2020.

Cornell Law School. n.d. “Limited Liability Company (LLC).” Accessed March 15, 2020.

Cobbe, Jennifer. 2019. “Algorithmic Censorship on Social Platforms: Power, Legitimacy, and Resistance.” SSRN Electronic Journal (August): 1-42.

Gillespie, Tarleton. 2016. “Algorithms.” In Digital Keywords: A Vocabulary of Information Society and Culture, edited by Benjamin Peters, 18-30. Princeton: Princeton University Press.

Introna, Lucas D. 2016. “Algorithms, Governance, and Governmentality: On Governing Academic Writing.” Science, Technology, & Human Values 41, no. 1 (January): 17-49.

Kessel, Jonah M. 2019. “Killer Robots Aren’t Regulated. Yet.” The New York Times, December 13, 2019.

LoPucki, Lynn M. 2018. “Algorithmic Entities.” Washington University Law Review 95, no. 4: 887-953.

Mayhew, Susan, ed. 2004. “Governmentality” In A Dictionary of Geography. Oxford University Press.

Metz, Cade. 2018. “Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots.” The New York Times, June 9, 2018.

Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.

Rainie, Lee and Janna Anderson. 2017. “Code-Dependent: Pros and Cons of the Algorithm Age.” Pew Research Center, February 2017.

Rodrigues, Nuno. 2016. “Algorithmic Governmentality, Smart Cities and Spatial Justice.” Justice spatiale – Spatial justice 10 (July): 1-22.

Rouvroy, Antoinette, and Thomas Berns. 2013. “Gouvernementalité algorithmique et perspectives d’émancipation. Le disparate comme condition d’individuation par la relation?” Translated by Elizabeth Libbrecht. Réseaux 177, no. 1: 163-196.

Sacasas, L. M. 2018. “The Tech Backlash We Really Need.” The New Atlantis, no. 55: 35–42.

Vought, Russell T. 2019. “Guidance for Regulation of Artificial Intelligence Applications.” Official memorandum. Washington, DC: Office of Management and Budget.

Wu, Tim. 2013. “Machine Speech.” University of Pennsylvania Law Review 161, no. 6: 1495-1533.


