Robotic Emulation

Abstract

Science fiction has presented us with a picture of robots that have the potential to mimic humans to the point of being indistinguishable from them. This paper is about some of the essential ways the two are different. The argument is that we differ from machines at our base processing level, and even though some modern manifestations of robotics are approaching human-like appearance and capability, it is important to ensure that we are not tricked by their design. Human beings’ capacity for free will and intuition, their experience of pain, and the contrast between robots and animals all mark stark differences between humans and machines. Drawing on Immanuel Kant’s discussion of humanity and animality, this paper illustrates how different our current robots are from us and why they should always be considered differently when it comes to governing and creating policy for these emerging technologies.

Introduction

Conceptually, we think of robots as having the potential to emulate human capabilities and someday even perform certain tasks better than humans ever could (or perform tasks that humans cannot perform at all). However, that is just what we are designing robots to do: to emulate human behavior, to seem to be the same or to fulfill the same purpose, principle, or objective as a human would, but in a different way. The functioning of a robot is completely different from that of a human at its base processing level. A robot is driven and empowered by computation and electrical signals, whereas humans are driven by something else, something that has been the subject of much debate since the birth of religion and philosophy. This paper is about the essential differences between humans and robots. Current discussion includes robot rights, robots working side by side with humans, and robots replacing humans in the workforce. Ultimately, I will define key differences between humans and robots that are related to the concepts of free will and creativity, the human capacity for intuition, and the sensory experience of pain. Robots in specific roles will be used to illustrate and support the main idea that robots and humans are essentially different because of the way they function, and that this should influence the way we approach governance in specific instances. However, I will not give narrow recommendations for how we should govern robotics, because the main purpose of this paper is to set a framework for thinking about robotics in general and in any given situation, from children’s toys to military use.

Programs and Free Will

A robot’s programming, at its most basic level, is built from binary instructions. That is, it is commanded to perform one set of operations given x set of circumstances1 and another set of operations given y set of circumstances. These circumstances have numerical thresholds or boundaries, and though the program may seem to “decide” on one action or another, it is making this decision based on a sensor reading that is connected to a particular command. In this sense, it does not have free will, or the ability to creatively set its own goal or objective. Put differently, given the option of two sets of instructions, a program cannot choose to do a third thing.
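To make this concrete, consider a minimal sketch in Python; the sensor, threshold value, and action names are invented purely for illustration and do not describe any particular robot. The “decision” is nothing more than a comparison of a sensor reading against a boundary chosen by the programmer.

    # Hypothetical obstacle-avoidance branch: the threshold and action names are
    # illustrative assumptions, not taken from any real system.
    OBSTACLE_THRESHOLD_CM = 30.0  # numerical boundary set by the programmer

    def choose_action(distance_cm):
        """Select one of exactly two pre-written actions based on a sensor value."""
        if distance_cm < OBSTACLE_THRESHOLD_CM:   # circumstance x
            return "turn_left"
        else:                                     # circumstance y
            return "drive_forward"

    # Given only these instructions, the program cannot choose to do a third thing.
    print(choose_action(12.0))   # turn_left
    print(choose_action(85.0))   # drive_forward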

Immanuel Kant famously wrote at length about free will, moral decision-making, and what it means to be human. In his framework of imperatives,2 a maxim is essentially a statement organized thus: “I perform action x in order to accomplish objective y” (Kant 2015, p.19). This is where we can draw a line between creative decision-making and sensor-reading-based selection. It is certainly true that given the instruction “Take this letter to the mailbox and place it inside,” a robot’s breadth of possible actions will be much narrower than if the given instruction were “Run all of my errands for the day.” The difference between the possible number and consequences of the robot’s actions in each of these situations is clear. Some may argue that the robot has more independence if given the second instruction, but it is important to remember that with our current technology, every action of a program is ultimately framed as one of two options and based on a sensor reading. An even stronger distinction emerges if we try to look for the robot’s maxim: in every case, no matter what the selected action x may be, the highest ultimate goal y will always represent the programmer’s general intent (in these cases, putting the letter in the mailbox or completing all assigned errands).

The program may behave in a variety of ways that are unpredictable prior to the moment of action, but the ultimate, highest objective will remain the same and will be equivalent to the programmer’s purpose. Because of this, we cannot say that programs have free will, at least not by Kantian standards. This is an important distinction and separates machines from humans in a very essential way, at the functional level and in terms of potential for action.
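One way to see why the highest objective always belongs to the programmer is a short, purely hypothetical sketch: the errand names and steps below are invented, but however the sub-actions vary at runtime, the top-level goal is fixed in the code itself.

    # The particular actions x vary with circumstances, but the ultimate objective y
    # ("complete all assigned errands") is written by the programmer, not the program.
    STEPS = {
        "mail_letter": ["go_to_mailbox", "deposit_letter"],
        "buy_groceries": ["go_to_store", "purchase_items"],
    }

    def run_errands(errands):
        """Top-level objective: carry out whatever errands the programmer's code recognizes."""
        for errand in errands:
            for step in STEPS.get(errand, []):   # unrecognized errands are simply skipped
                print("Executing step:", step)
        # Nothing in this structure lets the program adopt a new ultimate goal of its own.

    run_errands(["mail_letter", "buy_groceries"])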

Intuition and Context

Human beings operate primarily in accordance with intuition. In particular, much of our interaction with other humans is based on this capacity: a simple statement like “I am so happy” can be received in a variety of ways because of the emphasis placed on one word or another, or because of a certain cadence that could indicate sincerity or sarcasm to the listener. In this regard, digital information transmission is far more precise for passing along information, since the range of messages that can be received from the same words is far narrower.

So what is intuition, or rather, how do humans intuit? One aspect related to intuition is a person’s ability to recognize a vague similarity of one situation to some previous experience. This capacity could arguably be included in a program and would represent something like machine learning, living in the program’s memory. Another aspect of intuition, one that is perhaps more ingrained or a priori, is a person’s ability to determine that something is out of place, even if the person cannot easily map that situation in relation to a previous one. This is likely related to the “fight or flight” brain response and an instinctual sense of danger.
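For the first aspect, one hedged reading of “something like machine learning, living in the program’s memory” is a stored set of past situations paired with a similarity lookup; the feature encoding below is invented purely for illustration.

    # Remembered situations encoded as hypothetical feature tuples; the encoding
    # and labels are illustrative assumptions, not a real learning system.
    past_situations = {
        (1, 0, 1): "door_was_locked",
        (0, 1, 0): "package_was_missing",
    }

    def most_similar(current):
        """Return the remembered situation whose features overlap most with the current one."""
        def overlap(a, b):
            return sum(x == y for x, y in zip(a, b))
        return max(past_situations, key=lambda past: overlap(past, current))

    # A new situation that vaguely resembles an old one is mapped back to it.
    print(past_situations[most_similar((1, 0, 0))])   # door_was_locked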

Iraq and Afghanistan veteran Daniel Davis talks about intuition in his writing about military robots and the question of who should control the “kill command.” Though he is referring specifically to military robots, his explanation nicely illustrates the distinction between robotic systems and humans. “A machine cannot sense something is wrong and take action when no orders have been given,” he writes. “It doesn’t have intuition. It cannot operate within the commander’s intent and use initiative outside of its programming. It doesn’t have compassion and cannot extend mercy” (Davis 2007). In one sense, this could be taken to mean that the robot cannot function beyond the highest level objective of its maxim (the commander’s intent). In another sense, the concepts of compassion and mercy are reserved to humans and represent decisions that are made not because of, but in spite of, existing circumstances. Humans have the capacity to adopt these concepts (what many would call values) because we can understand the larger context and implications of a situation. We are not bound by the scope and structure of programming, and we operate using supplemental knowledge and information outside of our immediate situation.

Robots and Animals

At times, robots have been compared to animals in order to provide a precedent for how humans should interact with machines. Many modern robots designed for personal use as toys have an animal-like appearance (Zoomer Zuppies, Genibo Robot Dogs), and those that do not represent an animal per se still tend to take on roles as pets. Faceless, task-oriented robots such as Roombas frequently receive names from their owners, and soldiers have notably formed emotional bonds with military robots, going so far as to hold funerals for them after they have been destroyed (Dzieza 2014).

What is the root of this affection for objects that could be equated with smartphones or desktop computers? Especially in the case of the military robot, with its functional and frill-less design, it is intriguing that battle-hardened soldiers can be so emotionally affected by the “death” of a robot, despite being trained to kill other human beings and to mentally endure witnessing the deaths of their comrades. According to University of Calgary computer scientist Ehud Sharlin, these types of emotional attachments to robots have to do with empathy. “Our entire civilization is built on empathy…Societies are built on the principle that other entities have emotions,” he says. Additionally, the way a machine physically moves has a big effect on how humans perceive it; we make predictions about an entity’s thoughts and desires based on how it behaves (Koerth-Baker 2013).

Kate Darling makes a case for robot rights, founded on Kant’s argument for human duty toward animals. “The Kantian philosophical argument for preventing cruelty to animals is that our actions towards non-humans reflect our morality—if we treat animals in inhumane ways, we become inhumane persons,” she writes. “This logically extends to the treatment of robotic companions” (Darling 2012, 17). While this may be true, it leaves out a key part of Kant’s framework that separates machines from animals in their respective relationships to humans. Kant does argue that kindness to animals is part of the “perfection of our nature,” and that fostering feelings of sympathy leads humans to adopt better morals in general. Similarly, harboring negative or sadistic feelings can lead an individual toward a more questionable moral standing (Denis 2000, 406–7). Kant’s argument rests heavily on the concept of sympathy and/or empathy.3 For Kant, sympathy arises from the principle that a subject recognizes in him/herself a similar defining trait that also applies to the entity s/he observes. Kant identifies that trait as humanity for humans observing other humans and loosely defines it as “the ability to set and pursue ends, and to will morally.” Humanity is also related to animality, which is shared by both humans and animals and includes motivations to procreate, to preserve one’s self and offspring, and to exist in some semblance of community (Denis 2000, 406–7). It is this shared trait that obligates us to animals: I can witness an animal in pain and, because I recognize something like myself in that animal, I feel sympathy. This sympathy gives rise to my obligation and duty to help the animal.

This is not the case with robotic systems. If I witness someone “abusing” a robot, it may be shocking to watch because the robot is anthropomorphic or because it moves like an animal, but on a rational level, I cannot say that I recognize something of myself in that robot. The robot does not share my humanity; that is, it does not set and pursue ends outside those of the programmer, and it certainly does not “will morally.”4 Moreover, the robot does not share my animality. It does not seek to procreate, interact socially with other robots, or preserve itself if not specifically designed to do so. Obviously, some of these goals could be included in the robot’s programming, but they are still commands rather than inherent overarching objectives that the robot creates for itself.

Perhaps most important to this debate is the subject of pain. The idea that humans and robots share the Kantian traits of humanity or even animality has been refuted here, and this paper is not long enough to take a full inventory of every possible shared trait and its implications. However, by looking at how robots and humans/animals experience and respond to physical abuse, we can identify a key difference. A widely viewed YouTube video showcasing Boston Dynamics’ robotic dog (Lomas 2015) includes shots of it walking, running, climbing inclines, and withstanding a forceful kick from a human. Notably, the video’s comment section filled with protests against the treatment of the machine when it was kicked. The robot moves like an animal and even corrects its stance in a way that is reminiscent of a dog’s motion. Though the movement is similar, the robot does not experience physical pain while being kicked. Its response is based purely on its sensors’ readings in order to maintain balance. One could argue that pain in humans and animals works as a comparable sensory system: for instance, if I place my hand on a hot stove, the nerves in my hand send an electrical signal to the neurons in my brain indicating that the stove is excessively hot and I should remove my hand. (The immediate impulse is to do so; however, humans have the masochistic ability to override this initial signal and continue to endure pain.) Likewise, a dog that is kicked receives a signal to its brain that registers a sensation of pain and an impulse to move away. In both cases, the sense of pain is primarily unpleasant, and in general, pain is perceived in this way. However, the robot is not programmed to experience pain or pleasure, and it is important to note that the robotic dog neither flees nor attacks its abuser, either of which would be a likely outcome of kicking an actual organic dog.
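A deliberately simplified sketch, and emphatically not Boston Dynamics’ actual control code, can make this point plain: a tilt reading produces a corrective command, and nothing in the loop registers the disturbance as pain.

    # Hypothetical balance correction: the gain and tilt values are invented for illustration.
    def correct_stance(tilt_degrees):
        """Return a corrective torque proportional to the measured tilt."""
        GAIN = 0.8
        return -GAIN * tilt_degrees   # push back against the tilt; its cause is irrelevant

    # Whether the tilt came from an incline or from a kick, the response is the same.
    for disturbance in [2.0, 35.0]:   # a small wobble vs. a forceful kick
        print("Tilt", disturbance, "deg -> corrective torque", correct_stance(disturbance))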

Conclusions and Real World Consequences

The primary message to take away from these arguments is that robotic systems are vastly different from organic entities in terms of essential functioning, and this should be kept in mind in attempting to govern or apply these technologies through any means. Because of a program’s structure, it cannot operate outside of the widest scope of its code, and thus it does not possess any quality resembling the free will that humans and animals (in a limited sense) exhibit. Additionally, humans and animals function using not only their immediate sensory perceptions, but also a more unified understanding of the context in which events and stimuli occur. Programs, on the other hand, do not have a capacity to subconsciously identify significant factors and combine information freely to form something like intuition. Furthermore, when it comes to basic functional structure, programs do not “experience” things in the way that humans and animals do, no matter how similar their reactions may seem. These facts should influence our opinions on governance and application of robotics for military use, private recreational use, medical use, and in general.

While we attempt to create programs and robots that are “smarter,” more organic-looking, cuter, more “independent,” and more powerful, it is important to keep these questions in mind. As our computational abilities develop further, we should reassess some of the points raised here, as well as our other beliefs about robots and programming. In particular, the more we try to truly recreate organic entities through methods like quantum computing, the more frequently we should review our computational history. The key to governing computational technologies like robots is honesty about our own nature, so that we know when to resist the sometimes misleading aesthetic effects of our creations.

References

Darling, Kate. “Extending Legal Rights to Social Robots.” Social Science Research Network. 23 April 2012. Web. 13 Dec. 2015. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2044797.

Davis, Daniel L. “Who Decides: Man or Machine?” Armed Forces Journal. 1 Nov. 2007. Web. 13 Dec. 2015. http://www.armedforcesjournal.com/who-decides-man-or-machine/.

Denis, Lara. “Kant’s Conception of Duties regarding Animals: Reconstruction and Reconsideration.” History of Philosophy Quarterly, Vol. 17, No. 4 (Oct., 2000), pp. 405–423. Web. 13 Dec. 2015. http://www.jstor.org/stable/27744866.

Dzieza, Josh. “Why Robots Are Getting Cuter.” The Verge. 5 Aug. 2014. Web. 13 Dec. 2015. http://www.theverge.com/2014/8/5/5970779/rise-of-the-adorable-machines.

Kant, Immanuel. Groundwork for the Metaphysics of Morals. Edited by Jonathan Bennett, Sept. 2008. Web. 13 Dec. 2015. http://www.earlymoderntexts.com/assets/pdfs/kant1785.pdf.

Koerth-Baker, Maggie. “How Robots Can Trick You Into Loving Them.” The New York Times, 21 Sept. 2013. Web. 13 Dec. 2015. http://www.nytimes.com/2013/09/22/magazine/how-robots-can-trick-you-into-loving-them.html?pagewanted=all&_r=0.

Lomas, Claire. “Watch Robot Dog ‘Spot’ Run, Walk…and Get Kicked.” YouTube. Web. 13 Dec. 2015. https://www.youtube.com/watch?v=aR5Z6AoMh6U.

Dongbu Robot. “Genibo Robot Dog.” Robotshop. Web. 9 April 2016. http://www.robotshop.com/en/dasa-robot-genibo-robot-dog.html.

“Zoomer, your new best friend.” Zoomerpup.com. Web. 13 Dec. 2015. http://www.zoomerpup.com/.


1If the reader is somewhat familiar with programming and does not accept this statement, allow me to explain my reasoning. Code does, of course, utilize a variety of commands and functions, and the statement I have made here may seem to suggest that all code is in the form of IF statements. While far more complicated functions are incorporated into the code that runs most of our modern technologies, at some level, every function is based on a binary decision. For example, consider a FOR loop, which runs far longer than a single IF statement and repeats its body until a certain condition is met. On every pass through the FOR loop, the program determines whether or not a certain threshold has been reached, and the basic options for action are either “Repeat the loop because the threshold has not been reached” or “Continue to the next line because the threshold has been reached.” Put far more simply, the binary distinction is between a “1” and a “0.”
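A short illustration of the point (the threshold value here is invented): on each pass, the loop reduces to the same binary check of whether a programmer-set boundary has been reached before either repeating or moving on.

    # A FOR loop over a counter: every pass asks whether the counter has reached
    # the programmer-set boundary (an illustrative value here).
    THRESHOLD = 3
    for i in range(THRESHOLD):   # "0": threshold not reached, so the loop body runs again
        print("Pass", i, ": threshold not yet reached; repeating the loop.")
    # "1": threshold reached, so execution continues on the next line
    print("Threshold reached; continuing past the loop.")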

2Kant’s system of imperatives is the basis for his theory of morality and underlies much of his writing on human action in general. He builds out his descriptions of various types of imperatives throughout his writings, but most notably in his Groundwork for the Metaphysics of Morals and Critique of Practical Reason.

3These words mean different things: the first refers to the ability to understand another’s experience because something similar has happened to the subject, while the latter refers to a sense of pity or understanding felt by the subject even though s/he has not personally experienced the observed hardship. This distinction will become significant later.

4The question of various tiers of “ends” or goals will not be discussed here, but robots at this point inarguably do not set the ultimate goals that their entire structures of programming function to fulfill. Similarly, the question of “What is morality?” is too long to address here, but contemporary robots do not possess morals in the sense of weighing right and wrong, even in the face of contradictory legislation/rules/commands.

Sam Red is a Master's candidate in Communication, Culture & Technology at Georgetown. His research focuses on technology's potential to bring the richness of all our senses to what is predominantly visual, or how to design more effectively with consideration of the multisensory nature of human experience. He may be reached at srr54@georgetown.edu.