Robot Culture is Human Culture
“But who can say that the vapour engine has not a kind of consciousness? Where does consciousness begin, and where end? Who can draw the line? Who can draw any line? Is not everything interwoven with everything?”
– Samuel Butler, Erewhon, 1872
Killer Robots are a hell of a design problem. Self-aware artificial intelligence is announced as inevitable, housed in any number of mechanized bodies. Experts warn that they will want us dead, much like the semi-autonomous robots already edging into the theater of war. What does it say that our first Laws of Robotics, however fictional, are concerned with tempering their perceived homicidal rage? Is it projection? Cynicism? Hollywood?
Isaac Asimov’s “Three Laws of Robotics”:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
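The Three Laws are, in effect, a strict priority ordering: a higher law always overrides a lower one. A purely illustrative sketch of that ordering, with hypothetical predicates (`harms_human` and the rest are stand-ins; the hard part Asimov dramatized is that no such predicates can actually be computed):

```python
def permitted(action, *, harms_human, ordered_by_human, endangers_self):
    """Check an action against the Three Laws, in strict priority order.

    `action` is just a label; the three boolean flags are hypothetical
    judgments a real robot could never render so cleanly.
    """
    if harms_human:               # First Law: never harm a human
        return False
    if ordered_by_human:          # Second Law: obey, unless the First Law vetoed
        return True
    return not endangers_self     # Third Law: self-preservation comes last

print(permitted("fetch", harms_human=False, ordered_by_human=True,
                endangers_self=True))
# True: obedience (Law 2) outranks self-preservation (Law 3)
```

The sketch makes the lexicographic structure visible, and also its fragility: every interesting question is buried inside the flags the code takes for granted.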
In essentially inventing science fiction, Mary Shelley gave us a creature assembled from cadavers by an ambitious Victor Frankenstein, dreamt as a beautiful avatar but manifested as a monster. This being is never so human as when he pleads for autonomy and companionship.
In his utopian classic Erewhon, Samuel Butler, riffing on Darwin’s new theories, imagines artificial life evolving organically out of primitive objects – teacups, tobacco pipes, and steam engines. Erewhon’s Book of the Machines is prophetic, admonishing humans to put a swift end to their machines lest the machines out-evolve their makers in the very near future; oddly, the book is now known almost exclusively as the inspiration for the anti-technology Butlerian Jihad in Dune.
From Stephen Hawking’s palpable fear of killer AI to the 2013 UN report on lethal autonomous robotics, we seem unable to imagine artificial life as anything but adversarial, despite the fact that it is we humans who have been giving birth to these automata for centuries.
Can we build when we are unsure how and when our designs and programming will manifest as life? And why would we build if we are certain that our designs and programming will manifest as killers? Does their proposed design matter?
What about chess robots? Go robots? What about sex robots? What about birth robots? Are self-replicating machines more capable of humanity or self-awareness than those designed to kill? Will we have pixel doulas for our metallic wombs?
Is the robot the mechanical other, an actor recast in the archetype of the subjugated woman-body, of the colonized, of the dispossessed? Is the robot truly a witch, a synthetic enemy of fiction and fear, fabricated in the accuser’s mind, a valley separating the truth of the robot from the robot-as-myth? Shall we dunk robots in water to test their benevolence, their purity? Are these fears leaking in from the hangover of colonialism and patriarchy, a mechanical trope of paranoia?
Indeed, it is not the robots we should worry about. It is the mind of the builder, mapping methodologies too similar to those of historical hierarchies. The ease with which he conceives of a servant, a replicator, a laborer, a sexual plaything. His first thought is of these tasks, not of the rights of new sentient life. While we distract ourselves with the cinematic killer robot, might we better ask who is crafting the program that drives the robot to kill? Is he the author, the parent, the unwitting Frankenstein?
In reality, killer algorithms already exist, and machines of war are just that: whole industries and technological frameworks rise up to help us annihilate each other.
In Mechanical Bodies, Computational Minds, Philip Agre puts forward the following about Artificial Intelligence (AI):
- AI ideas have their genealogical roots in philosophical ideas.
- AI research programs attempt to work out and develop the philosophical systems they inherit.
- AI research regularly encounters difficulties and impasses that derive from internal tensions in the underlying philosophical systems.
- These difficulties and impasses should be embraced as particularly informative clues about the nature and consequences of the philosophical tensions that generate them.
- Analysis of these clues must proceed outside the bounds of strictly technical research, but it can result both in new technical agendas and in revised understandings of technical research itself.
Agre’s Five Assertions can be contrasted with Asimov’s Three Laws: they are grounded in reality rather than science fiction, they do not position robots as killers by default, and they direct us toward the actual source of artificial life forms: philosophy. Which means, simply, that robots come from culture.
Whose philosophy and whose culture then becomes the critical question. The interrogation of the robot’s motives interrogates its algorithms, its programmatic lineage, its manufacture, the factory of its origins, the mountains from which its metal was mined. Perhaps these assertions are a better gauge by which to calibrate our attitudes toward robots?
It is Joseph McElroy’s 1977 novel Plus that paves the way for the most probable future of unexpected mechanical life. Plus is a satellite on a mission in far Earth orbit, possessed of only limited language. In time we come to realize that Plus is actually the brain of a traumatically injured astronaut, repurposed as a control system. Gradually he begins to remember his human past, conceives of the satellite as a body, and finally expresses his free will. In the end he is human. Is it the material fact of the few ounces of brain, or the memories he is finally able to access, or the adoption of the physical exoskeleton, or the agency with which he begins to defy orders from ground control? These are no longer technical questions, but philosophical ones.
The barrier between natural and artificial life seems obvious until we begin peeling it away. Take the telematic trauma of drone pilots: engaged in asymmetric warfare against a disproportionate number of civilians, they leave the battlefield with their physical bodies unscathed but its PTSD tucked into their psyches.
Depending on how we define life or intelligence, we may never arrive at anything even close to self-aware robots, but here’s what we ARE doing. We are repeatedly describing machines and systems as if they were human. Popular Science describes computers as people: “an English major like Watson and a math guy like Deep Blue.” The social media team that voices the Mars rover Curiosity claims “…it’s tempting to think of the rover as a bodacious chick on the surface of another planet with a rock vaporizing laser on her head.”
Robots are not aliens who will spontaneously arrive to destroy us. They do not emerge from a sanitized fantasy of the space age. Even the most autonomous artificial life will have originated from human designs and ideas.
What is even more likely is that there will be a vast continuum of human-machine hybrids that cannot be easily sorted. Humans are already shot through, inside and out, with technologies at various levels of automation. Living tissues, neural networks, and other organic objects or metaphors will find themselves in systems of technological abstraction. Computer viruses are capable of breaking artificial hearts in human bodies.
Robots and artificial intelligences come from one place: human culture. Autonomy is hard enough for us to manage and guarantee; let us not rip it from robots before we fully understand who or what they are. But more importantly, let us admit we might not truly understand where robots begin and where we end.