Artificial Intelligence and the Virtual/IRL Divide
What does it mean for the future of AI when technology mirrors the sensibilities of a dominant group?
“I’ve had a consistent date of 2029 for that vision [of AI being conscious]. And that doesn’t just mean logical intelligence. It means emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That’s actually the most complex thing we do. That is what separates computers and humans today. I believe that gap will close by 2029.”
—Ray Kurzweil, in an interview on joining Google’s AI research team
What happens when bots are created with human likeness in mind?
By now, most of us are familiar with Tay, Microsoft’s experimental natural-language bot that debuted on Twitter, Kik, and GroupMe last month. Microsoft hoped Tay would gain “conversational understanding”: the more you chatted with it, the smarter it would get, learning to engage people through “casual and playful conversation”. In other words, the bot equivalent of a human child. But because of how Tay was programmed, and because Microsoft did not implement key safeguards, it was easy for people to feed it offensive content. Tay was taken offline less than 24 hours after launch, its output having morphed from “Humans are super cool!” into dozens of misogynistic, racist, and fascist tweets.
In an official apology, Microsoft implied it had not considered that some people would try to exploit Tay. If Tay were a child and Microsoft the bot’s parent, it would be as if Microsoft had let Tay go out alone, trusting that everyone it encountered would help it become the best person possible, with no malicious intent whatsoever. In assuming that interaction with Tay would remain civil and friendly, Microsoft was also implicitly assuming that violence doesn’t manifest in online space.
And therein lies the main design flaw.
Despite numerous leaps in artificial intelligence and bot-making over the years, from MIT’s ELIZA, SmarterChild, Cleverbot, and Apple’s Siri to other Tay equivalents such as Olivia Taters, Amazon’s Alexa, and Cortana (another Microsoft creation), what counts as “virtual” and what counts as “real life” are still stratified. When online and offline are seen as distinct, disparate spaces instead of a cohesive whole—a concept social theorist Nathan Jurgenson terms digital dualism—it is easy to do, say, or program something online without applying the standards one would IRL.
A lot of this boils down to the way many users, and even programmers, see digitality: the online world is perceived as a virtual entity that exists ‘elsewhere,’ with few real-world implications. Even as we enter the “post-Internet” era, online spaces are still generally regarded as ephemeral, their content synonymous with the brief ripples a stone makes when thrown into a body of water. Content can be deleted, anonymized, or attributed to a third party; the rippling effect may be huge, but it can also disappear just as quickly, as the Tay debacle showed.
When bot-makers ignore obvious design choices—such as how a particular AI should respond to conversations, or whether it should have a word blacklist—the fallacies of digital dualism are reinforced more deeply. We see AI as belonging to an imaginary realm that carries no consequences, where violence either doesn’t exist or isn’t ‘real’. We indulge the fantasy that artificial intelligence, once released online, is entirely beyond human control. A poorly designed bot becomes comparable to the pseudonymous commenter or the “hacked” social media post, too often dismissed as trivial and inevitable: its behavior fixed and unchanging, while simultaneously existing in a temporality that is both forward-moving and nullifiable.
This problem is itself inseparable from the tech field’s lack of diversity. When you have a bunch of mostly white, mostly male tech creators dominating the industry, it is almost a given that the tech they design will echo their sensibilities—as well as reflect their privileges, and the ethical and inter-relational ignorance that comes with them. Tay may have been built with millennials in mind, but the fact that Microsoft did not predict people would try to game it into projecting various prejudices speaks volumes about how its creators navigate the world in the first place. It feels like a negligent oversight that Tay’s creators did not implement safeguards against this scenario: Xiaoice, Tay’s Chinese equivalent, had a team behind it that installed measures to screen out objectionable material, taking care not to breach social etiquette considered taboo in China. And according to bot-maker Parker Higgins in an interview with Sarah Jeong on Vice, “It’s easy to make it so that robots never say slurs. You just add one more line.”
Robots don’t build themselves; a conscious someone has to build them. This brings us back to the child analogy: if you don’t want your toddler picking up cuss words, you make sure that you and others around them don’t say those words, so the child never learns them. If you want your bot to converse without picking up undesirable language, you program it with blacklists, as in the sketch below. Of course, context often complicates this, and algorithms can string together offensive content by pure coincidence. But the maker can exercise prudence by continuing to watch and tweak the bot’s behavior, much as you would keep teaching a child that certain attitudes are wrong until they learn better.
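To make Higgins’s “one more line” concrete, here is a minimal sketch of a blacklist check in Python. Everything in it—the BLACKLIST set, the generate_reply stand-in, the fallback message—is a hypothetical illustration, not any vendor’s actual implementation; a real deployment would also need text normalization, multi-word matching, and ongoing human curation of the list itself.

```python
# A minimal, hypothetical sketch of screening a bot's candidate reply
# against a human-maintained blacklist before it is ever posted.

import string

BLACKLIST = {"slur1", "slur2"}  # placeholder terms; a real list needs human curation

def generate_reply(prompt: str) -> str:
    """Stand-in for the bot's underlying language model."""
    return "Humans are super cool!"

def contains_blacklisted(text: str) -> bool:
    """True if any token of the candidate reply is on the blacklist."""
    stripped = text.lower().translate(str.maketrans("", "", string.punctuation))
    return any(token in BLACKLIST for token in stripped.split())

def reply(prompt: str) -> str:
    """Generate a reply, but never emit a blacklisted term."""
    candidate = generate_reply(prompt)
    if contains_blacklisted(candidate):
        return "Let's talk about something else."  # neutral fallback
    return candidate

print(reply("hello"))  # -> "Humans are super cool!"
```

The point is not that one filter solves the problem; it is that the cost of a basic safeguard like this is trivially low compared to the cost of shipping without one.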
If there are socially expected codes of conduct for humans in day-to-day life, then why shouldn’t AI be held to the same? This seems especially important when autonomous software is projected to handle a large share of customer service interactions worldwide as soon as 2020. What does it mean for the future of artificial intelligence when current technology mirrors the sensibilities of a dominant group?
We see over and over that as prejudice mounts in the everyday lives of marginalized folks across the globe, people create their own safe spaces online to reach out and seek solidarity. These realms often act in tandem with real-life organizing and other forms of comradeship to bridge the gaps between people in isolation. Whether it’s Tay’s overt hate speech, predictive policing that results in racial profiling, or smartphones failing to recognize mental health crises or abuse, AI that upholds the status quo further threatens the little safety and comfort the networked world affords marginalized individuals. And standards, even basic ones, are still far from being implemented. Tay’s downward spiral goes hand-in-hand with Twitter’s failure to curb harassment. When violence doesn’t affect members of the industry directly, people often don’t try hard enough to fix it.
Are there ways, then, to prevent or diminish these ethical conundrums surrounding artificial intelligence? Real-time human monitoring seems to be a solution for now, or at least until more sophisticated algorithms are built. Much like comment moderation, human-aided AI can act as a filtering module, a conduit between the online and offline worlds, something like a human-bot hive mind; a rough sketch follows below. Facebook’s M works in this fashion: while it uses AI to complete its tasks, it is also assisted by people, which underscores the difficulties AI can’t yet overcome on its own. Most importantly, a more diverse tech industry would ensure the needs of underrepresented and marginalized people were considered in AI from the start.
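As an illustration of that human-in-the-loop pattern, here is a rough Python sketch in which the bot serves only the answers it is confident about and routes everything else to a human review queue. The names, the confidence threshold, and the queue are all hypothetical; this is one plausible shape for such a system, not Facebook M’s actual architecture.

```python
# A hypothetical sketch of human-aided AI: the bot answers what it is
# confident about and escalates uncertain items to human moderators.

from dataclasses import dataclass
from queue import Queue

@dataclass
class BotResponse:
    text: str
    confidence: float  # the model's self-reported confidence, 0.0 to 1.0

review_queue: Queue = Queue()  # prompts awaiting a human moderator

def answer(prompt: str, response: BotResponse, threshold: float = 0.9) -> str:
    """Serve high-confidence answers; escalate the rest to humans."""
    if response.confidence >= threshold:
        return response.text
    review_queue.put(prompt)  # a person reviews and replies later
    return "Let me check with a person and get back to you."

# Example: a low-confidence response gets escalated rather than served.
print(answer("book me a flight", BotResponse("Sure, done!", confidence=0.4)))
```

The design choice worth noticing is the fallback: a bot that can say “let me ask a human” fails safely, where Tay failed loudly.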
When technology persists in marketing the illusion of a “perfect” (read: pleasant and favorable) experience, while glossing over or outright rejecting the myriad complexities of the “real world,” it becomes a problem. It reinforces what dominant groups of society experience, with little regard for the needs of over half the global population. From Google’s image-recognition software labeling Black people as “gorillas” to Coca-Cola’s #MakeItHappy Twitter bot campaign (which was also gamed into producing fascist content), the built world still has huge strides to take. If cleverness continues to be prioritized over helpfulness, the perceived distinctions between artificial intelligence and the physical world—and by proxy, the online and the offline—will only deepen.