Good Turned Evil: When AI Goes Awry
“I’m sorry, Dave. I’m afraid I can’t do that.”
– HAL 9000
As ghoulishly clad alter egos come out of the closet and classic horror flicks return to our screens, it feels timely to reflect on some of the stories of AI gone wrong, the fateful outcomes that have ensued and how they could be avoided, as well as to share some perspective on the idea of AI “replacing us”.
Iconic movies such as Kubrick’s “2001: A Space Odyssey” – where antagonist HAL 9000 turns evil upon discovering plans to shut him down – are still representative of much of the negative hype that surrounds AI. Countless other modern-day narratives, depicting villainous robots taking over to the demise of the human race, continue to emerge across film and literature, feeding a growing appetite for dystopian fantasy.
Although sentient machines aren’t about to conquer the world with minds of their own, when it comes to AI’s role in ethics, fairness and social inclusivity, there have definitely been some real-life cases that are worthy of concern. Let’s look at a few examples where, despite good intentions, AI has gone awry.
“Is anybody there?”
In a time when businesses come under more scrutiny than ever, the last thing you need is for your AI models to display bigotry or antisocial behavior. This is exactly what happened when a computer was deemed racist after its facial recognition software failed to detect the presence of a black person. Similarly, a social media chatbot built using NLP turned sour when it ended up regurgitating highly inappropriate and offensive language. Of course, the brands in question acted fast to take down these systems, but the fact that these dire problems occurred at all was clearly a cause for concern. So what happened?
In the case of the chatbot, the AI model in effect worked correctly by replicating human language patterns. Unfortunately, it learned from an undesirable source of real-world data, being exposed to deliberately negative commentary generated by trolls that in no way represented the company’s views and attitudes. As for the facial recognition software, the training data clearly didn’t cover a broad enough range of people for the model to work effectively for everyone. Having a large quantity of data isn’t enough; just as important is a wide variety of data sources to ensure fair and inclusive representation.
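To make that last point concrete, one simple precaution is to audit a dataset’s demographic coverage before training anything on it. The sketch below is a minimal, hypothetical illustration in Python – the column names and the 5% floor are assumptions for the example, not details of any of the systems mentioned above.

```python
# Minimal sketch: audit a (hypothetical) image-metadata table for demographic
# coverage before training. Column names and the 5% threshold are illustrative.
import pandas as pd

def audit_representation(metadata: pd.DataFrame, group_cols, min_share=0.05):
    """Flag any group that makes up less than `min_share` of the dataset."""
    warnings = []
    for col in group_cols:
        shares = metadata[col].value_counts(normalize=True)
        for group, share in shares.items():
            if share < min_share:
                warnings.append(f"{col}={group} is only {share:.1%} of the dataset")
    return warnings

# Toy usage: a heavily skewed dataset triggers a warning.
df = pd.DataFrame({
    "skin_tone": ["light"] * 96 + ["dark"] * 4,
    "gender": ["male"] * 60 + ["female"] * 40,
})
for w in audit_representation(df, ["skin_tone", "gender"]):
    print("WARNING:", w)
```

A check this crude obviously won’t guarantee fairness, but it makes gaps in representation visible before they become gaps in the model’s behavior.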
“Guilty as charged”
Examples of AI-enhanced racism become even more sinister in the penal and judicial systems, where they can have truly nightmarish outcomes for individuals. Predictive policing methods have led to harsher and more frequent sentencing of people in poorer black communities, raising concerns that, rather than improving safety, they are actually widening social gaps and further perpetuating existing inequalities. Things only get worse when sentencing tools used to measure the likelihood of recidivism are found to label black people as more likely reoffenders than whites. The problem here is that the algorithms driving these tools are built on historical data, patterns and statistics – and when that history is checkered with blatantly biased attitudes towards specific racial and social groups, the tools simply end up propagating those preconceived ideas and judgments.
Unfortunately, without a carefully managed human-in-the-loop process representing people of diverse backgrounds and views – and further validation steps to evaluate judgments – algorithms of this kind can simply lead to a horror re-run of racist attitudes and incorrect sentencing.
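One example of such a validation step is to compare error rates across demographic groups – for instance, how often people who did not reoffend were nonetheless flagged as high risk. The snippet below is a generic, hypothetical fairness check written in Python; the field names and toy data are assumptions, and this is not the methodology of any particular sentencing tool.

```python
# Minimal sketch: compare false positive rates (flagged "high risk" but did
# not reoffend) across groups. Field names and data are hypothetical.
from collections import defaultdict

def false_positive_rates(records):
    """records: dicts with 'group', 'predicted_high_risk', 'reoffended'."""
    fp = defaultdict(int)   # non-reoffenders who were flagged high risk
    neg = defaultdict(int)  # all non-reoffenders
    for r in records:
        if not r["reoffended"]:
            neg[r["group"]] += 1
            if r["predicted_high_risk"]:
                fp[r["group"]] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Toy usage: a large gap between groups is a red flag for human review.
sample = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
]
print(false_positive_rates(sample))  # e.g. {'A': 0.5, 'B': 0.0}
```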
“We regret to inform you…”
In a similar way to prison sentencing tools, hiring automation technology that rates candidates based on historical patterns can also be found guilty of perpetuating bias. For example, such tools may recommend men over women after observing the scarcity of female candidates among previously successful job applications and learning from that trend. Once again, without conscious human supervision and fine-tuning, the algorithms behind these tools can lead to unfair hiring decisions, with qualified individuals having a lower chance of being hired simply because of gender, age or other social factors.
“I, robot”
More worrying for some though is the notion that people are gradually being replaced by robots in the workplace. The existence of sites such as willrobotstakemyjob.com is testament to this. There is of course some truth to machines and computers reducing the number of jobs previously carried out by humans, but this isn’t due exclusively to the rise of AI. Since the industrial revolution, technology has been transforming the way companies manufacture, manage and maintain production. Moreover, it’s predicted that AI will create 2.3 million jobs in the education, healthcare and public sectors. The world is changing and while the nature of AI is of course to “stand in” for humans on some level, it’s also opening many doors of opportunity.
As machine learning becomes more instrumental in automated decision making, it’s essential to ensure models are properly trained with high-quality, unbiased data, so they don’t turn into something of a nightmare for individuals and socio-ethnic groups, as well as for the businesses behind them.
To end on a lighter note, some people’s nightmares have quite literally been the “training data” behind AI tools. MIT’s Media Lab created Shelley, the AI who wrote horror stories in collaboration with human tweeters. The same team came up with the Nightmare Machine, designed to generate gruesome images – you can still help it learn scariness by voting on images, but click at your own peril. Happy Halloween!