A lot of people are enamored with the idea of artificial intelligence, imbued with the rosy hues of optimism, eternal life, and other amazing feats. What you don’t hear about so much are all the little problems that creep in, like the very real biases and bigotry of humans infecting devices that are made to learn. The term artificial intelligence has always struck me as inherently biased, underlining the assumption that organic intelligence is always superior. Why not machine intelligence, or some other actually neutral term? Anyroad, we aren’t far enough along that Terminator fears need be realized, but Wired has a good article up about how good humans are at providing devices with the very worst of our intelligence.
Algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed—causes everything from warped Google searches to barring qualified women from medical school. It doesn’t take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.
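That last point — that skewed results need no active prejudice, only distorted data nobody corrects for — can be seen in a toy sketch. This is an illustration with entirely made-up data, not anyone’s real system: a naive model trained on historically skewed loan decisions, where two groups have identical qualifications but one was denied more often, learns to reproduce the skew.

```python
# Illustrative sketch with hypothetical data: a naive model trained on
# historically biased decisions reproduces the bias, with no prejudiced
# code anywhere in sight.
from collections import defaultdict

# Each record: (credit_score_band, group, approved). Both groups have the
# same score distribution, but group "b" was historically denied more
# often -- the bias lives in the labels, not the applicants.
history = [
    ("high", "a", 1), ("high", "a", 1), ("low", "a", 0), ("high", "a", 1),
    ("high", "b", 0), ("high", "b", 1), ("low", "b", 0), ("high", "b", 0),
]

def train(records):
    """Learn the observed approval rate for each (score, group) pair."""
    totals = defaultdict(lambda: [0, 0])  # [approved_count, seen_count]
    for score, group, approved in records:
        totals[(score, group)][0] += approved
        totals[(score, group)][1] += 1
    return {key: ones / seen for key, (ones, seen) in totals.items()}

def predict(model, score, group):
    """Approve when the learned approval rate exceeds one half."""
    return model.get((score, group), 0.0) > 0.5

model = train(history)
# Identical credit score, different group, different outcome:
print(predict(model, "high", "a"))  # True
print(predict(model, "high", "b"))  # False
```

No line of this code mentions race, gender, or intent; the group variable simply becomes a proxy for the historical pattern. That is the whole mechanism of algorithmic bias in miniature.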
It took one little Twitter bot to make the point to Microsoft last year. Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat “hellllooooo world!!” (the “o” in “world” was a planet earth emoji). But within 12 hours, Tay morphed into a foul-mouthed racist Holocaust denier that said feminists “should all die and burn in hell.” Tay, which was quickly removed from Twitter, was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. Tay’s embrace of humanity’s worst attributes is an example of algorithmic bias in action.
Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquee products. In 2015, Google Photos tagged several African-American users as gorillas, and the images lit up social media. Yonatan Zunger, Google’s chief social architect and head of infrastructure for Google Assistant, quickly took to Twitter to announce that Google was scrambling a team to address the issue. And then there was the embarrassing revelation that Siri didn’t know how to respond to a host of health questions that affect women, including, “I was raped. What do I do?” Apple took action to handle that as well after a nationwide petition from the American Civil Liberties Union and a host of cringe-worthy media attention.
One of the trickiest parts about algorithmic bias is that engineers don’t have to be actively racist or sexist to create it. In an era when we increasingly trust technology to be more neutral than we are, this is a dangerous situation. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, “We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning.”
I don’t understand why anyone would assume tech to be more neutral than we are; after all, this is not a scenario where machines and devices are having a board meeting and figuring out how to maintain neutrality and purge biases. All the code comes from us naked apes, who truly suck at neutrality en masse. Even when we think we are neutral about this or that, implicit bias tests often reveal deep biases we weren’t altogether aware of, and show how they influence our thinking.
As the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there’s an even greater need to root out algorithmic bias. There are four things that tech companies can do to keep their developers from unintentionally writing biased code or using biased data.
I imagine the suggestions will give all the bros serious indigestion, but they are suggestions that need wide implementation, given the human penchant for racing ahead in technology while lagging woefully behind in social evolution. Wired has the full story.