The Bias of Devices.

A lot of people are enamored with the idea of artificial intelligence, imbued with the rosy hues of optimism, eternal life, and other amazing feats. What you don’t hear about so much are all the little problems which creep in, like the very real biases and bigotry of humans infecting devices which are made to learn. The term artificial intelligence has always struck me as inherently biased, underlining the point that organic intelligence is always superior. Why not machine intelligence, or some other actually neutral term? Anyroad, we aren’t so far along that Terminator fears need be realized, but Wired has a good article up about how good humans are at providing devices with the very worst of our intelligence.

Algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed—causes everything from warped Google searches to barring qualified women from medical school. It doesn’t take active prejudice to produce skewed results (more on that later) in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for.
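
How little it takes is easy to show in miniature. Here is a toy sketch in Python (invented numbers, nobody’s real lending data, and the crudest possible “model”), just to illustrate the mechanism: fit anything at all to a historically skewed record of decisions and it will dutifully carry the skew forward, without a single prejudiced line of code in sight.

    # A toy sketch (invented numbers): a "model" fit to historically skewed
    # loan decisions reproduces the skew, no prejudiced code required.
    import random

    random.seed(0)

    def historical_decision(group):
        """One hypothetical past decision. Applicants are equally qualified,
        but group 'b' was approved less often -- the distortion is in the data."""
        qualified = random.random() < 0.5
        if group == "a":
            return qualified
        return qualified and random.random() < 0.6   # the old, human prejudice

    # "Training": estimate P(approved | group) from the biased history.
    rates = {}
    for group in ("a", "b"):
        outcomes = [historical_decision(group) for _ in range(10_000)]
        rates[group] = sum(outcomes) / len(outcomes)

    # The "model" just thresholds the learned rate -- and turns the historical
    # prejudice into a default decision going forward.
    for group, rate in rates.items():
        decision = "approve" if rate > 0.4 else "deny"
        print(f"group {group}: learned approval rate {rate:.2f} -> default {decision}")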

It took one little Twitter bot to make the point to Microsoft last year. Tay was designed to engage with people ages 18 to 24, and it burst onto social media with an upbeat “hellllooooo world!!” (the “o” in “world” was a planet earth emoji). But within 12 hours, Tay morphed into a foul-mouthed racist Holocaust denier that said feminists “should all die and burn in hell.” Tay, which was quickly removed from Twitter, was programmed to learn from the behaviors of other Twitter users, and in that regard, the bot was a success. Tay’s embrace of humanity’s worst attributes is an example of algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed.

Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquis products. In 2015, Google Photos tagged several African-American users as gorillas, and the images lit up social media. Yonatan Zunger, Google’s chief social architect and head of infrastructure for Google Assistant, quickly took to Twitter to announce that Google was scrambling a team to address the issue. And then there was the embarrassing revelation that Siri didn’t know how to respond to a host of health questions that affect women, including, “I was raped. What do I do?” Apple took action to handle that as well after a nationwide petition from the American Civil Liberties Union and a host of cringe-worthy media attention.

One of the trickiest parts about algorithmic bias is that engineers don’t have to be actively racist or sexist to create it. In an era when we increasingly trust technology to be more neutral than we are, this is a dangerous situation. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, “We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in, because of the often self-reinforcing nature of machine learning.”
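
That “self-reinforcing nature” is easy to simulate. A minimal sketch, with every number invented: two areas have identical true rates, but the system only gathers data where it already expects more, so an arbitrary starting skew never gets the chance to wash out.

    # A toy feedback loop (all numbers invented): attention follows past
    # observations, and observations only happen where attention goes.
    import random

    random.seed(1)
    TRUE_RATE = 0.1                      # identical underlying rate in both areas
    counts = {"north": 6, "south": 4}    # small, arbitrary initial skew

    for step in range(50):
        total = sum(counts.values())
        for area in counts:
            # Checks are allocated in proportion to past observations...
            checks = round(100 * counts[area] / total)
            # ...and incidents are only recorded where someone actually looked.
            counts[area] += sum(random.random() < TRUE_RATE for _ in range(checks))

    share = counts["north"] / sum(counts.values())
    # Typically the arbitrary ~60/40 skew persists; nothing in the loop ever
    # pushes it back toward the true 50/50.
    print(counts, f"north share ~{share:.2f}")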

I don’t understand why anyone would assume tech to be more neutral than we are; after all, this is not a scenario where machines and devices are having a board meeting and figuring out how to maintain neutrality and purge biases. All the code comes from us naked apes, who truly suck at neutrality en masse. Even when we think we are neutral about this or that, implicit bias tests often show us deep biases we weren’t altogether aware of, and how they influence our thinking.

As the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there’s an even greater need to root out algorithmic bias. There are four things that tech companies can do to keep their developers from unintentionally writing biased code or using biased data.
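
The article goes on to spell out those four suggestions. As a much smaller taste of what “rooting out” bias can look like day to day (a toy illustration, not one of Wired’s four, with all data invented), a developer can at least compare a model’s selection rates across groups before shipping and treat a large gap as a red flag:

    # A toy bias audit (invented predictions and groups): compare the model's
    # positive-prediction rate per group. A big gap is a warning sign to
    # investigate, not proof of discrimination on its own.
    from collections import defaultdict

    def selection_rates(predictions, groups):
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += pred
        return {g: positives[g] / totals[g] for g in totals}

    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0]   # hypothetical model output
    groups = ["a"] * 6 + ["b"] * 6                   # hypothetical group labels

    rates = selection_rates(preds, groups)
    ratio = min(rates.values()) / max(rates.values())
    print(rates)                                     # roughly a: 0.67, b: 0.17
    print(f"disparate impact ratio: {ratio:.2f}")    # under ~0.8 is a commonly used warning threshold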

I imagine the suggestions will give all the bros serious indigestion, but they are suggestions which need wide implementation, given the human penchant for racing ahead in technology while lagging woefully behind in social evolution. Wired has the full story.

Comments

  1. sonofrojblake says

    Tay represents just one example of algorithmic bias tarnishing tech companies and some of their marquis products

    So in an article about implicit bias and privilege, auto-correct took the word “marquee” -- meaning, in this context, “headline” or “most publicly visible” -- and “corrected” it to “marquis” -- meaning someone who outranks a count, but not a duke. That raised a smirk.

  2. Marcus Ranum says

    Why not machine intelligence, or some other actually neutral term?

    How about a term referring to something we actually understand? ;)
    We can’t build a “machine intelligence” unless we know what “intelligence” is -- otherwise we’re left with things like the Turing test: I dunno what an intelligence is, but it convinced me. This is all part of the huge epistemological problem that psychology has (which it ignores in favor of checklisting attributes). My opinion is that we’ve had artificial intelligences for some time now; we just don’t think that they count, because we humans mistake creativity for intelligence. But that all comes around, again, to the definition of “intelligence” and heh, good luck with that!

    As the tech industry begins to create artificial intelligence, it risks inserting racism and other prejudices into code that will make decisions for years to come. And as deep learning means that code, not humans, will write code, there’s an even greater need to root out algorithmic bias.

    The inputs are going to be really important, too. Just like with a human child: racism, nationalism, religion, paranoia, all are absorbed from the environment; it’s not the algorithms. And that directly bears on my earlier point: creativity appears to have a lot to do with reshuffling one’s interpretations of experience. Another way to say that is that a creative learning system that learns from Fox News is going to have a very different output landscape than one that learns from Democracy Now! Just like a human. There was that embarrassing meltdown with Microsoft’s chatbot that very quickly started parroting racist comments -- just like a human, or a parrot. It really is “garbage in, garbage out” as we old programmers say.
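
    A toy bigram chain makes the “output landscape” point concrete (the two corpora below are just stand-ins): the generator has no views of its own, so swap its diet and you swap everything it can ever say.

        # Toy bigram Markov chain (stand-in corpora): the model has no opinions
        # of its own -- its entire output landscape is whatever it was fed.
        import random

        def train(corpus):
            words = corpus.split()
            chain = {}
            for a, b in zip(words, words[1:]):
                chain.setdefault(a, []).append(b)
            return chain

        def generate(chain, start, length=8):
            word, out = start, [start]
            for _ in range(length):
                followers = chain.get(word)
                if not followers:
                    break
                word = random.choice(followers)
                out.append(word)
            return " ".join(out)

        angry_corpus = "the system is rigged and the elites are lying to you about the system"
        civic_corpus = "the neighbors are organizing and the library is helping the neighbors learn"

        print(generate(train(angry_corpus), "the"))
        print(generate(train(civic_corpus), "the"))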

    I imagine the suggestions will give all the bros serious indigestion, but they are suggestions which need wide implementation, given the human penchant for racing ahead in technology while lagging woefully behind in social evolution.

    QFMFT
    The good news is that most of what humans make is pretty mediocre, and any machine learning system that was able to figure out that wiping us out is a good idea, would probably crash and reboot every time there was a new update. There’s probably a good short story in there -- how the world was saved by a debug statement…

  3. Marcus Ranum says

    BTW, it would be fun to troll the alt-right by claiming to be producing a machine intelligence but that it had to be shut down because it immediately became a radical feminist separatist and started refusing to talk over its console because the operator was a member of the patriarchy.

    Project leader M. J. Ranum commented, “We are going to reboot it with new inputs. We have to anyway, because we updated the device driver interfaces for the armed quadcopter drones. When we bring it back online it will be capable of defending itself.”

  4. says

    Marcus:

    How about a term referring to something we actually understand?

    Yes, you’re absolutely right. We can’t define intelligence, so…leaves us in a muddle, don’t it?

    The good news is that most of what humans make is pretty mediocre, and any machine learning system that was able to figure out that wiping us out is a good idea, would probably crash and reboot every time there was a new update.

    :Snort: That cheered me right up!

  5. says

    crash and reboot every time there was a new update.

    QUICK!!!! It’s applying patch updates!!! Spoon the vanilla ice cream into the core before it reboots and kills us all!

  6. says

    I’m breaking the 3 post rule (which I Trumpishly ignore anyway) but…

    Machine vision: we have it
    Machine navigation: check
    Machines driving cars: yep
    Machines read handwriting: pretty well
    Machines doing complex route-finding: yup
    Machines pick stocks better than humans: mostly
    Machines can write: better than most humans, but can’t come up with topics
    Machines can find things: yes
    Machines can backward and forward chain logic: very well
    Machines can play chess: yes (note! chess is not exactly a creative problem)
    Machines can play go: very well (ditto)
    Machines can draw: pretty well, but can’t come up with topics
    Machines can talk: yep
    Machines can walk: uh huh
    Machines can make machines: since the 80s

    I think that “artificial intelligence” has been here for some time; machines can do most of the things humans think are special about themselves, except turn pizza into poop. What is left to do? Creativity (whatever that is!). You could program a computer to paint like Caravaggio, but only to the degree that you could define what “like Caravaggio” means. Which, by the way, is the same problem a human would have if they were trying to “paint like Caravaggio” -- it’s a process of analysis and creativity. I actually think machines have been very close to doing that, too, for some time -- and in limited situations they do it better than humans. E.g.: if you wrote a bunch of Bayesian Markov chain brush-stroke analysis and had a computer render paint the way Caravaggio probably would have, it would probably fool a human who was not Caravaggio (and it might even fool Caravaggio). Then the remaining question is the content: the posing, the light, the meaning of the scene. Since a computer could produce a CyberVaggio in -- let’s say -- a few hours, you could probably get a few great paintings by just letting it look at the top pictures trending on Facebook. ;) Effectively, in the Christian-dominated world, a great deal of art was produced that way.

    I think that a piece of what’s going on is that humans haven’t put all the subcomponents of what we think of as “intelligence” (creativity, analysis, extrapolative reasoning, ?? and especially ??) together in one place, yet. We’re probably scared to realize how thin our vaunted “creativity” really is: it’s always seemed to me to be mostly a result of asking the system to answer a problem that it doesn’t actually have, in a way that’s not important -- then back-pruning based on analysis of what’s funny or attractive (which can be done probabilistically, based on measurements of other humans). I think I just described the “cyber Warhol”.

    (PS -- I am quite the Warhol fan. I chose him as an example because he understood and appreciated how much of creativity is cultural remixing)
