Identifying criminals using facial features alone


The idea that criminality is not contingent on external factors like need and opportunity, but that some people are intrinsically prone to crime because of their biology, has been around for a long time and has led to efforts to create all manner of metrics to identify those supposed markers. Sam Biddle writes about a troubling new study that claims that artificial intelligence (AI) software can tell whether you will be a criminal based on your facial features alone.

In a paper titled “Automated Inference on Criminality using Face Images,” two Shanghai Jiao Tong University researchers say they fed “facial images of 1,856 real persons” into computers and found “some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.” They conclude that “all four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic.”

The study contains virtually no discussion of why there is a “historical controversy” over this kind of analysis — namely, that it was debunked hundreds of years ago. Rather, the authors trot out another discredited argument to support their main claim: that computers can’t be racist, because they’re computers.

This misses the fact that no computer or software is created in a vacuum. Software is designed by people, and people who set out to infer criminality from facial features are not free from inherent bias.

The problems with such a system are immense. Quite apart from the fact that class- and race-based prejudices and other forms of bias have played roles in previous attempts to identify criminals by their physical features, there is always the issue of what you do with such information if you do find possible markers. Do you pre-emptively lock people up? Place them under constant surveillance? Arrest them if a crime is committed anywhere in their vicinity?

Then there is the issue that what is considered a crime is itself not free of bias. People may agree that murder, rape, and other forms of physical violence are crimes. But what about stealing? A person caught shoplifting is considered a criminal even if there are strong mitigating circumstances and the damage is slight. But what about people in white-collar jobs who commit all manner of acts that have devastating effects on large numbers of people? One need go no further than the major banks and other financial institutions behind the last financial crisis to find people who should be in jail but walk free. How about those who bribe or otherwise influence others to gain an unfair advantage? Would those be identified as crimes by this software?

We have a legal system that is heavily biased towards protecting the property and well-being of the wealthy, and this kind of tool will simply be used by them to further add to that bias.

Comments

  1. Anders Kehlet says

    Do you pre-emptively lock people up? Place them under constant surveillance? Arrest them if a crime is committed anywhere in their vicinity?

    You mean business as usual?
    How very fortunate that cops don’t need any sophisticated software to distinguish between skin tones.

  2. Marcus Ranum says

    I remember a lecture by my dad about a fellow…. (picks up the telephone)
    Ah, yes -- Joseph d’Hémery -- a bureaucrat in the 1750s who built a massive physiognomic database about which authors were most likely to write proscribed books, and so forth. His ideas had some traction and, if I recall, there was a chief of police who built a physiognomic database of criminals in Paris at the same time (i.e.: you could go and cross-index “pickpockets” and “warts on nose” and get a list of pickpockets with warty noses -- very useful, if warty noses cause pickpocketry). He was the spiritual ancestor of FBI director J. Edgar Hoover, who built a database that was much the same thing. I don’t recall his name, but I remember thinking that the fellow represented the beginning of the technological police state.
    There’s not much about him, unfortunately, and dad never put his lectures online because he did/does them all on an old Smith Corona.

    From the book below:

    He describes Voltaire as “Tall, dry, and the bearing of a satyr” as well as a “bad subject.”

    I suspect a certain amount of confirmation bias; Voltaire had already shown his true colors as a child.

    There’s a bit here:
    http://www.robertdarnton.org/authors
    Also (google books urls are awful)
    https://books.google.com/books?id=cmobV4fXRtgC&lpg=PA30&ots=6sqIhAs4-u&dq=Joseph%20d'H%C3%A9mery&pg=PA30#v=onepage&q=Joseph%20d'H%C3%A9mery&f=false

  3. says

    We have a legal system that is heavily biased towards protecting the property and well-being of the wealthy, and this kind of tool will simply be used by them to further add to that bias.

    Which is funny, because by any reasonable objective measure, it is the wealthy’s crimes that are the most damaging to society. Any system of this type that worked correctly would flag investment bankers and venture capitalists.

  4. Owlmirror says

    @Marcus Ranum:

    dad never put his lectures online because he did/does them all on an old Smith Corona.

    Scanners and OCR are real things. Scanners with sheet feeders, even.

    google books urls are awful

    Google books really only needs the “id=” and “pg=” parts of the URL in order to work, thus:

    https://books.google.com/books?id=cmobV4fXRtgC&pg=PA30

    links to the same page your link @#2 does.

    (And after reading the page in question — “Secret police report” is a genre? Hm.)

  5. Owlmirror says

    (Digressing further to my previous comment)

    The reference on the linked page to “Ash 1997” is The File: A Personal History, by Timothy Garton Ash.

    A snippet from text about the book:

    In this memoir, Garton Ash describes what it was like to rediscover his younger self through the eyes of the Stasi, and then to go on to confront those who actually informed against him to the secret police.

    I am also reminded of the film “The Lives of Others” — and I see that Garton Ash reviewed the film.

  6. Johnny Vector says

    According to the paper, they hit a 90% detection rate at a false positive rate of 10%. So yeah, exactly what do you do with that information?
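
    To make concrete why that’s nearly useless, here is a quick back-of-the-envelope calculation in Python. The 1% base rate is purely an assumption of mine for illustration, not a figure from the paper:

    ```python
    # Back-of-envelope Bayes: how believable is a flag from a classifier
    # with a 90% detection rate and a 10% false-positive rate?
    sensitivity = 0.90   # P(flagged | criminal), from the paper
    fpr = 0.10           # P(flagged | not criminal), from the paper
    base_rate = 0.01     # P(criminal) -- assumed for illustration only

    p_flagged = sensitivity * base_rate + fpr * (1 - base_rate)
    p_criminal_given_flag = sensitivity * base_rate / p_flagged

    print(f"P(criminal | flagged) = {p_criminal_given_flag:.1%}")  # ~8.3%
    ```

    At that (assumed) base rate, more than 90% of the people the system flags would be innocent.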

    There are three possible explanations for the results that I can think of offhand:

    1. They screwed up the analysis somehow.
    2. There is an actual effect. For instance, childhood lead ingestion is almost certainly a cause of criminality; perhaps it also affects facial growth.
    3. People who “look like criminals” (which their analysis finds means “are outside a smaller ‘normal’ range of looks”) are more likely to be charged and convicted, even given an equal underlying rate of criminality. We already know this is true if we define “look like criminals” as “black”.

    They only examined #1 and, finding no error, determined that their results are robust. Personally I suspect #3 is fairly likely. And if so, that’s a very interesting result, one that cries out for further study and mitigation. I find it curious that it apparently didn’t occur to them. (Or if it did, it was hidden in the parts I skimmed but elided from the conclusion.)

  7. Marcus Ranum says

    Johnny Vector@#6:
    According to the paper, they hit a 90% detection rate at a false positive rate of 10%. So yeah, exactly what do you do with that information?

    Oh, no, I feel a posting about the base rate fallacy coming on…

  8. Johnny Vector says

    Oh, no, I feel a posting about the base rate fallacy coming on…

    What’s that sound I hear? Sounds like a voice in the wind.

    Marcuuuuuussssss…. Wriiiite meeee…..

  9. Mano Singham says

    Go for it, Marcus!

    I wrote about false positives and the base rate fallacy some time ago but I think you would do a better job.

  10. Mark Dowd says

    Is it just a coincidence, I wonder, that all the “criminal” faces look thuggish?

    I tried skimming the paper (there’s a Cornell link in the Intercept article), but I don’t speak academic very well. There’s one specific part of the methodology I’m looking for, and I can’t tell whether they did it or not.

    I would expect any rigorous examination of this to take a completely new set of faces of the same type as the first, untouched by the machine-learning algorithm, run it through, and see how well the results correlate with those from the first set.

    Did they do that? One paragraph in the Validation section mentions using Chinese females and Caucasian people, but I can’t tell what they’re actually doing with those sets.
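
    Something like the following is what I have in mind -- a minimal sketch using scikit-learn, with stand-in random data since I obviously don’t have theirs; none of the variable names or numbers come from the paper:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    # Stand-in data, purely so the sketch runs: 1,856 "faces" with three
    # made-up measurements and random labels. Nothing here is from the paper.
    rng = np.random.default_rng(0)
    faces = rng.normal(size=(1856, 3))      # e.g. lip curvature, eye distance, nose-mouth angle
    labels = rng.integers(0, 2, size=1856)  # 1 = "criminal", 0 = not

    # Hold out a test set the model never sees during training.
    X_train, X_test, y_train, y_test = train_test_split(
        faces, labels, test_size=0.25, random_state=0)

    model = LogisticRegression().fit(X_train, y_train)

    # On random labels this should hover around 50%; a real effect would
    # have to beat chance on the held-out set, not just the training set.
    print(classification_report(y_test, model.predict(X_test)))
    ```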

    Is someone (maybe Mano?) able to translate the key parts of that paper from “academic” to “interested layperson”? I think there’s a lot of background I’m missing and jargon I’m not getting.

    Even before talking about the complications Johnny Vector brought up (#3 is definitely a biggy), I’m wondering how correct the methodology actually was.

  11. says

    The alleged “AI” won’t be determining who is or isn’t a criminal. The programmers will be doing that, basing the results upon their own prejudices.

    It reeks of phrenology, pseudo-scientific nonsense invented to rationalize and legalize prejudices. Odds are, those deemed “likely to become criminals” won’t come from wealthy, elite or white backgrounds, the same way that phrenology claimed non-whites were “lower” and “less intelligent”.
