I learned two awful things from this article on pseudo-scientific physiognomy: 1) Cesare Lombroso’s head is preserved in a jar in a museum in Italy, and 2) people are even now trying to identify criminals from photos. Not from their records, oh no, but from the implied fact that they look like criminals.
Lombroso, you may recall, had this terrible idea that you could identify bad people by their looks, by their facial features, bumps on their head, etc. These were familiar notions held by the Nazis, who actually published school books to instruct kids on how to recognize Jews ("it is often hard to recognize the Jew as a swindler and criminal […] How to tell a Jew: the Jewish nose is bent. It looks like the number six…"), and Lombroso’s ideas fed directly into the eugenics movement. People who don’t look exactly like us must be lesser, don’t you know.
Modern neo-Nazis are saying the same things now, like that wretched MRA/MGTOW/PUA/Whatever at Chateau Heartiste:
You CAN judge a book by its cover: ugly people are more crime-prone.
Shitlibs have a look. Shitlords have a look. And you can predict with better than 50/50 chance which 2016 presidential candidate a person supports based on nothing more than their photograph.
Thanks, Cesare, for your contributions to bigotry. It seems kind of appropriate that in death, your head was chopped off and dropped in a bucket of formaldehyde. It’s unseemly, I suppose, but I do hope other body parts suffered similar indignities.
I also learned, unsurprisingly, that some people are trying to make physiognomy seem more scientific by making computers do it. The article explains it at length, but I’ll summarize in brief: if you train a neural network to find patterns, it will find them whether they’re actually there or not. As we know from those surreal images produced by Deep Dream, if all a piece of software knows how to do is highlight dogs and eyeballs in an image, it will find dogs and eyeballs everywhere.
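To make that concrete, here’s a minimal sketch in Python (scikit-learn on synthetic numbers, nothing to do with the study’s actual data or model): give a classifier a small batch of samples with lots of features and completely random labels, and it will happily “discover” a pattern anyway, scoring near-perfectly on its own training data and no better than a coin flip on anything new.

```python
# Toy demo of finding patterns that aren't there: random features, random labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_test, n_features = 100, 1000, 5000   # few samples, many "measurements"

X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 2, size=n_train)       # labels are pure noise
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, 2, size=n_test)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("training accuracy:", clf.score(X_train, y_train))  # typically ~1.0: a "pattern"!
print("test accuracy:", clf.score(X_test, y_test))         # ~0.5: there never was one
```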
You will rightly point out that the real test is whether the software can pick out the signal in images it wasn’t trained on, correctly separating the faces that have it from the faces that don’t. Apparently, this software was trained on a small number of images, and the authors aren’t making the data available, so it’s hard to guess exactly what features it’s cueing on; the article speculates that it may be as trivial as whether the innocent faces were smiling and the criminal faces were scowling.
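If that speculation is right, the effect is easy to reproduce. Here’s another hypothetical sketch (invented features, not the paper’s pipeline): if every “criminal” in the training photos scowls and every “innocent” one smiles, a classifier will ace the training set while having learned nothing but an expression detector, and it falls apart the moment expression stops tracking the label.

```python
# Toy demo of a confound: the classifier only ever learns "smiling vs. scowling".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200
other_features = rng.normal(size=(n, 50))   # stand-in for all other facial measurements
smiling = rng.integers(0, 2, size=n)        # 1 = smiling, 0 = scowling
label = 1 - smiling                         # in the training data, "criminals" never smile

X_train = np.column_stack([smiling, other_features])
clf = LogisticRegression(max_iter=5000).fit(X_train, label)
print("accuracy on confounded training set:", clf.score(X_train, label))  # ~1.0

# Now expression no longer tracks the label, as it wouldn't in the real world:
smiling_new = rng.integers(0, 2, size=n)
label_new = rng.integers(0, 2, size=n)
X_new = np.column_stack([smiling_new, rng.normal(size=(n, 50))])
print("accuracy once the confound is gone:", clf.score(X_new, label_new))  # ~0.5
```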
So another thing I learned is that if the Nazis take over and start scanning all our faces for criminal tendencies, you’d better smile like a giddy idiot all the time.