The idea that criminality is not contingent on external factors such as need and opportunity, but that some people are intrinsically prone to crime because of their biology, has been around for a long time, and it has led to efforts to create all manner of metrics for identifying the supposed markers of a born criminal. Sam Biddle writes about a troubling new study claiming that artificial intelligence (AI) software can tell whether you will be a criminal based on your facial features alone.
In a paper titled “Automated Inference on Criminality using Face Images,” two Shanghai Jiao Tong University researchers say they fed “facial images of 1,856 real persons” into computers and found “some discriminating structural features for predicting criminality, such as lip curvature, eye inner corner distance, and the so-called nose-mouth angle.” They conclude that “all four classifiers perform consistently well and produce evidence for the validity of automated face-induced inference on criminality, despite the historical controversy surrounding the topic.”
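To make concrete what the paper is describing, here is a minimal sketch of that kind of pipeline in Python with scikit-learn. Everything in it is an assumption for illustration: the paper does not publish this code, the three stand-in features are random numbers, and the classifiers are generic off-the-shelf choices rather than necessarily the paper's four.

```python
# Sketch of the kind of pipeline the paper describes: fit several standard
# classifiers on geometric facial features and cross-validate each one.
# The features here are random stand-ins for measurements such as lip
# curvature, eye inner corner distance, and nose-mouth angle.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

X = rng.normal(size=(1856, 3))     # 1,856 people, 3 geometric features
y = rng.integers(0, 2, size=1856)  # 0/1 "criminal" label, as assigned upstream

classifiers = {
    "logistic regression": LogisticRegression(),
    "k-nearest neighbors": KNeighborsClassifier(),
    "support vector machine": SVC(),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```

Note that this pipeline dutifully reports an accuracy for every classifier even when fed pure noise, which is worth keeping in mind when cross-validated accuracy is offered as "evidence for the validity" of an inference.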
The study contains virtually no discussion of why there is a “historical controversy” over this kind of analysis, namely that it was debunked hundreds of years ago. Rather, the authors trot out another discredited argument to support their main claim: that computers can’t be racist, because they’re computers.
This misses the fact that no computer or software is created in a vacuum. Software is designed by people, and people who set out to infer criminality from facial features are not free from inherent bias.
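That point is easy to demonstrate. The following toy experiment (a hypothetical sketch in Python with scikit-learn; the groups, rates, and features are all invented for illustration) trains a perfectly "neutral" classifier on labels produced by a biased process, and the classifier faithfully learns the bias.

```python
# Synthetic demonstration of bias flowing through a "neutral" classifier.
# One feature encodes group membership and has, by construction, no causal
# link to behaviour; the other features are pure noise. Only the *labels*
# are biased, as when training data comes from a biased justice system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

group = rng.integers(0, 2, size=n)   # irrelevant group-membership proxy
noise = rng.normal(size=(n, 3))      # features carrying no signal at all
X = np.column_stack([group, noise])

# Biased labeling: group 1 is flagged "criminal" at 30%, group 0 at 10%,
# even though the underlying behaviour is identical by construction.
y = rng.random(n) < np.where(group == 1, 0.30, 0.10)

clf = LogisticRegression().fit(X, y)
print("learned weights:", clf.coef_.round(2))
# The dominant weight lands on the group feature: the model has faithfully
# learned the bias in the labels, not anything about criminality.
```

Swap "group" for any correlate of race or class in arrest records and you have the mundane mechanism by which an "objective" algorithm launders a biased input.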
The problems with such a system are immense. Leaving aside the fact that class- and race-based prejudices and other forms of bias have shaped previous attempts to identify criminals by their physical features, there is the question of what to do with such information if you do find plausible markers. Do you pre-emptively lock people up? Place them under constant surveillance? Arrest them if a crime is committed anywhere in their vicinity?
Then there is the issue that what counts as a crime is itself not free of bias. People may agree that murder, rape, and other forms of physical violence are crimes. But what about stealing? A person caught shoplifting is branded a criminal even when there are strong mitigating circumstances and the damage is slight. Yet people in white-collar jobs commit all manner of acts that have devastating effects on large numbers of people. One need go no further than the major banks and other financial institutions behind the last financial crisis to find people who should be in jail but walk free. And what of those who bribe or otherwise influence others to gain an unfair advantage? Would this software flag any of that as criminal?
We have a legal system that is heavily biased towards protecting the property and well-being of the wealthy, and this kind of tool will simply be used by them to entrench that bias further.