A Florida sheriff’s office has been repeatedly harassing certain families in its county, abusing its authority to file nuisance charges like “disorderly conduct” against the same people again and again. The sheriff claims his “predictive AI” says certain people have committed crimes, so he targets them. That’s bad enough, but those false arrests are then fed back into the AI as “data” proving those people are committing more crimes.
That’s not “evidence”; that’s a self-fulfilling prophecy. It’s the ignorant assuming they have knowledge when they don’t even understand how the tool is supposed to work.
Everyone who reads this story keeps saying “Minority Report”, but I hated that movie. I prefer to quote Arthur Conan Doyle from “The Sign of Four”:
You can, for example, never foretell what any one man will do, but you can say with precision what an average number will be up to. Individuals vary, but percentages remain constant. So says the statistician.
Predictions and trends are only guides, not proof. And no matter how good the model, or how much you know about human behaviour, you cannot convict people for things they haven’t done. A serial criminal who has committed the same crime a hundred times still can’t be assumed guilty of the 101st until you prove they did it. It is always possible for repeat offenders to stop.
Some of those harassed in Florida had no criminal history or arrest record before this.
A Florida sheriff’s office deployed a futuristic algorithm that uses crime data to predict who is likely to commit another crime.
In a sweeping six-month investigation published this week, the Tampa Bay Times reported that the algorithm relied on questionable data and arbitrary decisions and led to the serial harassment of people without any evidence of specific crimes.
According to the report, former sheriff’s office employees said officers went to the homes of people singled out by the algorithm, charged them with zoning violations, and made arrests for any reason they could. Those charges were fed back into the algorithm.
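The feedback loop the report describes is easy to demonstrate. Here is a minimal sketch in Python; the function, the scores, and the numbers are all hypothetical illustrations of the mechanism, not the actual (unpublished) system. One arbitrary seed arrest, fed back in as input, makes the model single out the same person forever:

```python
# Toy model of a self-reinforcing "predictive policing" loop: arrests produced
# by the model's own predictions are fed back in as new input data, so the
# score confirms itself regardless of any actual criminal behaviour.

def run_feedback_loop(num_people=10, num_rounds=20):
    # Everyone starts identical: zero prior arrests, zero crimes committed.
    risk = [0] * num_people

    # One arbitrary nuisance charge (say, against person 3) seeds the data.
    risk[3] += 1

    for _ in range(num_rounds):
        # "Predictive" step: target whoever has the most prior arrests.
        target = max(range(num_people), key=lambda i: risk[i])
        # Enforcement step: the arrest itself becomes fresh "evidence".
        risk[target] += 1

    return risk

scores = run_feedback_loop()
# Person 3 ends up with all 21 arrests; everyone else stays at zero,
# even though nobody in the model ever committed a crime.
```

No amount of extra data fixes this design: the loop isn’t measuring crime, it’s measuring its own past output.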
– – – – – – – – – – – – – – – – – – – –
I can imagine a conversation with a journalist going something like this:
Reporter: “Why did you arrest them?”
Sheriff: “Our AI said to, because they were arrested before.”
Reporter: “Why were they arrested previously?”
Sheriff: “Because our AI said they would commit crimes.”
Reporter: “But what crimes did they actually commit in the past?”
Sheriff: “That doesn’t matter, this is about prevention.”
Reporter: “So you’re arguing that you can arrest people because you’ve arrested them before? That your unjustified arrests in the past are justification for repeatedly arresting them in the future?”
Sheriff: “Maybe I should feed your name into the database.”
I was expecting Marcus Ranum to post this story first.