When the FBI and DHS fusion centers started building vast, unregulated, facial recognition databases, they shrugged elaborately and said that there weren’t any standard protections for doing so, and that they were just experimenting, and it wasn’t going to be used operationally until the legalities were all sorted out.
What they were actually doing was running out the clock on regulation. They were waiting until it was too late for anyone to establish sensible rules for operating that sort of database. Regulation would have included important things like mandating the quality of sources and having processes for updating incorrect information. For example, perhaps it would mandate that only drivers’ license photos (which are, presumably, authentic) could be used, and not content scraped from social media sites. Regulation would also address whether it was appropriate for the federal agencies building face recognition databases to share them with state law enforcement, or whether it was appropriate to strong-arm airlines into ‘sharing’ the photos they take of passengers as they board or check in. You’d imagine that the relationship between public data and private data would be something to clarify, here, but – nope.
The “there is no way to update the database” dodge was successfully used by DHS for the “no fly list” – deny that it exists, then deny that it can be updated. To me, it’s mind-blowing that someone can claim, with a straight face, that it’s impossible to search a database for “Marcus Ranum” and delete the images I flag as not me. That’s exactly what databases do: search and update. So, rather than confront that challenge, they deny that it exists and deny that it’s possible. Whatever it takes to run out the clock. It’s not going to be used for anything, right?
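To show how thin the “impossible to update” claim is: search-and-delete is the most basic thing a database does. Here’s a minimal sketch in Python using the built-in sqlite3 module – the `face_images` table, its columns, and its contents are my invention for illustration, not any agency’s actual schema:

```python
import sqlite3

# In-memory database with a hypothetical schema, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE face_images (id INTEEGER PRIMARY KEY, name TEXT, source TEXT)"
    .replace("INTEEGER", "INTEGER")
)
conn.executemany(
    "INSERT INTO face_images (name, source) VALUES (?, ?)",
    [
        ("Marcus Ranum", "scraped_social_media"),
        ("Marcus Ranum", "drivers_license"),
        ("Someone Else", "scraped_social_media"),
    ],
)

# "Search": find every image indexed under a given name.
rows = conn.execute(
    "SELECT id, source FROM face_images WHERE name = ?", ("Marcus Ranum",)
).fetchall()
print(rows)  # two entries for that name

# "Update": delete the entries flagged as not actually that person.
conn.execute(
    "DELETE FROM face_images WHERE name = ? AND source = ?",
    ("Marcus Ranum", "scraped_social_media"),
)
remaining = conn.execute(
    "SELECT COUNT(*) FROM face_images WHERE name = ?", ("Marcus Ranum",)
).fetchone()[0]
print(remaining)  # one entry left
```

One SELECT and one DELETE. Claiming that this is impossible is a policy choice, not a technical constraint.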
Cops in Miami, NYC arrest protesters from facial recognition matches
Cops’ use of the tech among the list of things protesters are demonstrating against.
Law enforcement in several cities, including New York and Miami, have reportedly been using controversial facial recognition software to track down and arrest individuals who allegedly participated in criminal activity during Black Lives Matter protests months after the fact.
Miami police used Clearview AI to identify and arrest a woman for allegedly throwing a rock at a police officer during a May protest, local NBC affiliate WTVJ reported this week. The agency has a policy against using facial recognition technology to surveil people exercising “constitutionally protected activities” such as protesting, according to the report.
In other words, Miami police self-regulate their use of facial recognition. That’s a nice way of saying “trust us.”
“If someone is peacefully protesting and not committing a crime, we cannot use it against them,” Miami Police Assistant Chief Armando Aguilar told NBC6. But, Aguilar added, “We have used the technology to identify violent protesters who assaulted police officers, who damaged police property, who set property on fire. We have made several arrests in those cases, and more arrests are coming in the near future.”
An attorney representing the woman said he had no idea how police identified his client until contacted by reporters. “We don’t know where they got the image,” he told NBC6. “So how or where they got her image from begs other privacy rights. Did they dig through her social media? How did they get access to her social media?”
One part of the problem is that the sources of the images have been deliberately obscured. “It’s in the database” is all anyone knows. How did it get there? Uh, we don’t know. It’s easier not to know than to classify it or have to deal with annoying Freedom of Information Act requests.
This is an example of a particularly subtle form of “parallel construction.” You have footage of someone throwing a rock at a cop, so you let the facial recognition suggest some names. Then you look at the images on the person’s Facebook page and – yup, that’s the right person. It gets around one of the problems with facial recognition, namely that it’s not very accurate. It winnows the haystack down to a handful of things that may or may not be needles, and a real intelligence makes the final assessment.
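The winnowing step can be sketched as a nearest-neighbor search over face embeddings. Everything here is hypothetical – the tiny 4-dimensional “embeddings,” the names, and the candidate count are invented for illustration, and real systems use learned embedding models with hundreds of dimensions – but the shape of the pipeline is the same: the machine ranks candidates, and a human makes the final call.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def winnow(query, gallery, k=3):
    """Return the top-k gallery names ranked by similarity to the
    query embedding -- a handful of maybe-needles, not an ID."""
    scored = sorted(
        gallery.items(),
        key=lambda item: cosine_similarity(query, item[1]),
        reverse=True,
    )
    return [name for name, _ in scored[:k]]

# Hypothetical gallery of enrolled face embeddings.
gallery = {
    "suspect_a": [0.9, 0.1, 0.2, 0.1],
    "suspect_b": [0.1, 0.9, 0.1, 0.2],
    "suspect_c": [0.8, 0.2, 0.1, 0.2],
    "suspect_d": [0.1, 0.1, 0.9, 0.1],
}
# Embedding extracted from the protest footage.
query = [0.85, 0.15, 0.15, 0.15]
candidates = winnow(query, gallery, k=2)
print(candidates)
```

The output is a ranked short list, not an identification – which is exactly what makes the Facebook-confirmation step both possible and deniable.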
Similar reports have surfaced from around the country in recent weeks. Police in Columbia, South Carolina, and the surrounding county likewise used facial recognition, though from a different vendor, to arrest several protesters after the fact, according to local paper The State. Investigators in Philadelphia also used facial recognition software, from a third vendor, to identify protestors from photos posted to Instagram, The Philadelphia Inquirer reported.
This is the same technique, basically, that has been used to identify karens and nazis, too. It’s a generally useful feature of the retro-scope. Clearview AI is careful to point out how their technology is making the world a lot safer, by identifying child porn predators, etc. Of course it also identifies dissenters and people who attack cops – which is especially problematic because some cop agencies (e.g.: DHS) consider it “assault on an officer” if you yell at them. They also consider being present when someone else is rioting, to be part of the riot. Things have not changed much since the police rained bullets into the crowd after the Haymarket bombing. [stderr]
If someone were looking for a conspiracy, the way the facial recognition systems have been deployed is a good example of an emergent conspiracy: everyone in these companies and agencies acted as though it was all someone else’s problem, so now they can say, “Who, me? We’re just using an existing resource.” That ignores the fact that it did not exist, at some point. It also ignores the fact that Clearview AI appears to have photos of everyone in their database, and nobody’s asking where they got them and whether that’s not a bit creepy. One question I’d want an expensive lawyer to ask is “how do you make sure your database is not full of photos of minors?” (Because minors can’t consent to having their images harvested.)
But, it’s OK. Apparently, the cops are going to do a good job of regulating themselves. According to the cops:
New York City Mayor Bill de Blasio promised on Monday the NYPD would be “very careful and very limited with our use of anything involving facial recognition,” Gothamist reported. This statement came on the heels of an incident earlier this month when “dozens of NYPD officers – accompanied by police dogs, drones and helicopters” descended on the apartment of a Manhattan activist who was identified by an “artificial intelligence tool” as a person who allegedly used a megaphone to shout into an officer’s ear during a protest in June.
Having dogs, drones, cops, and helicopters drop in on your apartment is not, in any way, shape, or form, going to affect someone’s willingness to engage in free speech.
It’s just going to get worse. Cops are already treating “you hurt my feelings” as an excuse for shooting someone. What could possibly go wrong?
Strangely, the cybersphere is devoid of stories about how AI is being used to identify tax cheats, or how Palantir is being used to track “dark money” going into political campaigns. Surely, that oversight is a coincidence.
There are companies that offer digital printed breath masks for the COVID era. I wish everyone could have a mask with a print of Donald Trump’s face.