In 2009 I was a consultant to a company that produces cop cameras and nonlethal weapons. My remit was specific: they were setting up a cloud service and wanted a security design expert to review their system.
My methodology was always simple: sit down with the senior systems engineers responsible for a component, and ask them “why do you think this is secure?” It’s a trick an old friend of mine (now a top-tier consultant at IBM) and I developed; there’s really only one answer that gets credit and that is, “what do you mean by ‘secure’ in this context?”
In that context, what they were worried about was the integrity of the data being offloaded from cop-cams. My client was planning on offering a new technology: a cloud service that would store evidence from the cameras where it was tamper-proof and safe-stored. I proposed that my deliverables be in the form of two reports: one containing technical recommendations, strategic recommendations, and system design notes – the other to be something that could be shared with a potential customer to help explain the system and demonstrate that impressively-credentialed outside experts had examined it and felt it was good. I used to get marketing/sales requests like that, sometimes, and I finally decided that I was alright with them as long as I stuck to the truth and kept things dry and straightforward.
At the time, there was no competition at all except for expensive in-house solutions. For example, LAPD had a pretty powerful system for collecting cop-cam data, involving allegedly a locked secure room with all the video servers, a cleared system administrator, and a set of policies and procedures that outlined system governance. For a security design, a look at the procedures is highly desirable – the system was being designed to automate exactly such a system and knowing the customers’ procedures was a view-port into their (implied) expectations. I innocently asked for a copy, which request was relayed to the LAPD, and turned down flat. As the process continued, one of the sales guys took me aside and said, “they don’t really have any idea what they are doing with the stuff. They have some low-paid guy who barely knows how to run the system, and they like it that way.” That was about the time that I realized that all the stuff I had been doing, data flows, cryptography, audit trail, etc., needed to be filed under “solving parts of the problem that the customer does not care about.” In any IT project, that’s a great big red flag because it means you are about to build a system that nobody wants.
I did some interesting work (some of my best actually, and I wish I had been able to patent it, see explanation below the divider) but I began to realize that the problem I thought I was working on was not really the problem that my client had. The police departments were deeply concerned about who had authorization to administer the system, but they wanted to maintain the ability to access the data untraceably. Finally, I realized that they were not coming directly out and saying it, but they did not want a system that was reliable. They wanted a system that they could cause to fail in certain specific ways. If the body cam unit encrypted data and directly submitted it for ingestion, there wasn’t anyplace where the data could get – you know – “lost”. Worse, if someone tried to “lose” any of it, it would be completely obvious. A security system designer would think that was a desirable property of the system, but it turned out that integrity and auditability were a problem for several police departments, not just LAPD. It doesn’t take a genius, at this point, to realize that the ability to cause a “system error” that resulted in data loss was a requirement.
My work done, I wrote my report, and made a few oblique references to the need to do a better requirements analysis in order to ensure alignment with market needs or something like that.
There are a few other attributes of such a system that are notable:
- Since it is server-centric, it’s got a limited attack-surface from the outside. The ability to interfere with the system is limited to only the command options that are exposed by the system. If the system does not give a “delete this file” option, then files can’t be deleted.
- When a system exposes limited command options, the system can maintain an audit trail that is highly specific to the commands that are allowed. If every file access is logged and attached to the user who did it, it’s still possible to copy a file to somewhere else, but it’s easy to see who did it and when.
- The limited interface of the system can be used to control image operations – for example, the ability to drop “highlight circles” that brighten an image in a specific location to illustrate some part of a scene. Or the ability to blur out faces, or whatever. The system could tag such edited images as ‘altered’ and annotate who had performed the alteration.
- The system can provide limited-detail options for review, and can completely control the review process. For example an officer might be able to review a down-sampled version of the video of an incident, in the form of thumbnail interval-streams. But a review committee might be able to access the full video. Naturally, all of such operations would be logged in an audit trail.
- The system might, under no circumstances, allow a full-resolution copy of the video to be downloaded. By providing only a down-sampled copy, the system would make it impossible for someone to edit the down-sampled copy and pass it off as the original. Alternatively, all downloaded copies of a video might be water-marked by the system using cryptographic time-codes embedded in the data-stream. Edited versions would be unable to duplicate the time-codes without the encryption key for the file, which would make it extremely obvious where an edit began or stopped, since the time-code would vanish.
- The system can enforce options that are tailored to an organization’s governance policy. I suggested that the system include a “governance engine” that offered a few options for enforceable work-flows.
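To make the cryptographic time-code idea concrete, here is a minimal sketch in Python. Everything here – the function names, the key, the frame layout – is my own illustration, not anything from the vendor’s system: each frame carries an HMAC bound to its position and timestamp, so an editor without the file’s key can loop, drop, or freeze frames but cannot re-create valid codes, and a verifier sees exactly where an edit begins and ends.

```python
import hmac
import hashlib

def timecode(file_key: bytes, frame_index: int, timestamp: float) -> bytes:
    """Per-frame authentication code bound to the frame's index and time."""
    msg = f"{frame_index}:{timestamp:.3f}".encode()
    return hmac.new(file_key, msg, hashlib.sha256).digest()[:8]

def stamp_frames(file_key: bytes, frames):
    """Attach a time-code to each (timestamp, payload) frame at capture."""
    return [(i, ts, payload, timecode(file_key, i, ts))
            for i, (ts, payload) in enumerate(frames)]

def find_breaks(file_key: bytes, stamped):
    """Return indices whose time-code no longer verifies -- the spots where
    an edit began or ended, since the editor can't forge new codes."""
    return [i for (i, ts, payload, tc) in stamped
            if not hmac.compare_digest(tc, timecode(file_key, i, ts))]
```

For example, if someone re-times a frame to cover a freeze or a splice, `find_breaks` flags exactly that frame, because the forged timestamp no longer matches the HMAC computed at capture.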
When I finished the project, I felt pretty satisfied that it was an interesting system that represented an improvement over many of the other things that were out there, especially the in-house options. It was the in-house options that made me uncomfortable.
I was really naive.
Recently, I have been listening to a podcast about the murder of Freddie Gray, by the Baltimore Police Department. They don’t call it that, of course, but that’s what it is. They call it “The Killing of Freddie Gray” and, in doing so, they participate a little bit in the white-wash that sanitized the whole affair. I eventually intend to do a posting about the whole series, which is excellent but they drag it out a bit much; I think it could be half as long. The whole podcast is an excruciating blow-by-blow of who was where and when and there’s a great deal of discussion about what was shown on various police surveillance cameras, and where, and when. Spoiler Alert: fucking cops broke Freddie’s neck before they put him in the van, and the whole situation unspooled on them when they realized they had a paraplegic (that was their fault) in the back of their van. Needless to say, they didn’t give a shit about the man they had just sent on a lingering death-spiral, but they were worried enough to put together a remarkably incompetent cover-up. A cover-up that the media and local politicians played along with.
The episode I listened to, today, was about the edits that were made to the surveillance videos. Imagine my surprise when it turned out that the system that was storing and manipulating the videos was the one I had worked on. From the sound of it, however, the customers got what they wanted: the ability to obscure, edit, loop – full video edit capabilities. That’s nice if you’re making a cloud video-editing solution, but for an evidence archiving system? It seems that the system’s requirements had changed.
The podcast describes how the police released videos as exonerating evidence, but when the analyst actually sat down and looked at them carefully, there are moments where people jump from place to place and another spot where the camera’s time-code keeps moving forward though the action has frozen on a still frame. I have some experience editing video and working with digital media and I can say categorically that such things do not happen as a result of any unintentional error. The podcasters have to, I suppose, use diminishing language to avoid directly accusing the Baltimore police department of deliberately fabricating evidence – but it’s pretty obvious what is going on.
Cam1A’s rotation pattern, for example, is at its most inconsistent during the 6 minutes Freddie and the various officers are on Presbury St.
This is a surveillance camera that is supposedly on automatic rotation, which allegedly recorded video (released by Baltimore police department) in which the automatic rotation is jerky and pans around irregularly. Automatic cameras don’t do that, but edited video might.
Normally, if you wanted to do forensic analysis for video alterations, it’s pretty straightforward: you start with the original and compare it to the edited version. But Baltimore police didn’t release the original. They released something they said was the video. And that’s where the 16 ton weight fell on me: naturally, I know the system that the video was stored under. It’s the one that I reviewed in 2009. The Undisclosed podcasters discuss a similar case:
https://podcasts.apple.com/us/podcast/undisclosed/id984987791?i=1000384739060 at 59:28
Is it possible to edit video evidence? The answer is “yes, it certainly is possible.” But can I prove it here in this case? Definitely not without the raw footage, and even then it would be difficult. Proving video evidence has been tampered with is much harder than doing the actual tampering. I fully expect that there will be those who think I am crazy to suggest such a thing, but I’m not the first to consider the possibility that the police would alter video evidence.
In fact, right now, in Albuquerque New Mexico, the families of two people killed by officers of the Albuquerque police department are claiming just that. And they have damning allegations made by a whistle-blower on their side. Reynaldo Chavez was the central records supervisor and records custodian for the Albuquerque police department from 2011 to 2015. He has filed a whistle-blower lawsuit against the city of Albuquerque alleging that his employment was terminated after he came forward with allegations that he was instructed to withhold information from attorneys and media outlets that filed requests under the state’s Inspection of Public Records Act. He also says that he knew of others in the department who were tampering with evidence, including in the 2014 and 2015 officer-involved shootings of Mary Hawkes and Jeremy Robertson. The Hawkes and Robertson families have filed civil rights lawsuits against the city. Chavez has signed a sworn affidavit and gave testimony under oath explaining how body camera and surveillance footage in those shootings was altered, deleted, or hidden using an online evidence-management and cloud storage service called evidence.com.
Evidence.com is owned by Taser, best known for manufacturing stun guns, but in recent years they have come to dominate the market for body cameras. Footage from their body cameras automatically uploads to evidence.com where authorized users can assign the footage to a case along with other evidence. Taser has publicly marketed the body cameras and evidence.com as being equally beneficial to both police and civilians. But behind closed doors it’s been clear where the company’s loyalties lie. “We got into this space to try and change this whole conversation,” said Taser International CEO Rick Smith, in February 2015, while giving a presentation at the California highway patrol headquarters in Sacramento. He was referring to the outrage over increasingly common officer-involved shootings and other allegations of misconduct and brutality. He continued, “my personal bias is – you guys get a ton of false complaints.”
Well, I guess “the customer is always right” – though, if the system maintains a good audit trail and controls, it ought to be impossible to delete evidence or alter it. That tells me either that police departments are being stupidly sloppy, or that the system has changed profoundly from its original design. Or… There’s one other option, which is worse: the actual unedited videos are in the digital vault, un-tampered-with, and the custodians of the vault are choosing to knowingly allow police departments to present edited versions. Imagine if you knew that the Baltimore police’s posted version of the arrest of Freddie Gray was a carefully selected tidbit that didn’t show the moment when the cops broke his neck – imagine you knew that and did nothing, said nothing. The whole notion of body cameras is that they are impartial witnesses, but this system is not impartial at all, it’s concealing evidence. That’s criminal.
If you search for views of the command interface of the evidence archive, it includes some commands I’d say are rather sketchy for an “evidence” system. Like “remove file.” Or “edit metadata.” If I were working as an expert for the team that is litigating this, I’d be subpoenaing evidence.com’s transaction logs: who edited what and when? If the system doesn’t record that, it’s dodging basic IT Security 101 audit trails, which would be odd indeed for a digital evidence archive.
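The kind of audit trail I mean is not exotic. A hash-chained, append-only log – a standard IT Security 101 construction, sketched here in Python with names of my own choosing, not anything from evidence.com – makes after-the-fact deletion or rewriting of entries self-evident, because every record commits to the hash of the record before it.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log, user, action, target, when):
    """Append an audit record chained to the previous record's hash, so
    removing or rewriting any entry breaks every hash after it."""
    body = {"user": user, "action": action, "target": target,
            "when": when, "prev": log[-1]["hash"] if log else GENESIS}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return log

def verify_chain(log):
    """Recompute every hash in order; return the index of the first broken
    link, or -1 if the whole chain is intact."""
    prev = GENESIS
    for i, e in enumerate(log):
        body = {k: e[k] for k in ("user", "action", "target", "when")}
        body["prev"] = prev
        h = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != h:
            return i
        prev = h
    return -1
```

With something like this in place, “who edited what and when” is answerable even after the fact, and quietly scrubbing an entry is detectable by anyone holding a copy of a later hash.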
Remember what I said earlier about a limited command interface? Another element of that is that you oughtn’t provide a direct interface to review or access a device, because then device security becomes an issue.
Wow, it’s almost as though the system has morphed into one in which an officer can download their body camera footage to a laptop and review it, then decide whether or not to put the body camera under the wheel of their squad car and back over it a couple of times.** As I listened to the podcast, I became more and more uncomfortable with what I was hearing.
More than 5000 law enforcement agencies use evidence.com including police departments in San Diego, San Francisco, Fort Worth, Dallas, Chicago, and yes: Baltimore.
In an email, detective Jeremy Silbert from BPD’s public information office explained that the department signed a contract with Taser to purchase their body cameras along with evidence.com cloud storage in May 2016, following a two-month pilot program in late 2015. So, post Freddie Gray, at least officially. As of April 2017, approximately 1500 BPD employees have evidence.com accounts, as does the state’s attorney’s office. Though Silbert says, “It cannot be characterized as full access.”
I wonder if full access allows the “delete” button. A button which (in my professional opinion) should never have been part of the system design. But it does sound like there is some kind of governance process in place:
“Roles determine each user’s permissions or access and restrictions to various features and functions,” Silbert wrote. “None of our patrol officers have administrative privileges to evidence.com.” So what does full access to evidence.com look like? Here is how Chavez described some of the capabilities during his sworn deposition: “Again, it goes back to – if you can upload on a case file, if you can upload an image, then you can actually go in, you can size the image – you can place it into an actual file. A mask… it can be a circle, it can be a square, and it’s like Photoshop, you can go into an actual frame and you can mask something so – and this is just a ‘for example’ – if the perpetrator is holding a gun, you can actually mask that where, if it’s a perpetrator or an officer, you can mask it so now you can’t tell if that’s a gun or he’s holding a comb in his hand, so it’s hard to differentiate.”
The manual for the redaction system is [here]. What Chavez is referring to is an auto-blurring capability that the system includes, so presumably you can blur out the faces of someone who is not involved.
It appears that the edits are performed on a copy, but I’m not entirely sure. If a security system designer concerned with the integrity of evidence had designed such a system, videos with redacted regions would carry a clear indication of the redaction, as we’d expect for evidence. Here’s what I imagine it might look like:
The system would, naturally, maintain an unredacted copy and a complete audit trail of all redactions. For sure, the redactions would be obvious; really obvious, as in my example.
[Chavez continues] “Same token with that square on an officer’s item. You can just do the square on the entire clip and mask that. And that’s where the gradients come in; so at that point you can start making it – you know – it’s a very subtle change. You’re looking at a slide variable [presumably a ‘strength of effect’ controller in the software] – it’s going along pretty good and now you start seeing a little discoloration and it’s not as bright, it’s darker. Usually when it’s darker that means there’s a mask, because it’s a gradient now that’s put on it.”
I’ve done expert testimony and affidavits and I’ve got to say Chavez’s testimony sucks. It’s technically accurate but it’s a very poor description of what’s going on. In the edit I did above, I would call the elliptical areas “selection regions” (per Photoshop) with an “effect” applied to the selection regions. What Chavez is talking about with regard to ‘gradients’ is the ability of advanced editing software to control the degree to which an effect is applied in a selection region. This is all cracking good and interesting stuff if you’re talking about digital image editing but it’s “WTF” if you’re talking about evidence. If I were presenting evidence that the guy with the black hair and the neat beard was at a protest, it doesn’t matter if the rest of the scene is messed up. There is absolutely no need for edits to evidence to be subtle.
[The podcast narrator resumes] Taser claims that any modifications made to evidence in the system will be reflected in the audit trail. In addition, the original piece of evidence always remains intact. But whether that audit trail is immediately provided with evidence that is turned over is up to the department. Never mind the fact that Chavez’s testimony makes it clear that it’s very easy to hide which evidence, including videos, has been edited.
[Chavez again] “You upload a video clip to evidence.com, that becomes the parent – at that time you go in and do whatever operations you want on that original, then it becomes the orphan – it becomes the orphan, so at the end of the operation you still have the original which hasn’t been altered and now you have the new orphan which you’ve added/deleted/redacted, whatever operation you’ve done. So you store that file – you can go in at the same time, close everything up, and the one that was the parent, actually go ahead and do the deletion, come back, make your orphan and upload it as if it were a new piece of evidence that came in that was uploaded to the cloud.”
That’s reasonable behavior if the system you’re building is a cloud photo scrapbook, but evidence? You want the system to function so that only authorized devices and authorized users are allowed to create chains of evidence under tightly controlled circumstances.
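One way to enforce “only authorized devices create chains of evidence” is to have the camera itself authenticate its uploads, so a workstation-edited “orphan” can’t masquerade as fresh footage from the field. Here is a hypothetical sketch – the registry, names, and keys are all mine, not a description of evidence.com – in which ingestion only accepts footage bearing a valid MAC under a key provisioned into a registered device:

```python
import hmac
import hashlib

# Hypothetical per-device keys, populated when a camera is provisioned.
DEVICE_KEYS = {"cam-0042": b"device-secret-0042"}

def device_sign(device_id: str, video: bytes) -> bytes:
    """Signature the camera computes over its own captured footage."""
    return hmac.new(DEVICE_KEYS[device_id], video, hashlib.sha256).digest()

def ingest(device_id: str, video: bytes, signature: bytes) -> bool:
    """Accept footage as a new evidence parent only if it carries a valid
    signature from a registered device. An edited copy re-uploaded from a
    workstation has no device key and can't produce one."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, video, hashlib.sha256).digest()
    return hmac.compare_digest(signature, expected)
```

Under a design like this, Chavez’s delete-the-parent, re-upload-the-orphan maneuver fails at the front door: the orphan arrives without a valid device signature and is rejected (or at minimum flagged) rather than silently becoming “new evidence.”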
What I’m getting at is that it sounds like the system has built-in failure-points that don’t have to be there. I suspect that the police departments wouldn’t buy it if it didn’t have them. The point is: there were people who looked at the system and tried to make it better. If it’s got these problems, it’s because the system’s requirements changed to cater to its real customers: the cops.
In the case of the edited “evidence” that the Baltimore police department released, if it was edited using the evidence.com online tools, that would be reflected in evidence.com’s logs and the audit trails of the videos that were released. That information ought to be public, but it won’t be. Because (if you listen to the whole series) it’s pretty clear that everyone involved in the Freddie Gray murder – from the cops who broke his neck, to the driver of the van who ignored the newly paralyzed man in the back, to the commissioners and even the prosecutors – was corrupt and engaged in a cover-up. Not only that, they provoked a crowd, called it a “riot” and used it as an excuse to tear-gas civilians who were sitting in their homes in the vicinity.
The whole story is layered shit. And under the shit, there’s more shit. It’s shit from the top to the bottom, and that’s a pretty good metaphor for American policing.
----

Interesting work: one of the problems was that the cop cam unit was battery powered and did not have enough energy to run any fancy encryption. That was presented to me as the reason why the cam unit did not simply stream data automatically to the service’s ingestion routines, which would have provided a strong guarantee against data loss or corruption – assuming some magic encryption fairy dust. How to do it?

As I explored the system I learned that the cam unit spent its non-duty time in a battery-charger/holder that also synchronized data to a local server. The local server then handed the data off to the service’s ingestion routines. After the synchronization was complete, the cam unit’s local memory was overwritten with zeroes by an erasure routine in the cam unit’s software. While the cam unit was in the charger, it had lots of CPU cycles to burn since it was not running on battery – so I proposed that the local server generate a session-ID each time the cam unit was put in the holder, then the cam unit would use a device-key (populated into the cam unit when it was first put in service) and the session-ID to encrypt all of the system’s zeroized memory. When the data was transmitted off the cam unit, all it needed to do was XOR the encrypted zeroized blocks against the captured video, and transmit the results.

The fun part of the whole thing is that, once that process is complete, the device no longer needs to hold the session-ID. Someone attacking the communications would have to know the device-key and have the session-ID, which never left the local server. The local server needed only to forward that key to the server ingestion routines, and the cam unit could simply submit the data directly in the form of encrypted blocks. If a unit was lost and someone found it and tried to get at the data, they’d just have big chunks of random stuff and, in order to extract the video, they’d need to already have a copy of the video.
In cryptography, that use of an encryption algorithm is called “electronic code book mode” and is as strong as the key and the underlying cryptosystem; using the plugged-in state of the cam unit was not a tremendous innovation – I would bet a bag of donuts that Roger Schell or someone like that invented the technique in the 1970s – but it neatly solved the problem as it had been framed. I did some estimates of power-consumption and determined that my proposed system was basically “free” in terms of battery life. The body cam could stream thumbnails over the encrypted link and save full-resolution video in local memory, to be offloaded and ingested when the cam unit is back in the cradle. In the meantime, if something unfortunate happens to the cam unit, the thumbnail stream is still there (640×480 50%-quality JPEGs 15 seconds apart would consume about 100kb/minute).
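The scheme amounts to precomputing a keystream while power is cheap, then paying only one XOR per byte in the field. Here is a minimal sketch in Python; the hash-based keystream, function names, and keys are all my own stand-ins (a real unit would use a proper block cipher, and mixing a per-block counter into each block – which, in modern terms, makes this closer to counter mode – keeps the “encrypted zeros” from ever repeating):

```python
import hashlib

def keystream(device_key: bytes, session_id: bytes, nbytes: int) -> bytes:
    """Precompute the 'encrypted zeros' pad while docked: one 32-byte block
    per hash of (device_key, session_id, block counter)."""
    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(
            device_key + session_id + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:nbytes])

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_capture(pad: bytes, video: bytes) -> bytes:
    """In the field, 'encryption' is just one cheap XOR per captured byte --
    essentially free in battery terms."""
    return xor_bytes(pad, video)
```

Server-side decryption regenerates the same keystream from the device-key and the session-ID (which never left the local server) and XORs again; a lost unit holds only pad-masked blocks that are useless without both keys.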
** Another nice feature of the suggestion I made above is that you can also send very small “heartbeat” messages securely (my suggestion was a thumbnail, battery level, GPS data, and timecode), so if the cam unit goes offline because someone – I dunno – backed a car up over it, the system has an exact record of when the cam unit ceased functioning, and all the data right up to that point.