The Elephant In The Connected Room


Computer security is a new(ish) field, so we get to make up names for things. That’s an advantage and a disadvantage – it means that marketing people can come up with new-sounding names for old stuff, and sometimes customers get all excited and buy it because it sounds so new!

Then there are the problems. Long-standing problems in computer security don’t seem to go away and, aside from the theoreticians or industry insiders, nobody wants to talk about them. I suspect that talking about certain problems is kind of like talking about friction to a mechanical engineer. If you don’t take it into account, you’ve just hoisted the clueless-roger – but you can’t come along and claim to have solved it, either. Familiarity with the problem is a sign that you understand it and you understand that you have to take it into account.

One of security’s unpleasant facts is the problem of transitive trust.

If you trust something, what does that thing trust? (“Thing” here can be a person, too.) All systems rely on trust to some degree; the question is not whether something is trusted, it’s whether it’s trustworthy. Equifax was trusted but turned out not to be completely trustworthy – that’s just one example. A trustworthy component is one that behaves the way your trust model expects it to. That’s a nice way of saying “you’ve got to be kidding me, pal!”
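
To make “if you trust something, what does that thing trust?” concrete, here is a toy sketch. The entities and trust edges are invented for illustration; the point is that the set of things you effectively trust is the transitive closure of the things you consciously chose to trust.

    from collections import deque

    # Hypothetical trust edges, "X trusts Y" -- invented for illustration.
    TRUSTS = {
        "campaign": ["law_firm", "email_provider"],
        "law_firm": ["document_management_saas"],
        "email_provider": ["cloud_hosting"],
        "document_management_saas": ["cloud_hosting", "offshore_contractor"],
    }

    def effective_trust(start):
        """Everything `start` ends up trusting, directly or transitively."""
        seen, queue = set(), deque([start])
        while queue:
            for neighbor in TRUSTS.get(queue.popleft(), []):
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen

    # The campaign thinks it trusts two parties; the closure contains five,
    # some of which it has never heard of.
    print(effective_trust("campaign"))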

In the big scheme of things, transitive trust bites us when we’re looking at entity relationships. Hackers barely need to bother with transitive trust-based attacks, but when they do, they are almost inevitably successful. It’s one of the reasons I tend to be skeptical of claims that ${Russians, Chinese, North Koreans, random group of sociopaths} are using super-sophisticated hacking techniques. When you’re talking about people like John Podesta falling for basic phishing attacks, you don’t need to flex your muscles very hard.

Here’s a fairly typical transitive trust attack scenario: there’s a political campaign that relies on external legal counsel. The attacker researches some court filings and determines the attorney’s name and firm (that information might also be found on LinkedIn or Facebook or a variety of other sources) – then they attempt to appear more trusted by sending a fake email that appears to come from the attorney. Basically, the attacker is trading on the trust that the target has in their legal counsel. Sometimes this works, sometimes it doesn’t – but it has a higher probability of working the more credible it seems. I’ve seen phishing emails come in that claim to be from a correctly named attorney at a correctly named law firm, and the Subject: line reads Re: ${docket number} – perhaps this is why I am so contemptuous of common-or-garden spam: compared to the more targeted phishing attacks, it’s embarrassingly crude.
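
For what it’s worth, the lazy version of that attack can be caught with a very cheap check. The sketch below (the attorney’s name and the domains are made up) flags mail whose display name drops the attorney’s name but whose address domain isn’t one you’ve actually corresponded with. It does nothing, of course, against an attacker who compromises the attorney’s real mailbox – which is the transitive trust problem in its pure form.

    import email.utils

    # Hypothetical values: the attorney's name as it appears in real
    # correspondence, and the firm's actual sending domain.
    COUNSEL_NAME = "jane counsel"
    KNOWN_COUNSEL_DOMAINS = {"example-lawfirm.com"}

    def looks_spoofed(from_header):
        """True if the mail name-drops counsel but comes from an unexpected domain."""
        display_name, address = email.utils.parseaddr(from_header)
        domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
        return COUNSEL_NAME in display_name.lower() and domain not in KNOWN_COUNSEL_DOMAINS

    print(looks_spoofed('"Jane Counsel" <j.counsel@example-lawfirm.com.attacker.net>'))  # True
    print(looks_spoofed('"Jane Counsel" <j.counsel@example-lawfirm.com>'))               # False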

Here’s a more sophisticated one: a certain company provides air conditioning and power system maintenance and repair services in an area where there are a lot of data centers. The attacker reasons (correctly) that the HVAC company may have access to systems within a certain data center, so they launch an attack against the supplier, hoping to identify a backdoor into the actual target. Speaking of targets, that’s how Target(tm) got compromised. How did the HVAC company get compromised? Someone there appears to have opened a spreadsheet that looked like it came from Target.

These sorts of attacks are possible anywhere one business sits astride another’s supply chain, data chain, staffing, or services. In other words, it’s an extremely “target rich” environment. The worst thing is: there is no way to do business, have customers, exchange data, or perform transactions without trusting someone. So it’s probably an insoluble problem.

That’s why what happened the other day is not exactly a big failure on the part of the companies that depended on a certain map database: [nyt]

Users of a variety of popular apps and services, including Snapchat, awoke Thursday morning to find that New York City had been relabeled “Jewtropolis” on maps displayed in the apps.

People on Twitter quickly posted screen shots of the maps, calling them racist and anti-Semitic. Maps on Snapchat, Citi Bike, StreetEasy and even The New York Times all appeared to be affected.

All of those affected use embeddable maps from a third-party company called Mapbox. The company said in a statement that its New York City map had been “vandalized.”

“Mapbox has a zero-tolerance policy against hate speech and any malicious edits to our maps,” the company said, adding that the label was deleted within an hour. “The malicious edit was made by a source that attempted several other hateful edits. Our security team has confirmed no additional attempts were successful.”

Snap, the parent company of Snapchat, said that the third-party data it uses for its maps tool, Snap Map, had been “subject to vandalism.”

Many organizations appear to have trusted Mapbox. Whom did Mapbox trust to edit their database that they should not have? This sort of thing is a particular danger with user-provided content. Imagine if someone decided to stage a protest by manipulating Google Maps’ and Waze’s perception of the traffic going across some of the main arteries into and out of a mega-city. Whose inputs do we trust? We can’t trust nothing, because then we’d have no customers and could do nothing.
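
One common compromise on the “whose inputs do we trust?” question is to accept edits from anyone but gate publication on reputation or human review. The sketch below is purely illustrative – I have no idea how Mapbox’s actual editing pipeline works, and the “reputation” threshold is invented.

    # Toy moderation gate for user-submitted map edits. "Reputation" and the
    # auto-publish threshold are invented; this is not any real provider's policy.
    REVIEW_QUEUE = []
    PUBLISHED = []

    def submit_edit(edit, contributor_reputation, auto_publish_threshold=100):
        """Auto-publish edits from high-reputation contributors; queue the rest."""
        if contributor_reputation >= auto_publish_threshold:
            PUBLISHED.append(edit)
            return "published"
        REVIEW_QUEUE.append(edit)
        return "held for human review"

    print(submit_edit({"place": "New York City", "label": "vandalized name"},
                      contributor_reputation=3))   # held for human review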

This was just a map database; it wasn’t one (we think) that was being used by self-driving cars or nuclear weapons. Imagine if there was a self-driving car that blindly trusted that map and went, “oh, well, I am clearly not in New York anymore” and started driving off to who knows where?

------ divider ------

It feels weird to see these old powerpoint slides again.

See why computer security people are weird and paranoid? We’re supposed to think of all this stuff before it happens. There’s no way that’s going to work.

Comments

  1. Lofty says

    I just love it when people blindly follow electronic maps, instead of using what’s left of their brains. Back when consumer GPS became common, some dimwit transcribing our area didn’t notice the road block between our road and the one further down the escarpment. So now every known GPS device gives people instructions to drive 10 miles out of their way only to end up at a steel barrier two miles downhill from our place. If I remember I tell new chums to take the freeway exit nearest our place and ignore the plaintive bleating of the GPS voice. Great fun.

  2. avalus says

    @Lofty: Nothing beats hearing “You are driving the wrong way around. Please turn immediately!” out of the blue while driving at 130 km/h on a highway/freeway/autobahn. Sadly, some people do.

  3. Some Old Programmer says

    See why computer security people are weird and paranoid?

    The paranoia is communicable–for which I thank you. I’ve changed a lot of my personal computer practices over the past few years due in no small part to your blog (and a couple of failed or mostly-failed attacks). I now have a much smaller target profile. For instance, a couple of months ago I got an e-mail attempting to extort $7k out of me by claiming to have used a key logger and web cam to take embarrassing videos of me; the only thing that wasn’t horseshit was the subject line, which contained a password I’d used for decades, so was ripe for some untrustworthy website to store in plain text (or easily decoded with a rainbow table). A quick search in my password vaults confirmed that I wasn’t using that password any more. No doubt there’s a database for sale with e-mail addresses and an associated password that lazy assholes are using to generate some easy money.

    A few years ago, that attack would have been seriously alarming.

  4. jimmf says

    It’s worse than the rosy picture you paint. For example: suppose A is required to trust B and B may not be trustworthy and B “trusts” an unknown number of other parties (an example might be law enforcement). The trustworthiness of any party can change without warning. “Accidents” happen. “Competence” is intermittent. People you may or may not know who happen to know where you live, took a picture, recognize your car, know your son can “share” whatever they’ve observed on social media. I’ve had knots in my stomach much of my life over this stuff.

  5. consciousness razor says

    Here’s a fairly typical transitive trust attack scenario: there’s a political campaign that relies on external legal counsel. The attacker researches some court filings and determines the attorney’s name and firm (that information might also be found on LinkedIn or Facebook or a variety of other sources) – then they attempt to appear more trusted by sending a fake email that appears to come from the attorney. Basically, the attacker is trading on the trust that the target has in their legal counsel.

    Maybe I’m missing something, but that doesn’t sound like transitivity to me. At least, it’s not a very straightforward example….
    A = campaign; B = attorney; C = attacker
    A trusts B. C exploits this by deceiving A, essentially making A believe that C is B. But there’s nothing in here suggesting that B trusts C. It’s a case where both (A trusts B) and (A trusts C) are true, which is different.
    I mean, I guess you could try to say that not being concealed from the general public (which includes C) is something that B does, in a sense…. But is that really supposed to be a matter of “trusting” them? All B would have to do is “allow” others to know that it exists?

    How did the HVAC company get compromised? Someone there appears to have opened a spreadsheet that looked like it came from Target.

    This looks like a clearer example. Someone in B trusted the spreadsheet from C, and this consequently affects A who trusted B (a customer or whatever).

  6. Julie T says

    And it gets even, even worse as time goes on and systems are dismantled – likely to make quick money. Here’s an unsettling example:

    I use online banking to pay my bills. As most people are aware, the way it typically works is that you set up a transaction amount, vendor, and date on your end and the money ‘automagically’ transfers to your vendor. These days, most vendors contract somehow through ACH (Automated Clearing House) so the transfer is basically account to account without the need to process using the paper checks of old. But there are some small, local businesses who do not have ACH access, and in those cases the bank does mail a paper check to the vendor – time for that is built into the system.

    So far, so good.

    What I discovered recently is that there have been unannounced (or, at least I was unaware) changes to the way banks now process paper checks. It used to be that if you issued a paper check, the bank had to wait until the VENDOR cashed the check to withdraw funds from your account. In fact, checks might ‘stale date’ and never be cashed, right?

    Not anymore.

    I don’t know when the change happened but after my storage place called to tell me they did not receive payment – to which I was skeptical because I had a bank statement with the cleared check in my account – I went down there in person to sort this whole mess out. They had the original paper check in hand – they hadn’t cashed it because it arrived in the mail between the time they called and the time I managed to haul myself in there. (I hadn’t thought it was too emergent because, darn it, I was convinced they’d been paid!)

    Color me gobsmacked.

    Ok, well, everyone makes mistakes, the bank is clearly in error (or, credit union in this case). Happens. I replaced the bank’s check with one of my own and took it down to the credit union to alert them of the problem. The credit union immediately apologized and put the funds back in my account where they belonged but something still didn’t seem right to me because financial institutions tend to be a bit more upset when things like that might be happening.

    Then, the manager explained: They use a 3rd party to cut paper checks if ACH is not available for my vendor. In those cases, they withdraw the funds from the account before the check is even sent and forward it on to the 3rd-party vendor. They do not actually HAVE a system (according to the manager) to track paper checks because of some legislation many years ago that made electronic records of checks functionally equivalent to paper checks. (There was a time when you could demand your bank return your cleared checks to you for audit but the change in the law made that impossible.) So, unless the customer complains, the bank no longer considers it ‘their problem’ whether the money eventually makes its way into the hands of my vendor. Although, I was reassured that they would always put the money back into my account if I return to them with the uncashed paper check in hand to void. Nice of them, don’t you think?

    It’s much, much worse than you might think because this isn’t about phishing, hackers, or outside attacks. You are entrusting systems that directly impact your ability to pay people, manage your wealth, etc. to invisible, untrusted systems that you don’t even know exist! And it’s really subtle! It’s not that EVERY online transaction goes through a 3rd party – only those where the target vendor does not participate in ACH – which for most people is a tiny subset of the bills they have to pay. Someone is making a mint on this.

    I watch my emails and put a post-it over my webcam but I don’t want to even try to function without online banking – businesses will punish you with fees if you want to mail them payments these days.

    This is horrifying.

  7. DonDueed says

    This morning I found a voicemail from my primary credit card company’s anti-fraud department. Turns out my card had been hijacked. The charges didn’t amount to much — around $50 — but the algorithms flagged them.

    But now, how did the card get jacked? Was it a recent 3rd-party purchase through Amazon? Was it skimmed at the parking garage at a medical facility? It would be nice to know who I should no longer trust.

  8. Marcus Ranum says

    consciousness razor@#5:
    Maybe I’m missing something, but that doesn’t sound like transitivity to me. At least, it’s not a very straightforward example….
    A = campaign; B = attorney; C = attacker
    A trusts B. C exploits this by deceiving A, essentially making A believe that C is B. But there’s nothing in here suggesting that B trusts C.

    You’re right – it’s a weak example. I was thinking that they’re trusting the document (a Microsoft Office document is a chunk of software that looks like a document which, when executed, usually gives one a document)

    This is what I mean about the terminology of computer security being vague. I’m guilty of being vague using it, myself. We don’t really know what “trust” is. Or “risk” – though “risk” has a whole epistemological stack under economics, and security practitioners have been trying to figure out some way to hitch security to economic models. (I think it’s a bad idea – economics hasn’t done a very good job of predicting the future, which is what risk models are trying to do.)

  9. Marcus Ranum says

    DonDueed@#7:
    But now, how did the card get jacked?

    It’d be nice to know, wouldn’t it?

    Some of the companies that have experienced leaks don’t understand how the leaks happened, either, so they’re basically pumping the water out of the boat and heading back out to sea.

  10. Owlmirror says

    This was just a map database, it wasn’t one (we think) that was being used by self-driving cars, or nuclear weapons.

    I am pretty sure that no navigating vehicle would use a map that required current internet access.

    Of course, it occurs to me that the source used for map updates is trusted…

    And having written the above, it occurred to me that vehicles that do have current updates on traffic trust the system to not tell them that there are accidents/detours/road works where there are none, or vice versa.

  11. Marcus Ranum says

    Julie T@#6:
    I use online banking to pay my bills

    You know what I’m going to say, right?

    By the way, most banks will let you set a maximum ACH amount. So I asked the bank that I use for Paypal, “can I set it to zero for anything except Paypal?” They went (literally!) “I don’t know! Let’s see what happens and set it to zero.” Poof. Seems to work.

    However, there is one thing I could not get them to figure out: how to not do overdraft “protection.” Naturally, you understand that there is no ‘protection’ about it – it’s debt-farming – but I asked over and over to have that turned off, because I would rather that the bank refuse to pay an overdraft and just notify me. They say they cannot and that’s how ACH WORKS so I set ACH to zero. The bank says that if someone presents an ACH on my account that is not Paypal, they will bounce it because ACH is set to zero – but they will still charge me for a bounce because they can.

    Remember, the ‘story’ of overdraft charges is that it’s a service fee because someone is basically writing you a very small loan. Now, no human is involved at all; therefore there is no service. It’s just debt-farming.

  12. Marcus Ranum says

    Owlmirror@#10:
    I am pretty sure that no navigating vehicle would use a map that required current internet access.
    Of course, it occurs to me that the source used for map updates is trusted…

    True story: a bunch of years ago I paid a semi-official visit to NOAA. What that means is: I was not there on a consulting contract, I was ‘just hanging out,’ but some of the people there had some questions about security. Basically, I gave them a day of my time in return for thai lunch at this nice place… Anyhow, one of the problems was that the weather map is accessed by everyone and anyone. I said, “well, naturally, you must have some separation between ‘read only’, ‘update only through an approval process’, and ‘generated via a trusted process’?” Their answer: no, why? There are researchers who ‘need’ access and they complain if they’ve got restrictions.
    I replied: “Because the weather map you produce is relied upon by Channel 7 News, but it’s also relied upon by NORAD and FEMA*?” (turns out I had done my homework and made a list of entities that depended on weather maps)

    (* FEMA: this is serious shit. What if there was a ‘nuclear excursion’ at Three Mile Island, upwind from NYC? Whose weather map will they use to project the fallout? And is it trustworthy? That raises another question: is there only one weather map? If so, what do you check it against?)

  13. Marcus Ranum says

    Julie T@#6:
    I don’t want to even try to function without online banking

    Convenience is the reciprocal of security.

    If you have any significant amount of money (i.e.: more than you are willing to lose) in an account, do not use it for online banking. I am not joking. Open a free checking account at a different bank, and hand-walk checks between the account with your ‘real’ money and the account you use for basic bill payments, etc.

    I know that some of you (most of you!) are reading that and thinking “Old Marcus, he’s crazy. He’s too hardcore paranoid!” but what you don’t understand is that every year I get a phone call from someone who has seen their life’s savings vanish. Sometimes they get some of it back. But never all of it.

    The “inconvenience” of walking a check from one bank to another (you can use the ATM to do the deposit…) is you being a firewall for your money, manually enforcing a barrier between two domains. In computer security terms you are a “trusted guard” implementing a “cross-domain solution,” and when you write the check you are implementing a policy-based control.*

    The ACH system is like SWIFT only it was designed to bypass SWIFT’s security controls because they got in the way. That is the nicest way I can say that.

    (* I do not know what the bank’s liability is if they cash a check that has a falsified signature on it. Be worth looking into that. Because in the system I described the security is still pretty bad. At my local bank, they know my face. That helps, too. Someone who is not me had better have a death certificate or a good fake if they’re going to get into my safe deposit box.)

  14. Marcus Ranum says

    OK, a SWIFT/ACH story: I used to consult for a big bank in Oklahoma, and I knew the security team there really well. For the better part of a decade they fought this battle against having the SWIFT system become a networked system. For a long time, it was separately wired to certain terminals in the branch office, with separate security and credentials. So if you wanted to steal all the bank’s money you had to be able to jump the barrier between the SWIFT network, and the physical access required to operate it. Then, it all got outsourced to FISERV, which centralizes/aggregates all that kind of stuff – it’s basically “cloud banking.”

    In other words, in a world where people are worried that the money may be too accessible online, the industry’s response has been “let’s put it all in one place, then that way we can do it extra super good and easy.”

    When you hear about things like the North Koreans allegedly attacking banks in Pakistan by compromising their SWIFT credentials and moving all the bank’s money: that’s what I’m talking about. It has gotten easier to do that not harder. (The alleged NK attacks were transitive trust attacks – SWIFT and systems like it are designed to work based on transitive trust.)

  15. Marcus Ranum says

    Owlmirror@#10:
    And having written the above, it occurred to me that vehicles that do have current updates on traffic trust the system to not tell them that there are accidents/detours/road works where there are none, or vice versa.

    This is a thought experiment. Do not do this. It could do significant damage to a city like Washington DC or LA, and mostly the people suffering would be Uber drivers, delivery people, taxi passengers, and ordinary folks.

    Google and Waze both generate ‘traffic blockage’ hypotheticals based on slow-down/immobility in traffic that is on specific routes. So if there are 1,000 cars on a specific route and they all come to a stop, then the route is probably blocked and people behind them start getting re-routed. It would be very easy to crash the traffic in an area that heavily depended on this sort of system. It sounds to me like you need a relatively small number of protesters who are willing to place their phone in a ziploc bag with their name on it, pull over, and hand a trusted person their smart-phone for an afternoon. Then they drive off, and the trusted person continues to sit there with a large box of smart-phones that are no longer in motion. Later that night they meet somewhere and get their phones back.
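
    Here is a toy version of the kind of heuristic I mean – not how Google or Waze actually do it, and the numbers are invented – which also shows what the box-of-phones trick depends on: whether anyone else is still reporting movement through the same segment.

        def segment_blocked(speeds_kmh, stall_speed=5, stall_fraction=0.8):
            """Call a road segment blocked if most reporting phones are near-stationary."""
            if not speeds_kmh:
                return False
            stalled = sum(1 for s in speeds_kmh if s <= stall_speed)
            return stalled / len(speeds_kmh) >= stall_fraction

        print(segment_blocked([0, 0, 1, 2, 0]))        # True: every phone is stopped
        print(segment_blocked([0, 0, 0, 55, 60, 62]))  # False: traffic still flowing past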

    Systems that allow drivers to flag “speed trap” are open to a disambiguation cost attack/denial of service. It’s my opinion that a dynamic map that accepts inputs from ‘drivers’ or ‘subscribers’ is a bad idea. All one has to do is look at systems like Yelp and Amazon reviews, and you can see the end-game there. Interesting thought – I wonder if Waze or Google traffic maps could be gamed into routing an entire city’s worth of traffic past one’s BBQ joint. “Be careful what you wish for” is probably the answer there.

    Where do cop cars’ fancy new systems get their maps?

  16. Owlmirror says

    Marcus @#15:

    Regarding your scenario – If there are a bunch of stationary phones in one segment of road, but other phones are (presumably) zooming through that same segment, does the system infer all lanes blocked, or that a bunch of cars have pulled over for whatever reason? It might show up as a slight slowdown, at best, and not switch over to rerouting.

  17. Marcus Ranum says

    Owlmirror@#16:
    Yeah, good point. I guess it depends on the numbers and how the algorithm processes its inputs.

    Usually if you’re trying to disturb a system deliberately, you need to find an asymmetry. I think you are right; my idea lacks the asymmetry that would make for a successful attack.

  18. Ieva Skrebele says

    I know that some of you (most of you!) are reading that and thinking “Old Marcus, he’s crazy. He’s too hardcore paranoid!”

    I certainly don’t think so. What you are suggesting here sounds only reasonable to me. The whole online banking system is poorly designed, hence trusting it and hoping that I won’t lose any money is akin to gambling.

    Hmm, this got me thinking. I have pieces of duct tape covering all the webcams on my electronic devices. Is that justified or maybe I’m the crazy and too hardcore paranoid one?

  19. Pierce R. Butler says

    Marcus Ranum @ # 15: This is a thought experiment. Do not do this.

    That reminds me of a minor sf novel from the ’60s, The Day They Invaded New York. Part of Their insidious plan: send a small squad of goons, gunning down anyone in the way, to break into the central NYC traffic-control room at rush hour and simultaneously set all the lights to green. The ensuing collisions, with streets so full that tow-trucks couldn’t get through, would paralyze the city for days. (Boringly – spoiler alert! – the Good Guy foiled Their scheme first.)

    Prob’ly much easier to do now… Moo-hoo-bwa-ha-ha!

  20. Some Old Programmer says

    Pierce R. Butler @#19
    And your citation reminds me of Thomas Perry’s novel Metzger’s Dog. From what I recall, the scheme included jamming key city intersections and instigating a wildcat strike of public transit workers. It’d definitely be easier to accomplish with modern traffic management systems.

  21. John Morales says

    Ieva Skrebele @18, (to Marcus)

    Hmm, this got me thinking. I have pieces of duct tape covering all the webcams on my electronic devices. Is that justified or maybe I’m the crazy and too hardcore paranoid one?

    Your prudence is warranted, and hardly paranoid. Those cameras are fully software-controllable, as is the indicator light. And it’s certainly possible to compromise them, store the info, and steganographically dribble it out at leisure.

    It should operate only when you want it to operate, and your low-tech fix is ideal.

    (The question you should ask yourself is not “am I paranoid?”, but “am I paranoid enough?”)

    (BTW, same for microphones. That data compresses even better!)

    Amusingly (and relevantly to this post’s thrust), even though this site indicated I was logged in when I first composed and tried to post the above comment, I was informed I had to be logged in to post. So, during the process of doing so, I was then asked (by WordPress) whether I really wanted to log out — which I did, and then logged back in.

    (This one should go through)

  22. cvoinescu says

    It should be exceptionally difficult to set all traffic lights to green.

    In older systems, relays are wired to provide an interlock — even if commanded otherwise by the logic, the wiring makes it physically impossible to turn conflicting lights green. No idea about newer systems — but I would either keep the relays, or use gates to prevent conflicting greens from showing, even if the microcontroller was compromised.

    Train signals take this seriously, and even detect burned-out bulbs — you can’t get a green unless the conflicting signals are commanded to show red, and are actually showing it. This is done at the signal, in relay logic, so it’s not overridable by bad computers or rogue operators in the control center. (This does not mean there aren’t other ways of attack, just that this particular one doesn’t work.)
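
    In software terms, that “confirmed showing red” interlock amounts to something like the sketch below (pure illustration, not any real controller’s logic): a green command is honored only if every conflicting signal is independently confirmed to be red.

        # Illustration only: a real interlock lives in relay logic or hard-wired
        # gates, precisely so that compromised control software cannot override it.
        CONFLICTS = {                      # hypothetical two-phase junction
            "north_south": {"east_west"},
            "east_west": {"north_south"},
        }

        def allow_green(signal, confirmed_red):
            """Permit green only if all conflicting signals are confirmed red."""
            return all(other in confirmed_red for other in CONFLICTS[signal])

        print(allow_green("north_south", confirmed_red={"east_west"}))  # True
        print(allow_green("north_south", confirmed_red=set()))          # False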

  23. Ieva Skrebele says

    John Morales @#21

    (The question you should ask yourself is not “am I paranoid?”, but “am I paranoid enough?”)

    I’m not so sure about this one. It is possible to be too paranoid and too cautious. Whatever security measures you choose to implement, they have what could be called “costs.” In order to increase your security, you have to spend time or money, and sometimes you have to reduce your level of comfort or convenience (for example, not using certain services might increase your security, but doing so would be plain inconvenient). The question is whether your level of paranoia is appropriate for how large the risk is.

    And this is the case not just with electronic devices. When you get into a hobby that requires handling toxic substances, how much safety gear should you use? When you purchase insurance, should you get the more expensive option with wider coverage? “Always be paranoid and go for as many security measures as possible” isn’t the right approach, because you will just end up wasting resources.

  24. John Morales says

    Ieva, you refer to rational cost-benefit analysis and risk assessment, not to paranoia, which definitionally is not rational.

    (Or: if you are being rational about it, if you don’t think someone somewhere is out to get you, then you are not being paranoid)

  25. jazzlet says

    I’ve a friend who, among other things, is in charge of traffic lights locally. If he gets sufficiently drunk he will pull out the key which will over-ride the lights at any junction, and cackle about the chaos he could cause.