Transitive trust is when A trusts B, and B trusts C; therefore A trusts C, and probably doesn’t realize it.
Currently, the state of the art of network defense is so poor that hackers don’t bother designing transitive trust attacks. They just smash and grab systems, see what’s on the other side, and occasionally a system they’ve compromised gets plugged into something interesting.
When that happens, the victim traditionally howls about it being an incredibly sophisticated attack, probably from a nation-state. Because only nation-states launch sophisticated attacks. Because they buy sophisticated attacks from hackers, who … wait… if the nation-states are buying tools from hackers that must mean the hackers have better tools than the nation-states, right?*
Anyway, the US Navy is now red-facedly admitting that it’s got a transitive trust bite: a contractor from HP who had a compromised laptop connected it to a US Navy network and blammo. The US Navy(A) trusts Hewlett Packard(B) and Hewlett Packard trusts the employee’s laptop(C). This was not a very sophisticated attack, in all likelihood. Some hacker woke up and checked the catch from a phishing campaign and discovered that they had hooked a minnow swimming in the middle of a great big bunch of tuna.
Unknown hackers compromised a number of the U.S. Navy’s computers and gained access to sensitive information, including names and social security numbers, of more than one hundred thousand current and former sailors, officials said Wednesday.
Hewlett Packard Enterprise Services has informed the Navy that at least one of the company’s laptops used by their employees, under a naval contract, was compromised. HP first notified the Navy on Oct. 22.
Thing 1: The Navy didn’t detect it. HP did. Apparently once that HP laptop was behind the firewall, on the Navy’s network, it was trusted. So HP had to tell them “one of our people screwed up.”
Thing 2: The Navy’s investigative service appears to have been able to identify that data was stolen, and how much. Judging from the number, “all of it” from a certain database.
Thing 3: If you’re running a network where any asshole that plugs in a laptop can parlay that into access to your employee database, your employee database is not set up right, and neither is the rest of your network. Back in the late 1980s Bill Cheswick used to call this “the hard shell around a soft chewy center” network architecture. Security people who understand security have been explaining why this is a bad idea for a very long time. It’s still a bad idea, but network engineers (and their bosses) still seem to operate in the mode that network engineering is “you stick the wire into the big Cisco box and if it turns green, you are good.” Breaking a network up by purpose and putting access controls and detection-points that trigger on access violations: nah, that’s hard.
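The idea is simple enough to fit in a few lines. Here’s a minimal, hypothetical sketch of “break the network up by purpose”: each segment gets an allow-list of peer segments, and anything off the list is both denied and flagged as a detection event. The segment names and policy here are invented for illustration, not anything the Navy or HP actually runs.

```python
# Which segments may initiate connections to which others.
# Invented names; a real policy would live in your firewall/ACL config.
POLICY = {
    "contractor-laptops": {"ticketing"},
    "ticketing": {"ticketing"},
    "hr-database": set(),  # nothing initiates outbound from HR
}

violations = []

def check_access(src_segment: str, dst_segment: str) -> bool:
    """Return True if the flow is allowed; record a violation otherwise."""
    allowed = dst_segment in POLICY.get(src_segment, set())
    if not allowed:
        # This is the detection point: a denied cross-segment flow is an
        # event worth alerting on, not just a silently dropped packet.
        violations.append((src_segment, dst_segment))
    return allowed

# A compromised contractor laptop reaching for the employee database
# should fail closed and leave a trail:
check_access("contractor-laptops", "ticketing")    # allowed
check_access("contractor-laptops", "hr-database")  # denied and logged
print(violations)
```

The point of the sketch is the default-deny plus the alert: an HP laptop on a segmented network could have reached its ticketing system and nothing else, and the first touch on the personnel database would have been the Navy’s detection, not HP’s phone call.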
I am seriously scared that having horrible security has become so normalized that organizations will begin adopting the strategy of “Whatever. We’ll just have one of our CSOs throw themselves on their sword.” About a decade ago I was invited to meet with some people at a huge E-tailer about the job of CSO. As we talked I pretty quickly realized that that was what they wanted: a figurehead that could be chopped overboard if they needed someone to take a fall. So I asked, “what ability will I have to effect change?” And that was the end of the courtship.
It’s so bad, and security is so lame – especially in the government – that the only people who generally take a fall for failures are a contractor or two, and (only after years of culpable negligence) a department head.
In 1994 I explained it as:
One way to view the result of a firewall being compromised is to look at things in terms of what can be roughly termed as “zones of risk”. In the case of a network that is directly connected to the Internet without any firewall, the entire network is subject to attack. This does not imply that the network is vulnerable to attack, but in a situation where an entire network is within reach of an untrusted network, it is necessary to ensure the security of every single host on that network. Practical experience shows that this is difficult, since tools like rlogin that permit user-customizable access control are often exploited by vandals to gain access to multiple hosts, in a form of “island hopping” attack. In the case of any typical firewall, the zone of risk is often reduced to the firewall itself, or a selected subset of hosts on the network, significantly reducing the network manager’s concerns with respect to direct attack. If a firewall is broken in to, the zone of risk often expands again, to include the entire protected network; often a vandal gaining access to a login on the firewall can begin an island hopping attack into the private network, using it as a base. In this situation, there is still some hope, since the vandal may leave traces on the firewall, and may be detected. If the firewall is completely destroyed, however, the private network is entirely in the zone of risk, and can undergo attack from any external system, and the chances of having useful logging information to analyze the attacks are very small.
Marcus Ranum, “Thinking About Firewalls” (1994)

(* Take for example Dave Vincenzetti. Dave’s organization supposedly only sells to governments and police, and I guess it depends on the color of your cash.)