Security through obscurity
This year's BlackHat computer security conference was, by all accounts, pretty ordinary - until Cisco replaced all the conference CDs, hired temporary workers to rip out pages 789 through 820 from every copy of the conference book, and hit the creator of the presentation in question with a restraining order and a criminal complaint. He was also summarily fired from his job. And then the FBI got involved.
The subject of all this attention? A man named Michael Lynn, aka "Abaddon." (He's a hacker; hackers have flashy online handles. It's part of the culture.) His unforgivable sin? A talk in which he demonstrated the ability to remotely hack into, and take over - "remote root", in hacker parlance - Cisco routers. Yes, the same routers that conduct pretty much all Internet traffic.
Lynn had been working for a security-consulting company called ISS - and working closely with Cisco - to, quite legitimately, reverse-engineer Cisco's systems and search for vulnerabilities. He found a flaw. (And, researching online, he came across a number of documents in Mandarin Chinese that seemed to be exploring the same line of attack.) He developed an exploit; he told Cisco; Cisco released a patch for their routers that fixed the problem. All this well before BlackHat. Then he went to the conference, and Cisco got cold feet and tried to pull the plug. Instead, the security community blew up in their face.
In itself the story's just a soap opera. And those are nothing new at BlackHat and its subsequent more-fun less-tech DefCon weekend; a couple years ago, a Russian programmer was arrested for teaching attendees how to crack Adobe's copy-protection software. For more on this story, just google it; Cory Doctorow, Bruce Schneier, SecurityFocus, and zillions of others have written about it at length, and there's no point in revisiting their territory.
It does bring up one kind of interesting point, which is old news, and one much more interesting one, which really should be but isn't.
The first is that it highlights the fundamental unresolved dilemma in the field of security: Security Through Obscurity vs Security Through Transparency. And this in turn is a special case of the fundamental divide in the software world today, the Cathedral and the Bazaar, but I don't have time or space to get into that one...
It's pretty easy to see where Cisco was coming from. Their products were vulnerable to a remarkably powerful new attack. They had released a patch, but of course not all their customers had applied it. They didn't want any public release of information that might help an attacker. Heck, they didn't want any public confirmation that the attack was even possible, partly to dissuade attackers, partly for PR purposes. So they spent a lot of money to squelch the talk. It blew up in their faces, of course, but if they hadn't waited until so late in the day, it might not have; they could have stamped out Lynn's disclosure early on. And their products, and their clients, and the Internet itself, would have been safer. Right?
Like hell, argue the other side, led by Bruce Schneier, but including pretty much all the computer security professionals I know. Their claim: security through obscurity never works. It just papers over the cracks, and creates a false sense of security, until weaknesses go from problematic to utterly disastrous. The only way to ensure security is, paradoxically, to show the whole world what you're doing. It's the same principle as that behind open-source software; the more pairs of eyes that scrutinize your work, the faster flaws are found and repaired. And if you don't tell people why they need to patch their routers right away, they're a lot less likely to comply.
This, again, is pretty old news to anyone even tangentially connected to the world of software security. But what really caught my eye about the whole Michael Lynn debacle was this, on the eighth of his censored slides, where he talks about Cisco's Internetwork Operating System (IOS), and writes:
Much Better Than Most Systems
- They check heap linkage
- They are very aware of integer issues
- They almost never use the stack
- They have a process to check all heaps
- Very old, very well tested code
If you don't understand the details there, never mind. The point is: Lynn was actually going out of his way to compliment Cisco. His exploit was such a big deal because it was the first such exploit ever found that works against Cisco routers. The only one, in fifteen years.
Compare and contrast that to the operating system with which you are reading this.
I have often, very often, heard people argue that the spree of gaping security flaws that riddle personal-computing operating systems - Windows in particular, though others are certainly not immune - is inevitable. Because they're so complex. There are so many million lines of code. There are so many programmers working on it. Lots of things are bound to not quite fit together. There are bound to be lots of ratholes in the walls. There's no way around it.
Guess again. Your browser, your word processor, your email client, and your operating system are so insecure - and they are so insecure, don't kid yourself, Microsoft reveals new critical security fixes pretty much every week, and I sure as hell wouldn't trust my life to Linux or OS X either - only because the people who wrote them couldn't be bothered to make them secure. And that wasn't really a matter of money; it was just a matter of bad design.
I'm not going to doomsay here. For all the worms and viruses and botnets that riddle the Net, there really hasn't been a colossal security disaster yet, and there's no real indication that there will be. Microsoft is getting its act together. My guess is that everyone will muddle along until we finally get competence into the system. But if you've ever had the impression that computers, for all their sophistication, are at the same time ridiculously inept kludges held together by slightly more abstract versions of baling wire and duct tape? You're not so wrong. And it didn't have to be that way.