Secret snooping keeps us vulnerable
This is an obvious point.
Part of NSA’s mission, a very noble part, has always been to play digital defense. They call this “information assurance”, and describe it as “the formidable challenge of preventing foreign adversaries from gaining access to sensitive or classified national security information.” In practice, their role is much broader than that. I run NSA software — on purpose! Thank you, National Security Agency, for SELinux. I’m not particularly worried about foreign adversaries. I just don’t want my server hacked. NSA helps evaluate and debug encryption standards that find their way into civilian use. With all the talk about 21st-century cyberwarfare, about dams made to malfunction or cars hacked to spin out of control, you’d think the best way to keep the homeland safe from terrorists and foreign adversaries would be an exceptionally secure domestic infrastructure.
However, NSA faces a conflict of mission. The organization’s more famous, swashbuckling “signals intelligence” work is about maintaining a digital offense. It relies on adversaries using vulnerable systems. NSA discovers (or purchases) uncorrected “exploits” in order to break into the systems on which it hopes to spy. Normally, a good-guy “white hat” hacker who discovers a vulnerability would quietly inform the provider of the exposed system so that the weakness can be eliminated as quickly and safely as possible. Eventually, if the issue is not resolved, she might inform the broader public, so people know they are at risk. Vulnerabilities that are discovered but not widely disclosed are the most dangerous, and the most valuable, both to NSA for intelligence-gathering purposes and to cyberterrorists and foreign adversaries. There are tradeoffs between the strategic advantage that comes from offensive capability and the weakness that maintaining that capability necessarily introduces into domestic infrastructure. If the mission is really about protecting America from foreign threats (rather than enjoying the power of domestic surveillance), it is not at all obvious that we wouldn’t be better off nearly always hardening systems rather than holding exploits in reserve. Other countries undoubtedly tap the same backbones we do (albeit at different geographical locations and with the help of different suborned firms). Undoubtedly, passwords that nuclear-power-plant employees sloppily reuse occasionally slip unencrypted through those pipes.
Of course there is a trade-off. If security agencies did work aggressively to harden civilian infrastructure as soon as they discover vulnerabilities, the spooks would not have been able, for example, to stall Iran’s nuclear program with Stuxnet. But the same flaws that we exploited might also have been known to terrorists or foreign adversaries, who could have caused catastrophic industrial accidents in the US or elsewhere while that window was left open. Rather than applauding our clever cyberwarriors, perhaps we ought to be appalled at them for having left us dangerously exposed so that the Iranians would be too. When a cyberattack does come, via some vulnerability NSA might have patched, will we know enough to blame our cyberwarriors, or will we just shovel more money in their direction?
Before we let spy agencies make these tradeoffs for us (tradeoffs between security and security, for those who prioritize security über alles), we might want to think about institutional bias. Would it be rude to point out, given recent events, that NSA’s PowerPoint-blared enthusiasm for awesome, eyes-of-the-President offensive capabilities may have eclipsed the unglamorous but critical work of running a good defense? And no, going all North Korea with personnel is not a solution. I’m very grateful that what’s leaked has leaked, but if reports about what Snowden got are accurate, the absence of ordinary precaution is shocking. There is no irreducible danger from sysadmins that would excuse such a failure. Root access to some machines does not imply pwning the organization. I am speculating, but both Snowden’s claims of expansive access and Keith Alexander’s assessment of “irreversible damage” suggest NSA prioritized analyst convenience over data compartmentalization and surveillance of use. That’s great for helping analysts get stuff into the President’s daily briefing while avoiding blowback for, um, questionable trawling. It should be incredibly embarrassing to an organization whose mission is securing data.
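To make the compartmentalization point concrete, here is a minimal sketch — my own toy construction, with hypothetical names, not anything NSA actually runs — of what compartmentalization plus surveillance of use might look like: documents carry compartment tags, users hold clearances, and every read attempt is logged whether or not it succeeds. Under a design like this, an administrator with root on one machine can reach only that machine’s compartments, and any trawling leaves a trail.

```python
# Toy illustration only: the names and structure here are my invention,
# not any real agency's architecture.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("audit")

COMPARTMENTS = {            # document id -> compartment required to read it
    "doc-001": "SIGINT-A",
    "doc-002": "SIGINT-B",
}
CLEARANCES = {              # user -> compartments that user may read
    "analyst1": {"SIGINT-A"},
    "sysadmin1": set(),     # root on the box, but cleared for no compartments
}

def read_document(user: str, doc_id: str) -> bool:
    """Grant the read only if the user holds the document's compartment,
    and log every attempt either way, so misuse leaves a trail."""
    allowed = COMPARTMENTS[doc_id] in CLEARANCES.get(user, set())
    audit_log.info("%s user=%s doc=%s %s",
                   datetime.now(timezone.utc).isoformat(), user, doc_id,
                   "GRANTED" if allowed else "DENIED")
    return allowed

read_document("analyst1", "doc-001")   # GRANTED, and recorded
read_document("sysadmin1", "doc-002")  # DENIED, and recorded
```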
Perhaps my speculations are misguided. The point remains. At an organizational level and at a national level, there are tradeoffs between offensive capacity (surreptitious surveillance, sabotage) and defensive security. Maintaining a killer offense requires tolerating serious weaknesses in our defense. The burgeoning, sprawling surveillance state has its own incentives that render it ill-suited to judge how much vulnerability is acceptable in pursuit of an impressive offense. That shouldn’t be its call.
Sometimes the best defense is a great defense. Even if it is a lot less awesome.
Note: The trade-offs described here apply especially to covert, surreptitious means of accessing computer systems. If “we” (however constituted) decide that we want systems that are both secure and susceptible to government surveillance, we can make use of “key escrow” or similar schemes. There would be significant technical challenges to getting these right, but at least the systems could be openly designed and vetted, and could include software-enforced auditing to document use and deter abuse. Systems designed to allow third-party access will always be weaker than well-designed systems without that “feature”, but they can be made a lot more secure than systems whose flaws are intentionally uncorrected in order to enable access. It would be important to avoid implementation monocultures and centralized, single-point-of-failure key repositories. A public review process could see to that. It would not be necessary to ban alternative systems, if we wish to maintain status quo capabilities.
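As one concrete illustration of how escrow could avoid a centralized, single-point-of-failure key repository, here is a minimal sketch — a toy under my own assumptions, not a vetted design; real proposals would likely use a threshold scheme such as Shamir’s secret sharing — in which the escrowed copy of a session key is split into XOR shares held by independent parties, so that no single repository can recover it alone:

```python
# Minimal sketch, illustrative only: an n-of-n XOR split of an escrowed key.
# No single share reveals anything about the key; all shares must be combined
# (ideally under an audited, court-ordered process) to recover it.
import os
from functools import reduce

def split_key(key: bytes, n_agents: int) -> list[bytes]:
    """Split `key` into n XOR shares; all n are required to reconstruct."""
    shares = [os.urandom(len(key)) for _ in range(n_agents - 1)]
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares, key)
    shares.append(last)
    return shares

def recover_key(shares: list[bytes]) -> bytes:
    """XOR all shares together to recover the original key."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

# Example: a 256-bit session key escrowed across three independent agents.
session_key = os.urandom(32)
shares = split_key(session_key, 3)
assert recover_key(shares) == session_key
```

An all-shares-required split like this maximizes protection against a rogue repository at the cost of availability; a k-of-n threshold scheme would trade some of that protection for resilience when a share is lost. Which point on that spectrum is acceptable is exactly the kind of question an open review process could settle.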
I’m not arguing any of this would be a good idea. But if we decide that we want data-mining or widespread surveillance, we can implement them in ways that are overt and publicly auditable rather than clandestine, insecure, and unaccountable. The status quo, a peculiar combination of lying a lot and demanding the public’s trust, is simply unsupportable.