Over the last few years, Iran, China and the United States have all deployed weapons capable of damaging physical infrastructure, all without a single explosion.
Unlike conventional weapons, these cyberweapons aren't restricted by international treaties — partly because governments know so little about their neighbors' electronic arsenals.
"With nuclear weapons, we at least had some idea from satellites about how many weapons the Soviet Union had and what they were capable of," Robert Axelrod, a political scientist at the University of Michigan, told NBC News. "Cyberweapons are different. They can be stockpiled with other countries knowing it, causing them to be more frightened than they need to be, or not frightened enough."
A new study from Axelrod and fellow University of Michigan researcher Rumen Iliev tries to shed some light on why governments choose to launch cyberattacks, the timing behind them, and what can be done to prevent them from getting out of hand.
The most famous attack, though, almost certainly originated in the United States. In 2010, Stuxnet made headlines. It seemed like the perfect computer worm: for 17 months, it sped up the centrifuges at Iran's nuclear enrichment center in Natanz without being detected, damaging but not destroying them. Then, quietly, it self-destructed.
In the end, Stuxnet temporarily disabled one-fifth of the facility's centrifuges, setting back Iran's nuclear program by two years, cyberdefense expert Ralph Langner said in Foreign Policy. It was a big win for U.S. and Israeli intelligence — who, according to documents leaked by former NSA contractor Edward Snowden, developed the worm together.
"The capabilities that were employed in Stuxnet were far beyond the capabilites of what individual hackers could do," Axelrod said.
Escape into the wild

The problem? Stuxnet escaped from Iran, possibly on somebody's laptop. Now it's out in the wild, available to both foreign governments and individual hackers who might want to attack anything from water treatment plants to electrical grids to other nuclear power plants.
If U.S. intelligence officials had used Axelrod and Iliev's model, they probably would not have chosen a different path, said Axelrod, mostly because the people who created it had to move fast. Stuxnet depended on exploiting at least three different vulnerabilities in Iran's nuclear facilities, any of which could have been fixed by the time the worm was deployed.
That short time window, the high stakes of delaying Iran's nuclear program and Stuxnet's ability to operate undetected for so long made it seem like a good idea at the time. (Axelrod's study breaks these factors down into "persistence," "stealth" and "stakes" — basically, whether a cyberweapon must be used quickly before it becomes irrelevant, whether it will remain undetected and therefore reusable after it is used, and whether the stakes are high enough to risk the blowback from an attack.)
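Those three factors can be turned into a back-of-the-envelope decision rule. The sketch below is a hypothetical illustration, not the actual equations from Axelrod and Iliev's study: it treats persistence as the chance the vulnerability still works if the attacker waits, stealth as the chance the weapon survives its own use undetected, and compares the expected payoff of striking now against holding the weapon for a later, average-stakes opportunity.

```python
def should_use_now(stakes, persistence, stealth, avg_future_stakes):
    """Toy timing rule (illustrative only, not the study's model).

    stakes            -- payoff of attacking right now
    persistence       -- probability the weapon still works if we wait
    stealth           -- probability the weapon survives its own use
                         (stays undetected, so it can be reused later)
    avg_future_stakes -- expected payoff of a typical future opportunity
    """
    # Using now: collect today's stakes, plus a stealth-weighted chance
    # of keeping the weapon for a future opportunity as well.
    value_now = stakes + stealth * avg_future_stakes
    # Waiting: the weapon only pays off later if it persists that long.
    value_wait = persistence * avg_future_stakes
    return value_now > value_wait


# Stuxnet-like case: patchable vulnerabilities (low persistence) and
# very high immediate stakes argue for using the weapon right away.
print(should_use_now(stakes=10, persistence=0.5, stealth=0.1,
                     avg_future_stakes=4))   # use now
# Durable, hard-to-detect-if-unused weapon and low current stakes
# argue for stockpiling it instead.
print(should_use_now(stakes=1, persistence=0.95, stealth=0.0,
                     avg_future_stakes=8))   # wait
```

Under these assumed numbers the rule reproduces the article's intuition: a short-lived weapon aimed at a high-stakes target should be fired quickly, while a persistent one against a low-stakes target is worth holding back.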
Stuxnet certainly met its objective — delaying Iran's nuclear progress. But its "escape" was probably unforeseen, said Axelrod, and is just one of the many dangers of letting cyber conflicts go unregulated.
The model he developed could help countries at least begin a dialogue about what is acceptable and what isn't, possibly leading to a ban on attacking things like civilian or banking infrastructure, Axelrod said.
"I think it could lead countries to realize that that they can't exactly judge another country's capabilities on what they see on a day-to-day basis," he said. "That makes any kind of established norms or agreements on limiting the use of cyberweapons more valuable."