July marks one year since the world learned of Stuxnet, the mysterious computer worm that launched the first successful cyberattack on infrastructure facilities.
Stuxnet, which crippled an Iranian nuclear-fuel processing plant, served notice that state-sponsored hackers are out there — and that everything from power grids to water supplies could be the next target.
But is that likely? Maybe not. For a number of reasons, security experts think it might be a long time before we see anything like Stuxnet again.
Stuxnet spread from USB drive to USB drive in Asia for months until it infected computers that controlled centrifuges at Natanz, Iran's main uranium-enrichment facility.
The worm stealthily sped up Natanz's centrifuges to their breaking points, even as it hijacked the facility's monitoring system to falsely show that the machines were functioning normally.
It took the Iranians weeks to catch on to what was happening, and the resulting damage set back the suspected Iranian nuclear-weapons program by years.
Not a rogue operation
No similar piece of malware has since appeared, but some experts say that if one does, it will almost certainly be state-supported.
"Stuxnet was a highly specific operation which could only have been carried out with a nation-state's backing," said Roel Schouwenberg, senior malware researcher in the Boston-area office of the Russian information-security firm Kaspersky Lab.
The reason is the worm's sophistication: Stuxnet attacked a very specific type of computer system — industrial supervisory control and data acquisition (SCADA) systems built by the German company Siemens.
No nation has admitted creating Stuxnet, but suspicion immediately fell on Israel, Iran's arch-enemy. Investigative journalists found evidence that Israel and the United States cooperated to make the worm.
Stuxnet launched a multilayered attack, first exploiting four "zero-day" (previously unknown) vulnerabilities in Microsoft Windows, then exploiting vulnerabilities in Siemens applications for Windows, and finally taking over the programmable logic controllers — specialized industrial-control computers — that ran on Siemens' own operating system.
Exploiting multiple zero-day vulnerabilities at once was highly unusual, as was the specifically targeted nature of the malware (the worm was designed to attack uranium-refinement facilities that exactly matched Natanz).
Those features are precisely what make it unlikely that a lone hacker will ever make anything quite like it.
Out of reach of civilian budgets
Costin Raiu, Kaspersky’s director of global research and analysis, noted that the source code for Stuxnet has never been released to the public.
The hacker sites that claim to have it have so far offered only fakes, making it even more unlikely that anyone but a state-sponsored actor with ample money and time could re-create it.
Schouwenberg said that in any case, the data-security industry is now so aware of Stuxnet, as well as the vulnerabilities that allowed it to work, that any new variants would have to try a very different approach.
"While perhaps some of the philosophies behind Stuxnet could be used in a new piece of malware," Schouwenberg said, "it is very unlikely to see any code re-used for an attack on a different type of network/hardware."
It would also be very difficult to design another piece of malware that could go undetected for so long.
Stuxnet was released into the wild in 2009, but because it was designed to use USB sticks as its infection "vector" (Natanz's SCADA systems were not connected to the Internet), it took several months before the worm hit enough targets to make itself noticed.
Then there's the cost factor. Stuxnet was likely a multi-million-dollar project. It would have required not only people very familiar with industrial processes to develop it, but expensive industrial hardware to test it on — not the kind of things the average lone hacker is likely to have lying around.
(The New York Times alleges that a full-scale mock-up of Natanz was built for testing purposes in the Israeli desert.)
Schouwenberg said the simplest way for systems administrators to protect against a future Stuxnet-like attack would be to have their machines run only approved, or "white-listed,” software.
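Whitelisting of this kind can be as simple as checking each program's cryptographic hash against a set of pre-approved hashes before it is allowed to run. A minimal sketch in Python (the function names and the allowlist here are hypothetical, for illustration only — real deployments use operating-system enforcement tools rather than a script):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large binaries don't have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_whitelisted(path, allowlist):
    """Allow a program only if its hash appears in the approved set."""
    return sha256_of(path) in allowlist
```

Under this policy, a worm dropped onto a machine from a USB stick would not match any approved hash and would simply never be permitted to execute, regardless of which vulnerabilities it carried.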
Stuxnet did prompt a rethinking of security, agreed John Burnham, vice president of corporate marketing at Massachusetts' Q1 Labs. For example, Microsoft has since disabled the "autorun" feature that allowed the worm to quietly jump from USB drives to computers, and back again.
Such attacks have to be hardware specific, which makes it harder to "mutate" a virus or malware — but that doesn't mean it can't happen, Burnham said.
"Why not take Stuxnet and adapt it to Emerson?" Burnham said, referring to another large maker of SCADA industrial-control systems.
A positive outcome from the Stuxnet infection, Burnham said, was that the victims of cyberattacks realized the value in going public.
By doing so, they alerted others to their problems, and allowed systems administrators to patch vulnerabilities more quickly, blunting the effectiveness of future Stuxnet-like attacks.
The danger from Stuxnet-like software would not lie in the traditional cyberwar scenario of bringing down electrical grids or poisoning water supplies. Rather, it would be in the disruptions such an attack would cause, which would take enormous amounts of time and money to fix.
For example, an attack on an electrical grid could shut down regional Internet service for a short time, which could cost a company such as Amazon many millions of dollars.
The real problem, Burnham added, is not the attacks that have already happened, or the vulnerabilities that are already known.
"What about the things we haven't seen?" he said.