SEATTLE — Amateurs hack systems. Professionals hack people.
That dictum, famously enunciated by Bruce Schneier, chief security technology officer of British Telecom, underlies a basic premise of computer security, according to analysts and information technology leaders:
No matter how much time and effort institutions put into safeguarding systems, 100 percent security will never be possible.
Defense systems designed to stop hackers, spies, phishers and frauds, after all, are designed and used by human beings, so they are hostage to timeless human weaknesses: inattention, incompetence and complacency.
It is a reality that President-elect Barack Obama is wrestling with as he tries to find a way to keep his beloved BlackBerry 8830 World Edition. In an interview last week on NBC’s TODAY, he acknowledged that the device was a concern not only to White House lawyers, but also to the Secret Service, which has trained officers assigned to identify cybersecurity threats.
Security experts have said Obama’s BlackBerry communications are vulnerable to being hacked and his movements could possibly be tracked because the device’s signals travel over cell phone circuits. In 2007, the French government banned the use of the devices in ministries and the presidential palace over such concerns.
“I’m still in a scuffle around that,” Obama said.
‘People ... become too trusting’
“Society ultimately expects computer systems to be trustworthy — that is, that they do what is required and expected of them despite environmental disruption, human user and operator errors, and attacks by hostile parties, and that they not do other things,” the Committee on Improving Cybersecurity Research in the United States wrote in a report for the National Academy of Sciences.
But computer systems are unable to gauge the trustworthiness of their human users, leaving them at the mercy not only of bad guys but also of millions of Americans who don’t recognize how limited the systems are.
“Most of the people and organizations that increasingly depend on cyberspace are unaware of how vulnerable and defenseless they are, and all too many users and operators are poorly trained and equipped,” said the report, titled “Toward a Safer and More Secure Cyberspace.” “Many learn only after suffering attacks.”
When you put the bad guys together with careless or clueless computer users, the combination will inevitably overwhelm even the most sophisticated defenses, said Shawn Henry, assistant director of the FBI’s Cyber Division.
“We see people lose money regularly, most often because they’re just not aware of the potential scam and they become too trusting,” Henry said.
Legitimate risks misunderstood
Andrew Plato, president of Anitian Enterprise Security of Beaverton, Ore., which manages corporate computer security systems, agreed that people were “the number one risk to your organization.”
The problem is that people misunderstand the scope of threats of all kinds. While Americans are far more likely to die of heart disease or cancer, it is “exotic threats” like terrorist attacks and lightning strikes that transfix people.
The same applies to computer attacks, Plato said. While stunts like the hacker attack on Twitter accounts of CNN journalists and other celebrities last week create a sense of insecurity in users, they are isolated incidents unlikely ever to affect the vast majority of computer users.
Instead, “mundane, often well-understood, persistent, slow-acting threats are the most likely to lead to problems,” Plato said at a cybersecurity conference Thursday in Seattle.
For systems operators and companies, “your greatest source for understanding risks in your environment” is not an assessment of the potential for a malicious outside attack, he said. It is “the people that work there.”
“Laziness, bureaucracy, bad attitudes and ignorance create far, far, far more problems in an organization than what kind of firewall you pick,” Plato warned.
For example, when veterans’ personal information was put at risk of identity theft in 2006, federal agencies were ordered to implement five procedures to protect the data. The Government Accountability Office reported that employees at nearly two dozen agencies — including the Small Business Administration and the National Science Foundation — still had not made the changes more than two years later.
Automation, automation, automation
In the view of Plato and Muhammed El-Harmeel, a security analyst with Raya Integration, a technology security company in Giza, Egypt, the goal should be to automate security as much as possible, reducing human involvement as close to zero as can be managed.
That begins with clamping down on any sensitive information, keeping it out of the hands of anyone who doesn’t absolutely have to have it — a policy known as “least privilege.”
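In software, "least privilege" is often enforced with a deny-by-default access check: each role is granted only the permissions it strictly needs, and anything not explicitly granted is refused. The sketch below illustrates the idea; the role and permission names are hypothetical, chosen only for illustration.

```python
# Minimal least-privilege sketch: roles map to the smallest set of
# permissions they require, and access is denied by default.
ROLE_PERMISSIONS = {
    "clerk": {"read_public"},
    "analyst": {"read_public", "read_sensitive"},
    "admin": {"read_public", "read_sensitive", "write_sensitive"},
}

def can_access(role: str, permission: str) -> bool:
    """Deny by default: unknown roles receive no permissions at all."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is the default: an unrecognized role gets an empty permission set rather than an error-prone fallback, so a misconfigured account fails closed instead of open.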
In a white paper for the SANS Institute, the research organization that runs the Internet Storm Center early warning system, El-Harmeel said that in those cases where humans have to handle sensitive information or monitor key systems, policies and systems should be as simple and explicit as possible, leaving little room for human error.
“Develop policies that you plan to enforce,” El-Harmeel wrote — clear, simple and concrete enough that they don’t shift over time.
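A policy that is concrete enough to enforce can usually be encoded as an automated check rather than left to manual review. As a hedged illustration (the 90-day figure and account field are hypothetical, not from the report), a password-rotation rule might be checked like this:

```python
# Illustrative sketch: an explicit, concrete policy ("passwords must be
# rotated within 90 days") expressed as an automated check instead of a
# manual audit. The 90-day limit is a hypothetical example value.
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)

def violates_policy(last_password_change: date, today: date) -> bool:
    """Return True if the password is older than the policy allows."""
    return today - last_password_change > MAX_PASSWORD_AGE
```

Because the rule is a single explicit constant, it cannot quietly drift over time the way an informally enforced guideline can.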
That’s because people can’t be trusted, Plato said: no matter how well-intentioned they may be, “everybody does dumb things from time to time.”
“The more manual processes you have, the more likely it is your process will fail,” he said. “... There is a limit to what technology can do to overcome human factors in security.”