Rapporteur: Metta Spencer
Unless you live in a cave, you probably depend on a refrigerator, online bank account, airline traffic control system, oil pipeline, water treatment plant, car, subway, electric power plant, WiFi router, and maybe your pacemaker(1) and insulin pump.(2) Nowadays all of those things can be controlled by computers that can be hacked.(3) When that happens, whose fault is it, and what can you do about it?
If you ask a court who's to blame, the judge will probably pin it all on a criminal hacker, who probably cannot be found. Yes, the hacker is the main culprit, but the programmers enabled him by writing buggy software that their company's executives hurriedly sold without testing it properly. The negligent vendors of such inferior products should be held accountable.
If you buy a TV set that explodes (and that has actually happened!), the manufacturer is liable for damages, but if you buy software, you probably don't actually own it; you've just paid for a license to use it. (Remember that "terms of service" agreement you signed without reading it? That's when you signed away your claims against the manufacturer, who now cannot be held liable for the software's shoddy performance or its vulnerability to hacking. But you didn't have much choice. You could take it or leave it, so you signed, as we all do.)
The relevant laws are unlikely to be changed until Internet insecurity becomes lethal. So far, the harm that hackers inflict is mostly inconvenience or financial loss, and the financial losses are far greater than the public knows. Banks and corporations avoid publicity about such events.
Yet the potential also exists for massive attacks on infrastructure that cost lives. Airliners are now acknowledged to be vulnerable to hacking,(4) though none has yet been crashed that way. Russian teams have hacked more than 20 US power stations without causing damage; apparently they were only testing their capability.(5) We don't know how many Russian power stations American teams have penetrated, but the NSA is the acknowledged "gold standard" organization for such activity.
Now imagine that thousands of airliners, banks, electric grids, gas pipelines, and electric cars are seized all at the same time. Finally, as the cyber security expert Benoit Morel writes, "there is a realization that through the process of unlimited reliance on computer and ICT technology, the United States is increasingly exposed to potential devastating cyberattacks on its critical infrastructure, a kind of cyber Pearl Harbor."(6) And not only the United States but all other nations too. Fortunately, there are now people looking for ways to prevent the disastrous consequences of faulty software.
With the swift emergence of the Internet of Things (IoT), everything around us is turning into computers that can do things they were not originally invented for. Already your refrigerator, your printer,(7) and your camera have turned into computers with astonishing new capacities. (One fellow even programmed a Canon printer, a Honeywell thermostat, and a Kodak digital camera to play the computer game Doom.)(8) By next year, 2020, about 75 billion devices are predicted to be connected to the Internet of Things,(9) every one of which can be hacked.
The cyber world has developed with astounding rapidity, partly by allowing for fast-and-loose standards of quality. Facebook’s old motto reflected this: “Move fast and break things.” Instead of perfecting a product through rigorous monitoring and in-house testing, software and even hardware producers rush their products to market full of coding errors, knowing that somewhere malevolent hackers are watching to detect and exploit them.
Software vendors expect to correct the problems afterward when the end users begin to complain. You probably receive notifications almost every day to “update” an app or two. These updates are actually “patches”—repairs to coding errors that have been discovered in the software.
Many of us are slow about applying these patches, but that is dangerous. When such a notice is sent out, it also tells criminals where in the software the new bug has been found. They will then search for an easy victim: anyone, like you, who has not installed the update. A survey commissioned by Skype in 2012 showed that 40 percent of adults do not update their software when prompted, and about a quarter skip updates because they do not understand the benefits. But the benefits are great: about 90 percent of successful exploits are against unpatched systems.(10)
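To see concretely why an unapplied patch is dangerous, here is a minimal, purely illustrative sketch in Python. The flaw names, version numbers, and the advisory table are all invented for this example; the point is only that once a fix is published, anyone can tell which devices are still running a version older than the fix and are therefore open to a known attack.

```python
# Illustrative only: invented flaw names and version numbers.
# Once a patch is published, the flaw it fixes becomes public knowledge,
# so any device still running an older version is an easy, known target.

# Hypothetical advisory table: flaw name -> first version containing the fix.
PUBLISHED_FIXES = {
    "login-bypass": (2, 4, 1),
    "buffer-overflow": (2, 6, 0),
}

def exposed_flaws(installed_version):
    """List the publicly known flaws this (imaginary) device is still open to."""
    return [
        name
        for name, fixed_in in PUBLISHED_FIXES.items()
        if installed_version < fixed_in  # older than the fix = still vulnerable
    ]

# A device whose owner keeps dismissing the update prompts:
print(exposed_flaws((2, 3, 0)))   # ['login-bypass', 'buffer-overflow']
# The same device after installing the latest update:
print(exposed_flaws((2, 6, 0)))   # []
```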
Even patches won't solve everything, and not every coding error can be spotted and fixed before the software is released. Programmers, being human, will err. But how often they err depends largely on the effectiveness of their monitoring and testing procedures. The industry averages about 15 to 50 errors per 1,000 lines of delivered code, but Microsoft has brought its applications down to an average of about 10 to 20 defects per 1,000 lines during in-house testing, and the most rigorous development methods have achieved rates as low as 3 defects per 1,000 lines during in-house testing and 0.1 defects per 1,000 lines in released products.(11)
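To get a feel for what those defect rates mean at scale, here is a small back-of-the-envelope calculation in Python. The 50-million-line figure is simply a round number chosen for illustration (large operating systems and office suites are roughly in that range); the rates are the ones quoted above.

```python
# Rough arithmetic: expected defects at the rates quoted above,
# scaled to an assumed 50-million-line codebase (a round, illustrative figure).

LINES_OF_CODE = 50_000_000

defects_per_1000_lines = {
    "industry average, high end": 50,
    "industry average, low end": 15,
    "rigorous in-house testing": 3,
    "best released product cited": 0.1,
}

for label, rate in defects_per_1000_lines.items():
    expected = rate * LINES_OF_CODE / 1000
    print(f"{label:30s} ~{expected:>12,.0f} defects")
```

Even at the very best published rate, a codebase of that size would still ship with thousands of defects, which is why rigorous testing reduces, but cannot eliminate, the problem.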
Thus, although some mistakes will inevitably happen, the number can be greatly reduced by rigorous use of quality control measures. As the risks to the public continue to increase, it is time to begin demanding improved quality of software and hardware. Instead of selling buggy software and expecting to fix it later with patches, vendors should be required by law to ensure that any product they sell meets reasonable standards of security, safety, and reliability.
But at present the purchaser is at a severe disadvantage vis-à-vis the vendor, since purchasers who allege defects and security breaches generally get their cases thrown out of court (Jane Chong, "We need strict laws if we want more secure software," The New Republic, Oct. 30, 2013). Bruce Schneier maintains that purchasers will not get secure software until producers have a strong incentive to provide it, and at present they do not. What would create such an incentive? Knowing that the vendor can be sued for supplying a shoddy product that causes harm to the buyer. It will require government action to bring about that legal change.(12)
While some legal liability is being imposed on software vendors in a few places — Australia, for example(13) — there are few prospects for making significant changes in most countries. We should consider some of the main reasons for this lag.
First, much software is free. The courts will not hold software providers liable for harm caused by a product the user did not pay for in some form. However, the lawyer Jane Chong suggests that a different kind of payment occurs when the providers of free software take from their users not money but data that they are able to monetize. If a legal obligation were created to secure this data or restrict its use (and certainly there is public demand for such obligations), users might be able to sue for breach of security under tort theories of negligence or misrepresentation.(14)
Second, Jane Chong notes that courts tend to assume that coding errors are inevitable and that, therefore, it is impossible to hold the vendors liable for any of them. Her own argument is that, while some errors are indeed inevitable, one product can be safer than another, and vendors should be liable if they produce unsafe ones. In fact, “just ten percent of vulnerabilities are responsible for 90 percent of all cybersecurity exposures.”(15) Chong uses the analogy of a car manufacturer. All cars are vulnerable to accidents. There is no such thing as a crash-proof car. But the courts have established that it was possible for General Motors to have produced a car that would minimize the effect of accidents and that their cars should be so designed.
Third, Chong states that the courts tend to hold only hackers, not providers, responsible for security breaches. Her own argument is that negligence is grounds for a civil lawsuit if the defendant failed to carry out a duty and caused harm as a result. The defendant should pay damages "to make Humpty Dumpty whole again." And when it comes to software, a negligent creator is also a source of injury.
But most security breaches do not lead to physical injury; they lead instead to losses of data or money. This fact creates another legal problem for those who would hold the vendors liable. Oddly, tort liability does not apply to purely financial losses, so Chong argues that it would be better to invoke contract law when seeking such damages from software vendors. Nevertheless, for all the reasons listed above, she is not optimistic about the likely outcome of most such court cases.
Bruce Schneier shares Chong’s bleak prognosis. While he shows what legal changes would benefit us, he admits that no such changes can be foreseen.(16)
Actually, not everyone wants greater software security. Obviously, criminal hackers want all opportunities to be left open for them to ply their trade. However, for a plethora of reasons, many states and law enforcement agencies also want to retain means of surveillance or even sabotage through bugs deliberately left in software.
One such bug is the "backdoor": a piece of code that is secretly placed in software before it is released. Anyone with access to that code can spy on its user, seize control of the computer to change or destroy data, or even lock it up and hold it for ransom.
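To make the idea concrete, here is a deliberately toy example in Python. Nothing in it comes from any real product; it simply shows how a few hidden lines can quietly override a program's normal security checks.

```python
# Fictional, simplified example of a backdoor. The hidden key and the user
# database are invented; real backdoors are usually far better disguised.

# A secret known only to whoever planted the backdoor.
_HIDDEN_MASTER_KEY = "let-me-in-7f3a"

def check_login(username, password, user_database):
    """A normal-looking authentication routine with a hidden bypass."""
    # The backdoor: this single comparison grants access to anyone who
    # knows the hidden key, no matter what the user database says.
    if password == _HIDDEN_MASTER_KEY:
        return True
    # The legitimate check that every reviewer expects to see.
    return user_database.get(username) == password

users = {"alice": "correct horse battery staple"}
print(check_login("alice", "correct horse battery staple", users))  # True  (normal login)
print(check_login("anybody", "let-me-in-7f3a", users))              # True  (backdoor)
print(check_login("mallory", "wrong guess", users))                 # False
```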
Presumably no software producers would intentionally leave a backdoor or other bug open for criminals to exploit, but when a security flaw is discovered, they do not always appreciate being told about it. The most important discoveries are made by public-spirited hackers who enjoy presenting their findings at hacker conferences. Often, after announcing a talk that will reveal an important new bug, they must suddenly cancel it because the software manufacturer urgently intervenes to prevent the disclosure.
On the other hand, some corporations offer "bug bounties": rewards to any hacker who will tell them privately about bugs in their software, which they can then patch quietly and quickly.
Software producers sometimes experience enormous pressure from governments and other organizations to build backdoors into their products. As Edward Snowden revealed, the NSA has long spied on Internet communications worldwide, and several other countries, notably Russia, China, Israel, and Iran, have highly sophisticated systems to do likewise. Law enforcement organizations often demand backdoor access for surveilling suspected criminals, especially in encrypted messaging apps such as WhatsApp.
These disputes can become major sources of tension between nations. For example, China does not allow Silicon Valley's social media platforms into its market unless it can monitor the online conversations. And lately there is a new conflict between China and the United States over the sale of fifth-generation (5G) network equipment. The Trump administration has refused to allow the Chinese firm Huawei to sell its products in the US because they presumably contain backdoors that could give the Chinese government access to American secrets.
And of course it is not only secret information that states may want to collect about each other; computers also control physical things, including weapons. Only two cases are publicly known in which computer hacking was used to interfere with another country's military preparations. One was the US effort to sabotage North Korea's nuclear weapons program. Little is known publicly about that, and presumably the effort failed. The other was the use of a computer worm, Stuxnet, to interrupt Iran's enrichment of uranium at its plant in Natanz. The worm was allegedly created by US and Israeli cyber experts, then smuggled into the plant, where it caused the centrifuges to speed up and slow down repeatedly until they were ruined.
In addition to its sophisticated design, the Stuxnet worm exploited four "zero days," just for good measure. A zero day is a computer vulnerability that is unknown to the people who should have eliminated it, and which they find out about, with a shock, on the day it is exploited: the "zero day." There is a lively trade in the buying and selling of zero days on the "dark web," a portion of the Internet that is invisible because it is inaccessible through ordinary browsers. Many illegal activities, such as the sale of drugs and weapons, involve communications on the dark web.
The only way of reducing these threats is by legislating new rules that give software producers an incentive to be scrupulous in designing and testing their products before releasing them. This will require government action. Computer professionals generally agree on how the law needs to be changed, but few politicians know enough to propose better legislation. Thus it is the responsibility of well-informed citizens to tell their parliamentarians what is needed. You are invited to take that task upon yourself.
References for this article can be seen at the Footnotes 3 page on this website (link will open in a new page).