Patching Is Failing as a Security Paradigm

The Weakest Link is Motherboard's third annual theme week dedicated to the future of hacking and cybersecurity. Follow along.

Listen to Motherboard’s new hacking podcast, CYBER, here.

The following is an excerpted chapter from Bruce Schneier’s book, Click Here to Kill Everybody: Security and Survival in a Hyper-connected World.

There are two basic paradigms of security. The first comes from the real world of dangerous technologies: the world of automobiles, planes, pharmaceuticals, architecture and construction, and medical devices. It’s the traditional way we do design, and can be best summed up as “Get it right the first time.” This is the world of rigorous testing, of security certifications, and of licensed engineers. At the extreme, it’s a slow and expensive process: think of all the safety testing Boeing conducts on its new aircraft, or any pharmaceutical company conducts before bringing a new drug to market. It’s also the world of slow and expensive changes, because each change has to go through the same process.

We do this because the costs of getting it wrong are so great. We don’t want buildings collapsing on us, planes falling out of the sky, or thousands of people dying from a pharmaceutical’s side effects or drug interactions. And while we can’t eliminate all those risks completely, we can mitigate them by doing a lot of up-front work.

The alternative security paradigm comes from the fast-moving, freewheeling, highly complex, and heretofore largely benign world of software. Its motto is “Make sure your security is agile” or, in Facebook lingo, “Move fast and break things.” In this model, we try to make sure we can update our systems quickly when security vulnerabilities are discovered. We try to build systems that are survivable, that can recover from attack, that actually mitigate attacks, and that adapt to changing threats. But mostly we build systems that we can quickly and efficiently patch. We can argue how well we achieve these goals, but we accept the problems because the cost of getting it wrong isn’t that great.

In a world where we increasingly rely on internet-connected devices, these two paradigms are colliding. They’re colliding in your cars. They’re colliding in home appliances. They’re colliding in computerized medical devices. They’re colliding in home thermostats, computerized voting machines, and traffic control systems―and in our chemical plants, dams, and power plants. They’re colliding again and again, and the stakes are getting higher because failures can affect life and property.

Patching is something we all do all the time with our software―we usually call it “updating”―and it’s the primary mechanism we have to keep our systems secure. How it works (and doesn’t), and how it will fare in the future, is important to understand in order to fully appreciate the security challenges we face.

There are undiscovered vulnerabilities in every piece of software. They lie dormant for months and years, and new ones are discovered all the time by everyone from companies to governments to independent researchers to cybercriminals. We maintain security through (1) discoverers disclosing a found vulnerability to the software vendor and the public, (2) vendors quickly issuing a security patch to fix the vulnerability, and (3) users installing that patch.

It took us a long time to get here. In the early 1990s, researchers would disclose vulnerabilities to the vendors only. Vendors would respond by basically not doing anything, maybe getting around to fixing the vulnerabilities years later. Researchers then started publicly announcing that they had found a vulnerability, in an effort to get vendors to do something about it―only to have the vendors belittle them, declare their attacks “theoretical” and not worth worrying about, threaten them with legal action, and continue to not fix anything. The only solution that spurred vendors into action was for researchers to publish details about the vulnerability. Today, researchers give software vendors advance warning when they find a vulnerability, but then they publish the details. Publication has become the stick that motivates vendors to quickly release security patches, as well as the means for researchers to learn from each other and get credit for their work; this publication further improves security by giving other researchers both knowledge and incentive. If you hear the term “responsible disclosure,” it refers to this process.

Lots of researchers―from lone hackers to academic researchers to corporate engineers―find and responsibly disclose vulnerabilities. Companies offer bug bounties to hackers who bring vulnerabilities to them instead of publishing those vulnerabilities or using them to commit crimes. Google has an entire team, called Project Zero, devoted to finding vulnerabilities in commonly used software, both public-domain and proprietary. You can argue with the motivations of these researchers―many are in it for the publicity or competitive advantage―but not with the results. Despite the seemingly endless stream of vulnerabilities, any piece of software becomes more secure as they are found and patched.

It’s not happily ever after, though. There are several problems with the find-and-patch system. Let’s look at the situation in terms of the entire ecosystem―researching vulnerabilities, disclosing vulnerabilities to the manufacturer, writing and publishing patches, and installing patches―in reverse chronological order.

Installing patches: I remember those early years when users, especially on corporate networks, were hesitant to install patches. Patches were often poorly tested, and far too often they broke more than they fixed. This was true for everyone who released software: operating system vendors, large software vendors, and so on. Things have changed over the years. The big operating system organizations―Microsoft, Apple, and Linux in particular―have become much better about testing their patches before releasing them. As people have become more comfortable with patches, they have become better about installing them more quickly and more often. At the same time, vendors are now making patches easier to install.

Still, not everyone patches their systems. The industry rule of thumb is that a quarter of us install patches on the day they’re issued, a quarter within the month, a quarter within the year, and a quarter never do. The patch rate is even lower for military, industrial, and healthcare systems because of how specialized the software is. It’s more likely that a patch will break some critical functionality.

People who are using pirated copies of software often can’t get updates. Some people just don’t want to be bothered. Others forget. Some people don’t patch because they’re tired of vendors slipping unwanted features and software into updates. Some IoT systems are just harder to update. How often do you update the software in your router, refrigerator, or microwave? Never is my guess. And no, they don’t update automatically.

Three 2017 examples illustrate the problem. Equifax was hacked because it didn’t install a patch for its Apache Struts web framework that had been available two months previously. The WannaCry malware was a worldwide scourge, but it only affected unpatched Windows systems. The Amnesia IoT botnet made use of a vulnerability in digital video recorders that had been disclosed and fixed a year earlier, but existing machines couldn’t be patched.
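In each of these cases a fix existed before the attack; the failure was in getting it installed. As a rough illustration of what closing that gap looks like in practice, here is a minimal sketch (not from the book) of the kind of version check a software-inventory script might run. The component names and version numbers are entirely hypothetical.

```python
# Minimal sketch: flag installed components whose version is older than the
# earliest version known to contain the security fix. Names and versions
# below are hypothetical examples, not real advisories.

def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '2.3.31' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

# component name -> (installed version, first patched version)
inventory = {
    "example-web-framework": ("2.3.31", "2.3.32"),  # hypothetical: fix released, not installed
    "example-dvr-firmware":  ("1.0.4",  "1.0.4"),   # hypothetical: already up to date
}

for name, (installed, fixed) in inventory.items():
    if parse_version(installed) < parse_version(fixed):
        print(f"UNPATCHED: {name} {installed} (fix available since {fixed})")
    else:
        print(f"ok: {name} {installed}")
```

Even a check this simple assumes someone maintains the list of fixed versions and that the device can actually be updated, which, as the rest of the chapter argues, is exactly where the system breaks down.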

The situation is worse for the computers embedded in IoT devices. In a lot of systems―both low-cost and expensive―users have to manually download and install relevant patches. Often the patching process is tedious and complicated, and beyond the skill of the average user. Some ISPs have the ability to remotely patch things like routers and modems, but this is rare. Even worse, many embedded devices don’t have any way to be patched. Right now, the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

At the low end of the market, the result is hundreds of millions of devices that have been sitting on the Internet, unpatched and insecure, for the last five to ten years. In 2010, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. Things haven’t improved since then.

Hackers are starting to notice. The malware DNSChanger attacks home routers, as well as computers. In Brazil in 2012, 4.5 million DSL routers were compromised for purposes of financial fraud. In 2013, a Linux worm targeted routers, cameras, and other embedded devices. In 2016, the Mirai botnet used vulnerabilities in digital video recorders, webcams, and routers; it exploited such rookie security mistakes as devices having default passwords.

The difficulty of patching also plagues expensive IoT devices that you might expect to be better designed. In 2015, Chrysler recalled 1.4 million vehicles to patch a security vulnerability. The only way to patch them was for Chrysler to mail every car owner a USB drive to plug into a port on the vehicle’s dashboard. In 2017, Abbott Labs told 465,000 pacemaker patients that they had to go to an authorized clinic for a critical security update. At least the patients didn’t have to have their chests opened up.

This is likely to be a temporary problem, at least for more expensive devices. Industries that aren’t used to patching will learn how to do it. Companies selling expensive equipment with embedded computers will learn how to design their systems to be patched automatically. Compare Tesla to Chrysler: Tesla pushes updates and patches to cars automatically, and updates the systems overnight. Kindle does the same thing: owners have no control over the patching process, and usually have no idea that their devices have even been patched.

Writing and publishing patches: Vendors can be slow to release security patches. One 2016 survey found that about 20 percent of all vulnerabilities―and 7 percent of vulnerabilities in the “top 50 applications”―did not have a patch available the same day the vulnerability was disclosed. (To be fair, this is an improvement over previous years. In 2011, a third of all vulnerabilities did not have a patch available on the day of disclosure.) Even worse, only an additional 1 percent were patched within a month after disclosure, indicating that if a vendor doesn’t patch immediately, it’s not likely to get to it anytime soon. Android users, for example, often have to wait months after Google issues a patch before their handset manufacturers make that patch available to users. The result is that about half of all Android phones haven’t been patched in over a year.

Patches also aren’t as reliable as we would like them to be; they still occasionally break the systems they’re supposed to be fixing. In 2014, an iOS patch left some users unable to get a cell signal. In 2017, a flawed patch to Internet-enabled door locks by Lockstate bricked the devices, leaving users unable to lock or unlock their doors. In 2018, in response to the Spectre and Meltdown vulnerabilities in computer CPUs, Microsoft issued a patch to its operating system that bricked some computers. There are more examples.

If we turn to embedded systems and IoT devices, the situation is much more dire. Our computers and smartphones are as secure as they are because there are teams of security engineers dedicated to writing patches. The companies that make these devices can support such big teams because they make a huge amount of money, either directly or indirectly, from their software―and, in part, compete on its security. This isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin and in much smaller quantities, and are often designed by offshore third parties. Engineering teams assemble quickly to design the products, then disband or go build something else. Parts of the code might be old and out-of-date, reused again and again. There might not be any source code available, making it much harder to write patches. The companies involved simply don’t have the budget to make their products secure, and there’s no business case for them to do so.

Even worse, no one has the incentive to patch the software once it’s been shipped. The chip manufacturer is busy shipping the next version of the chip, the device manufacturer is busy upgrading its product to work with this next chip, and the vendor with its name on the box is just a reseller. Maintaining the older chips and products isn’t a priority for anyone.

Even when manufacturers have the incentive, there’s a different problem. If there’s a security vulnerability in Microsoft operating systems, the company has to write a patch for each version it supports. Maintaining lots of different operating systems gets expensive, which is why Microsoft and Apple―and everyone else―support only the few most recent versions. If you’re using an older version of Windows or macOS, you won’t get security patches, because the companies aren’t creating them anymore.

This won’t work with more durable goods. We might buy a new DVR every 5 or 10 years, and a refrigerator every 25 years. We drive a car we buy today for a decade, sell it to someone else who drives it for another decade, and that person sells it to someone who ships it to a Third World country, where it’s resold yet again and driven for yet another decade or two. Go try to boot up a 1978 Commodore PET computer, or try to run that year’s VisiCalc, and see what happens; we simply don’t know how to maintain 40-year-old software.

Consider a car company. It might sell a dozen different types of cars with a dozen different software builds each year. Even assuming that the software gets updated only every two years and the company supports the cars for only two decades, the company needs to maintain the capability to update 20 to 30 different software versions. (For a company like Bosch that supplies automotive parts for many different manufacturers, the number would be more like 200.) The expense and warehouse size for the test vehicles and associated equipment would be enormous.

Alternatively, imagine if car companies announced that they would no longer support vehicles older than five, or ten, years. There would be serious environmental consequences.

We’re already seeing the effects of systems so old that the vendors stopped patching them, or went out of business altogether. Some of the organizations affected by WannaCry were still using Windows XP, an unpatchable 17-year-old operating system that Microsoft stopped supporting in 2014. About 140 million computers worldwide still run that operating system, including most ATMs. A popular shipboard satellite communications system once sold by Inmarsat Group is no longer patched, even though it contains critical security vulnerabilities. This is a big problem for industrial-control systems, because many of them run outdated software and operating systems, and upgrading them is prohibitively expensive because they’re very specialized. These systems can stay in operation for many years and often don’t have big IT budgets associated with them.

Certification exacerbates the problem. Before everything became a computer, dangerous devices like cars, airplanes, and medical devices had to go through various levels of safety certification before they could be sold. A product, once certified, couldn’t be changed without having to be recertified. For an airplane, it can cost upwards of a million dollars and take a year to change one line of code. This made sense in the analog world, where products didn’t change much. But the whole point of patching is to enable products to change, and change quickly.

Disclosing vulnerabilities: Not everyone discloses security vulnerabilities when they find them; some hoard them for offensive purposes. Attackers use them to break into systems, and that’s the first time we learn of them. These are called “zero-day vulnerabilities,” and responsible vendors try to quickly patch them as well. Government agencies like the NSA, US Cyber Command, and their foreign equivalents also keep some vulnerabilities secret for their own present and future use. Every discovered but undisclosed vulnerability―even if it is kept by someone you trust―can be independently discovered and used against you.

Even researchers who want to disclose the vulnerabilities they discover sometimes find a chilly reception from the device manufacturers. Those new industries getting into the computer business―the coffeepot manufacturers and their ilk―don’t have experience with security researchers, responsible disclosure, and patching, and it shows. This lack of security expertise is critical. Software companies write software as their core competency. Refrigerator manufacturers, or refrigerator divisions of larger companies, have a different core competency―presumably, keeping food cold―and writing software is always going to be a sideline.

Just like the computer vendors of the 1990s, IoT manufacturers tout the unbreakability of their systems, deny any problems that are exposed, and threaten legal action against those who expose any problems. The 2017 Abbott Labs patch came a year after the company called the initial report of the security vulnerability―published without details of the attack―“false and misleading.” That might be okay for computer games or word processors, but it is dangerous for cars, medical devices, and airplanes―devices that can kill people if bugs are exploited. But should the researchers have published the details anyway? No one knows what responsible disclosure looks like in this new environment.

Finally, researching vulnerabilities: In order for this ecosystem to work, we need security researchers to find vulnerabilities and improve security, and a law called the Digital Millennium Copyright Act (DMCA) is blocking those efforts. It’s an anti-copying law that includes a prohibition against security research. Technically, the prohibition is against circumventing product features intended to deter unauthorized reproduction of copyrighted works. But the effects are broader than that. Because of the DMCA, it’s against the law to reverse engineer, locate, and publish vulnerabilities in software systems that protect copyright. Since software can be copyrighted, manufacturers have repeatedly used this law to harass and muzzle security researchers who might embarrass them.

One of the first examples of such harassment took place in 2001. The FBI arrested Dmitry Sklyarov at the DefCon hackers conference for giving a presentation describing how to bypass the encryption code in Adobe Acrobat that was designed to prevent people from copying electronic books. Also in 2001, HP used the law to threaten researchers who published security flaws in its Tru64 product. In 2011, Activision used it to shut down the public website of an engineer who had researched the security system in one of its video games. There are many examples like this.

In 2016, the Library of Congress―seriously, that’s who’s in charge of this―added an exemption to the DMCA for security researchers, but it’s a narrow exemption that’s temporary and still leaves a lot of room for harassment.

Other laws are also used to squelch research. In 2008, the Boston MBTA used the Computer Fraud and Abuse Act to block a conference presentation on flaws in its subway fare cards. In 2013, Volkswagen sued security researchers who had found vulnerabilities in its automobile software, preventing them from being disclosed for two years. And in 2016, the Internet security company FireEye obtained a court injunction against publication of the details of FireEye product vulnerabilities that had been discovered by third parties.

The chilling effects are substantial. Lots of security researchers don’t work on finding vulnerabilities, because they might get sued and their results might remain unpublished. If you’re a young academic concerned about tenure, publication, and avoiding lawsuits, it’s just safer not to risk it.

For all of these reasons, the current system of patching is going to be increasingly inadequate as computers become embedded in more and more things. The problem is that we have nothing better to replace it with.

This gets us back to the two paradigms: getting it right the first time, and fixing things quickly when problems arise.

These have parallels in the software development industry. “Waterfall” is the term used for the traditional model for software development: first come the requirements; then the specifications; then the design; then the implementation, testing, and fielding. “Agile” describes the newer model for software development: build a prototype to meet basic customer needs; see how it fails; fix it quickly; update requirements and specifications; repeat again and again. The agile model seems to be a far better way of doing software design and development, and it can incorporate security design requirements, as well as functional design requirements.

You can see the difference in Microsoft Office versus the apps on your smartphone. A new version of Microsoft Office happens once every few years, and it is a major software development effort resulting in many design changes and new features. A new version of an iPhone app might be released every other week, each with minor incremental changes and occasionally a single new feature. Microsoft might use agile development processes internally, but its releases are definitely old-school.

We need to integrate the two paradigms. We don’t have the requisite skill in security engineering to get it right the first time, so we have no choice but to patch quickly. But we also have to figure out how to mitigate the costs of the failures inherent in this paradigm. Because of the inherent complexity of the internet and internet-connected devices, we need both the long-term stability of the waterfall paradigm and the reactive capability of the agile paradigm.

Excerpted from Click Here to Kill Everybody by Bruce Schneier. Copyright 2018 by Bruce Schneier. With permission of the publisher, W. W. Norton & Company, Inc. All rights reserved.

