
Preventing the next Heartbleed and making FOSS more secure

David Wheeler is a long-time leader in advising and working with the U.S. government on issues related to open source software. His personal webpage is a frequently cited source on open standards, open source software, and computer security. David is leading a new project, the CII Best Practices Badging project, which is part of the Linux Foundation's Core Infrastructure Initiative (CII) for strengthening the security of open source software. In this interview he talks about what it means for both government and other users.


Let's start with some basics. What is the badging project and how did it come about?

In 2014, the Heartbleed vulnerability was found in the OpenSSL cryptographic library. In response, the Linux Foundation created the Core Infrastructure Initiative (CII) to fund and support open source software (OSS) projects that are critical elements of the global information infrastructure. The CII has identified and funded specific important projects, but it cannot fund all OSS projects. So the CII is also funding approaches that improve the security of OSS more generally.

The latest CII project, which focuses on improving security in general, is the best practices badge project. I'm the technical lead of this project. We think that OSS projects that follow best practices are more likely to be healthy and to produce better software, including better security. But that requires that people know what those best practices are, and whether or not a given project is following them.

To solve this, we first examined a lot of literature and OSS projects to identify a set of widely accepted OSS best practices. We then developed a website where OSS projects can report whether or not they apply those best practices. In some cases we can examine the website of a project and automatically fill in some information. We use that information (automatic and not) to determine whether the project is adequately following best practices. If the project is adequately following best practices, the project gets a "passing" badge.
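To make this concrete, here is a minimal sketch of how someone might check a project's badge status programmatically. It assumes the badge site serves per-project JSON at /projects/<id>.json with a field such as badge_level; the base URL, path pattern, and field name are assumptions for illustration, not details taken from this interview.

```python
# Minimal sketch: query the badge website for a project's status.
# ASSUMPTIONS (not from the interview): per-project JSON lives at
# /projects/<id>.json and contains a "badge_level" field; check the
# real site's documentation for the actual API.
import json
import urllib.request

BADGE_SITE = "https://bestpractices.coreinfrastructure.org"  # assumed base URL

def badge_status(project_id: int) -> str:
    """Fetch a project's record and return its reported badge level."""
    url = f"{BADGE_SITE}/projects/{project_id}.json"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return data.get("badge_level", "unknown")

if __name__ == "__main__":
    # The specific project id here is illustrative.
    print(badge_status(1))
```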

I should note here that the badging project is itself an OSS project. You can see its project site. We'd love to have even more participation, so please join us! To answer the obvious question, yes, it does earn its own badge. Other OSS projects that have earned a badge include the Linux kernel, curl, Node.js, GitLab, OpenBlox, OpenSSL, and Zephyr. In contrast, OpenSSL before Heartbleed fails many of the criteria, which suggests that the criteria are capturing important factors about projects.

If you participate in an OSS project, I encourage you to get a badge. If you're thinking about using an OSS project, I encourage you to see if that project has a badge. You can see more information at the CII Best Practices Badge Program.

Can you tell me more about the badge criteria?

Sure. Of course, no set of practices can guarantee that software will never have defects or vulnerabilities. But some practices encourage good development, which in turn increases the likelihood of good quality, and that's what we focused on.

To create the criteria, we reviewed many documents about what FLOSS projects should do; the single most influential source was probably Karl Fogel's book Producing Open Source Software. We also looked at a number of existing successful projects, and we received very helpful critiques of the draft criteria from a large number of people, including Karl Fogel, Greg Kroah-Hartman (the Linux kernel), Rich Salz (OpenSSL), and Daniel Stenberg (curl).

Once the initial criteria were identified, they were grouped into the following categories: basics, change control, reporting, quality, security, and analysis. We ended up with 66 criteria. Each criterion has an identifier and text. For example, the criterion floss_license requires that "the software MUST be released as FLOSS," and the criterion know_common_errors requires that "at least one of the primary developers MUST know of common kinds of errors that lead to vulnerabilities in this kind of software, as well as at least one method to counter or mitigate each of them." The criteria are available online, and an LWN.net article discusses them in more detail. It only takes about an hour for a typical OSS project to fill in the form, in part because we automatically fill in some information. More help in automating this would be appreciated.
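As a rough illustration of that identifier-plus-text structure, the toy Python check below models the two criteria quoted above together with a simplified passing rule (every criterion answered "Met"). The answer vocabulary and the all-or-nothing logic are simplifications for illustration, not the badge application's actual implementation.

```python
# Toy model of badge criteria: each criterion is an identifier plus
# requirement text, and a project reports an answer per criterion.
# The answer values and the passing rule are simplified illustrations.
CRITERIA = {
    "floss_license": "The software MUST be released as FLOSS.",
    "know_common_errors": (
        "At least one of the primary developers MUST know of common kinds "
        "of errors that lead to vulnerabilities in this kind of software, "
        "as well as at least one method to counter or mitigate each of them."
    ),
}

def passing(answers: dict) -> bool:
    """Toy rule: a project passes only if every criterion is answered 'Met'."""
    return all(answers.get(cid) == "Met" for cid in CRITERIA)

print(passing({"floss_license": "Met", "know_common_errors": "Met"}))  # True
print(passing({"floss_license": "Met"}))                               # False
```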

The criteria will change slowly, probably annually, as the badging project gets more feedback and the set of best practices in use changes. We also intend to add higher badge levels beyond the current "passing" level, tentatively named the "gold" and "platinum" levels. However, the project team decided to create the criteria in stages; we're more likely to create good criteria once we have experience with the current set. A list of some proposed higher-level criteria is already posted; if you have thoughts, please post an issue or join the badging project mailing list.

In digging into this area, what surprised you the most in your findings?

Perhaps most surprising is that many OSS projects do not describe how to report security vulnerabilities. Many do, of course. Many projects, like Mozilla Firefox, describe how to report vulnerabilities to a dedicated email address and provide a PGP key for sending encrypted email. Some projects, like Cygwin, have a clear policy that all defect reports must be publicly reported (in their case via a mailing list), and they expressly forbid reporting via private email. I'm not a fan of "everything public" policies (because they can give attackers a head start), but when everyone knows the rules it's easier to handle vulnerabilities. However, many projects don't tell people how to report vulnerabilities at all. That leads to a lot of questions. Should a security researcher report a vulnerability using a public issue tracker or mailing list, or do something else? If the project wants to receive private reports using encryption, how should the information be encrypted? What keys should be used? Vulnerability reporting and handling can be slowed down if a project hasn't thought about it ahead of time.

There are several reasons to be surprised about this. First, it's surprising that developers don't anticipate vulnerabilities in their code and prepare to address them; the news is filled with vulnerability reports! Second, the problem is surprisingly easy to solve ahead of time: just explain on the project website how to report vulnerabilities. It only takes one to three sentences! Of course, writing that down requires project members to think about how they will handle vulnerabilities, and that's valuable. It's much better to decide how to handle vulnerabilities before they are reported.
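For example, a project could post something like this hypothetical statement (the address and key URL are placeholders, not drawn from any real project): "Please report security vulnerabilities privately to security@example.org, encrypting your report with the PGP key published at https://example.org/security-key.asc. We will acknowledge reports within a week." That is all it takes to tell researchers where to send reports and how to encrypt them.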

What didn't surprise you, but might surprise others?

Some things that might surprise you are:

1. There are still projects (mostly older ones) that don't provide a public version-controlled repository (e.g., using Git), making it difficult for others to track changes or collaborate.

2. There are still projects (mostly older ones) that don't have a (useful) automated build or automated test suite. This makes it much harder to make improvements, and much easier for defects to slip through undetected. It also makes it hard to upgrade dependencies when a vulnerability is found in them, because there's no quick, automated way to confirm that the software still works after the upgrade (see the sketch below).
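To illustrate the point about test suites, here is a minimal sketch using Python's built-in unittest module. The parse_version function is a hypothetical component invented for this example; the value is that the same tests can be re-run automatically after every change or dependency upgrade.

```python
# Minimal automated test suite using Python's standard unittest module.
# parse_version() is a hypothetical example component; the point is that
# re-running these tests after any change (or dependency upgrade) catches
# regressions that would otherwise slip through undetected.
import unittest

def parse_version(text: str) -> tuple:
    """Parse a dotted version string such as '1.2.3' into a tuple of ints."""
    return tuple(int(part) for part in text.strip().split("."))

class ParseVersionTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_surrounding_whitespace(self):
        self.assertEqual(parse_version(" 2.0 \n"), (2, 0))

if __name__ == "__main__":
    unittest.main()  # e.g., run with: python test_version.py
```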
