
DevOps and the Art of Secure Application Deployment


Secure application deployment principles must extend from the infrastructure layer all the way through the application and include how the application is actually deployed, according to Tim Mackey, Senior Technical Evangelist at Black Duck Software. In his upcoming talk, “Secure Application Development in the Age of Continuous Delivery” at LinuxCon + ContainerCon Europe, Mackey will discuss how DevOps principles are key to reducing the scope of compromise and examine why it’s important to focus efforts on what attackers view as vulnerable.

[Photo: Tim Mackey, Senior Technical Evangelist, Black Duck Software. Used with permission.]

Linux.com: You say that the prevalence of microservices makes it imperative to focus on vulnerabilities. Are microservices inherently more vulnerable or less? Can you explain?

Tim Mackey: With every new development pattern, we need to ensure operations and security teams are deeply involved in deployment plans so their vulnerability response plans keep pace. Microservices development doesn’t change that requirement, even with its focus on creating tasks that perform a single operation. When developing a microservice, we’re already thinking about the minimum code required to perform the task. We’re also thinking about ways to reduce the attack surface. This makes vulnerability planning a logical component of the design process and, by extension, something which should be communicated throughout the component lifecycle.

If we make an assumption that our services are deployed using continuous delivery, we’re also accepting more frequent deployments for our services. This gives us an opportunity to resolve security issues as they arise, potentially without outage windows or downtime. In such an environment, we really want active monitoring for vulnerability disclosures, not only for what we’ve deployed, but also for what’s in our library and currently under development.
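As a rough illustration of that kind of monitoring (not from the talk: the component inventory and CPE names below are hypothetical, and the lookup uses NIST’s freely available NVD REST API), a periodic check might look like this:

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Hypothetical inventory: everything deployed *and* under development.
INVENTORY = {
    "glibc 2.22": "cpe:2.3:a:gnu:glibc:2.22:*:*:*:*:*:*:*",
    "openssl 1.0.1": "cpe:2.3:a:openssl:openssl:1.0.1:*:*:*:*:*:*:*",
}

def known_cves(cpe_name):
    """Return the CVE IDs the NVD currently associates with a CPE name."""
    resp = requests.get(NVD_API, params={"cpeName": cpe_name}, timeout=30)
    resp.raise_for_status()
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

if __name__ == "__main__":
    for component, cpe in INVENTORY.items():
        cves = known_cves(cpe)
        if cves:
            print(f"{component}: {len(cves)} known CVEs, e.g. {cves[:3]}")
```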

One other point to note: If the lifespan of a given microservice is very short, we’ve raised the bar for attackers. While that’s a really good thing, we don’t want to become complacent about vulnerability planning. After all, a short service lifespan can also mask attempts at malicious activity, and addressing that should be part of a microservice-centric vulnerability response plan.

Linux.com: Can you give us some examples of how vulnerabilities get into production deployments?

Tim: We see from numerous sources that open source development models have become the de facto standard in 2016. The freedom developers have to incorporate ideas from other projects, either directly or via a fork, has increased the pace of innovation. It is precisely this freedom, however, which provides an avenue for upstream security issues to impact downstream projects.

For practical purposes, we can assume that most code, open source or otherwise, has some critical bug with exploit potential. It’s not uncommon to find that such bugs have been present in code for significant periods of time, and may have survived multiple reviews and even tests from a variety of tools. Eventually a security researcher identifies the significance of the bug as a security issue, and a vulnerability report is disclosed.

Once disclosed, the big question for users becomes “is this issue present in our environment?” If we were talking about packaged commercial products, it would be up to the vendor to provide both a fix and guidance for mitigation. Open source projects also provide guidance and fixes, but only for direct usage of their components. With the source for a given product often coming from upstream efforts, tracking the provenance of the source and associated security issues is a critical requirement for any vulnerability response plan.
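To make the provenance idea concrete, here is a minimal sketch of the bookkeeping involved. The data model and advisory format are purely illustrative, not Black Duck’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    version: str
    upstream: str      # project the code originally came from
    fork_commit: str   # upstream commit our fork diverged at

# Hypothetical manifest for one deployed service.
MANIFEST = [
    Component("json-parser", "1.4-fork",
              upstream="github.com/upstream/json-parser",
              fork_commit="a1b2c3d"),
]

def affected(component, advisory):
    """An upstream advisory applies to our fork if we diverged at a
    commit the advisory lists as vulnerable (illustrative logic)."""
    return (advisory["project"] == component.upstream
            and component.fork_commit in advisory["vulnerable_commits"])

advisory = {"project": "github.com/upstream/json-parser",
            "vulnerable_commits": {"a1b2c3d", "e4f5a6b"}}

for c in MANIFEST:
    print(c.name, "affected" if affected(c, advisory) else "clear")
```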

Linux.com: How can those be mitigated? What are some tools to determine the vulnerabilities?

Tim: Mitigation starts with understanding the scope of the problem, and ends with implementation of some form of “fix.” Unfortunately, the available information on a vulnerability is often written for developers, while the people who need to perform the mitigation are on the operations side.

If we consider the glibc vulnerability from February 2016, CVE-2015-7547, the bug was first reported in July 2015, and over the course of roughly seven months, the development team determined the nature of the bug, then how to fix it, and subsequently disclosed it as a vulnerability. This is the normal process for most vulnerabilities disclosed against projects under active development. In the case of CVE-2015-7547, the disclosure occurred first on the project list and two days later in the National Vulnerability Database (NVD) maintained by NIST. The contents of the NVD are freely available and form the basis of many vulnerability scanning solutions, including the Black Duck Hub.
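Because the NVD is freely available, anyone can retrieve the record for CVE-2015-7547 directly. The sketch below uses NIST’s NVD REST API 2.0; the field names follow that schema and may change over time:

```python
import requests

resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"cveId": "CVE-2015-7547"},
    timeout=30,
)
resp.raise_for_status()
cve = resp.json()["vulnerabilities"][0]["cve"]

print(cve["id"], "published", cve["published"])
# English description text, as recorded by NIST.
print(next((d["value"] for d in cve.get("descriptions", [])
            if d["lang"] == "en"), "no description found"))
```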

What differentiates basic vulnerability scanning solutions from the leaders are two key attributes:

Independent security research activities. These are primarily focused on identifying activity within projects that signals an impending disclosure. In the case of CVE-2015-7547, such research would have spotted the impending disclosure from the development list activity.

Breadth of the underlying knowledge base against which potentially vulnerable code is validated. As I mentioned earlier, vulnerable code is often incorporated from multiple sources, while vulnerabilities are disclosed against specific product versions. Being able to clearly identify the vulnerable aspects of a project based on commits allows for easier identification of latent vulnerabilities in forked code, as the sketch below illustrates.
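A minimal sketch of that commit-based idea, assuming a local git checkout of the fork and a known list of upstream fix commits (the hash here is a placeholder):

```python
import subprocess

def contains_commit(repo_path, commit):
    """True if `commit` is an ancestor of HEAD in the given repository."""
    result = subprocess.run(
        ["git", "-C", repo_path, "merge-base", "--is-ancestor", commit, "HEAD"],
        capture_output=True,
    )
    return result.returncode == 0

# Hypothetical: the upstream commits that fixed a disclosed CVE.
FIX_COMMITS = ["0123456789abcdef0123456789abcdef01234567"]  # placeholder

def fork_is_patched(repo_path):
    """A fork still carries the latent vulnerability unless every
    upstream fix commit has been merged into it."""
    return all(contains_commit(repo_path, c) for c in FIX_COMMITS)
```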

Linux.com: What level of certainty can be achieved regarding the vulnerability status of a container?

Tim: Like any security process, container vulnerability status is best determined using a variety of tools, each with a clear focus, and each gating the delivery of a container image into a production registry. This includes static and dynamic analysis tools, but a comprehensive vulnerability plan also requires active monitoring of dependent upstream and forked components for their vulnerability status. No single tool will ever guarantee that a container is free of known vulnerabilities, let alone free of vulnerabilities entirely. In other words, even if you follow every available best practice and create a container image with no known issues, that doesn’t mean that a day later vulnerabilities won’t be disclosed in a dependent component.
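One such gate might look like the following sketch, which blocks promotion of an image when a scanner report contains critical findings. The JSON shape is an assumption, modeled loosely on common scanners such as Trivy, so adapt it to your tool’s actual output:

```python
import json
import sys

def has_critical(report_path):
    """True if the scanner report lists any CRITICAL-severity finding."""
    with open(report_path) as f:
        report = json.load(f)
    for result in report.get("Results", []):          # assumed structure
        for vuln in result.get("Vulnerabilities", []):
            if vuln.get("Severity") == "CRITICAL":
                return True
    return False

if __name__ == "__main__":
    if has_critical(sys.argv[1]):
        sys.exit("critical vulnerabilities found; image not promoted")
```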

Linux.com: It sounds like DevOps principles come into play in achieving greater security. Can you explain further?

Tim: DevOps principles are absolutely a key component in reducing the scope of compromise from any vulnerability. The process starts with a clear understanding of which upstream components are included in any container image available for deployment. This builds a level of trust for a container image and a requirement that only trusted images can be deployed. From there, a set of deployment requirements can be created which govern the expected usage for the container. This includes simple things like network configuration, but also…
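A minimal sketch of that gating logic, with hypothetical digests, ports, and allowlist:

```python
from dataclasses import dataclass, field

# Populated by the build pipeline once an image passes every gate.
TRUSTED_DIGESTS = {"sha256:placeholder-digest-from-trusted-build"}

@dataclass
class DeploymentRequest:
    image_digest: str
    exposed_ports: set = field(default_factory=set)

def admit(request, allowed_ports=frozenset({443, 8443})):
    """Only trusted images, and only the expected network configuration."""
    return (request.image_digest in TRUSTED_DIGESTS
            and request.exposed_ports <= allowed_ports)

print(admit(DeploymentRequest("sha256:unknown", {80})))  # False: untrusted
```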
