Earlier this year at KubeCon in Copenhagen, the message from the community was resoundingly clear: “this year, it’s about security.” If Kubernetes was to move into the enterprise, there were real security challenges that needed to be addressed. Six months later, at this week’s KubeCon in Seattle, we’re happy to report that the community has largely answered that call. Kubernetes as a whole made huge security strides this year, and Google Cloud made giant strides along with it. Let’s take a look at what changed this year for Kubernetes security.
Kubernetes attacks in the wild
Where developers go, hackers follow. This year, Kubernetes graduated from the CNCF, and it also earned another badge of honor: weathering its first real security attacks. Earlier this year, several unsecured Kubernetes dashboards made the news for leaking cloud credentials. At the time, Lacework estimated that of over 20,000 public dashboards, 300 were open without requiring any access credentials. (Note that Google Kubernetes Engine no longer deploys this dashboard by default.) Elsewhere, attackers added cryptocurrency-mining binaries to images on Docker Hub; those images were downloaded an estimated five million times and deployed to production clusters.
The majority of attacks against containers, however, remain “drive-by” attacks―where an attacker is only interested in finding unpatched vulnerabilities to exploit. This means that the best thing you can do to protect your containers is to patch: your base image, your packages, your application code―everything. We expect attackers to start targeting containers more deliberately, but since containers make it easier to patch your environment, hopefully they’ll have less success.
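Because container images are immutable, “patching” means rebuilding and redeploying rather than updating in place. A minimal sketch of that workflow might look like the following; the image name, registry, and deployment name are all placeholders, and this assumes your Dockerfile starts from a maintained base image so that a fresh pull brings in upstream security fixes:

```shell
# Rebuild, pulling the latest (patched) base image rather than a cached copy.
docker build --pull -t my-registry/my-app:20181210 .

# Push the rebuilt image to your registry.
docker push my-registry/my-app:20181210

# Roll the new image out to the cluster; Kubernetes replaces pods gradually.
kubectl set image deployment/my-app my-app=my-registry/my-app:20181210
```

Automating this rebuild on a schedule (or on base-image updates) keeps the window of exposure to drive-by attacks short.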
Luckily, we also saw the community respond to these threats by donating multiple security-related projects to the CNCF, including SPIFFE, OPA, and Project Harbor.
Developing container isolation, together
Isolation was a hot topic for the container community this year, even though there still haven’t been any reports of container escapes in the wild, where an attacker gains control of a container and uses it to gain control of other containers on the same host. The Kata Containers project kicked things off in December 2017, and other sandboxing technologies quickly followed suit in 2018, including gVisor and Nabla containers. While different in implementation, the goal of each of these technologies is to create a second layer of isolation for containerized workloads and bring defense-in-depth principles to containers, without compromising performance.
Container isolation is frequently misunderstood (after all, containers don’t contain), and that lack of isolation has been a primary argument against adopting them. Unlike virtual machines, containers don’t provide a strong isolation boundary on par with a hypervisor. That makes some users hesitant to run multi-tenant environments―deploying containers for two different workloads on the same VM―because they worry that the workload in one container could affect the other. To address this, Kubernetes 1.12 added RuntimeClass, which lets you use new sandboxing technologies to isolate individual pods. RuntimeClass gives you the ability to select which runtime to use for each pod, so you can choose a hardened runtime like gVisor or Kata Containers depending on how much you trust the workload. With this tooling, what was once the primary argument against containers is becoming one of their greatest strengths.
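As a rough sketch of how this fits together: you define a RuntimeClass object that maps a name to a runtime handler configured on your nodes, then reference it from a pod. The handler name `gvisor` below is an assumption―it must match a handler you have actually set up in your container runtime (for example, containerd configured with gVisor’s `runsc`)―and note that in Kubernetes 1.12 RuntimeClass was still an alpha API, so the exact API version and field names have evolved since:

```yaml
# Maps the name "gvisor" to a CRI runtime handler configured on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: gvisor   # must match a handler defined in your CRI runtime config
---
# A pod that opts into the sandboxed runtime for an extra isolation layer.
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-workload
spec:
  runtimeClassName: gvisor   # omit this field to use the default runtime
  containers:
  - name: app
    image: nginx
```

Pods that don’t set `runtimeClassName` keep using the default runtime, so you pay the sandboxing overhead only for the workloads you don’t fully trust.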
Protecting the software supply chain
At Google Cloud, we focused our efforts on securing the software supply chain―protecting your container from the base image, to code, to an application image, to what you deploy in production. Recently, we released two new products in this space: Container Registry vulnerability scanning, which scans your images for known vulnerabilities; and Binary Authorization, which lets you enforce your policy requirements at deployment time. Both of these products are currently in beta.
Since a container is meant to be immutable, you’re constantly redeploying, and constantly pushing things down your supply chain. Binary Authorization gives you a single enforcement point where you can dictate what’s running in your environment. In addition to the GCP-hosted product, we also published Kritis, an open-source reference implementation, to help ensure that your containers are scanned and patched for known vulnerabilities before you let them into your environment.
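To make the enforcement point concrete, here is a hedged sketch of a Binary Authorization policy. The project ID `my-project` and the attestor name `built-by-ci` are placeholders for illustration; the idea is that by default, an image can only be deployed if it carries a signed attestation (say, from your CI pipeline after a vulnerability scan passed), with a narrow whitelist for known-good system images:

```yaml
# Allow specific trusted image paths without attestation.
admissionWhitelistPatterns:
- namePattern: gcr.io/my-project/trusted-base/*

# Everything else must be attested before it can run.
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/my-project/attestors/built-by-ci
```

A policy like this turns “only scanned, signed images reach production” from a convention into a rule the cluster enforces at deploy time.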
Hardening GKE and its network
We keep GKE up to date with Kubernetes open-source releases, but we also introduce new features and new defaults to help you better protect your clusters. We made huge headway in network security recently, namely with the general availability of private clusters and master authorized networks. Together, these help you limit access to your cluster from attackers who are scanning IP addresses for vulnerabilities. Now, you can restrict access to your cluster’s master to a set of whitelisted IP addresses, and further ensure that your cluster’s nodes have only private IP addresses. And since GKE now works with Shared Virtual Private Cloud, your network team can manage this environment directly. To learn more about GKE networking and network security, see the GKE network overview.
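As a rough sketch, both features can be enabled when creating a cluster. The cluster name, zone, and CIDR ranges below are placeholders; `203.0.113.0/24` stands in for your office or VPN range, and the exact set of required flags may differ depending on your network setup:

```shell
# Create a cluster whose nodes have only private IPs, and whose master
# accepts connections only from a whitelisted external CIDR.
gcloud container clusters create private-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24
```

With this configuration, an attacker scanning the public internet never sees your nodes at all, and reaches the master’s API endpoint only from the ranges you’ve explicitly allowed.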
Then, in the small-but-mighty category, we turned node auto-upgrade on by default in the GCP Console. Unpatched environments are an easy target for attackers, and it only takes one missed security notice or delayed patch to be suddenly vulnerable. Node auto-upgrade delivers security patches automatically to keep your nodes up to date. Note that on GKE, Google manages and patches the control plane. While you probably didn’t notice it, our team has been very active patching GCP and GKE for Linux and Kubernetes vulnerabilities this year, most notably last week!
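If you have an existing cluster that predates the new default, you can turn auto-upgrade on per node pool. The cluster, node pool, and zone names below are placeholders:

```shell
# Enable automatic node upgrades on an existing node pool.
gcloud container node-pools update default-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --enable-autoupgrade
```

From then on, GKE upgrades the pool’s nodes to patched versions for you, so a missed security bulletin no longer leaves your nodes exposed.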
In addition to new network security features, we are always striving to improve GKE’s default security settings, so you can implement security best practices without having to be a security expert. We’ve consolidated our hardening advice into a single guide that’s easy to follow, and noted when we’ve changed defaults. Note that this is an easy link to share with auditors.
There’s so much more we want to do and we’re going to keep on keeping on, so that 2019 can be all about security too. If you’re at KubeCon this year, check out some of our container security talks:

- How Symlinks Pwned Kubernetes (And How We Fixed It), Tues Dec 11, 10:50-11:25
- Recent Advancements in Container Isolation, Tues Dec 11, 1:45-2:20
- This Year, It’s About Security, Tues Dec 11, 4:30-5:05
- So You Want to Run Vault in Kubernetes?, Wed Dec 12, 11:40-12:15
- Navigating Workload Identity in Kubernetes, Wed Dec 12, 4:30-5:05
- Shopify’s $25k Bug Report, and the Cluster Takeover That Didn’t Happen, Thurs Dec 13, 4:30-5:05
Hope to see you there!