Basics of Securing Kubernetes Services

Categories

Containerization has been swiftly gathering momentum in the IT industry with Gartner predicting that 50% of companies will use some element of container technology by 2020, up from under 20% in 2017.

Kubernetes (also known as K8s) is the leading container orchestration platform. It was initially created by Google, then open-sourced and donated to the Cloud Native Computing Foundation. K8s is used in production environments to manage Docker containers (although it also supports other container runtimes) in a fault-tolerant manner, giving users the ability to automatically scale, distribute and heal containerized workloads.

Kubernetes is Vulnerable to Attack

As Kubernetes has become popular, it has also attracted the attention of attackers interested in compromising its clusters. Ransomware, cryptomining, data theft and DDoS, just as in traditional computing environments, are all potential security risks from outsiders and insiders alike. Container-based virtualized environments are vulnerable to cyberattacks in both the public and private cloud.

The malware attacks early last year on Tesla, Aviva (the multinational insurance company) and Gemalto (the world's largest SIM card maker) were proof of this. Hackers infiltrated hundreds of Kubernetes administration consoles, which had been left open to the Internet without any password protection. The hackers then used the cloud accounts to which they had gained access to run crypto-mining software.

Researchers at the security firm RedLock said that in the Tesla attack, the hackers gained access to a Kubernetes pod whose access credentials were exposed to Tesla's AWS environment, which contained an Amazon S3 (Amazon Simple Storage Service) bucket holding sensitive data, including telemetry. The hackers used sophisticated evasion measures to gain access to the Kubernetes pod (including hiding behind Cloudflare services on a non-standard port) and similarly deceptive means to run the mining software within it, deliberately keeping usage low to evade detection.

Security Flaws

In December, the platform's first major security hole was discovered. The bug, CVE-2018-1002105, also known as the Kubernetes privilege escalation flaw, is a CVSS 9.8 critical security hole. Via a carefully crafted network request, a connection can be established to a backend server through the Kubernetes API server. Once the connection is set up, an attacker can send arbitrary requests over it directly to the backend server, and those requests are authenticated with the Kubernetes API server's own TLS credentials. If users had left Kubernetes in its default configuration, then all users (authenticated and unauthenticated) were able to perform the discovery API calls that allowed this escalation to happen.

Not only could all users perform the hack, but the company said, “there is no simple way to detect whether this vulnerability has been used”. This is because when the unauthorized requests were made via an established connection, they didn’t show up in the Kubernetes API server audit logs or server log. And while they would show up in the kubelet or aggregated API server logs, they were “indistinguishable from correctly authorized and proxied requests via the Kubernetes API server.”

Red Hat summarized it as such, “The privilege escalation flaw makes it possible for any user to gain full administrator privileges on any compute node being run in a Kubernetes pod. This is a big deal. Not only can this actor steal sensitive data or inject malicious code, but they can also bring down production applications and services from within an organization’s firewall.”

The only fix for users was to upgrade the entire platform – from Kubernetes v1.0.x-1.9.x to a patched version.

Challenges of Securing Kubernetes

The hyper-dynamic nature of containers creates various security challenges:

  • A huge growth in East-West traffic – as containers are deployed across hosts or clouds, the amount of east-west, or internal, traffic has increased massively, expanding the volume of traffic that must be monitored for attacks;
  • The wide breadth of features in the Kubernetes platform makes security issues a manifold problem that necessitates many layers of defense – from reconfiguring default settings in Docker container images and upstream Kubernetes to invoking cutting-edge security monitoring tools and utilizing advanced networking concepts, such as service mesh;
  • A greater attack surface – every container might have a different attack surface with a different set of vulnerabilities open to exploitation. There is also the additional attack surface introduced by the container orchestration tools themselves to be considered;
  • Configuration challenges specific to containers, such as SELinux, which is difficult to set up properly for container workloads because Linux security tooling was built with servers in mind; the setup involves careful crafting of the container file, which can be complicated for less experienced DevOps teams;
  • Legacy security models and tools need to be continually updated to keep pace with the rapid cadence of container updates within a CI/CD environment.

Security Best Practices in Kubernetes

Change the Default Settings

At KubeCon and KubeSec, there was a lot of discussion around the fact that upstream Kubernetes versions and Docker container images aren’t secured out of the box. In a presentation by Sarah Young, IT security architect at Versent, an Australian IT consulting firm, she said, “The [Kubernetes] orchestrator defaults are terrible, so please change them”.

The K8s API server, for instance, listens by default on port 8080, a port on which no security checks are performed; encryption and secrets management within etcd (the key-value store that maintains cluster state) are still maturing. With etcd version 3 and Kubernetes version 1.13 (released in December 2018), encryption of data at rest is now offered. However, enterprises have trouble keeping up with upgrades to the continually changing versions of Kubernetes, which brings us to our next best practice.
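As a sketch of what enabling encryption at rest looks like in Kubernetes 1.13, the API server can be started with an --encryption-provider-config flag pointing at a file such as the following (the key name and base64 value here are placeholders; generate your own key material):

```yaml
# encryption-config.yaml – passed to kube-apiserver via
# --encryption-provider-config. Encrypts Secret objects in etcd
# with AES-CBC; the trailing identity provider allows reading
# any values written before encryption was enabled.
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: cGxhY2Vob2xkZXIta2V5LTMyLWJ5dGVzLWxvbmch   # placeholder, not a real key
      - identity: {}
```

After enabling the flag and restarting the API server, existing Secrets can be rewritten in encrypted form by re-saving them (e.g. with a kubectl replace over all Secrets).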

Upgrade to the Latest Version

Every quarterly update includes a new set of security features, not just bug fixes. The best way to take advantage of them is to update to the latest version of Kubernetes with the most recent patches. This way you can avoid falling victim to security flaws. The further behind you fall, the harder it is to implement the most up-to-date upgrades and support, so build in upgrading at least once per quarter into your management strategy. A managed Kubernetes provider can make the process of upgrades simple.

Ensure Proper Configuration

If you are just migrating over to containers, building in a strong security posture from the start is essential.

Threat Stack found that almost three quarters of companies had at least one serious cloud security misconfiguration that could have left entire systems vulnerable. If the Kubernetes API isn’t properly configured, for instance by not having a strong password, attackers can easily hide behind DNS systems and perform activities while eluding detection. Using credentials that expire or rotating them frequently are two easy ways to stop hackers from getting into the Kubernetes administration console. Kubernetes has a Configuration Best Practices page, which highlights and consolidates best configuration practices. It includes:

  • General Configuration Tips
  • “Naked” Pods vs. ReplicaSets, Deployments, and Jobs
  • Services
  • Using Labels
  • Container Images
  • Using kubectl

Protect Cluster Components

Some parts of a cluster are more subject to attack than others; for instance, hackers can identify unprotected instances of Kubernetes' backend database (etcd) fairly easily by searching on Shodan. Gaining access to one component can open up access to the entire cluster.

Cluster network policies can be created that let you control network access into and out of your containerized applications.
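A minimal network policy might look like the following sketch, which denies all ingress to pods in a namespace except traffic from pods carrying a particular label (the namespace and label names here are illustrative):

```yaml
# Deny all ingress to pods in the "payments" namespace except
# traffic from pods labeled app=frontend in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: payments
spec:
  podSelector: {}        # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```

Note that network policies are enforced by the cluster's network plugin, so they only take effect if a plugin that supports them (such as Calico or Cilium) is installed.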

Third-party vendors have also created documentation specifically focused on security for their services, such as HashiCorp's documentation on introducing Vault tokens into a Kubernetes pod, which assists with the challenge of secrets management for infrastructure components.

Make Effective Use of RBAC

Role-based access control (RBAC) is an important element of security in a Kubernetes environment. The principle to abide by is that of least access: only provide access to those parts of your infrastructure that people need to do their jobs. Too much access can lead to problems, even if the user doesn’t intend to create problems.

Setting up RBAC boundaries between resources can help guarantee security for your Kubernetes deployments. Many Kubernetes clusters are by default configured for a token to provide access to the Kubernetes API, which mounts into every container. If the token has cluster admin rights, an attacker who gains access to just one container in the cluster can quickly escalate the privileges to assume control of the entire cluster. Cluster-wide permissions should by and large be avoided in favor of namespace-specific permissions.
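A namespace-scoped grant following the least-access principle can be sketched with a Role and RoleBinding like the following (the namespace, role and service account names are illustrative):

```yaml
# A Role granting read-only access to pods in one namespace,
# bound to a single service account rather than cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: staging
  name: pod-reader
rules:
  - apiGroups: [""]            # "" is the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
  - kind: ServiceAccount
    name: ci-runner
    namespace: staging
roleRef:
  kind: Role                   # a Role, not a ClusterRole: scoped to "staging"
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Pods that do not need to talk to the Kubernetes API at all can additionally set automountServiceAccountToken: false in their spec, so that no API token is mounted into the container in the first place.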

Kubernetes provides a clear set of guidelines in its documentation on how to configure RBAC to dynamically set up and administer policies through the Kubernetes API. The documentation includes sections on:

  • API Overview
  • Default Roles and Role Bindings
  • Privilege Escalation Prevention and Bootstrapping
  • Command-line Utilities
  • Service Account Permissions
  • Upgrading from 1.5
  • Permissive RBAC Permissions

Create Namespaces to Set Security Boundaries

A critical first level of isolation between components involves the creation of separate namespaces. It is easier to apply security controls like Network Policies when different types of workloads are deployed in separate namespaces. You can find out if your team is effectively using namespaces by checking for any non-default namespaces.
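Creating a namespace is a one-line manifest; labels make it easier to target with network policies and quotas later (the names below are illustrative):

```yaml
# A dedicated namespace per team or workload type creates a
# boundary that Network Policies and RBAC rules can attach to.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    team: payments
```

Running kubectl get namespaces and looking for entries beyond the built-in ones (default, kube-system, kube-public) is a quick way to verify that workloads are actually being separated.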

Run Sensitive Workloads on a Dedicated Set of Machines

By separating sensitive workloads, you will limit the potential impact of any attack. By running sensitive workloads on a dedicated set of machines, you reduce the potential for a sensitive application to be accessed via a less-secure application that shares a container runtime or host. This separation can be set up using node pools (on-premises or in the cloud) and Kubernetes namespaces, taints, tolerations, along with further controls.
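One way to sketch this separation, assuming a dedicated node pool has been tainted (for example with kubectl taint nodes <node> workload=sensitive:NoSchedule), is a pod spec that both tolerates the taint and selects the pool; the names and labels here are illustrative:

```yaml
# Only pods that tolerate the "workload=sensitive" taint can land
# on the dedicated nodes; the nodeSelector keeps this pod off
# general-purpose nodes as well.
apiVersion: v1
kind: Pod
metadata:
  name: billing-worker
  namespace: sensitive
spec:
  tolerations:
    - key: "workload"
      operator: "Equal"
      value: "sensitive"
      effect: "NoSchedule"
  nodeSelector:
    pool: sensitive          # assumes nodes in the pool carry this label
  containers:
    - name: app
      image: example.com/billing:1.0
```

Taints keep ordinary workloads off the sensitive nodes, while the nodeSelector (or node affinity) keeps the sensitive workload from drifting onto shared nodes.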

Restrict Cloud Metadata Access

Sensitive metadata, e.g. kubelet admin credentials, can be stolen or misused to escalate cluster privileges. A recent Shopify bug bounty illustrated the way a user could escalate privileges by confusing a microservice into leaking information from the cloud provider's metadata service. Kubernetes has a metadata concealment feature specifically designed to help avoid this exposure, which is worth setting up.

Use a Combination of Third-Party Tools and Technologies

According to early enterprise adopters, it is worth addressing security issues in Kubernetes with a range of third-party tools and technologies. Starbucks Corp., for instance, uses several tools to protect its GKE clusters from exposure to the open web: an ingress controller from NGINX, an internally built tool called the Ingress Orchestrator, and a WAF from Signal Sciences. The combination of these specific tools means that platform engineers at Starbucks don't need to manually provision namespaces every time developers launch a service.

Looking Ahead

Security flaws in Kubernetes, such as the recently discovered one, are to be expected. As Rani Osnat, VP of Product Marketing at Aqua Security explained at KubeCon 2018, “There are always going to be vulnerabilities. The fact that one was found was to be expected. And I expect more will be found going forward. That’s just what should be expected with software.”

The flaw was patched within 24 hours of being discovered, and the fix was disseminated to the Kubernetes community, protecting Kubernetes versions as far back as the 1.10 release from last year.

A large part of container security has to do with keeping up-to-date with current practices and cutting-edge approaches to security. It’s important for enterprises to invest in training and education opportunities for DevOps teams to have time to do so. Travis Jepsson, director of engineering at Nav Inc. sums it up in saying, “The saying in security is, ‘If you have one [tool], you have none.’”

Some security issues in Kubernetes involve risks not even connected to the technology. Vendors who lead the pack in terms of Kubernetes security are frequently startups without the stability of established software makers behind them. All IT security teams must therefore stay up to date with the latest container security issues and be ready to switch vendors if necessary.
