In an increasingly interconnected environment, information is exposed to a growing number and wider variety of risks. Threats such as malicious code, computer hacking, and denial-of-service attacks have become more common. Implementing, maintaining, and updating information security in an organization has therefore become more of a challenge than ever.
The number of cyberattacks is increasing alarmingly, with ransomware leading the way among modern cybersecurity incidents. Cybercrime damages were projected to cost the world $6 trillion annually by 2021. For many organizations, information is their most important asset, so protecting it is crucial.
Also, with the increased popularity of microservices and containers, many organizations are moving more and more toward containerized environments due to the agility, cost savings, and time savings that containers offer.
In this paper, we will explore some of the security threats for containers and how organizations are mitigating them.
However, like all other software, containers and containerized applications can fall prey to security vulnerabilities of various kinds, including bugs, inadequate authentication and authorization, and misconfiguration. The first of these challenges stems from an obvious fact: breaking your application up into containerized services (and perhaps microservices) requires some way for those services to communicate. And even though they are all potentially running on the same Kubernetes cluster, you still need to worry about controlling access between them. After all, you might be sharing that Kubernetes cluster with other applications, and you can't leave your containers open to those other apps.
Controlling access to a container requires authenticating its callers, then determining what requests those other containers are authorized to make. It's typical today to solve this problem by using a service mesh like Istio.
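As a sketch of this pattern, an Istio AuthorizationPolicy can restrict which workloads may call a service. The namespace, labels, and service-account names below are hypothetical:

```yaml
# Allow only the "orders" service account to call workloads labeled app: payments.
# Namespace and account names are illustrative, not prescriptive.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-orders
  namespace: prod
spec:
  selector:
    matchLabels:
      app: payments
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/orders"]
```

Because the mesh issues mutual-TLS identities to each workload, the `principals` field authenticates the caller cryptographically rather than by network location.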
Just like VMs, containers can still be compromised through various attacks, or left vulnerable through misconfigurations or unpatched components that can lead to unauthorized access to your workloads and your compute resources, and even the potential to recreate your application (and its data) somewhere else.
Another obvious source of security issues in a containerized environment is problems lurking within application images themselves. Flawed or malicious software isn't the only threat; poorly configured images can also be a source of vulnerabilities. For example, an image might launch an extraneous daemon or service that allows unwanted access from the network, or it might be configured to run with more user privileges than are necessary. Secrets stored within images, such as authentication keys or certificates, are another danger to watch out for.
Pulling images only from trusted sources, such as private container registries, helps, but a poorly configured registry can itself be a security issue. Access to the registry should require encrypted and authenticated connections, preferably using credentials that are federated with existing network security controls. Any effort to secure container images can be rendered meaningless if the registry can be easily compromised. The registry should also undergo frequent maintenance to ensure that it doesn't contain stale images with lingering vulnerabilities.
But in the last couple of years, a great deal of effort has been devoted to developing software that enhances the security of containers, and best practices for securing containerized infrastructure are emerging. The National Institute of Standards and Technology (NIST) has published "NIST Special Publication 800-190: Application Container Security Guide," a set of guidelines that can serve as a useful starting point and a baseline for security audits, and it is well worth consulting.
How to mitigate security risks in containers
One way to mitigate risk in containers is to lock them down. In particular, the management layer of multi-container orchestration platforms should be hardened with two-factor authentication and encryption of data at rest, and orchestrators should be configured on separate networks so that low-sensitivity workloads are isolated from high-sensitivity workloads.
In addition, workloads should be distributed such that each host runs containers of only a given security level, with end-to-end encryption of all network traffic between cluster nodes and mutually authenticated network connections between cluster members.
One of the more serious concerns arises when the container runtimes that launch and manage containers—software such as containerd, CRI-O, and rkt—themselves contain vulnerabilities. These flaws can lead to "container escape" scenarios where an attacker could potentially gain access to other containers or the host operating system itself, so admins should make installing runtime security patches a high priority.
The host OS represents the most critical target for attacks; if compromised, it can expose all of the containers running on it. For this reason, run a pared-down, container-specific OS that limits the installed components to the bare minimum of software required to create and manage containers. Fewer components means fewer potential vulnerabilities that can be exploited.
Google provides a number of well-known built-in security capabilities that mitigate these risks for containers:
Infrastructure Security: Container infrastructure security is about ensuring that your developers have the tools they need to securely build containerized services.
Identity and authorization
On Google Kubernetes Engine, use Cloud IAM to manage access to your projects, and role-based access control (RBAC) to manage access to your clusters and namespaces.
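As a minimal sketch of RBAC, the manifest below grants read-only access to pods in one namespace; the namespace and group names are hypothetical:

```yaml
# Role: read-only access to pods in the "staging" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: attach that role to a (hypothetical) group federated via Cloud IAM.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: Group
  name: dev-team@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Cloud IAM governs who can reach the project and cluster; RBAC then scopes what an authenticated identity may do inside each namespace.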
On Google Kubernetes Engine, Cloud Audit Logs records API audit logs automatically for you.
Networking
You can create a network policy to manage pod-to-pod communications in your cluster. Use private clusters for private IPs, and include Google Kubernetes Engine resources in a shared VPC.
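A network policy of this kind might look like the following sketch, which denies all ingress to one set of pods except from another; the labels are hypothetical:

```yaml
# Allow pods labeled app: frontend to reach pods labeled app: backend;
# all other ingress to the backend pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```

Once any NetworkPolicy selects a pod, traffic not explicitly allowed is dropped, which is what makes this an effective default-deny posture.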
Google Kubernetes Engine has earned many compliance certifications, including ISO 27001, ISO 27017, ISO 27018, HIPAA, and PCI-DSS.
Google Kubernetes Engine uses Container-Optimized OS (COS) by default, an OS purpose-built and optimized for running containers. COS is maintained by Google in open source.
Automatically upgraded components
On GKE, masters are automatically patched to the latest Kubernetes version, and you can use node auto-upgrade to keep your security up to date by automatically applying the latest patches for your nodes.
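Enabling node auto-upgrade on an existing node pool can be sketched with gcloud; the cluster, pool, and zone names below are hypothetical:

```shell
# Enable auto-upgrade on a (hypothetical) existing node pool,
# so GKE applies node patches automatically.
gcloud container node-pools update default-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --enable-autoupgrade
```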
Customer-managed encryption keys
Users in regulated industries may need to be in control of the keys used to encrypt data stored in GKE. With customer-managed encryption keys, one can pick a key from Cloud KMS to protect your GKE persistent disk.
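One way to apply such a key, assuming the Compute Engine persistent disk CSI driver, is through a StorageClass; the project, region, key ring, and key names below are hypothetical:

```yaml
# StorageClass that encrypts newly provisioned persistent disks
# with a customer-managed Cloud KMS key (path is hypothetical).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  disk-encryption-kms-key: projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```

Any PersistentVolumeClaim that references this StorageClass then gets a disk protected by the customer-managed key rather than Google's default encryption keys.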
Application layer secrets encryption
By default, Kubernetes stores secrets in plaintext in etcd. With application-layer secrets encryption, GKE additionally envelope-encrypts these secrets using a key you control in Cloud KMS.
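Enabling this at cluster creation can be sketched as follows; the cluster, project, key ring, and key names are hypothetical:

```shell
# Create a cluster whose Kubernetes secrets are envelope-encrypted
# with a Cloud KMS key (all resource names are hypothetical).
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --database-encryption-key projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```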
Workload Identity
Your containerized application probably needs to connect to other services. Workload Identity lets a Kubernetes service account act as a Google service account, so applications can authenticate to Google APIs without exported credentials, following the principle of least privilege.
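Binding the two service accounts can be sketched in two steps; the project, namespace, and account names below are hypothetical:

```shell
# Allow the Kubernetes service account "app-ksa" in namespace "prod"
# to impersonate the Google service account "app-gsa" (names hypothetical).
gcloud iam service-accounts add-iam-policy-binding \
  app-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[prod/app-ksa]"

# Annotate the Kubernetes service account so pods using it
# authenticate as the Google service account.
kubectl annotate serviceaccount app-ksa --namespace prod \
  iam.gke.io/gcp-service-account=app-gsa@my-project.iam.gserviceaccount.com
```

Pods that run as `app-ksa` then obtain short-lived Google credentials automatically, with no service-account key files stored in the cluster.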
Managed SSL certs
In GKE, HTTPS load balancers need to be associated with an SSL certificate. You can obtain, manage, and renew these certificates yourself, or have Google automatically obtain, manage, and renew those certificates for you.
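The Google-managed option can be sketched with GKE's ManagedCertificate resource; the certificate name and domain below are hypothetical:

```yaml
# Google-managed certificate for a (hypothetical) domain. Reference it
# from an Ingress via the networking.gke.io/managed-certificates annotation.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: example-cert
spec:
  domains:
  - app.example.com
```

Once the domain's DNS points at the load balancer, Google provisions and renews the certificate without further manual steps.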
Latest Trends & Players in Container Security:
Security will continue to be a key issue because of shared operating systems. Significant vulnerabilities have impacted Kubernetes, causing IT operations teams to reconsider their deployments. This is another reason why Google will continue to attract more users with its secure and scalable offerings: Google delivers ease of use, reliability, and lower risk around containers. With their market presence and global reach, infrastructure providers with the scale of Google will supply the resources needed for containers and orchestration to truly reach the next level of market performance.
More specialized container security software has also been developed. For example, Twistlock offers software that profiles a container's expected behavior and "whitelists" processes, networking activities, and even certain storage practices, so that any malicious or unexpected behavior can be flagged. Polyverse takes advantage of the fact that containers can be started in a fraction of a second to relaunch containerized applications in a known good state every few seconds, minimizing the time a hacker has to exploit an application running in a container.
Santosh Harinath is the GCP lead for Cloud Practice. He has diverse IT experience spanning 22 years. He has worked with global IT giants like Novell, IBM, and Telstra, and currently works with Wipro. Santosh has been involved in large DC sizing, migration, and optimization deals. He has rich cloud experience in sizing and setting up colocation, private, and public clouds. He holds a BE in Electronics and a Master's in Management from IIMB. Santosh is also a certified professional in GCP, AWS, Azure, VCE, SoftLayer, CNE, CCNA, etc.