How to Keep Your Kubernetes Deployments and Containers Secure with Amazon EKS
Nov 7, 2023
Amazon EKS is a managed Kubernetes service that Amazon offers to its cloud platform users. Kubernetes is a popular tool for orchestrating containers, but keeping it available and scalable can require a lot of operational work. Amazon EKS abstracts this work away so that engineering teams can focus on their applications rather than on the architecture required to deploy them efficiently at scale. Amazon EKS also enables organizations to integrate other powerful AWS solutions into their workloads.
When using Amazon EKS, one area that many teams overlook or don’t fully understand is how to enforce robust security measures. Although Amazon EKS is a managed service, users are still responsible for securing everything they run through it. Why? Because of AWS’ Shared Responsibility Model. Once teams have a firm understanding of this paradigm, they can implement EKS security best practices with confidence.
Understanding AWS’ Shared Responsibility Model
The Shared Responsibility Model aims to clarify what AWS is responsible for and what platform users are responsible for concerning security. Having such a model is important because there are so many areas where security gaps and vulnerabilities may exist. As cloud architectures grow and become increasingly complex, it’s helpful to have a framework like the Shared Responsibility Model that clarifies who keeps what secure.
The specific separation of responsibility differs depending on how customers are using the platform. Concerning EKS deployments, there are three service model options:
- The infrastructure model
- The container model
- The abstracted service model
The more abstraction customers leverage, the less security responsibility they have (although they always have some!). An easy way to think about the Shared Responsibility Model is that AWS is on the hook for the security of the cloud, while customers are responsible for security in the cloud. This is true no matter which services organizations use on AWS, including Amazon EKS.
Security Through an EKS Lens
Security issues can surface at every layer of an EKS deployment:
- Hosts – e.g., vulnerable or outdated Linux packages
- Containers – e.g., overly permissive access
- Dependencies – e.g., an unnecessarily large attack surface
- Source code – e.g., sensitive data leaking into logs
- Configuration – e.g., access that isn’t separated between workloads (see the RBAC sketch after this list)
- User data – e.g., data leaks
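To make the configuration layer concrete, namespace-scoped RBAC is one common way to keep access separated so that a workload can only reach what it actually needs. The manifest below is a minimal sketch rather than an excerpt from any real deployment; the namespace, role, and service account names are placeholders.

```yaml
# Minimal, namespace-scoped RBAC sketch; the names used here (app-team,
# app-reader, app-sa) are placeholders, not values from this article.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-reader
  namespace: app-team
rules:
  # Grant read-only access to Pods and ConfigMaps in this namespace only.
  - apiGroups: [""]
    resources: ["pods", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-binding
  namespace: app-team
subjects:
  # Bind the role to a single service account rather than a broad group.
  - kind: ServiceAccount
    name: app-sa
    namespace: app-team
roleRef:
  kind: Role
  name: app-reader
  apiGroup: rbac.authorization.k8s.io
```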
A strong security defense addresses the biggest pitfalls in each layer. Fortunately, Amazon EKS accounts for many of these vulnerabilities. The key decision cloud engineers have to make is how much they rely on AWS within their Amazon EKS deployments.
Organizations can use Amazon EKS with self-managed nodes, managed node groups, or through Fargate, AWS’ serverless compute engine for containers. These three options parallel the AWS Shared Responsibility Model. In all cases, AWS is responsible for the security of the Kubernetes control plane: the container scheduler, API server, controller manager, and etcd. AWS and Amazon EKS users split security for everything that runs on top of these components.
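For illustration only, the eksctl ClusterConfig sketch below shows how a single cluster could combine a managed node group with a Fargate profile. The cluster name, region, instance type, capacity, and namespace selector are assumptions chosen for the example, not recommendations.

```yaml
# Hypothetical eksctl ClusterConfig combining a managed node group with a
# Fargate profile; names, region, and sizes are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1

# Managed node group: AWS manages the OS, kubelet, CRI, and AMI lifecycle.
managedNodeGroups:
  - name: managed-ng
    instanceType: m5.large
    desiredCapacity: 2

# Fargate profile: pods in the selected namespace run on AWS-managed,
# per-pod compute with no worker nodes to patch or scale.
fargateProfiles:
  - name: fp-serverless
    selectors:
      - namespace: serverless-apps
```

A file like this would typically be applied with `eksctl create cluster -f cluster.yaml`, after which pods in the selected namespace land on Fargate while everything else runs on the managed node group.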
Those who use self-managed nodes are responsible for everything from customer data all the way down to worker node scaling, the operating system, and Amazon Machine Image (AMI) security. With managed node groups, AWS takes over the OS, kubelet, CRI, and AMI configuration work. Users remain responsible for EKS configuration, network policies, and container images, just as they would be with self-managed nodes.
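Network policies are a good example of what stays on the customer’s side regardless of the node option. As an illustration, the manifest below is a common default-deny ingress baseline; the namespace name is a placeholder, and enforcement requires a CNI or policy engine that implements NetworkPolicy (such as the Amazon VPC CNI with network policy support enabled, or Calico).

```yaml
# Default-deny ingress baseline for one namespace; "app-team" is a placeholder.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: app-team
spec:
  # An empty podSelector matches every pod in the namespace.
  podSelector: {}
  policyTypes:
    - Ingress
  # No ingress rules are listed, so all inbound traffic is denied until
  # more specific allow policies are added alongside this one.
```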
With Fargate, users hand off worker node scaling to AWS as well, in addition to the OS, kubelet, CRI, and AMI configuration layer. In general, AWS recommends using Fargate with Amazon EKS for several reasons:
- AWS does the patching for the OS, container runtimes, and more
- AWS isolates each pod in its own compute environment, which is not the default with shared worker nodes
- Users don’t have runtime access through SSH or interactive Docker
- Containers don’t run in privileged mode, which reduces security risk (the sketch after this list shows the equivalent guardrail on node-based compute)
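Fargate enforces that last point automatically. On node-based compute, the equivalent guardrail is set per container through a security context. The pod spec below is a minimal sketch; the pod name and image are placeholders.

```yaml
# Sketch of a pod that keeps its container unprivileged; the pod name and
# image are placeholders, not values from this article.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        privileged: false                   # never grant host-level privileges
        allowPrivilegeEscalation: false
        runAsNonRoot: true                  # refuse to start if the image runs as root
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities by default
```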
All of these Fargate benefits reduce security exposure and the likelihood of security issues affecting containerized application performance. For a more in-depth look at EKS security best practices, visit this page.
Secure Your Amazon EKS Deployment with ClearScale and AWS
ClearScale is an AWS Premier Tier Services Partner with 11 AWS competencies and more than 10 years of experience exclusively on the AWS platform. We work with organizations across industries to deliver secure cloud architectures and applications, including those involving Kubernetes deployments through Amazon EKS.
For example, we worked with a company in the digital commerce space to modernize its AWS cloud environment, which hadn’t been updated in a decade. After a comprehensive assessment process, we discovered an opportunity to migrate certain workloads to Amazon EKS and implement better security, search, and auto-scaling processes. Overall, our client was able to cut cloud expenditures by 25% while bolstering container orchestration and security.
We also partnered with a popular online cashback company to develop an entirely new cloud architecture that provided more elasticity and a better customer experience overall. We incorporated Amazon EKS into the new design and helped the client implement network restrictions, container authentication to AWS resources, IAM access controls, and useful monitoring tools. By moving several hundred virtual machines to AWS, we were able to reduce costs by 60% while boosting security.
In a third engagement, we helped an online marketplace company automate its server infrastructure provisioning for Amazon EKS. This automation made it much easier for the company to scale with high demand and ensure reliable disaster recovery.
If you’re considering deploying Kubernetes on the AWS cloud, we can help you implement Amazon EKS and address all potential security risks related to your architecture.
Call us at 1-800-591-0442
Send us an email at sales@clearscale.com
Fill out a Contact Form
Read our Customer Case Studies