by Maksim Yankovskiy

How To: Best Practices for Securing Your DevOps Environment


Containerization has given developers the ability to quickly create and deploy applications without compromising security. This virtualization alternative addresses the unique needs of the modern enterprise: fast-paced feature development, rapid deployment, and modernization of application architecture. 

And yet, while the benefits of containerization have helped to exponentially increase its adoption, the age-old challenge of security remains. Containerized applications present numerous new attack vectors and require protection technologies far more advanced, performant, and transparent than legacy systems and applications. How are enterprise organizations supposed to maintain the security of application data stored in container environments without compromising developer speed and collaboration? 

We joined forces with Liran Tal, Senior Developer Advocate at Snyk, the leader in developer-first security, to help you navigate the challenges of maintaining data security in your DevOps environments. Hopefully after reading this blog, you’ll walk away with newfound knowledge on two important aspects of security: protecting the application data accessed by containers and protecting the software stack within the container.

A comprehensive security system includes a collection of products and technologies that protect enterprise environments from the infrastructure layer all the way up to the application layer. We will focus on two important aspects of protection: protecting application data that is accessed by containers and protecting the software stack within the container. Working together, these two technologies provide a comprehensive last line of defense. 

Where Does the Responsibility Fall? 

Protecting container data may seem like a simple undertaking, one that is tempting to delegate to the infrastructure: use encrypted storage in the cloud, or self-encrypting drives in on-premises and hybrid deployments. But there are two major downsides to this approach. 

  • First, infrastructure lacks the level of awareness of containerized environments and therefore may not provide adequately granular protection. 
  • Second, trusting infrastructure providers to ensure the security of sensitive data is a major concern. This is especially true for highly regulated industries such as healthcare, finance, and government. Infrastructure providers increasingly treat application and data security as a “shared responsibility,” with the majority of said responsibilities resting on the shoulders of the data owner.

It’s important to understand that when it comes to protecting container environments, relying on infrastructure providers and legacy security vendors for “good enough” security (i.e., data encryption at the infrastructure level) just isn’t good enough. Security should never be compromised for the sake of simplicity. 

Don’t Trade Security for Simplicity 

Container environments are fluid by nature: the same container application or service might be scheduled to run on different worker nodes at different times. Worker nodes will share storage volumes, and encrypting at the infrastructure layer may result in the same encryption key being used to encrypt an entire storage volume that holds data for multiple containers and applications. You do not want encryption at this coarse a granularity: one compromised container will expose the data of every container on that volume, which is especially devastating in multi-tenant environments. 

Modern data-at-rest encryption solutions designed for containerized environments provide substantial benefits over their infrastructure-level and legacy alternatives. They offer an uncompromising level of security without sacrificing performance. Encryption is applied at a granular level: each container storage volume is encrypted with its own unique encryption key, which is only available while the volume is in use by its container. This protection is completely transparent to the container and the application running within it, which means the application container footprint will not increase (no additional software is added to the container), and container applications will not have to include special function calls or processes to take advantage of encrypted storage. Developers and administrators won’t need to change their processes to work with encrypted systems. 

Encrypted volumes are provisioned automatically as containers request storage; they are not shared with other containers, and they are securely removed when no longer required. This last step is especially important: it includes removal of the unique encryption key used to protect that volume’s data. 
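The provisioning flow described above maps naturally onto Kubernetes dynamic volume provisioning. The sketch below is illustrative only: the provisioner name is a placeholder for whatever encrypting storage driver your vendor supplies, and the reclaim policy is what ties volume deletion to key removal.

```yaml
# Hypothetical StorageClass backed by an encrypting storage driver;
# "example.com/encrypting-csi" is a placeholder, not a real provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-per-volume
provisioner: example.com/encrypting-csi
reclaimPolicy: Delete          # volume (and, with an encrypting driver, its unique key) removed with the claim
---
# Each claim gets its own volume, encrypted with its own key,
# never shared with other containers.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: encrypted-per-volume
  resources:
    requests:
      storage: 10Gi
```

Because the encryption happens in the storage driver, the application mounts `app-data` like any ordinary volume and needs no code changes.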

Zettaset XCrypt software encrypts at the container storage layer and transparently integrates into existing deployment workflows with Docker and Kubernetes. All critical encryption services, including Key Management Server, Certificate Authority Server, and Encrypted Storage Manager run natively in containers.

Coupled with the fact that deploying and using this solution requires no specialized expertise and virtually no knowledge of encryption, this lets developers and enterprise users automatically protect container data, without sacrificing performance, while focusing on feature development and increasing business value.

Turning security on by default for Docker containers and cloud infrastructure

Many times, the decision to adopt container technology and enable container production by development teams introduces implicit trust in conventions and defaults that aren’t inherently secure.

For example, when developers maintain a Dockerfile, how much thought goes into the base image used to build it? Do users most often write FROM node, which pulls in the latest version, or do they pin a stable, minimal base image by digest, such as FROM node@sha256:301b58626afec806d3caa7063e94352aa685b4451ef505e0f520aecf94e71af9 ?
Unfortunately, the latter is not a well-adopted practice, and blind use of a generic base image is far more common.
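The difference is easiest to see side by side in a Dockerfile. The digest below is the one from the example above, shown purely for illustration; in practice you would obtain the digest of an image you have vetted (for instance via docker images --digests).

```dockerfile
# Floating reference: resolves to whatever "latest" happens to be at build time.
# FROM node

# Pinned by digest: every build uses the exact image you vetted,
# regardless of what tags move to later.
FROM node@sha256:301b58626afec806d3caa7063e94352aa685b4451ef505e0f520aecf94e71af9
```

Digest pinning makes builds reproducible and prevents a moved or compromised tag from silently changing what your images contain.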

Another frowned-upon practice often encountered is the failure to apply the principle of least privilege to Docker containers at build time. If a Docker image is built without an explicit USER directive specifying a non-privileged user, the process in the container will run as the superuser (root) by default, which needlessly expands the attack surface.
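A minimal sketch of the USER directive in practice; the base image, user name, and server.js entrypoint are illustrative, not taken from any particular project:

```dockerfile
FROM node:16-slim

# Create an unprivileged user and group for the application.
RUN groupadd -r app && useradd -r -g app app

WORKDIR /home/app
COPY --chown=app:app . .

# Drop root: everything from here on, including the running
# process, executes as the unprivileged "app" user.
USER app
CMD ["node", "server.js"]
```

If the base image already ships an unprivileged account (official node images include a node user, for example), a simple USER node achieves the same goal.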

To help developers stay secure, Snyk has compiled a list of 10 Docker image security practices that engineers should follow. These practices should be seriously considered as solid fundamental security practices for your teams. 

But container security goes beyond securing the directives in a Dockerfile. In fact, container security directly impacts the running container: when we use a base image to build a container image, we pull in open source software dependencies that may carry known security vulnerabilities, and those vulnerabilities potentially impact the container.

Snyk’s State of Open Source Security 2020 report looked at the security posture of 10 of the most popular Docker container images as presented on Docker Hub, the biggest registry of open source container applications. The report reveals that nine of the 10 most popular container images include at least 47 publicly known security vulnerabilities by default, simply by pulling those images’ latest versions when building a Docker image:
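You can surface these findings for your own base images with the Snyk CLI. The commands below assume the CLI is installed and authenticated; node:latest stands in for whatever image you are evaluating.

```shell
# Scan a base image for known vulnerabilities before adopting it.
snyk container test node:latest

# Pass the Dockerfile as well to get base-image upgrade advice
# alongside the vulnerability list.
snyk container test node:latest --file=Dockerfile
```

Running this in CI turns the report’s finding into an actionable gate: a new image (or a newly disclosed vulnerability) fails the build instead of shipping.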

Vulnerabilities in official container images

As we transition from the data layer to other infrastructure components, aside from the container image itself, we face other security challenges, such as cloud configuration security practices that are often missed. The State of Open Source Security 2020 further reports on Kubernetes configuration security issues, highlighting that over 32% of survey participants said they either didn’t know of, or didn’t have, any security tooling or practices in place for the configuration of their Kubernetes clusters.
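Kubernetes configuration hardening largely comes down to declaring what a workload is allowed to do. A minimal sketch, using standard Pod securityContext fields (the pod name and image reference are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                 # illustrative name
spec:
  securityContext:
    runAsNonRoot: true               # refuse to start containers that would run as root
  containers:
  - name: app
    image: registry.example.com/app@sha256:...   # placeholder; pin your image by digest
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true   # writable paths must be explicit volume mounts
```

Settings like these are exactly what configuration-scanning tooling checks for; codifying them in manifests means the cluster enforces the policy instead of relying on developers remembering it.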

Container data-at-rest protection solutions are a critical part of a comprehensive protection suite for any containerized environment. But these environments also require trust and assurance that the software stack running within containers is authentic, non-malicious, and up to date with respect to security patches and bug fixes. This is especially true considering that many containerized applications make extensive use of open source software and libraries.