Containers Enable Consistent Deployment and Execution


Containers are popular with both developers and operators because they offer a simpler way to achieve deployment and execution consistency. They can also improve handoffs between development and operations (DevOps) teams.

Why Containers Matter

When containers began gaining popularity, CIO.com wrote: “Containers are a solution to the problem of how to get software to run reliably when moved from one computing environment to another. This could be from a developer's laptop to a test environment, from a staging environment into production and perhaps from a physical machine in a data center to a virtual machine in a private or public cloud.”




Deployment consistency reduces time to market

Uniquely developing and packaging every application, plus its dependencies, to hand off to operations for deployment can be a time-consuming and costly process for enterprises. Containers package the code and everything it needs to run (the app, along with its libraries, binaries, and configuration files) together, improving consistency and speed in pushing new applications and updates into production.
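
As a minimal sketch of that packaging, the Dockerfile below bundles a hypothetical Python app (app.py and a requirements.txt dependency list are assumptions for illustration) with its libraries and start command in a single image:

    FROM python:3.12-slim
    WORKDIR /app
    # Install the app's libraries alongside the code itself
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Copy the application code and define one entry point that is
    # identical on a laptop, in staging, and in production
    COPY . .
    CMD ["python", "app.py"]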

Execution consistency improves quality

Because code and its dependencies are prepackaged to conform to release and management processes, teams can improve the quality of releases and, in turn, boost customer satisfaction.

Easier developer-to-operator handoff speeds delivery

Without having to customize each delivery, developers and operations (DevOps) staff have more time to focus on doing what they do best: developing business-critical applications and ensuring robust app delivery.



[Survey chart: “What are the reasons for opting to use container technologies? (Such as Docker, rkt, CoreOS)?” Source: Saurabh Gupta, “Containers and Pivotal Cloud Foundry,” 2016]


A Closer Look at Container Technology

Containers have deep roots in the history of the Linux operating system and are built from Linux kernel features. The core underpinnings of containers in Linux (control groups, namespaces, and filesystems) each provide one piece of isolation on its own. The magic of containers is in how these technologies combine into a single, convenient abstraction.

A control group (cgroup) is a mechanism to organize processes hierarchically and distribute system resources along that hierarchy in a controlled and configurable manner. Namespaces control what view of the system a process has: they allow a process to treat a global resource as if the process had its own isolated instance of that resource.
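
To make this concrete, here is a minimal sketch in Go (Linux only, run as root) that touches both mechanisms: it places the current process in a new cgroup that caps how many processes it can spawn, then starts a shell in fresh UTS, PID, and mount namespaces. The paths assume a cgroup v1 hierarchy mounted at /sys/fs/cgroup, and the “demo” group name is made up for illustration:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strconv"
        "syscall"
    )

    func main() {
        // cgroup: constrain what the process can DO. Here the pids
        // controller caps this process (and its children) at 20 tasks.
        cg := "/sys/fs/cgroup/pids/demo" // assumes cgroup v1; "demo" is illustrative
        must(os.MkdirAll(cg, 0755))
        must(os.WriteFile(cg+"/pids.max", []byte("20"), 0644))
        must(os.WriteFile(cg+"/cgroup.procs", []byte(strconv.Itoa(os.Getpid())), 0644))

        // namespaces: constrain what the child process can SEE. The shell
        // gets its own hostname (UTS), process table (PID), and mount table.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        must(cmd.Run())
    }

    func must(err error) {
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

Inside that shell, echo $$ prints 1: thanks to the new PID namespace, the process believes it is the first process on the machine, even though it shares the host kernel.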

Once Linux controls what a process can do (cgroups) and what a process sees (namespaces), it controls which file system the process has access to by mounting it in the process’s namespace. The process can, for example, have read-only access to most of the file system and read-write access to a small part of it. These file systems are at the core of the concept of an “image”: an image (e.g., a Docker image) is just a set of serialized file systems with some configuration and metadata. When an image is deployed as a container, the container process gets the image’s file systems expanded as a mounted file system in its namespace.
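
As a rough sketch of that idea, the short Go program below opens an image tarball saved to disk (for example, the output of docker save) and lists its entries; what comes out is just layer filesystems serialized as tars, plus JSON configuration and manifest files. The file name image.tar is an assumption for illustration:

    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "log"
        "os"
    )

    func main() {
        // Open a saved image tarball, e.g. produced by
        // `docker save -o image.tar <image>`.
        f, err := os.Open("image.tar")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Walk the archive: the entries are layer filesystems (themselves
        // tar files) plus JSON config and manifest metadata, nothing more.
        tr := tar.NewReader(f)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(hdr.Name)
        }
    }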

Although one company often dominates the conversation about containers because of its developer-friendly user experience, open source container projects, including the Docker project, enable any organization to take advantage of the model. The Open Container Initiative (OCI), a standards body, has created a clear specification for an image:



[Figure: The contents of an OCI image, sourced from the OCI spec on GitHub]
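
For reference, here is a representative OCI image manifest; the field names come from the OCI image spec, while the digests and sizes are placeholders rather than real values:

    {
      "schemaVersion": 2,
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:<config-digest>",
        "size": 7023
      },
      "layers": [
        {
          "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
          "digest": "sha256:<layer-digest>",
          "size": 32654
        }
      ]
    }

Each layer entry points at one serialized file system, and the config object carries the configuration and metadata described above.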