In the last few blogs, we’ve seen a bit about containers. Now let's understand how exactly they work.
As we all know, containers are an application-layer abstraction that packages code together with its dependencies. Containers are created from container images. But what exactly is an image?
A container image is simply a collection of bits and bytes that contains all of the files and settings required to create a container. Container images serve as templates for creating specific containers, and the same image can be used to launch an unlimited number of containers. These images can be saved on your local machine and shared with others.
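As a minimal sketch, assuming a local Docker daemon and the Docker SDK for Python (installed with `pip install docker`), here is how a single image acts as a template for several containers:

```python
import docker

# Connect to the local container daemon using environment defaults.
client = docker.from_env()

# Pull one image: just files plus settings, stored locally as a template.
client.images.pull("alpine", tag="latest")

# Launch several independent containers from that same image.
for name in ("web-1", "web-2", "web-3"):
    output = client.containers.run("alpine:latest", ["echo", f"hello from {name}"], remove=True)
    print(name, output.decode().strip())
```

Each call to `run` creates a brand-new container from the same template; none of them affect the image itself.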
Container images are extremely efficient because they’re made up of layers. A container image's layers are organized hierarchically: each layer is built on top of the one before it. When a container starts, all of the image layers are combined, and that combined view becomes the container's file system.
You can create your own container image by simply adding a new layer on top of an existing image. Because many different images can share the same layers, a machine only needs to fetch the layers it doesn't already have, which can result in significant savings when moving container images across a network.
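To make the layering concrete, here is a small sketch under the same assumptions that lists the layer digests an image is made of; any image built on the same base reports those shared digests too:

```python
import docker

client = docker.from_env()

# Pull an image and read the layer digests from its inspect data.
image = client.images.pull("python", tag="3.12-slim")
layers = image.attrs["RootFS"]["Layers"]

print(f"{len(layers)} layers:")
for digest in layers:
    print(" ", digest)

# An image built FROM python:3.12-slim would list these same digests first,
# followed by whatever new layers it adds on top.
```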
Images can have different versions, identified by tags. Tags let you publish multiple variants of the same image under one name. Perhaps you're working on an app and want to publish a different container image for each minor release, such as v1.0, v1.1, v1.2, and so on. Or you're working on a new app feature and want to share a test version, so you publish an image tagged 'test'. Tags make all of that possible.
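Assuming the same setup, tagging could look like the sketch below; the 'myapp' repository name and the version labels are purely illustrative:

```python
import docker

client = docker.from_env()

# Start from an existing local image (here, a freshly pulled one).
image = client.images.pull("alpine", tag="3.19")

# Attach extra tags to the very same image bits.
image.tag("myapp", tag="v1.0")
image.tag("myapp", tag="test")

# Both tags now point at the same underlying image ID.
print([img.tags for img in client.images.list(name="myapp")])
```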
To run any application you need libraries and system utilities, and the application may also rely on other programs. When you create a container image, however, you include all of your app's dependencies. This means that if you download someone else's container image (also known as pulling an image), you shouldn't need to fetch or install anything else: the container image should include everything required to run the application.
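For instance, still under the same assumptions, you can pull an image that already bundles a language runtime and run it without installing that runtime on the host:

```python
import docker

client = docker.from_env()

# The image ships with Python and its standard library; nothing to install locally.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('all dependencies are already inside the image')"],
    remove=True,
)
print(output.decode())
```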
A container image is assembled from file system layers to form a starting image. On a containerization platform, this is typically accomplished with the relevant build command. Because file system layers are reused during assembly, the developer does not have to recreate everything: whenever a new container is needed, the same starting image can be used, and that starting image can be altered to add or remove functionality or to correct errors.
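As a rough illustration of that build step, the sketch below (same assumed setup; the Dockerfile contents and the 'myapp:v1.0' tag are made up) builds a new image by adding one layer on top of an existing starting image:

```python
import io
import docker

client = docker.from_env()

# A minimal build recipe: reuse an existing starting image and add one layer.
dockerfile = b"""
FROM alpine:3.19
RUN echo "extra tooling or app files would go here" > /note.txt
"""

# The build compiles the file system layers into a new, reusable image.
image, build_logs = client.images.build(fileobj=io.BytesIO(dockerfile), tag="myapp:v1.0")
for entry in build_logs:
    print(entry.get("stream", "").rstrip())
```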
Containers are an application-layer abstraction that packages code and dependencies together, and multiple containers can run on the same machine. Each container instance runs as an isolated process while sharing the OS kernel with other containers. A sample application, or microservice, is packaged into a container image and deployed to the container platform for use. The container platform is client-server software that facilitates container execution by providing three key operational components:
1. A daemon, a process running in the background. The daemon manages objects such as images, containers, and the communication (network) and storage (data volume) objects required by the container-encapsulated microservice.
2. An API, which allows programs to interact with and direct the daemon process.
3. A command line interface (CLI) client, which is used to fetch container images from a configured registry and run them by issuing commands such as "pull" and "run." The CLI uses the API to control or interact with the daemon, either through direct commands or through scripts containing commands, and the daemon in turn returns the results to the host OS for further processing or as the final output (see the sketch after this list for how the same API can be driven programmatically).
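To illustrate the split between the CLI, the API, and the daemon: the CLI command `docker pull alpine` and the sketch below end up as the same API request to the daemon. The sketch uses the low-level client from the Docker SDK for Python; the socket path is the typical default on Linux, not a guarantee:

```python
import docker

# The low-level client speaks the daemon's HTTP API directly,
# the same API the `docker` CLI drives under the hood.
api = docker.APIClient(base_url="unix://var/run/docker.sock")

# Ask the daemon about itself, then request an image pull through the API.
print(api.version()["Version"])
for line in api.pull("alpine", tag="latest", stream=True, decode=True):
    print(line.get("status", ""))
```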
In short, a container is built from a base image, and the sample application or microservice is packaged into a container image and deployed via the container platform. (Examples of containerization platforms include Docker, Red Hat OpenShift, D2IQ Mesosphere, Amazon Web Services ECS/EKS, Microsoft Azure Container Service, and Google Container Engine (GKE), among others.) The container platform is a client-server application that facilitates container execution through its three key operational components: a daemon service, an API, and a CLI client. Once deployed, the container remains active for as long as the application or microservice is needed to perform its function in the overall application, and it terminates when that work is complete. The container can be activated again as needed.
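A short lifecycle sketch under the same assumptions (the image, the sleep command, and the 'demo-svc' name simply stand in for a real microservice):

```python
import docker

client = docker.from_env()

# Start a long-running container in the background.
container = client.containers.run("alpine", ["sleep", "300"], detach=True, name="demo-svc")

# Stop it once its work is done...
container.stop()

# ...and activate it again later when it is needed.
container.start()
container.reload()       # refresh the cached state from the daemon
print(container.status)  # expected to report "running" at this point

# Clean up when the container is no longer required.
container.stop()
container.remove()
```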
Containers are not a panacea, but they do have some appealing advantages over running software on bare metal or with other virtualization technologies. By providing lightweight, functional isolation, containers offer great flexibility and control both during development and throughout the operational life cycle, and a rich ecosystem of tools has grown up around them to help manage complexity.
I'd also recommend you to go through the excellent 3 part blog post series to understand more about containers from a historic perspective, container runtimes and container images.
Thank you for reading!
*** Explore | Share | Grow ***