Containers and orchestration have become widespread topics in information technology thanks to their rapid growth, their advantages, and the promises they make. The different vendors and brands (Oracle, IBM, Microsoft, Red Hat, AWS) are introducing new concepts around these topics, and new products keep emerging. But what is container technology actually useful for? What do we gain from containerization? Why is it a solution to some of today's technology problems?
Most people have heard of Docker and see it as the only containerization technology, but it is not the only solution on the market, although it is true that to this day it is the most successful one and the one that popularized the concept. Container technology existed before Docker: it is a technical concept rooted in the Linux kernel and in the levels of isolation the kernel can provide.
In the world of container technology we have:
- Docker: the flagship product of the company Docker, Inc.
- CRI-O: the container runtime Red Hat chose to support containers in OpenShift, its orchestration platform
- LXC: native deployment of Linux-based containers
And as Orchestration platforms for these containers we have:
- Docker Swarm,
- Kubernetes, originally developed by Google, and
- Red Hat OpenShift.
We all know virtualization technology, where virtual machines require hardware, a hypervisor, and a base operating system for the applications, plus a certain level of resources to operate. It is a very useful technology that is still in use and will continue to be used for a long time, even more so in the cloud world.
Unlike virtualization, containers are a new way to encapsulate an instance: everything goes into a single "container" that holds what the application needs to operate — the operating-system layer, the platform, and the application itself. In reality it is an abstraction; it does not require, for example, the entire operating system. A container is not a virtual machine, so it does not need 100% of those elements, only what the application actually needs, and this makes it a very lightweight form of encapsulation.
In the traditional approach, the developer hands over everything the application needs — packages, patches, libraries — which must then be installed in the QA and production environments, and that is when the problems of environments, configuration, and versions begin. Containers avoid these problems because the image ships with everything the application needs, so there is no difference between environments: the developer delivers the container image, and exactly the same artifact is what goes to QA and production.
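As a minimal sketch of this "everything in the image" idea, a Dockerfile declares the OS layer, the platform, and the application in one build recipe (the base image, file names, and paths here are hypothetical examples, not from the original text):

```dockerfile
# Base layer: minimal OS plus platform (hypothetical Java 17 runtime image)
FROM eclipse-temurin:17-jre

# The application and its libraries, copied into the image
COPY target/myapp.jar /opt/app/myapp.jar

# How the application starts — identical in Dev, QA, and Production
ENTRYPOINT ["java", "-jar", "/opt/app/myapp.jar"]
```

Because the image is built once and promoted unchanged between environments, the "it worked on my machine" class of problems largely disappears.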
In the following graphic, the Docker platform is taken as an example to explain how the container concept works, although all the products are similar and even use similar names.
Docker is simple to understand. It has a client-server architecture: the Docker host is where the Docker engine is installed and where it runs, from an image of an application or operating system, an instance called a container. This allows multiple containers to be started from the same image: you might have an image containing a Java application and run several instances with different names on the same Docker host, in the same environment.
On the other hand, there is the "Registry", a kind of repository where images are published centrally. The idea is that the same registry is shared while there is one Docker host for development, one for QA, one for production, and so on. This way the image is always pulled from the same repository, guaranteeing that an exact copy — operating system, platform, and environment — is installed everywhere.
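An illustrative sketch of that workflow, not a runnable recipe — the image name and registry address are hypothetical, and the commands assume a Docker engine and a reachable registry:

```shell
# Tag the locally built image for the shared registry
docker tag myapp:1.0 registry.example.com/myapp:1.0

# Publish it once, centrally
docker push registry.example.com/myapp:1.0

# Each environment (Dev, QA, Production) pulls the same artifact
docker pull registry.example.com/myapp:1.0
docker run -d --name myapp registry.example.com/myapp:1.0
```

The guarantee comes from the tag: every host that pulls `registry.example.com/myapp:1.0` gets a byte-identical image.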
In addition, there is a client layer for executing commands, which can be run locally, and which lets you quickly and easily launch different kinds of applications and technologies: you can prepare an image yourself or obtain one from public repositories such as Docker Hub (https://hub.docker.com/), which is one of the best known. The different brands already publish certified Docker images, which allows developers to obtain them and streamline the development process.
The focus then changes: container technology lets us move through each environment with the same application image, creating instances of an image that physically correspond to the same object. This revolutionizes the work cycle, the DevOps world, and automation, because the only concern now is getting the image with the correct version into the registry. The cycle becomes much faster: there are no longer long installation or parameterization steps, because the application arrives ready for its environment.
However, tasks such as connecting to databases or integrating with external applications and services — external integrations in general — are not solved by this technology. Even so, it remains a major advance in the world of application development and automation, reducing the risks associated with differences between environments.
This technology makes the image agnostic: it is completely independent of the environment, because everything the application requires is inside the container. Practices such as building artifacts conditioned to each environment are therefore not recommended, since they no longer make sense in this new scenario.
We have now described, at a high level, what the container concept consists of, the benefits it brings, and, to some extent, its limitation regarding external connections and services. But suppose we want to use containers in a complex environment where, for example, the application server, the data layer, and the web layer each run in their own container, spread across three VMs.
In this scenario new issues appear: what happens if I want to scale? What if I need load balancing and high availability? How do I connect the containers that must communicate? How do I monitor the containers and see which one needs more or fewer resources at a given moment?
To answer these questions, I need a tool that solves these problems. Orchestration platforms arise to meet these needs: Kubernetes, OpenShift, and Docker Swarm, similar solutions that allow working with containers in a more elaborate way.
The image shown takes the structure of Kubernetes as an example and presents the elements and concepts that are similar across orchestration systems. These platforms have enterprise support and a base infrastructure layer that saves time in installing and configuring a cluster, along with components such as the master and the workers, the latter being the nodes that run the applications. A cluster is formed by more than one instance of containers (pods) and includes etcd, a repository database where all the configuration is saved, among other components.
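A minimal sketch of how pods are declared on such a platform (the Deployment name and image are hypothetical): a Kubernetes Deployment asks the master to keep a fixed number of pod replicas of a container running across the workers.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3            # three pod instances, spread across workers
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0
        resources:
          requests:      # what the scheduler reserves on a worker node
            cpu: "250m"
            memory: "256Mi"
```

If a pod or a worker fails, the master reschedules pods elsewhere to restore the declared replica count — this is the high-availability behavior the questions above ask for.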
It also has a component, in this case kubectl, which lets you interact with the master through the command line; the master in turn sends instructions to each of the workers. This is how the master lets me scale horizontally or vertically, redistributing the load when a worker is added, and it gives us what the literature defines as a dynamic cluster.
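An illustrative sketch of that interaction, assuming a running cluster and the hypothetical Deployment name `myapp`:

```shell
# Ask the master to scale the application horizontally to five pods
kubectl scale deployment myapp --replicas=5

# Inspect how the pods were distributed across the worker nodes
kubectl get pods -o wide
```

The client only talks to the master's API; the master decides on which workers the new pods land.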
"The concepts are very similar to the architecture of a Java EE application server, with its notions of domain, node agents, cluster, and so on. Unlike Java EE, where only enterprise applications under the standard can be deployed, on container orchestration platforms applications of any technology can be deployed."
Today, different cloud providers offer monitoring solutions that automate the scaling of resources on orchestration platforms, managing how resources are scaled up or down over time depending on demand.
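As a sketch of how this demand-driven scaling is expressed on Kubernetes (the Deployment name `myapp` is a hypothetical example), a HorizontalPodAutoscaler adjusts the replica count based on observed CPU usage:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 2               # floor kept during quiet periods
  maxReplicas: 10              # ceiling for demand spikes
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```

The platform then adds or removes pods automatically as demand rises and falls, which is the elasticity described above.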
In this sense we have two great strengths: containers provide the power to encapsulate applications independently of the technology they are built with, and the orchestration layer provides high availability, scalability, elasticity, and shared environments, letting applications that generate spikes in demand use resources only when they need them. All of the above is conceptually and technically possible. However, these concepts require a process of understanding and maturation, and both container technology and orchestration layers demand significant resources: for example, a highly available master requires cluster solutions of 3, 5, or 9 nodes, which means having machines of a certain capacity.
IN WHICH SCENARIOS CAN WE USE CONTAINER TECHNOLOGY?
Today there are two scenarios, or focuses, in which we turn to container and orchestration technology. The first is the modernization of existing applications — specifically enterprise applications running in a Unix environment, or in an environment that lends itself to being portable: Java, Java EE, and so on. There are companies leaving the proprietary Unix world and opting for other variants. This modernization happens for a few reasons:
- Accelerate the application lifecycle.
- Improve application availability and scalability.
- Prepare for migration to a cloud platform.
Many brands already support their platforms in containers, and many vendor application servers are already portable to containers; the database world, however, still has a long way to go. One of the modernization techniques used in this approach is replatforming, a cloud-adoption technique that allows a container to be created with few modifications to the application.
Modernization faces some challenges. One is the certification, configuration, and dependency management of applications, for which several techniques exist. Another is that some vendors do not support older versions of their own products. Another is the structure of the applications to be migrated, which were not designed or developed for elasticity and are not ready to run as more than one instance. In addition, in this new environment developers must understand not only containerization but also the development of traditional applications.
The other approach is designing and developing for containers and orchestration from the beginning. This technology offers very novel design elements for working in a modular way: components live in different containers, and their sum forms a more complex application that is more elastic, more traceable, and more tolerant of different scenarios.
Native container development also offers room to advance: there are brands that clearly back their products on this technology, and a wide variety of components exists for the different problems that arise. On the other hand, there are challenges, such as the knowledge that still needs to be acquired about this technology and the need to break the inertia and paradigms of application design and development, leaving traditional architectures behind.
Containerization and orchestration form one of the verticals within the levels of the cloud landscape. They allow applications to be developed in a more portable way and correct or reduce deficiencies found in the processes of developing and deploying solutions. They still have great lines of development and challenges to overcome, but they are one of the variants to consider when making decisions in the world of information technology.
Containerization is one of the technologies present in the cloud world; it greatly reduces many of the problems that occur today in the processes of developing, automating, and deploying an application.
We can say that containers are a portable way to define software, and orchestration is a portable definition of how that software should run, be deployed, and be monitored.

Renzo Disi, Director, 3HTP, Santiago, Chile. Containment & Orchestration Talk 2020.