
CONTAINERIZATION AND ORCHESTRATION

Containers and orchestration are now widespread topics, thanks to their growth in the world of information technology, their advantages, and the great promises they make. The different vendors and brands (Oracle, IBM, Microsoft, Red Hat, AWS) are introducing new concepts around these topics, and new products are emerging as a result. But what is container technology actually useful for? What do we gain from containerization? Why is it a solution to some current technology problems?

Most people have heard of Docker and see it as the only container technology, but it is not the only solution on the market, although it is true that to this day it is the most successful one and the one that popularized the concept. Container technology already existed before Docker: it is a technical concept rooted in the construction of the Linux kernel and the levels of isolation it can provide.

In the world of container technology we have:

  • Docker: the most important product of the company Docker, Inc.
  • CRI-O: the container runtime Red Hat chose to support containers on OpenShift, its orchestration platform
  • LXC: native Linux containers

And as Orchestration platforms for these containers we have:

  • Docker Swarm,
  • Kubernetes, originally developed by Google, and
  • Red Hat OpenShift.

CONTAINERIZATION

We all know virtualization technology, where virtual machines require hardware, a hypervisor, a full base operating system for the applications, and a significant share of resources to operate. It is a very useful technology that is still in use and will continue to be used for a long time, especially in the cloud world.

Unlike virtualization, containers are a new way to encapsulate an instance: everything the application needs to operate is placed inside a single "container" product, including the operating system layer, the platform, and the application itself. In reality it is an abstraction, and it does not require, for example, the entire operating system. A container is not a virtual machine, so it does not need 100% of those elements, only what the application really needs, which makes it a very lightweight form of encapsulation.

Fig. 1. Disi, R. 2020. Virtualization vs. Containerization Comparison, 3HTP Design.

Contrast this with the traditional approach, where the developer hands over everything the application needs (packages, patches, libraries) to be installed in the QA and production environments, and then the problems of environment, configuration, and versions begin. Containers avoid these problems because the image ships with everything the application needs, so there is no difference between environments: the developer delivers the container, and exactly the same artifact is sent to QA and production.

In the following graphic, the Docker platform is taken as an example to explain how the container concept works, although all the products are similar and even use similar names.

Fig. 2. Cuervo, V. (2019). Docker Architecture. [Figure 2]. Retrieved from http://www.arquitectoit.com/docker/arquitectura-docker/

Docker is simple to understand. It has a client-server architecture: the Docker host is where the Docker engine is installed and where it runs, from an image of an application or operating system, an instance called a container. Multiple containers can be started from the same image: you might have an image containing a Java application and run several instances with different names, all on the same Docker host and in the same environment.
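As a minimal sketch of that idea, here is what starting several containers from one image looks like with the Docker SDK for Python; the image and container names are hypothetical:

```python
import docker  # Docker SDK for Python: pip install docker

# Connect to the local Docker host (the daemon the client talks to).
client = docker.from_env()

# One image can back many containers: start two named instances
# of the same (hypothetical) Java application image.
for name in ["java-app-1", "java-app-2"]:
    client.containers.run(
        "mycompany/java-app:1.0",  # hypothetical image name
        name=name,
        detach=True,               # run in the background
    )

# Both instances now run side by side on the same Docker host.
for container in client.containers.list():
    print(container.name, container.image.tags, container.status)
```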

On the other hand, there is the registry, a kind of repository where images are published centrally. The idea is that a single registry is shared, while there is one Docker host for development, one for QA, one for production, and so on. Each environment always pulls the image from the same repository, which guarantees that an exact copy (operating system, platform, and environment) is installed everywhere.

In addition, there is a client layer for executing commands locally, which makes it quick and easy to run different kinds of applications and technologies, whether you prepare the image yourself or obtain it from a public repository such as Docker Hub (https://hub.docker.com/), one of the best known. The major brands already publish certified Docker images, which lets developers obtain them and streamline the development process.
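As an illustration of that flow, a hedged sketch with the Docker SDK for Python; the registry address and tag are assumptions:

```python
import docker

client = docker.from_env()

# Every environment (development, QA, production) pulls the same
# repository and tag from the central registry, so each one runs
# an identical copy of the image.
image = client.images.pull(
    "registry.example.com/mycompany/java-app",  # hypothetical registry
    tag="1.0",
)

# The content-addressed ID is the proof: it is the same in every
# environment that pulls this image.
print(image.id, image.tags)
```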

The focus then changes: container technology lets us move through each environment with the image of the application, creating instances of that image that all physically correspond to the same object. This revolutionizes the work cycle, the DevOps world, and automation, because the only remaining concern is getting the image with the correct version into the registry. The delivery cycle becomes much faster, with no more long installation or parameterization steps, because the application arrives already prepared for its environment.

Fig. 3. Pedraza, I. 2020. Containerization Pipeline, from the article Innovation in Technology, http://www.3htp.com/devops-y-contenedores-actores-importantes-de-la-agilidad/

That said, tasks associated with connecting to databases and integrating with external applications or services (external integrations, in short) are not solved by this technology. Even so, it remains a major advance in the world of application development and automation, reducing the risks associated with differences between environments.

This technology makes the image environment-agnostic: it can ignore the environment completely because everything the application requires is inside the container. Practices such as building artifacts conditioned to each environment are therefore no longer recommended; they make no sense in this new scenario, where environment-specific values are injected at run time instead, as sketched below.
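A minimal sketch of that practice, assuming a hypothetical image and hypothetical settings: the configuration is injected when the container starts, so the image itself stays identical across environments.

```python
import docker

client = docker.from_env()

# The image is the same everywhere; only the configuration
# injected at start time differs between QA and production.
client.containers.run(
    "mycompany/java-app:1.0",  # hypothetical image
    name="java-app-qa",
    detach=True,
    environment={              # hypothetical settings for the QA environment
        "DB_URL": "jdbc:postgresql://qa-db:5432/app",
        "LOG_LEVEL": "DEBUG",
    },
)
```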

ORCHESTRATION

We have now described, at a high level, what the container concept consists of, the benefits it brings, and, to some extent, its limitations around external connections and services. But suppose you want to use containers in a complex environment: for example, an application whose application server, data layer, and web layer each run in separate containers spread across three VMs.

In this scenario new questions appear: what happens if I want to scale? What if I need load balancing and high availability? How do I connect the containers that need to communicate with each other? How can I monitor the containers and see which one needs more or fewer resources at a given moment?

Fig. 4. Paunin, D. 2018. The best architecture with Docker and Kubernetes — myth or reality? https://medium.com/@dpaunin/the-best-architecture-with-docker-and-kubernetes-myth-or-reality-77b4f8f3804d

To answer these questions, we need a tool that solves these problems. Orchestration platforms arose to meet these needs: Kubernetes, OpenShift, and Docker Swarm are similar solutions that allow us to work with containers in a more elaborate way.

Fig. 5. Kubernetes Architecture

The image takes the structure of Kubernetes as an example and shows elements and concepts that are similar across all orchestration systems. These platforms offer enterprise support and a base infrastructure layer that saves time in installing and configuring a cluster, along with components such as the master and the workers, the latter being the nodes that run the applications. A cluster is formed by more than one instance (pods) of containers, and includes an etcd repository, the database where all cluster configuration is stored, among other components.

There is also a component, in this case kubectl, that lets you interact with the master through the command line; the master in turn sends instructions to each of the workers. This is how the master lets me scale horizontally or vertically, redistributing the load if a worker is added, and it gives us what the literature defines as a dynamic cluster.
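As a hedged illustration of this interaction, the same operations can be done programmatically with the official Kubernetes Python client, doing what `kubectl get nodes` and `kubectl scale` do; the deployment name and namespace below are assumptions:

```python
from kubernetes import client, config  # pip install kubernetes

# Load credentials the same way kubectl does (~/.kube/config).
config.load_kube_config()

# Ask the master (API server) for the cluster nodes: the master
# coordinates, while the workers run the application pods.
v1 = client.CoreV1Api()
for node in v1.list_node().items:
    print(node.metadata.name)

# Scale a (hypothetical) deployment horizontally: the master
# spreads the new replicas across the available workers.
apps = client.AppsV1Api()
apps.patch_namespaced_deployment_scale(
    name="java-app",           # hypothetical deployment
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```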

"The concepts are very similar to the architectures of a Java and J2E application server with domain concepts, node agents, cluster, and so on. Unlike J2E where only enterprise applications are allowed under the standard, in the case of Container Orchestration platforms any type of technology-independent applications can be deployed"

Today, the different cloud providers offer monitoring solutions that automate resource scaling on orchestration platforms, adjusting resources up or down over time depending on demand.
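Kubernetes, for example, exposes this through the Horizontal Pod Autoscaler. A minimal sketch with the Python client, where the deployment name and thresholds are assumptions:

```python
from kubernetes import client, config

config.load_kube_config()

# Autoscale a (hypothetical) deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="java-app-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="java-app",
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa,
)
```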

In this sense we have two great strengths: with containers you gain all the power of encapsulating applications independently of the technology they are developed in, and with the orchestration layer you gain the power to manage high availability, scalability, elasticity, and shared environments, using resources when required by the applications that generate demand spikes at certain times. All of this is conceptually and technically possible. However, these concepts require a process of understanding and maturation, and both container technology and orchestration layers demand significant resources: for example, a highly available master requires cluster solutions of 3, 5, or 9 nodes, which means having machines of a certain capacity.

IN WHICH SCENARIOS CAN WE USE CONTAINERIZATION TECHNOLOGY?

Today there are two scenarios, or focuses, in which we turn to containerization and orchestration technology. The first is the modernization of existing applications: specifically, enterprise applications running in a Unix environment or in an environment that lends itself to being portable (Java, J2EE, and so on). Some companies are leaving the proprietary Unix world and opting for other variants. This modernization happens for a few reasons:

  • Accelerating the application lifecycle.
  • Improving application availability and scalability.
  • Preparing for migration to a cloud platform.

Many brands already support their platforms in containers, and many application servers ship from the factory with container portability; not so the database world, which still has a long way to go. One of the modernization techniques used in this approach is Reshape, a cloud adoption technique that allows a container to be created with few modifications.

Modernization faces some challenges. One is the homologation, configuration, and dependency management of applications, for which several techniques exist. Another is that some brands do not support older versions of their own products. Yet another is the structure of the applications to be migrated, which were not designed or developed for elasticity and are not ready to run as more than one instance. In addition, in this new environment, developers must not only know about containerization; they must also stay aware of how traditional applications are developed.

The other approach is designing and developing for containers and orchestration from the beginning. This technology provides very novel design elements for working in a modular way: components live in different containers, and their sum yields a more complex, more elastic application, with more traceability and tolerance to different scenarios.

Native container development also offers room to advance: some brands clearly back their products on this technology, and a wide variety of components exist for the different problems that arise. On the other hand, there are challenges, such as the knowledge that still needs to be acquired about this technology and the need to break the inertia and paradigms of application design and development, leaving traditional architectures behind.

Containerization and orchestration form one of the verticals and technology types within the levels of the Cloud Native Landscape. They allow applications to be developed in a more portable way and correct or reduce deficiencies in the processes of developing and deploying solutions. They still have long lines of development and challenges to overcome, but they are one of the variants to consider when making decisions in the world of information technology.

Fig. 6. Cloud Native Landscape, 2019.

Containerization is one of the technologies present in the cloud world; it greatly reduces many of the problems that occur today in the processes of developing, automating, and deploying an application.

We can say that containers are a portable definition of the software, and orchestration is a portable definition of how that software should be run, deployed, and monitored.

Renzo Disi, Director 3HTP · Santiago, Chile · Containerization & Orchestration Talk 2020

Night Talk · Containerization & Orchestration. Exhibitor: Renzo Disi, Director 3HTP · 12-05-2020.

INNOVATION IN TECHNOLOGY

DevOps & CONTAINERS IMPORTANT ACTORS OF AGILITY

The world of technology continues to advance and innovate, and DevOps and container technologies are an example of this. Over the years they have developed new concepts and become the focus of technical and operational areas that seek agility, automation of management, and reduced lead times: the advantages promised by correct adoption of DevOps processes and use of container technology.

Due to the pace and scale of technological leaps, the software market requires high-quality results in a short time, increasing agility and limiting the number of errors across the application lifecycle (ALM). In this scenario, DevOps is challenged to be part of the solution and to overcome the traditional problems of IT areas: definition meetings, management of procurement and infrastructure processes, and coupling between areas under the governance model, among others.

On the other hand, container technologies bring all the features needed to simplify the application management process, allow portability between environments (on-premises and cloud), and centralize the tasks of the operations teams, thus giving the software agility and mobility without losing sight of the implications of automation and management with DevOps.

"Everything as code" has been opening up an important framework for skill development within the software cycle, which suggests incorporating scripting strategies within DevOps and thus being able to optimize the way the included processes are automated.

We will draw on those concepts to lay out the benefits of a successful DevOps implementation, hand in hand with these innovation technologies.

Phases of the 3HTP methodology

Within the adoption phase (the first phase of the 3HTP methodology), it is important to transform the concept of on-premises software installation and move toward dynamic models involving infrastructure as code (IaC), containerization of applications, platforms as a service, and the enablement of scripts and/or plugins for running the toolchain software. This is what we call Hybrid DevOps, and what sets it apart is the ability to define a faster deployment strategy.

In the DevOps software market we find many solutions that offer these possibilities, but what is really important is knowing how to choose the tools that "contribute to my current business reality"; that is, the analysis of the toolchain to implement should be driven by the scope each tool offers versus the functionality needed for the objective laid out in the implementation roadmap.

Hybrid DevOps lets you mix these technology concepts to accelerate the transition to professionalization (phase 2 of the 3HTP methodology) and, in turn, deliver automation measures, reduced infrastructure costs, better use of resources and, above all, agility. We also find continuous integration engines that execute jobs, integrating software automation tasks with tasks that provision and make available complete platforms for applications.

Diagram No. 2 shows an example of an implementation built from different types of software solutions contributing to the toolchain. First, the artifact and image repositories (now, with the container trend) that generally become almost a corporate asset because of their value and importance. Then, a continuous integration engine with a pipeline that executes the DevOps phases: creating the container in which the software needed for a given task runs at a certain point in time (compilation, code analysis, unit tests, etc.); calling infrastructure-as-code scripts that create and provision an environment, for example for the QA testing phases, which are by nature "volatile"; and finally calling scripts that automate a job on their own or through other software. The pipeline also enables team collaboration and continuous improvement, feeding results back to the ALM solution, which could be software as a service (JIRA, VSTS, EWM, etc.).
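As a hedged sketch of one such pipeline step, assuming a Maven-based build and a hypothetical workspace path, a CI job can run its task inside a "volatile" container that is discarded when it finishes:

```python
import docker

client = docker.from_env()

# Run the unit-test phase inside a disposable ("volatile") container:
# the build tool lives in the image, not on the CI host, and the
# pipeline workspace is mounted in for the duration of the job.
logs = client.containers.run(
    "maven:3-openjdk-11",      # hypothetical build image
    "mvn -q test",
    volumes={"/ci/workspace/app": {"bind": "/usr/src/app", "mode": "rw"}},
    working_dir="/usr/src/app",
    remove=True,               # container is discarded when the step ends
)
print(logs.decode())
```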

The speed of development teams and of the technology area must come from implementing solutions that actually provide the speed the market demands and the agility that is so much talked about. The adoption of technological tools can today be viewed from different angles, and leveraging resources is a vital part of IT's goals.

It is time to refocus the adoption model and processes and embark on managing a Hybrid DevOps architecture that meets your organization's needs; it is time to increase the scripting skills of the teams involved in DevOps (development, testing, infrastructure, production) and to start labs for the new solutions that help on the path toward lower times and costs.

Over the next few years, DevOps faces the task of optimizing and improving the partitions that have been created in the intermediate phases of the CI/CT/CD/RM lifecycle, and of starting to deliver a tunnel that truly connects development with operations end to end, without interruption and with the automation, quality, agility, and security standards that software and applications require.

AWS Cognito

Cognito for User Authentication

Cognito is Amazon's proposal, among its services, for organizations to reduce the time and effort invested in developing the authentication platform for their applications.
