LECTURE – DevOps Continuous Testing

NIGHT LECTURE

DevOps

Continuous Testing: Test Automation and Virtualization of services.

Tuesday, August 10

| 8:00 PM (COL / PER) | 9:00 PM (CL) |

SIGN UP

CONTENT
  • We will talk about the challenges that Automation has in the Software Quality process, and current approaches to speed up and increase testing with concepts such as “shift-left” in a DevOps framework.
  • We will review a technical example with a suite of tools for software testing automation and service virtualization.
PRESENTER

Ivan Camilo Pedraza

DevOps Lead

PREVIOUS LECTURES
DevOps for Containers
AWS STORAGE
CONTAINERIZATION AND ORCHESTRATION

AWS IMMERSION DAY – KUBERNETES

EVENT CONCLUDED

AWS IMMERSION DAYS is a free workshop lasting approximately 4 hours, guided by 3HTP professionals certified as AWS architects.

AWS Immersion Days allows AWS business partners in the Advanced and Premier Consulting tiers to deliver workshops to clients with content and tools developed by AWS solutions architects. These workshops include presentations, hands-on labs, and other customized assets that help customers understand AWS's value proposition.

DO YOU WANT TO HAVE YOUR IMMERSION DAY?

CONTENT IMMERSION DAY KUBERNETES ON AWS

OBJECTIVE

Learn the concepts of containerization and orchestration and interact with the AWS EKS service through guided hands-on workshops.

TEAM 3HTP IMMERSION DAY

Meet the members of the 3HTP team of instructors who teach the KUBERNETES IMMERSION DAY.

Nicolas Rodríguez

3HTP

AWS Architect

Katty Jaramillo

3HTP

Cloud Architect

Daniel Muñoz

3HTP

AWS Architect

Jonathan Galán

3HTP

AWS Architect

Luis Ramírez

3HTP

AWS Architect

LECTURE – DevOps in and for CONTAINERS

-NIGHT LECTURE-

DEVOPS
in / for CONTAINERS

EVENT CONCLUDED

CONTENT

How to deal with containerized application automation, and how to use containerization to do DevOps.


3HTP has created the Pijama Lecture Party initiative, an innovative space to learn about technology through videos, articles, and presentations. You will see IT modernization strategies, commercial and open-source software products, and services associated with our pillars: Bridge to Cloud, Born2Cloud, and DevOps. Don't miss the opportunity to have fun and learn, and stay tuned for our invitations.


CONTAINERIZATION AND ORCHESTRATION

Today the topic of containers and orchestration is everywhere, owing to its growth in the world of information technology, its advantages, and the great promises it raises. Different vendors and brands (Oracle, IBM, Microsoft, RedHat, AWS) are introducing new concepts around these topics, and new products keep emerging. But what is container technology actually useful for? What do we gain from containerization? Why is it a solution to some of the current problems in technology areas?

Most people have heard of Docker and see it as the only containerization technology. It is not the only solution on the market, although it is true that to date it is the most successful and the one that has popularized the concept. Containerization existed before Docker: it is a technical concept rooted in the way Linux kernels are built and the levels of isolation they allow.

In the world of container technology we have:

  • Docker: the flagship product of the Docker company
  • CRI-O: the runtime RedHat chose to support containers in OpenShift, its orchestration platform
  • LXC: native Linux-based container deployment

Among the orchestration platforms for these containers we have:

  • Docker Swarm
  • Kubernetes, originally developed at Google
  • RedHat OpenShift

CONTAINERIZATION

We all know virtualization technology, where virtual machines require hardware, a hypervisor, a base operating system for the applications, and a share of resources for their operation. It is a very useful technology that is still in use and will continue to be used for a long time, all the more so in the cloud world.

Unlike virtualization, containers are a new way of encapsulating an instance: everything the application needs to operate, the operating system layer, the platform, and the application itself, is placed in a single artifact, the "container". It is really an abstraction, so it does not require, for example, the entire operating system. A container is not a virtual machine, so it does not need 100% of every component, just what the application actually uses, which makes it a very lightweight form of encapsulation.

Fig. 1. Disi, R. 2020. Virtualization vs. Containerization Comparison, 3HTP Design

In the traditional approach, the developer hands over everything the application needs (packages, patches, libraries), all of it is installed in the QA and production environments, and then the environment, configuration, and version problems begin. Containers avoid these problems: the image ships with everything the application needs, so there is no difference between environments; the container the developer delivers is exactly the same one that goes to QA and production.

The following graphic shows the Docker platform, taken as an example to explain how the container concept works; the other products are similar and even use similar names.

Fig. 2. Cuervo, V. (2019). Docker Architecture. [Figure 2]. Retrieved from http://www.arquitectoit.com/docker/arquitectura-docker/

Docker is easy to understand. It has a client-server architecture: the Docker host is where the Docker software is installed and where it runs, from an image, an instance called a container, containing an application or an operating system. Multiple containers can be started from the same image: you could have an image with a Java application and run several instances with different names on the same Docker host, in the same environment.
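
As a rough illustration of that idea, the sketch below uses the Docker SDK for Python (the docker package) to start two containers from the same image on one Docker host. The image name and ports are placeholders, not values from the talk.

    import docker  # pip install docker

    # Connect to the local Docker host (the same daemon the docker CLI talks to).
    client = docker.from_env()

    # Two containers, two names, one image: both run side by side
    # on the same Docker host. "nginx:1.25" is just an example image.
    for name, port in [("web-a", 8081), ("web-b", 8082)]:
        container = client.containers.run(
            "nginx:1.25",
            name=name,
            detach=True,              # return immediately, keep it running
            ports={"80/tcp": port},   # map container port 80 to a host port
        )
        print(container.name, container.status)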

On the other hand, there is the "Registry", a kind of repository where I publish my images centrally. The idea is that the same registry is shared while there is one Docker host for development, one for QA, one for production, and so on. That way the image always comes from the same repository, ensuring that the exact same copy, operating system, platform, and environment is installed everywhere.

In addition, there is a client layer for executing commands locally, which makes it quick and simple to run different kinds of applications and technologies, whether I build the image myself or obtain it from a public repository such as Docker Hub (https://hub.docker.com/), one of the best known. The different brands have already published certified Docker images, which developers can obtain to speed up the development process.
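
Pulling one of those published images is, for instance, a one-liner with the same Python SDK (the image and tag are illustrative):

    import docker

    client = docker.from_env()

    # Pull an official image from Docker Hub; pinning the tag means every
    # environment that pulls it gets exactly the same bits.
    image = client.images.pull("python", tag="3.12-slim")
    print(image.tags)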

The approach then changes: container technology lets us move the application image through each environment, creating instances of the image that physically correspond to the same object. This revolutionizes the work cycle and the world of DevOps and automation, because now the only concern is getting the image with the correct version into the registry. The work cycle becomes much faster, since there are no longer long installation or parameterization steps; the application arrives already prepared, together with its environment.
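
A minimal sketch of that promotion idea, again with the Docker SDK for Python: build once, then tag and push the very same image for each environment (the registry URL, application name, and version are hypothetical):

    import docker

    client = docker.from_env()
    registry = "registry.example.com/myapp"   # hypothetical private registry

    # Build the image once from the application's Dockerfile.
    image, _logs = client.images.build(path=".", tag=f"{registry}:1.4.0")

    # Promote the SAME image through the environments by adding tags;
    # there is no rebuild, so dev, QA, and production run identical bits.
    for env in ("dev", "qa", "prod"):
        image.tag(registry, tag=f"1.4.0-{env}")
        client.images.push(registry, tag=f"1.4.0-{env}")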

Fig. 3. Pedraza, I. 2020. Containerization Pipeline, article "Innovation in Technology", https://www.3htp.com/devops-y-contenedores-actores-importdamientos-de-la-agILIDAD/

There are still tasks associated with connecting to databases and integrating with external applications or services; that is, the outside world is not solved by this technology. Even so, it is a great advance in the world of development and automation for application deployment, reducing the risks associated with differing environments.

This technology makes the image environment-agnostic: it can ignore its surroundings completely, since everything the application requires is inside the container. That is why practices such as building artifacts conditioned to each environment are discouraged; they make no sense in this new scenario.
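
One common way to keep the image agnostic, sketched below in Python, is to resolve everything environment-specific at runtime from variables injected by the platform, rather than baking a per-environment artifact (the variable names are illustrative):

    import os

    # The image is identical everywhere; only the injected variables differ.
    # Docker:      docker run -e DATABASE_URL=... myapp
    # Kubernetes:  set the same variables via ConfigMaps and Secrets.
    DATABASE_URL = os.environ["DATABASE_URL"]          # required: fail fast if absent
    LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")    # optional, with a default

    print(f"connecting to {DATABASE_URL} (log level {LOG_LEVEL})")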

ORCHESTRATION

We have already briefly described what the concept of containers consists of, the benefits it brings, and to some extent its limitations with respect to external connections and services. But suppose I want to use containers in a complex environment: for example, a containerized application where the application server is separated from the data layer and the web layer, each in its own container instead of three VMs.

In this scenario new problems appear: what happens if I want to scale? What if I need load balancing and high availability? How do the containers that need to be connected communicate with each other? How can I monitor the containers and see which one needs more or fewer resources at a given moment?

Fig. 4 Paunin, D. 2018, The best architecture with Docker and Kubernetes – myth or reality? https://medium.com/@dpaunin/the-best-architecture-with-docker-and-kubernetes-myth-or-reality-77b4f8f3804d

So how do we answer these questions? We need a tool that solves these problems. The orchestration platforms, Kubernetes, OpenShift, and Docker Swarm, emerged to meet these needs; they are similar solutions that allow us to work with containers in a more elaborate way.

Fig. 5. Kubernetes Architecture

The image takes the Kubernetes structure as an example and shows elements and concepts that are similar across all orchestration systems: enterprise support; a base infrastructure system that saves time installing and configuring the infrastructure needed to run a cluster; and components such as the master and the workers, the latter being the nodes that execute the applications. A cluster is formed by one or more instances of containers (pods) and includes an etcd repository, a database where all the configuration is stored, among other components.

There is also a component, in this case kubectl, that allows interacting with the master through the command line; the master in turn sends instructions to each of the workers. In this way the master lets me scale horizontally or vertically, redistributing the load when a worker is added, which really gives us what the literature defines as a dynamic cluster.
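
What kubectl does interactively can also be done through the API the master exposes. The sketch below uses the official Kubernetes Python client (the kubernetes package) to scale a hypothetical deployment named "web" to five replicas, the same effect as kubectl scale deployment web --replicas=5:

    from kubernetes import client, config  # pip install kubernetes

    # Load credentials the same way kubectl does (~/.kube/config).
    config.load_kube_config()

    apps = client.AppsV1Api()

    # Ask the master for 5 replicas; it instructs the workers to start
    # or stop pods until the cluster matches the desired state.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )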

 “The concepts are very similar to the architecture of a Java/J2EE application server, with its notions of domain, node agents, cluster, and so on. Unlike J2EE, where only business applications built to the standard may run, container orchestration platforms can host applications of any technology.”

Nowadays the different cloud providers offer monitoring solutions that automate the scaling of resources on the orchestration platforms, adjusting the amount of resources up or down at any given time depending on demand.
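
As a hedged example of that idea, the following sketch registers a HorizontalPodAutoscaler through the Kubernetes Python client, so the platform itself keeps the hypothetical "web" deployment between 2 and 10 replicas based on CPU demand:

    from kubernetes import client, config

    config.load_kube_config()

    # Autoscaling rule: track CPU usage of the "web" deployment and
    # add or remove replicas to stay around the 70% target.
    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web",
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa,
    )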

In this sense we have two great strengths: with containers you gain the power to encapsulate applications regardless of the technology they are developed in, and with the orchestration layer you gain high availability, scalability, elasticity, and shared environments, so that resources can be used on demand by applications whose load spikes at certain times. All of the above is conceptually and technically possible. However, these concepts require a process of understanding and maturing, and both container technology and the orchestration layers demand a large amount of resources: for example, a highly available master requires cluster quorums of 3, 5, or 9 nodes, which means having machines of a certain capacity.

IN WHICH SCENARIOS CAN WE USE CONTAINERIZATION TECHNOLOGY?

Today there are two scenarios, or focuses, in which we turn to containerization and orchestration technology. The first focus is the modernization of existing applications: specifically, business applications that run in a Unix environment or in one that lends itself to portability (Java, J2EE, etc.). There are companies leaving the proprietary Unix world and opting for other variants. This modernization happens for a few reasons:

  • Accelerate the life cycle.
  • Improve application availability and scalability
  • Prepare for migration to a cloud platform

Many brands already support containerized versions of their platforms, and many vendor application servers are already container-portable; the database world is not there yet and still has a long way to go. One of the modernization techniques used in this approach is Reshape, a cloud adoption technique that allows a container to be created with little modification to the application.

Modernization faces some challenges. One is the homologation, configuration, and dependency management of the applications, for which several techniques exist. Another is that some brands do not support old versions of their own products. Another is the structure of the applications to be migrated, which were not designed or developed for elasticity and are not prepared to run as more than one instance. In addition, in this new environment developers must master not only traditional application development but also containerization.

The other focus is design and development from the ground up for containers and orchestration. This technology offers very innovative design elements for working in a modular way: components live in different containers, and their sum adds up to an application that is more complex and more elastic, with more traceability and tolerance of different scenarios.

Container-native development also offers room to advance: there are brands that clearly back their products on this technology, and there is a great variety of components for the different problems that arise. On the other hand, some challenges remain, such as the knowledge that must still be acquired about this technology and the need to break the inertia and paradigms of application design and development, leaving traditional architectures behind.

Containerization and orchestration form one of the verticals within the levels of the Cloud Native Landscape. They allow applications to be developed in a more portable way and correct or reduce deficiencies that exist in development and deployment processes. These solutions still have long lines of development and challenges to overcome, but they are one of the variants to take into account when making decisions in the world of information technology.

Fig. 6 Cloud Native Landscape, 2019.

Containerization is one of the technologies at the heart of the cloud world; it greatly reduces many of the problems that exist today in the development, automation, and deployment processes of an application.

We can say that containers are a portable way of defining software, and Orchestration is a portable definition of how that software should be executed, deployed and monitored.

Renzo Disi | Director 3HTP | Santiago de Chile | Talk on Containerization and Orchestration | 2020

Pijama Party | Containerization and Orchestration | Speaker: Renzo Disi, Director 3HTP | 05-12-2020.

INNOVATION IN TECHNOLOGY

DEVOPS AND CONTAINERS | IMPORTANT ACTORS OF AGILITY

The world of technology keeps advancing and innovating, and DevOps and containerization technologies are an example of this. Over the years they have developed new concepts and become the focus of technical and operational areas seeking agility, automated management, and reduced time, advantages that the correct adoption of DevOps processes and the use of container technology can deliver.

Given the pace and scale of technological leaps, the software market needs to produce high-quality results in a short time, increasing agility and limiting the number of errors across the application life cycle (ALM). In this scenario, DevOps faces the challenge of being part of the solution and overcoming the traditional problems of IT areas: definition meetings, procurement and infrastructure processes, coupling between areas in the governance model, among others.

Containerization technologies, for their part, bring everything needed to simplify the application management process, allow portability between environments (on-premises and cloud), and centralize the tasks of operations teams, giving the software agility and mobility without losing sight of the implications of automation and management with DevOps.

The “everything as code” trend has been opening an important avenue for skills development within the software cycle, which suggests incorporating scripting strategies into DevOps in order to optimize how the processes involved are automated.

We are going to draw on these concepts to lay out the benefits of a successful DevOps implementation, hand in hand with innovation technologies.

Phases of the 3HTP methodology

Within the adoption phase (the first phase of the 3HTP methodology) it is important to transform the concepts behind on-premises software installation and move toward dynamic models combining infrastructure as code (IaC), application containerization, platform as a service, and the enablement of scripts and/or plugins to drive the toolchain software. This is called Hybrid DevOps, and what it is known for is the ability to define a faster implementation strategy.

In the DevOps software market we find many solutions that offer these possibilities, but what really matters is knowing how to choose the tools that “contribute to my current business reality”; that is, the analysis of the toolchain to be implemented should be driven by each tool's scope versus the functionality needed for the objective set out in the implementation roadmap.

Hybrid DevOps allows these technology concepts to be mixed to accelerate the step up to professionalization (phase 2 of the 3HTP methodology), while providing, for example, automation metrics, lower infrastructure costs, better use of resources, and above all agility. We also find continuous integration engines that execute jobs, integrating software automation tasks with tasks that provision and make available complete platforms for the applications.

The diagram in graphic no. 2 shows an example implementation that could be assembled from different kinds of software solutions contributing to the toolchain. The artifact and image repositories (the latter now, with the container trend) are generally almost an asset of the company, given their value and importance. A continuous integration engine then runs a pipeline through the DevOps phases: it creates the container holding the software that needs to be called for a given task at a given time (compilation, code analysis, unit tests, etc.); it invokes infrastructure-as-code scripts that create and provision an environment, for example for the QA testing phases, whose need is “volatile”; and it calls scripts that automate a task on their own or through other software. Finally, it also enables team collaboration and continuous improvement, feeding results back to the ALM solution, which could be software as a service (JIRA, VSTS, EWM, etc.).
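
To make the flow concrete, here is a deliberately simplified Python sketch of such a pipeline: each phase is just a shell command (build, test, publish, provision a volatile QA environment, deploy), executed in order and stopped at the first failure. The commands, names, and tools are illustrative, not 3HTP's actual toolchain.

    import subprocess
    import sys

    # One (command, description) pair per pipeline phase; all illustrative.
    STAGES = [
        ("docker build -t registry.example.com/myapp:1.4.0 .", "build container image"),
        ("python -m pytest tests/", "unit tests"),
        ("docker push registry.example.com/myapp:1.4.0", "publish to registry"),
        ("terraform -chdir=envs/qa apply -auto-approve", "provision volatile QA environment"),
        ("kubectl apply -f k8s/qa/", "deploy to QA"),
    ]

    for command, description in STAGES:
        print(f"--> {description}: {command}")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            # Fail fast so feedback reaches the team (and the ALM tool) early.
            sys.exit(f"pipeline stopped: '{description}' failed")

    print("pipeline finished: image promoted and QA environment ready")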

The speed of the development teams and the technology area must come from implementing solutions that actually deliver the speed the market demands and the agility that is talked about so much. The adoption of technological tools can be seen from different points of view today, and the use of resources is a vital part of IT's goals.

It is time to refocus the processes and the adoption model and take on the management of a Hybrid DevOps architecture that meets the organization's needs; it is time to grow the scripting skills of the teams involved in DevOps (development, testing, infrastructure, production) and to start labs with these new solutions that help reduce time and costs.

In the next few years, DevOps faces optimizing and improving the partitions that have been created in the middle phases of the CI/CT/CD/RM life-cycle process, and moving toward delivering a pipeline that truly connects Development with uninterrupted Operations end to end, with the standards of automation, quality, agility, and security that software and applications require.