Case: “Embarking on a Digital Journey: Transforming from Manual to Automated Deployments”, Legops tech S.A.S

Introduction

Many organizations are undertaking technology upgrades, seeking to improve their internal processes and project execution in order to deliver superior performance, availability, and efficiency to their customers. However, tackling a technology upgrade, especially in the cloud, goes beyond mere willingness: it demands a solid commitment at both the organizational and managerial levels, aimed at implementing improvements that will be truly fruitful.

In this context, the automated deployment of applications emerges as a practically indispensable requirement to ensure the high availability that the current landscape demands. Manually installing dependencies and starting services by hand are practices that belong to the past. Several organizations are enthusiastically taking up this challenge, seeking not only to remain competitive but also to adopt best practices and focus on business strategies that benefit both their processes and the people who depend on them.

In this context of transformation, we highlight the key role of dockerization, using each application's README as the basis for the process. In addition, the implementation of Continuous Integration/Continuous Delivery (CI/CD) practices, supported by services such as AWS CodePipeline, stands as a cornerstone for accelerating and optimizing development cycles, providing a continuous and reliable flow from code writing to production deployment. These strategies not only drive operational efficiency but also lay the foundation for agile, future-oriented innovation in a constantly evolving technology environment.

About the customer

LEGOPS is a company that provides professional virtual services for the legal world. Through its products it helps clients manage their legal matters more efficiently, covering collaborative work, document signing, and process traceability. Its clients are law firms, independent lawyers, and companies that want to keep track of certifications, regulations, and legal processes through the use of technology.

Number of employees: 11-50
Age: 8 years

Customer Challenge

Phase 1: customer request

3HTP Cloud Services’ proposal for LEGOPS focuses on providing specialized technical consulting and support to optimize the architecture and operating model of its systems on AWS. In the initial phase, we addressed the review and improvement of the existing architecture, the implementation of environments, and the strengthening of DevOps processes, including the creation of automation pipelines. The project represents a comprehensive commitment to elevate LEGOPS's operational efficiency and performance, providing customized solutions delivered by certified DevOps and AWS experts.

Phase 2: proof of concept

The first phase of the project with LEGOPS, which provides the context for everything explained later, was the phase in which the applications were dockerized. Here LEGOPS (hereinafter, the client) shared the README of each application (the document listing the commands needed for deployment, plus additional information), which they had been using to deploy the applications and install their dependencies in AWS entirely by hand.

In this phase the Dockerfiles were written and containerization tests were performed, exposing the applications locally and then deploying them to an AWS environment with the minimum requirements. This was preceded by the provisioning of infrastructure in Amazon Web Services (AWS): application container services, automated pipelines for change management at the development and deployment level (the DevOps lifecycle), databases on the RDS service, compute capacity on the EC2 service, and an EKS (Elastic Kubernetes Service) cluster for application management and scaling. The result met the initial requirements set out by the client.

Phase 3: deployment in controlled environments

Building on the success of the proof of concept, the opportunity arose to replicate it in three separate accounts (development, testing, and production). This was done in a progressive and controlled manner that allowed fine-tuning of performance parameters in terms of compute, networking, security, database consumption, and application availability. In addition, stress and performance tests were run against the applications. The greatest gain of this phase was the knowledge generated and delivered to the client, along with the capabilities acquired by the 3HTP architecture team.

Phase 4: production start-up

The last phase consisted of promoting what had been done in the three accounts into a production account, automating the lifecycle of the client's nine applications. The observations gathered from the three controlled accounts were taken into account; a synchronization was set up between GitHub and CodeCommit for repository versioning; and, as additional measures, Content Security Policy (CSP) rules were created in CloudFront, cookies were optimized, JavaScript libraries were updated, and performance details were tuned to make the applications more efficient to consume.

Partner Solution

3HTP focused activities

The customer, in collaboration with its teams of architects, administrators, and developers, recognized AWS as a strategic partner for automating the deployment of its web applications.

In the context of this project, 3HTP Cloud Services played an active role in providing guidance, establishing key definitions and implementing technical solutions both in infrastructure and in supporting the development and automation teams.

3HTP Cloud Services’ participation focused on the following areas:

  • Validation of the architecture proposed by the client.
  • Deployment of infrastructure-as-code (IaC) projects on AWS.
  • Implementation of automated processes and continuous integration/deployment (CI/CD) for infrastructure and microservices management.
  • Execution of stress and load tests in the AWS environment.

This strategic and multi-faceted involvement of 3HTP Cloud Services contributed significantly to building a robust, efficient infrastructure aligned with AWS standards of excellence, thus supporting the overall success of the project.

Services Offered by 3HTP Cloud Services:

The organization already had an initial cloud adoption structure for its customer portal. Therefore, as a multidisciplinary team, we proceeded with an analysis of the present situation and the proposal made by the client. The following relevant suggestions for the architecture were derived from this comprehensive assessment:

  • Differentiation of architectures for productive and non-productive environments.
  • Use of Infrastructure as Code to generate dynamic environments, segmented by projects, business units, among others.
  • Implementation of Continuous Integration/Continuous Delivery (CI/CD) practices to automate the creation, management, and deployment of both infrastructure and microservices.

Architecture

To replicate what was done in phase 1, the following architecture, covering all the services involved, was proposed, and it was received and approved by the client. The design also follows the best practices of the AWS Well-Architected Framework, in which security, performance efficiency, operational excellence, reliability, cost optimization, and sustainability are central.

Several services work together to support the applications and data. Key services include Amazon S3 for secure file storage, Amazon CloudWatch for monitoring, and AWS IAM for access control. Amazon SQS provides scalable and reliable message queuing between distributed application components, enabling asynchronous communication and improving overall system resiliency. Finally, SSL/TLS certificates were easily managed with AWS Certificate Manager, ensuring secure connections for applications and data.

The Virginia region hosts an Amazon ECR container image repository, which made it possible to efficiently manage and store Docker images for the containerized applications. DevOps services such as AWS CodePipeline, CodeBuild, CodeCommit, and CodeDeploy automate the software release processes, optimizing the deployment pipeline. Amazon CloudFront serves as the content delivery network, optimizing content delivery to end users, and is protected by AWS WAF (Web Application Firewall) to defend against web-based attacks.

The Virtual Private Cloud (VPC), which spans two Availability Zones, contains an Internet Gateway, a public subnet with an Application Load Balancer, and a hardened EC2 instance known as the “bastion” for additional protection. Outbound traffic from the private subnets reaches the Internet through a NAT Gateway in the public subnet. One private subnet hosts an Amazon RDS instance for database management and an OpenSearch domain for search functionality. The second private subnet contains the EKS service, responsible for managing the containerized applications, along with auto-scaling EC2 worker nodes. All services within the VPC are protected by their respective security groups and interconnected via routing tables.

Used services

AWS Certificate Manager: ACM is a service that makes it easy for users to provision, manage and deploy SSL/TLS certificates for use with AWS services. It provides a simple way to secure network communications and encrypt data transmitted between clients and servers.

S3 (Simple Storage Service): S3 is a highly scalable and secure cloud storage service that allows users to store and retrieve data, such as files, documents, images, and videos, from anywhere at any time.

Amazon CloudWatch: CloudWatch is a monitoring and management service that provides real-time insights into the operational health of your AWS resources, helping you monitor metrics, collect and analyze logs, and set alarms.

AWS IAM (Identity and Access Management): IAM is a service that allows you to securely manage user access and permissions to AWS resources, allowing you to control who can access what is in your AWS environment.

Amazon SQS (Simple Queue Service): SQS is a fully managed message queuing service that enables the decoupling and scaling of microservices, serverless applications, and distributed systems. It provides a reliable and scalable platform for sending, storing, and receiving messages between software components, ensuring smooth and efficient communication.

ECR (Elastic Container Registry): ECR is a fully managed container image registry that simplifies the storage, management, and deployment of container images, providing a secure and scalable solution.

AWS CodePipeline: CodePipeline is a continuous integration and continuous delivery (CI/CD) service that automates your software release processes, allowing you to build, test and deploy your applications from trusted sources.

AWS CodeDeploy: CodeDeploy is a fully managed deployment service that automates application deployments to EC2 instances, on-premises instances and serverless Lambda functions, ensuring fast and reliable application updates.

AWS CodeBuild: CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages, eliminating the need to maintain your build infrastructure.

CodeCommit: CodeCommit is a fully managed source control service that hosts secure, scalable private Git repositories, allowing you to store and version your code reliably.

Amazon CloudFront: CloudFront is a global content delivery network (CDN) that accelerates the delivery of web content to end users around the world with low latency and high transfer speeds.

AWS Web Application Firewall: WAF is an AWS service that integrates with Amazon CloudFront and helps protect your web applications from common web exploits and attacks. It allows you to define rules to filter and monitor the HTTP and HTTPS requests reaching your applications.

VPC (Virtual Private Cloud): VPC is a virtual network environment that allows you to create a private, isolated section of the AWS cloud, giving you control over your virtual network and allowing you to define IP ranges, subnets and network gateways.

Availability Zones (AZ): Availability Zones are isolated locations within an AWS region that provide fault tolerance and high availability. They are interconnected with low latency links to support resiliency and redundancy.

Internet Gateway: Internet Gateway is a horizontally scalable gateway that enables communication between your VPC and the Internet, allowing you to access resources in your VPC from the Internet and vice versa.

Routing tables: Routing tables are rules that govern network traffic within a VPC, allowing you to manage the flow of data between subnets and control connectivity to the Internet and other resources.

EC2 (Elastic Compute Cloud): EC2 provides scalable virtual servers in the cloud, known as EC2 instances. It allows you to quickly launch and manage virtual machines with various configurations, providing flexible and resizable compute capacity for your applications.

Application Load Balancer (ALB): ALB is an AWS load balancing service that evenly distributes incoming application traffic across multiple targets, improving scalability, availability, and responsiveness. It supports advanced features such as content-based routing and SSL/TLS termination.

Amazon OpenSearch Service: OpenSearch Service is a fully managed, highly scalable search service based on the open-source OpenSearch project. It simplifies the deployment and management of search functionality within your applications.

RDS (Relational Database Service): RDS is a fully managed relational database service that simplifies the configuration, operation and scaling of relational databases. It supports popular database engines and provides automated backups, patching and high availability for reliable and scalable data storage.

EKS (Elastic Kubernetes Service): EKS is a managed Kubernetes service that simplifies the deployment and scaling of containerized applications. It automates the management of the underlying Kubernetes infrastructure, allowing you to focus on developing and running your applications efficiently.

Security Groups: Acting as virtual firewalls, security groups provide granular access control for instances by defining authorized protocols, ports and IP ranges, ensuring secure inbound and outbound traffic.

Infrastructure as code

A significant advantage of this technology refresh and paradigm shift lies in the adoption of Infrastructure as Code (IaC). Previously, our client was manually managing its entire infrastructure through the AWS console and command line interface (CLI), which presented considerable challenges in terms of scalability and operational agility.

Manual management became increasingly complex as the infrastructure grew, as upgrades and modifications required manual interventions that slowed down the process. Implementing Terraform was a significant change, allowing us to create a project where, through code, planning, building and destroying infrastructure in the AWS cloud became as simple as a click, giving precise control over every action.

With Terraform, we have achieved more efficient and agile infrastructure management, overcoming the limitations of manual administration. This code-based approach not only improves speed and consistency in resource deployment, but also provides greater flexibility to adapt to changes in scale and the demands of the evolving technology environment. In summary, the adoption of IaC, through Terraform, has enabled our client to optimize their infrastructure on AWS in an effective and simplified manner.

Terraform

Terraform is an Infrastructure as Code (IaC) technology that allows us to define and maintain, through configuration files written in HCL (HashiCorp Configuration Language, a domain-specific language developed by HashiCorp), all of our infrastructure deployed across different cloud providers (AWS, Azure, GCP, etc.). It also works with on-premises installations, as long as an API is exposed through which Terraform can communicate.
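As a minimal sketch of that workflow, a Terraform project for this kind of environment might start like the following; the region, CIDR ranges, and resource names are assumptions for illustration, not the client's actual configuration:

```hcl
# Illustrative only: names and address ranges are placeholders.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1" # N. Virginia, the region referenced in the architecture
}

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  tags                 = { Name = "legops-vpc" }
}

resource "aws_subnet" "public_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"
}
```

With such a project in place, `terraform plan`, `terraform apply`, and `terraform destroy` correspond to the planning, building, and destruction of infrastructure described above.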

CodeCommit and GitHub

In our ongoing commitment to adapt to the specific needs of our clients, we implemented a synchronization process between GitHub and CodeCommit. Recognizing the importance of flexibility and efficient collaboration in software development, this strategic integration allowed our client to take advantage of the best of both platforms.

GitHub, known for its robust community and collaborative tools, integrated with CodeCommit, the AWS version control service, to streamline repository management and facilitate collaboration across distributed teams. This hybrid approach not only improved workflow efficiency, but also ensured greater security and compliance with industry standards.

By implementing this synchronization between GitHub and CodeCommit, our client experienced improved synergy in software development, allowing for agile collaboration and accurate version tracking. The client was thus able to continue their workflow as normal while changes were reflected in AWS CodeCommit.
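The client's synchronization was automated, but conceptually a one-way mirror from GitHub to CodeCommit reduces to a few standard git commands; the repository names and region below are placeholders, not the client's real repositories:

```shell
# One-way mirror: every branch and tag on GitHub is replicated to CodeCommit.
git clone --mirror https://github.com/example-org/example-app.git
cd example-app.git

# Register the CodeCommit repository as a second remote
git remote add codecommit \
  https://git-codecommit.us-east-1.amazonaws.com/v1/repos/example-app

# Push all refs; re-running this step keeps the mirror up to date
git push codecommit --mirror
```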

Dockerization of applications

Docker, a cornerstone of modern application management, represents an open-source platform that simplifies application containerization. By encapsulating applications and their dependencies in lightweight, self-contained containers, Docker provides a standardized solution for efficiently packaging, distributing, and running software. Containerization allows applications to operate in a consistent and isolated environment, ensuring reliability from development to production. Docker stands out for its ability to simplify deployment, improve scalability, and optimize application management, making it an essential tool for development and operations teams.

Application containerization using Dockerfiles represents a crucial step towards portability and consistency in software development. A Dockerfile describes the application’s runtime environment, facilitating uniform deployment across multiple environments, including development, test, and production. This approach provides a solid foundation for ensuring that applications run reliably regardless of the differences between environments. Containerization with Dockerfiles not only simplifies dependency management but also streamlines the development cycle by eliminating concerns related to environment variations.

How did we do it?

In response to the need to optimize the deployment of our client’s web applications, we undertook a significant transformation through the containerization of their applications. Initially, we observed that the manual deployment process, which involved the installation of dependencies through specific commands, resulted in a considerable investment of time and effort.

To address this issue, we dove into the existing README files of the applications, taking advantage of the detailed information provided there. We implemented a containerization process that encapsulates each application in separate Docker containers. This approach allowed our client to eliminate the tedious manual tasks associated with installing dependencies and configuring the environment, significantly simplifying the deployment process.
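As a hedged sketch of the outcome, the Dockerfile for one of these applications might look like the following; the Node.js base image, port, and commands are assumptions standing in for whatever each README actually specified:

```dockerfile
# Hypothetical example: base image, port, and commands are assumptions.
FROM node:18-alpine

WORKDIR /app

# Dependency installation, previously run by hand from the README
COPY package*.json ./
RUN npm ci --omit=dev

# Application code and build step
COPY . .
RUN npm run build

# The port and start command the README documented
EXPOSE 3000
CMD ["npm", "start"]
```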

Containerization not only accelerated deployment, but also brought an additional level of consistency and portability to the applications. Now, with container images ready for deployment, our customer can deploy new versions and upgrades more quickly and confidently, reducing the risk of errors associated with manual methods.

We are pleased to have delivered a solution that not only improves operational efficiency, but also lays the foundation for a more agile and scalable deployment in the future. Containerization has proven to be a key strategy for modernizing and optimizing development and deployment workflows, putting our client in a stronger position to meet the dynamic challenges of today’s technology environment.

Environment configuration files

Environment configuration files complemented the Dockerfile-based containerization, adding a crucial layer of flexibility. These files contain environment variables and settings specific to each environment, such as development, test, or production. By separating configuration from source code, they enabled easy adaptation of the application to different contexts without modifying the code itself. This not only improves the portability of the application but also ensures a flexible and secure configuration that can be managed independently throughout the development and deployment lifecycles.
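For example, a per-environment file might look like this; the variable names and values are invented for illustration:

```ini
# .env.production — illustrative only; names and values are not the client's.
NODE_ENV=production
DATABASE_URL=postgres://app_user:CHANGE_ME@db.internal:5432/app
AWS_REGION=us-east-1
LOG_LEVEL=warn
```

At run time such a file can be injected with `docker run --env-file .env.production …`, or mapped into Kubernetes as a ConfigMap or Secret, so the same image serves every environment.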

Publication of Docker images in ECR

Once the applications were containerized and the images were ready, they were published to Amazon Elastic Container Registry (ECR), an essential step in the lifecycle of containerized applications. This process involved storing and managing the Docker container images in a highly scalable and secure repository. Opting for ECR ensured the availability of the images.

ECR not only acted as a centralized repository for images, but also provided secure access control and retention policies, ensuring integrity and efficiency in image lifecycle management. In addition, its native integration with other AWS services simplified the implementation and ongoing deployment of applications, allowing client development teams to focus on innovation and development, without worrying about the logistical management of images.

This approach not only addresses security and availability, but also improves image version traceability, crucial for maintaining consistency and integrity in development and production environments.
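The publishing step itself reduces to authenticating Docker against ECR, tagging, and pushing. The account ID, region, and repository name below are placeholders:

```shell
# Illustrative: account ID, region, and repository name are placeholders.
AWS_ACCOUNT=123456789012
REGION=us-east-1
REPO=legops/app

# Authenticate the local Docker client against the ECR registry
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin \
      "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com"

# Tag the locally built image and push it to the registry
docker tag "$REPO:latest" "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
```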

AWS EKS Configurations

In the process of deploying the containerized applications hosted in Amazon Elastic Container Registry (ECR), we played a crucial role in creating thorough Kubernetes manifests and services to orchestrate the execution of these containers. Kubernetes manifests represent the very essence of application orchestration in containerized environments.

These YAML files serve as detailed documents that describe and define the configuration and resources required to deploy applications on a Kubernetes cluster. By addressing crucial elements such as deployments and services, Kubernetes manifests provide comprehensive guidance for Kubernetes to efficiently orchestrate and manage applications.

In particular, the creation of deployments is a fundamental part of this process. Deployments in Kubernetes allow us to define and declare the desired state of applications, managing the creation and update of the corresponding pods. This feature is essential to ensure the availability and scalability of our containerized applications.

In addition, exposing services is another crucial aspect. By creating services in Kubernetes manifests, we establish an abstraction layer that facilitates communication between different components of client applications. This allows external or internal users to access applications in a controlled and secure manner.
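A minimal sketch of such a manifest, pairing a Deployment with the Service that exposes it, could look like this; the names, image URI, and ports are assumptions:

```yaml
# Illustrative manifest: names, image URI, and ports are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/legops/app:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 3000
```

Applying the file with `kubectl apply -f manifest.yaml` lets the cluster reconcile itself toward this declared state.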

The declarative nature of manifests means that developers can define the desired state of their applications, allowing Kubernetes to interpret and act accordingly. These files not only make deployment easier, but also ensure consistency in application management, regardless of the complexity and scale of the cluster.

By leveraging Kubernetes manifests, teams can ensure efficient management and seamless deployment of applications in container-based environments. This consolidates operations and simplifies application lifecycle management, bringing greater reliability and stability to the automated deployment ecosystem.

Automatic CI/CD deployment

CodePipeline Deployment

We set up an end-to-end workflow using AWS development and deployment tools, specifically AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. This integration enables seamless continuous integration and delivery (CI/CD), allowing fast and secure updates to the client's applications.

Continuous Integration and Continuous Delivery (CI/CD) pipelines represent an essential framework for efficient automation of the entire application development and deployment process. Designed to optimize code quality and delivery speed, these pipelines provide a complete solution from change integration to continuous deployment in test and production environments.

Continuous Integration (CI)

CI pipelines automatically initiate testing and verification as soon as changes are made to the source code. This ensures that any modifications are seamlessly integrated with existing code, identifying potential problems early and ensuring consistency in collaborative development.

Continuous Delivery (CD)

The Continuous Delivery phase involves automating deployment in test and production environments. CD pipelines enable fast and secure delivery of new application releases, reducing cycle times and improving deployment efficiency.

AWS CodeBuild: We configured custom build environments matching the specific requirements of the client's web applications. Builds are triggered automatically when changes reach the repository, ensuring consistency and availability of the deployment artifacts.

AWS CodeDeploy: We implemented a deployment group that ensures a smooth, gradual rollout. Monitoring policies were configured to automatically roll back the deployment if problems are detected.

AWS CodePipeline: We created a customized pipeline that mirrors the client's development process, from source code retrieval to production deployment. The pipeline consists of several stages, including retrieving the source code from the repository, building the application with AWS CodeBuild, and deploying automatically with CodeDeploy.
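To make the CodeBuild stage concrete, a hypothetical `buildspec.yml` for one of the containerized applications might resemble the following; the `ECR_URI` variable is assumed to be set on the CodeBuild project and is not the client's actual configuration:

```yaml
# Hypothetical buildspec.yml; ECR_URI is assumed to be provided as an
# environment variable on the CodeBuild project.
version: 0.2

phases:
  pre_build:
    commands:
      # Authenticate Docker against the ECR registry
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_URI
  build:
    commands:
      # Tag each image with the commit that produced it
      - docker build -t $ECR_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION .
  post_build:
    commands:
      - docker push $ECR_URI:$CODEBUILD_RESOLVED_SOURCE_VERSION
```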

Result and Benefits

  • Rapid Continuous Delivery: With full automation, updates are deployed faster, improving delivery time for new features and bug fixes.
  • Greater Confidence: Gradual rollout and automatic rollback policies ensure greater confidence in the stability of new versions.
  • Visibility and Monitoring: Detailed monitoring dashboards provide a complete view of the deployment process, making it easier to identify and correct problems.
  • Standards: Refinement and definition of the standards to be used across the infrastructure.

CloudFront

As a cloud services company, we provisioned Amazon CloudFront for our client, a solution that optimizes the delivery of web content quickly and securely. In addition, we implemented Content Security Policy (CSP) and HTTP Strict Transport Security (HSTS) policies to strengthen the security and reliability of the system.

CSP rules provide greater protection by restricting the sources from which content may be loaded, mitigating known threats such as cross-site scripting and guaranteeing a more attack-resistant environment. HSTS adds a further layer of security by forcing exclusive communication over HTTPS, preventing downgrade attacks to plain HTTP. These measures not only optimize content delivery but also reinforce security, meeting the highest standards in data protection and user experience.
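In Terraform, headers like these can be attached through a CloudFront response headers policy; the sketch below is an assumption for illustration, with a placeholder policy name and deliberately simple header values rather than the client's actual rules:

```hcl
# Illustrative sketch: policy name and header values are placeholders.
resource "aws_cloudfront_response_headers_policy" "security" {
  name = "legops-security-headers"

  security_headers_config {
    strict_transport_security {
      access_control_max_age_sec = 31536000 # one year
      include_subdomains         = true
      preload                    = true
      override                   = true
    }
    content_security_policy {
      content_security_policy = "default-src 'self'"
      override                = true
    }
  }
}
```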

AWS SQS

As a leading cloud services company, we successfully implemented Amazon Simple Queue Service (SQS) to optimize our client’s application architecture. SQS, a managed messaging service, has allowed us to decouple and scale critical components, improving the efficiency and reliability of their systems.

By adopting SQS, we applied the “First In, First Out” (FIFO) principle to guarantee that messages are processed in the order in which they were placed on the queue. This is crucial in applications where sequencing and time synchronization are essential, such as order systems and transaction processing; in this specific case, for customer applications such as the electronic signing of documents and the traceability of legal matters.
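FIFO behavior hinges on two send-time parameters, MessageGroupId and MessageDeduplicationId. A small helper that builds the arguments for an SQS `send_message` call might look like the following sketch; the queue URL, payload fields, and group naming are assumptions, not the client's real values:

```python
# Builds the keyword arguments for SQS send_message on a FIFO queue.
# Queue URL, payload, and group ID are placeholders for illustration.
import hashlib
import json


def fifo_message(queue_url: str, body: dict, group_id: str) -> dict:
    """Messages sharing a MessageGroupId are delivered strictly in order;
    MessageDeduplicationId suppresses duplicates within the 5-minute window."""
    payload = json.dumps(body, sort_keys=True)
    return {
        "QueueUrl": queue_url,
        "MessageBody": payload,
        "MessageGroupId": group_id,
        # Content-based hash: identical payloads in the window are dropped
        "MessageDeduplicationId": hashlib.sha256(payload.encode()).hexdigest(),
    }


msg = fifo_message(
    "https://sqs.us-east-1.amazonaws.com/123456789012/signatures.fifo",
    {"document_id": "doc-42", "action": "sign"},
    group_id="doc-42",
)
# With boto3, this dict would be passed as: sqs_client.send_message(**msg)
```

Grouping by document ID, as sketched here, keeps all events for one legal document strictly ordered while letting unrelated documents be processed in parallel.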

Nginx-Ingress-Controller

We have successfully integrated NGINX into our client’s architecture, empowering their web applications with a robust, multi-functional server. NGINX not only excels as an open source web server, but also plays essential roles as a reverse proxy, load balancer and cache server. Its efficient architecture improves both the performance and security of web applications, making it the ideal choice for handling large volumes of concurrent connections.

In addition, we have implemented the NGINX-specific Ingress Controller as an integral part of Kubernetes environments. This component manages HTTP and HTTPS traffic rules, using NGINX to direct traffic to services deployed in the cluster. By configuring Ingress rules, such as routes and redirects, the Ingress Controller optimizes the efficient exposure and management of web services. The specific implementation of NGINX enhances security and efficiency, offering a complete solution for the controlled exposure of web services, thus contributing to the construction of modern and scalable architectures in container-based environments.
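An Ingress rule handled by the NGINX Ingress Controller might look like the following sketch; the host name, service name, and port are assumptions, not the client's actual routes:

```yaml
# Illustrative Ingress: host, service name, and port are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```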

Stress and load testing for AWS implementation

We conducted extensive testing on our client’s applications to ensure optimal performance in a variety of situations. These tests included:

  • Stress Testing: Focused on evaluating how applications handle extreme situations or sudden load peaks. Conditions that exceed normal operating limits are simulated to identify potential failure points and ensure stability under stress.
  • Load Tests: Designed to evaluate the behavior of applications under sustained and significant loads. These tests seek to determine the capacity of applications to handle constant demand, identifying possible bottlenecks and optimizing resources for continuous performance.
  • Performance Testing: Focused on measuring the speed and efficiency of applications under normal conditions. System response to various operations is evaluated to ensure fast loading times and a smooth user experience.

These tests allowed us to identify areas for improvement, optimize resources and ensure that applications perform reliably under various circumstances. Our dedication to comprehensive testing reflects our commitment to delivering technology solutions that are not only robust but also highly efficient and capable of adapting to the changing demands of the environment.

Benefits obtained by the customer

The adoption of advanced containerization and orchestration solutions has significantly transformed the technology landscape for our client. After dockerizing their applications and leveraging key services such as Kubernetes on Amazon EKS, automated deployment with AWS Pipelines, and strategic implementation of services such as CloudFront, SQS, along with robust security measures and a distributed VPC infrastructure across multiple availability zones, our client has experienced several notable benefits.

Scalability and Efficiency: Containerization with Docker and orchestration using Kubernetes in EKS has allowed our client to scale their applications efficiently, dynamically adapting to changing traffic demands without compromising performance.

Continuous and Automated Deployment: The implementation of AWS Pipelines has facilitated continuous and automated deployment, streamlining the development lifecycle and enabling fast and secure delivery of new features and updates to applications.

Performance Optimization: The use of CloudFront has significantly improved the delivery of web content, reducing load times and providing a faster and more efficient user experience across the board.

Efficient Message Handling: SQS has optimized message handling, enabling seamless communication between different application components, ensuring reliability and consistency in data processing.

Robust Security: The implementation of advanced security measures, within a well-structured VPC distributed across multiple availability zones, has strengthened data protection and application integrity against potential threats.

High Availability and Fault Tolerance: The distribution of the infrastructure in several availability zones has improved availability and fault tolerance, ensuring service continuity even in adverse situations.

Together, these solutions have enabled our client not only to improve operational efficiency and security, but also to offer an enhanced and scalable user experience, consolidating its position in a constantly evolving technological environment.

Conclusions

In conclusion, the road to a successful technological upgrade proves to be a fundamental journey for modern organizations. Beyond mere aspiration, it requires a solid commitment and a strategic vision that allows the implementation of truly fruitful improvements. In this context, the automatic deployment of applications is an indispensable component to ensure the high availability demanded by today’s dynamic landscape.

Abandoning obsolete practices and embracing dockerization, using the applications' README documentation as a guide, has emerged as a fundamental step in this evolution. In addition, the adoption of Continuous Delivery (CI/CD) practices, supported by services such as AWS CodePipeline, is a cornerstone for accelerating and optimizing development cycles. This approach not only drives operational efficiency, but also lays the foundation for agile, future-oriented innovation in a constantly evolving technology environment.

By highlighting the relevance of strategies such as dockerization and CI/CD implementation, these organizations not only seek to remain competitive, but also to adopt best practices. Thus, they are orienting their efforts towards business strategies that not only benefit internal processes, but also the people who depend on them. In this context of transformation, technological evolution is not only a requirement, but an opportunity for growth and excellence in the delivery of advanced technological solutions.

About us

We are a professional services company with regional coverage and a presence in Chile, Colombia, Peru, and the USA. We are exclusively oriented toward covering the needs of business and technology organizations in the areas of Infrastructure Management (Middleware), Cloud Technology Development (Cloud), and Software Life Cycle Automation and Implementation Processes (DevOps). Our operational strategy is oriented toward meeting our clients' needs across these three fundamental pillars of technology.

Case: Evolving User Experience – A Three-Phase Banking Solution

 
Introduction:
The unprecedented impact of COVID-19 on several financial institutions reshaped user habits almost overnight. During lockdown, the surge in online banking showcased the unpreparedness of many portals to handle heavy workloads. While some lagged in adapting, others, like our client, a renowned Chilean financial institution with over 50 years of service and over a million users, swiftly began their digital transformation journey.
 
Phase 1: The Pandemic Response
During the peak of the pandemic, our client embarked on its initial digital transformation. They adopted the cloud to ensure their platforms remained stable, scalable, and secure. Consequently, they were able to introduce new functionalities, enabling users to execute tasks previously limited to physical branches. For this massive endeavor, they employed the Amazon Web Services (AWS) Cloud platform, paired with the expert guidance of a team of specialized professionals.
 
Phase 2: Post-Pandemic Enhancements
Post-pandemic, as users and businesses acclimated to online operations, our client entered the second phase of their transformation. This involved enhancing their architecture. Their first order of business was load testing to understand how concurrent online activity impacted their on-premises components. This was crucial in determining optimal configurations for AWS EKS and AWS API Gateway services.
 
Furthermore, as the business expanded its online offerings, new services were integrated. Security became paramount, prompting the institution to implement stricter validations and multi-factor authentication (MFA) for all users.
 
Phase 3: Continuous Improvements and Expansion
With more users adapting to online banking, the third phase saw the introduction of additional applications, enhancing the suite of services offered. The architecture was continuously revised and updated to cater to the ever-increasing demands of online banking.
 
Security was further tightened, and robust monitoring and traceability features were incorporated, providing deeper insights and ensuring system stability.

Completed Projects

The client, together with its Architecture, Development, Security, and Infrastructure teams, saw in AWS an ally for building a new portal, taking advantage of cloud benefits such as elasticity, high availability, connectivity, and cost management.

The project contemplated the implementation of a frontend (web and mobile) together with a microservices layer integrated with its on-premises systems, consuming services exposed on its ESB, which in turn accesses its legacy systems, thus forming a hybrid architecture.

Within the framework of this project, 3HTP Cloud Services actively participated in the advisory work, technical definitions, and implementation of the infrastructure, as well as in supporting the development and automation teams, taking the five pillars of the AWS Well-Architected Framework as a reference.

3HTP Cloud Services' participation focused on the following activities:

  • Validation of the architecture proposed by the client
  • Infrastructure as code (IaC) project provisioning on AWS
  • Automation and CI/CD integration for infrastructure and microservices
  • Infrastructure refinement, definitions, and standards for operation and monitoring
  • Stress and load testing for AWS and the on-premises infrastructure

Outstanding Benefits Achieved

Our client's three-phase approach to digital transformation was not just an operational shift; it was a strategic decision that placed them at the forefront of digital innovation in financial services. The benefits, both tangible and intangible, are substantial:

  1. Automation & Deployment: By adopting modern tools and methodologies, the client automated the management and deployment of both their infrastructure and application components, accelerating the solution's life cycle to industry-leading standards.
  2. On-Demand Environment Creation: Automation made it possible to generate volatile, short-lived environments in minutes, demonstrating technical agility and robustness.
  3. Increased Robustness: The infrastructure became capable of handling heavy load requirements in both production and non-production environments; sizing was carried out based on comprehensive load and stress tests, observed consistently across the AWS and on-premises components of the hybrid system.
  4. Cost Optimization: Through judicious use of AWS services, such as Spot Instances and Aurora Serverless, costs were optimized without compromising performance, with choices driven by test findings and best practices.
  5. Achievement of Goals: The institution's objectives for governance, security, scalability, and continuous delivery and deployment (CI/CD) were met, with AWS seamlessly integrated with the on-premises infrastructure.
  6. Growth in Technical Expertise: The client's teams deepened their expertise through exposure to the full solution life cycle, setting new internal benchmarks for technical excellence.

    Services delivered by 3HTP Cloud Services: initial architecture validation

    The institution already had an initial cloud-adoption architecture for its customer portal. As a multidisciplinary team, we therefore began with a diagnosis of the current situation and of the client's proposal. From this diagnosis and evaluation, the following relevant recommendations for the architecture were obtained:

    • Separation of architectures for production and non-production environments
    • Use of infrastructure as code to create volatile environments, per project, per business unit, etc.
    • Implementation of CI/CD to automate the creation, management, and deployment of both infrastructure and microservices

    Production environment architecture

    • This architecture is based on the use of three availability zones (AZs); On-Demand instances are used for the AWS EKS workers, and Reserved Instances are used for the database and the cache, with 24×7 high availability.

    The number of instances to be used for the Redis cluster is defined.

    Production diagram

    Non-production environment architecture

    Considering that non-production environments do not require 24/7 usage, but do need an architecture at least similar to production, an approved architecture was defined that runs the different components in high availability while minimizing costs. The following was defined:

    • Reduction of availability zones for non-production environments, keeping two availability zones (AZs)
    • Use of Spot Instances to minimize the cost of the AWS EKS workers
    • Configuration of resource start-up and shutdown for use during business hours
    • Use of Aurora Serverless

    The instances to be used are defined considering that there are only two availability zones; the number of instances for non-production environments is simply 4.
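    The business-hours start/stop rule for non-production resources can be sketched as a simple schedule check; the hours and workdays below are hypothetical placeholders, not the client's actual configuration.

```python
from datetime import datetime

# Hypothetical business-hours window for non-production environments.
START_HOUR, STOP_HOUR = 7, 19     # resources run 07:00-19:00
WORKDAYS = {0, 1, 2, 3, 4}        # Monday-Friday (datetime.weekday())

def should_be_running(now: datetime) -> bool:
    """Decide whether non-production resources should be powered on right now."""
    return now.weekday() in WORKDAYS and START_HOUR <= now.hour < STOP_HOUR
```

    A scheduled job (e.g., a Lambda on a cron rule) could evaluate this check and start or stop the instances accordingly.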

    Non-production environments diagram

    Infrastructure as code

    To create the architectures dynamically, and so that environments could be volatile over time, it was defined that the infrastructure must be created through code. Terraform was chosen as the main tool to achieve this goal.

    As a result, two fully parameterized Terraform projects were created, capable of building the architectures shown above in a matter of minutes; each execution of these projects uses an S3 bucket to store the state files created by Terraform.

    In addition, these projects are executed from Jenkins pipelines, so the creation of a new environment is fully automated.
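    As a rough illustration of how a Jenkins stage might drive such a Terraform project with an S3 state backend, the sketch below composes the CLI calls; the bucket name, state key layout, and variable names are illustrative assumptions, not the client's actual pipeline.

```python
# Sketch: build the Terraform CLI invocations for one volatile environment.
# State is kept in S3, keyed per environment, as described in the text.

def terraform_commands(env: str, state_bucket: str) -> list[str]:
    """Return the init/plan/apply commands a pipeline stage would run."""
    init = (
        f"terraform init "
        f"-backend-config=bucket={state_bucket} "
        f"-backend-config=key={env}/terraform.tfstate"
    )
    return [
        init,                                             # configure S3 backend
        f"terraform plan -var env={env} -out={env}.tfplan",  # plan the environment
        f"terraform apply {env}.tfplan",                  # apply the saved plan
    ]
```

    Applying a saved plan file runs without an interactive prompt, which suits unattended Jenkins executions.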

    Automation and CI/CD Integration for infrastructure and microservices

     Microservices deployment on EKS

    We helped the financial institution deploy the microservices associated with its business solution on the Kubernetes cluster (AWS EKS). To do so, several definitions were established in order to carry out the deployment of these microservices in an automated fashion, thereby fulfilling the complete DevOps (CI/CD) process.

    Deployment pipeline

     A Jenkins pipeline was created to automatically deploy the microservices to the EKS cluster.

    Tasks executed by the pipeline, in summary:

    1. Fetch the microservice code from Bitbucket
    2. Compile the code
    3. Build a new image with the artifact produced by the compilation
    4. Push the image to AWS ECR
    5. Generate the Kubernetes manifests
    6. Apply the manifests on EKS
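    The six steps above can be sketched as shell commands assembled by a pipeline script; the repository URL, Maven build, Helm rendering step, and registry address are hypothetical stand-ins for whatever tooling the client actually used.

```python
# Sketch: the six pipeline stages as shell commands, in order.
# All names (repo, registry, chart path) are illustrative placeholders.

def pipeline_steps(service: str, tag: str, registry: str) -> list[str]:
    image = f"{registry}/{service}:{tag}"
    return [
        f"git clone https://bitbucket.org/acme/{service}.git",  # 1. fetch code from Bitbucket
        "mvn -B package",                                       # 2. compile the code
        f"docker build -t {image} .",                           # 3. build the image
        f"docker push {image}",                                 # 4. push to AWS ECR
        f"helm template {service} ./chart --set image={image} > manifests.yaml",  # 5. render manifests
        "kubectl apply -f manifests.yaml",                      # 6. apply on EKS
    ]
```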

    Refinement, definitions, and standards to be used on the infrastructure

    Image hardening

    For the institution, as for any company, security is critical. To that end, an exclusive Docker base image was created that had no known vulnerabilities and did not allow privilege escalation by the applications; this image is used as the base for the microservices. During this process, the institution's Security Area ran concurrent penetration tests until the image reported no then-known vulnerabilities.

    AWS EKS configurations

    To use the EKS clusters more productively, additional configurations were applied to them:

    • Kyverno: A tool that allows us to create various policies in the cluster to enforce security and good practices (https://kyverno.io/)
    • Metrics Server installation: This component is installed in order to work with the Horizontal Pod Autoscaler on the microservices later on
    • AWS X-Ray: X-Ray is enabled on the cluster for better tracing of microservice usage
    • Cluster Autoscaler: This component is configured to provide elastic, dynamic scaling of the cluster
    • AWS App Mesh: A proof of concept of the AWS App Mesh service was carried out, using a few specific microservices for this test

    Kubernetes object definitions

    For Deployments:

    • Resource usage limits: To avoid overruns in the cluster, the first rule a microservice must comply with is the definition of memory and CPU usage, both for Pod start-up (requests) and for its maximum growth (limits). The client's microservices were classified according to their usage (Low, Medium, High), and each category has default values for these parameters.
    • Readiness probe usage: It is necessary to avoid loss of service during the deployment of new microservice versions; this is why, before receiving load in the cluster, each microservice must pass a readiness check.
    • Liveness probe usage: Each microservice to be deployed must have a liveness check configured that verifies the microservice's behavior.
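    A minimal sketch of these conventions, assuming hypothetical per-category defaults and illustrative probe paths (the actual values used by the client are not stated in the text):

```python
# Hypothetical default resources per usage category (Low / Medium / High).
DEFAULTS = {
    "low":    {"requests": {"cpu": "100m", "memory": "128Mi"}, "limits": {"cpu": "250m", "memory": "256Mi"}},
    "medium": {"requests": {"cpu": "250m", "memory": "256Mi"}, "limits": {"cpu": "500m", "memory": "512Mi"}},
    "high":   {"requests": {"cpu": "500m", "memory": "512Mi"}, "limits": {"cpu": "1",    "memory": "1Gi"}},
}

def container_spec(name: str, category: str, port: int = 8080) -> dict:
    """Build the container section of a Deployment with resources and both probes."""
    return {
        "name": name,
        "resources": DEFAULTS[category],
        # Readiness: checked before the Pod receives traffic from the cluster.
        "readinessProbe": {"httpGet": {"path": "/ready", "port": port},
                           "initialDelaySeconds": 10},
        # Liveness: periodic check of the microservice's behavior.
        "livenessProbe": {"httpGet": {"path": "/health", "port": port},
                          "periodSeconds": 15},
    }
```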

    Services

    The use of two types of Kubernetes Services was defined:

    • ClusterIP: For all microservices that only communicate with other microservices inside the cluster and do not expose APIs to external clients or users.
    • NodePort: For services that expose APIs to external clients or users; these services are subsequently exposed through a Network Load Balancer and API Gateway.
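    This convention can be sketched as a small Service-manifest builder; the port and label selector shown are illustrative assumptions:

```python
# Sketch: pick the Service type per the rule above and emit a minimal manifest.

def service_manifest(name: str, exposes_external_api: bool, port: int = 8080) -> dict:
    kind = "NodePort" if exposes_external_api else "ClusterIP"
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {"type": kind,
                 "selector": {"app": name},                 # assumed label convention
                 "ports": [{"port": port, "targetPort": port}]},
    }
```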

    ConfigMaps / Secrets

    Microservices must carry their customizable configuration in Kubernetes ConfigMaps or Secrets.

    Horizontal Pod Autoscaler (HPA)

    Each microservice deployed to the EKS cluster requires an HPA defining the minimum and maximum number of replicas required.

    The client's microservices were classified according to their usage (Low, Medium, High), and each category has a default number of replicas to use.
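    For reference, the Kubernetes HPA computes the desired replica count by scaling the current count proportionally to the ratio of the observed metric to its target, then clamping to the configured min/max bounds — roughly:

```python
import math

def desired_replicas(current: int, current_metric: float, target_metric: float,
                     min_replicas: int, max_replicas: int) -> int:
    """Standard HPA rule: desired = ceil(current * currentMetric / targetMetric), clamped."""
    desired = math.ceil(current * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))
```

    For example, 2 replicas at 90% CPU against a 60% target scale to 3 replicas.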

    Load and stress testing for AWS and the on-premises infrastructure

    One of the great challenges of this type of (hybrid) architecture, where the backend and business core are on-premises while the frontend and logic layers live in an elastic, dynamic cloud, is defining how far the architecture can stretch elastically without affecting the on-premises and legacy services related to the solution.

    To solve this challenge, load and stress tests were performed on the environment, simulating peak and normal business loads. Monitoring was carried out across the different layers of the complete solution, on the AWS side (CloudFront, API Gateway, NLB, EKS, Redis, RDS) and on the on-premises side (ESB, legacy systems, networks, and links).

    As a result of the various tests performed, it was possible to define the minimum and maximum elasticity limits on AWS (number of workers, number of replicas, number and types of instances, among others) and on-premises (number of workers, bandwidth, etc.).

     

      Conclusion

      Navigating the labyrinth of hybrid solutions requires more than just technical know-how; it mandates a visionary strategy, a well-defined roadmap, and a commitment to iterative validation.
      Our client’s success story underscores the paramount importance of careful planning complemented by consistent execution. A roadmap, while serving as a guiding light, ensures that the course is clear, milestones are defined, and potential challenges are anticipated. But, as with all plans, it’s only as good as its execution. The client’s commitment to stick to the roadmap, while allowing flexibility for real-time adjustments, was a testament to their strategic acumen.
      However, sticking to the roadmap isn’t just about meeting technical specifications or ensuring the system performs under duress. In today’s dynamic digital era, users’ interactions with applications are continually evolving. With each new feature introduced and every change in user behavior, the equilibrium of a hybrid system is tested. Our client understood that the stability of such a system doesn’t just rely on its technical backbone but also on the real-world dynamic brought in by its users.
      Continuous validation became their mantra. It wasn’t enough to assess the system’s performance in isolation. Instead, they constantly gauged how new features and shifting user patterns influenced the overall health of the hybrid solution. This holistic approach ensured that they didn’t just create a robust technical solution, but a responsive and resilient ecosystem that truly understood and adapted to its users.
      In essence, our client’s journey offers valuable insights: A well-charted roadmap, when paired with continuous validation, can drive hybrid solutions to unprecedented heights, accommodating both the technological and human facets of the digital landscape.

      AWS public reference: identity validator

      PROJECT REFERENCE: IDENTITY VALIDATOR SYSTEM – Administradora de Fondos de Pensiones

      The Administradora de Fondos de Pensiones of Colombia is part of a well-known Colombian holding company and is one of the largest pension and severance fund administrators in the country, with more than 1.6 million members. This pension fund manager administers three types of funds: unemployment insurance, voluntary pensions, and mandatory pensions.

      In 2011, the company acquired pension fund assets in other countries of the region; in 2013, the firm completed a merger with a foreign group, adding pension and severance fund administration, life insurance, and investment management to its portfolio.

      The technical difficulty and the business implication

      This pension fund manager continuously develops applications to build customer loyalty and stay at the forefront of the business. Its large set of applications is grouped according to the users who use them, into two groups:

      • Internal applications, for the company's internal use
      • Satellite applications, used mainly by members who perform self-service operations on the different existing channels according to their requirements and/or needs.

      In satellite applications, the administrator must allow operations that by their nature require different security mechanisms, such as authentication, identification, and authorization:

      • For authentication, members use the username-and-password mechanism.
      • Authorization is performed through a system of roles, profiles, and permissions, all configured according to the type of member and the access they require to perform their respective operations.
      • Identification of the member is a more complex task, considering that the goal of this mechanism is to ensure that the user really is who they claim to be and has not been impersonated.

      This last mechanism, identification, is the core of the problem to be solved, since it must allow the administrator to ensure that members carry out their procedures, operations, and/or use of services reliably and securely, with the quality they deserve.

      Combining different security factors adds more layers of security to the authentication procedure, making verification more robust and hindering intrusion and identity theft by third parties. This is why strong authentication was introduced: at least two factors are combined to guarantee that the authenticating user really is who they claim to be. Every strong authentication is also multi-factor authentication (MFA), in which the user verifies their identity more than once. However many factors are combined, even if one of them fails or is attacked by a cybercriminal, additional security barriers remain before the attacker can access the information.

      As a consequence, the "identity validator system" was born: a system-as-a-service that performs the member identification process for the administrator, and which is used by any other system that requires it so that it can decide whether or not to authorize the execution of a procedure.

      Solution delivered, AWS services, architecture

      To correctly identify a member, data collection is the evident first step. Based on this data, the best decision must be made as to which identification mechanisms to apply; these mechanisms are then applied, and the member's response is awaited and verified, while in parallel the whole process keeps its corresponding operation log and statistics.

      The general architecture of the system essentially comprises the following components:

      • Satellites: the applications that consume the identity validator system's services, since they need to validate the identity of their members before carrying out a procedure.
      • UI: Graphical interface of the identity validator system. A set of components and libraries developed in React JS that can be used by the Satellites and that contain the connection logic toward the identity validator system's services.
      • API Gateway: Contains the endpoints exposed by the identity validator system.
      • Traceability in Splunk: Components in charge of logging the messages the identity validator system exchanges, externally and internally.

      • Completeness: Component in charge of making the necessary calls to services external to the identity validator system, extracting the client information required to decide which mechanism will be applied.

      • Validate Pass: Component in charge of removing, from the mechanisms to be applied to the client, those that have already been validated, taking into account a series of configurable criteria.

      • Mechanism Manager: In charge of executing each mechanism and performing the validation through communication with third-party services, interpreting and validating their responses.
      • Rule Manager: In charge of deciding which mechanisms will be applied to the client.

      Architecture flow

      The identity validator system is composed internally of several microservices that interact with each other. The general flow consists of a mechanism-validation request for a client traveling through the different microservices that make up the system. The following is the message-flow architecture of the identity validator system.

      The image shows, similarly to the system's logical architecture, the internal architecture of the microservices, which use AWS SQS queues as the intermediate communication channel, along with the data flow of a request made to the system. The flow depicted is a happy-path flow, meaning the request is not cancelled.

      The flow is described below:

      1. The identity validator system receives a validation request from one of the configured channels, validating the submitted data against a defined structure and a request ID.
      2. It authenticates the request according to the satellite that makes it.
      3. It records the information in Redis, where it keeps waiting for the response for that request ID (synchronous-call simulation).
      4. It determines whether it is a not-yet-started request and validates that the transaction and channel exist.
      5. The process of completing the request data begins.
      6. Completeness services external to the identity validator system are called to extract the required member information, which will be used when deciding which identification mechanism must be applied.
      7. The completeness data is sent to the RuleProcessor microservice through a fan-out scheme using SNS and SQS; this microservice orchestrates the rules that determine the list of mechanisms to apply to the client.
      8. The mechanisms are determined from the data extracted in Completeness, plus the data of the initial request itself, taking into account a series of rules that the list of mechanisms to apply must satisfy.
      9. The validations the client has passed within a given time window are determined.
      10. The queries needed to complete the required information are performed.
      11. The completeness data and the list of mechanisms to apply are sent to the ExecuteMechanism microservice, which looks in the list for the first mechanism that has not been validated and calls the service external to the identity validator system that initiates that validation mechanism.
      12. The collected data, plus the response from initiating the first non-validated mechanism, is sent to SendResponse, which stores the complete request in the database for subsequent requests.
      13. The data is pushed to Redis, where RestRequest is waiting to send the request-response.
      14. A validation request for the initiated mechanism is started; it keeps reading the response from Redis.
      15. It is validated that the initiation request executed correctly and that the request is valid.
      16. The request is sent to validate the mechanism, calling the validation service corresponding to the mechanism that was initiated.
      17. It is verified that the validation was successful, and the mechanism is marked as valid in the list of mechanisms.
      18. Otherwise, an invalid-mechanism response is sent.

      Low-level architecture

      A lower-level architecture of the identity validator system shows the complexity of the system and the number of components that intervene and interact with the request information traveling from one microservice to another, each one enriching and modifying its state.

      What are the benefits of this solution for the client?

      As part of the implemented solution, the client obtained greater security in self-service processes and operations that required verifying the identity of the person requesting them. Consumption of the system was implemented in the satellites to decide whether or not to allow operations, which brought greater security to those operations and prevents spoofing to a high degree. With this, the client has gained greater prestige and trust from its members, who know there are ways to verify their identity when operating with its everyday services and products.

      AWS EKS Project – PROTECCIÓN S.A.

      Containers

      AFP Protección S.A., a unit of the Colombian holding company Grupo de Inversiones Suramericana, is the second-largest pension and severance fund administrator in the country, with almost 1.6 million members.
      www.proteccion.com

      Logo Protección Colombia

      PROTECCIÓN began its project to deploy applications with Docker technology at the beginning of 2017, oriented at the time toward a fully on-premises infrastructure, with 3HTP providing support through the administration of all its middleware platforms.

      In 2018, PROTECCIÓN started digital transformation plans aimed at discovering and implementing cloud strategies, and therefore undertook an analysis of the main providers of this service in the market. At the same time, a call for proposals was launched to find cloud container-management services from the most important providers in the market, and it was there that 3HTP offered Protección the option of an analysis of the AWS container-management service, Amazon Elastic Container Service (ECS).

      Through cooperative work between AWS and 3HTP, the client was offered a proof of concept to show the functionalities and benefits of using the ECS container-management service and, at the same time, as a good strategy for the client, to show its compatibility, scope, and integration with other cloud services that PROTECCIÓN could take into account when evaluating its cloud service providers. Even though the client was already quite interested in the service delivered by another cloud provider, the hard technical work done in deploying an application as a proof of functionality and scope demonstrated that AWS ECS would deliver a solution of greater depth and impact on their expectations for functionality and implementation.

      PROTECCIÓN, finally convinced by the solution, awarded the service tender to 3HTP-AWS and designated two applications relevant to its operation to be migrated from containers deployed in the on-premises environments to the AWS cloud.

      After the project was assigned, and at the request of one of the team leads, an evaluation was started with the intention of broadening the project's scope in terms of management, administration, and portability across clouds for the cases requested by PROTECCIÓN; from there, the option of using Kubernetes with the AWS service Amazon Elastic Kubernetes Service (EKS) was proposed.