Case: Evolving User Experience – A Three-Phase Banking Solution

The unprecedented impact of COVID-19 on financial institutions reshaped user habits almost overnight. During lockdown, the surge in online banking exposed how unprepared many portals were to handle heavy workloads. While some lagged in adapting, others, like our client, a renowned Chilean financial institution with over 50 years of service and more than a million users, swiftly began their digital transformation journey.
Phase 1: The Pandemic Response
During the peak of the pandemic, our client embarked on its initial digital transformation. They adopted the cloud to ensure their platforms remained stable, scalable, and secure. Consequently, they were able to introduce new functionalities, enabling users to execute tasks previously limited to physical branches. For this massive endeavor, they employed the Amazon Web Services (AWS) Cloud platform, paired with the expert guidance of a team of specialized professionals.
Phase 2: Post-Pandemic Enhancements
Post-pandemic, as users and businesses acclimated to online operations, our client entered the second phase of their transformation. This involved enhancing their architecture. Their first order of business was load testing to understand how concurrent online activity impacted their on-premises components. This was crucial in determining optimal configurations for AWS EKS and AWS API Gateway services.
Furthermore, as the business expanded its online offerings, new services were integrated. Security became paramount, prompting the institution to implement stricter validations and multi-factor authentication (MFA) for all users.
Phase 3: Continuous Improvements and Expansion
With more users adapting to online banking, the third phase saw the introduction of additional applications, enhancing the suite of services offered. The architecture was continuously revised and updated to cater to the ever-increasing demands of online banking.
Security was further tightened, and robust monitoring and traceability features were incorporated, providing deeper insights and ensuring system stability.

Projects Realized

The client, together with its Architecture, Development, Security, and Infrastructure areas, saw AWS as an ally in building a new portal, taking advantage of cloud strengths such as elasticity, high availability, connectivity, and cost management.

The project comprised a frontend (web and mobile) together with a layer of micro-services integrated with the on-premises systems by consuming services exposed in the ESB, which in turn accesses the legacy systems, thus forming a hybrid architecture.

Within the framework of this project, 3HTP Cloud Services actively participated in the advisory work, definitions, and technical implementation of the infrastructure, and in supporting the development and automation teams, taking the five pillars of the AWS Well-Architected Framework as a reference.

3HTP Cloud Services participation focused on the following activities:

  • Validation of the architecture proposed by the client
  • Infrastructure as Code (IaC) provisioning of the project on AWS
  • Automation and CI/CD integration for infrastructure and micro-services
  • Refinement of infrastructure, definitions, and standards for operation and monitoring
  • Stress and load testing for AWS and on-premises infrastructure

Outstanding Benefits Achieved

Our client’s three-phase approach to digital transformation wasn’t just an operational shift; it was a strategic masterstroke that propelled them to the forefront of financial digital innovation. Their benefits, both tangible and intangible, are monumental. Here are the towering achievements:

  1. Unprecedented Automation & Deployment: By harnessing cutting-edge tools and methodologies, the client revolutionized the automation, management, and deployment of both their infrastructure and application components. This turbocharged their solution’s life cycle, elevating it to industry-leading standards.
  2. Instantaneous Environment Creation: The ingeniousness of their automation prowess enabled the generation of volatile environments instantaneously, showcasing their technical agility and robustness.
  3. Robustness Redefined: Their infrastructure didn’t just improve; it transformed into an impregnable fortress. Capable of handling colossal load requirements in both production and non-production environments, it was meticulously dimensioned based on comprehensive load and stress tests. This was consistently observed across AWS and on-premises, creating a harmonious hybrid synergy.
  4. Dramatic Cost Optimization: It wasn’t just about saving money; it was about smart investing. Through astute use of AWS services such as Spot Instances and Aurora Serverless, they optimized costs without compromising performance. These choices, driven by findings and best practices, epitomized financial prudence.
  5. Achievement of Exemplary Goals: The institution’s objectives weren’t just met; they were exceeded with distinction. Governance, security, scalability, continuous delivery, and continuous deployment (CI/CD) were seamlessly intertwined with their on-premises infrastructure via AWS. The result? A gold standard in banking infrastructure and operations.
  6. Skyrocketed Technical Acumen: The client’s teams didn’t just grow; they evolved. Their exposure to the solution’s life cycle made them savants in their domains, setting new benchmarks for technical excellence in the industry.

    Services Performed by 3HTP Cloud Services: Initial Architecture Validation

    The institution already had a first cloud-adoption architecture for its client portal. As a multidisciplinary team, we therefore began with a diagnosis of the current situation and of the proposal made by the client. From this diagnosis and evaluation, the following architecture recommendations were obtained:

    • Separation of architectures for productive and non-productive environments
    • Use of Infrastructure as Code to create volatile environments per project, per business unit, etc.
    • CI/CD implementation to automate the creation, management, and deployment of both infrastructure and micro-services

    Productive Environment Architecture

    • This architecture is based on three Availability Zones (AZs). On-Demand Instances are used for the AWS EKS workers, and Reserved Instances for the database and cache, with 24×7 high availability.

    The number of instances to use for the Redis cluster was also defined.

    Production environment diagram

    Non-Productive Environment Architecture

    Considering that non-production environments do not require 24/7 availability, but do need an architecture at least similar to production, an approved architecture was defined that allows the different components to run in high availability while minimizing costs. For this, the following was defined:

    • Reduction of Availability Zones for non-productive environments, leaving two Availability Zones (AZs)
    • Use of Spot Instances to minimize AWS EKS worker costs
    • Scheduled shutdown and startup of resources so they run only during business hours
    • Use of Aurora Serverless
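The business-hours rule behind the scheduled shutdown and startup can be sketched as a small predicate. This is a hypothetical illustration; the actual window and weekend handling used by the client are assumptions, not documented values.

```python
from datetime import datetime

# Assumed business-hours window for non-production resources.
BUSINESS_START = 8   # 08:00 local time
BUSINESS_END = 20    # 20:00 local time

def should_be_running(now: datetime) -> bool:
    """Return True if non-production resources should be powered on."""
    is_weekday = now.weekday() < 5                 # Monday=0 .. Friday=4
    in_hours = BUSINESS_START <= now.hour < BUSINESS_END
    return is_weekday and in_hours
```

A scheduler (for example, a cron-triggered job) would evaluate this predicate and stop or start the EKS workers and Aurora capacity accordingly.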

    The instances were defined considering that there are only two Availability Zones; the number of instances for non-production environments is four.

    Non-production environments diagram

    Infrastructure as Code

    To create these architectures dynamically, and so that the environments could be volatile over time, it was decided that the infrastructure must be created as code. Terraform was defined as the primary tool to achieve this objective.

    As a result, two fully parameterized Terraform projects were created, capable of building the architectures shown in the previous section in a matter of minutes. Each execution of these projects uses an S3 bucket to store the state files created by Terraform.
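Storing Terraform state in S3 is configured through a backend block. The fragment below is a hedged sketch of what such a configuration typically looks like; the bucket name, key, and region are placeholders, not the client's real values.

```hcl
terraform {
  backend "s3" {
    bucket = "example-terraform-states"          # hypothetical bucket name
    key    = "portal/nonprod/terraform.tfstate"  # hypothetical state path
    region = "us-east-1"                         # assumed region
  }
}
```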

    Additionally, these projects are executed from Jenkins Pipelines, so the creation of a new environment is completely automated.

    Automation and CI / CD Integration for infrastructure and micro-services 

     Micro-services Deployment in EKS

    We helped the financial institution deploy the micro-services associated with its business solution to the Kubernetes cluster (AWS EKS). To do so, several definitions were established so that these micro-services could be deployed in an automated way, completing the full DevOps process (CI and CD).

    Deployment Pipeline

     A Jenkins pipeline was created to automatically deploy the micro-services to the EKS cluster.

    In summary, the pipeline executes the following steps:

    1. Get the micro-service code from Bitbucket
    2. Compile the code
    3. Build a new image with the package generated by the compilation
    4. Push the image to AWS ECR
    5. Create the Kubernetes manifests
    6. Apply the manifests in EKS
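The steps above can be sketched as an ordered list of stages walked by a runner. This is a minimal illustration, not the client's actual Jenkinsfile; every repository, image, and command name is a placeholder.

```python
# Hypothetical stages of the deployment pipeline; commands are placeholders.
PIPELINE = [
    ("checkout",  "git clone <bitbucket-repo-url>"),
    ("compile",   "mvn -B package"),
    ("image",     "docker build -t <account>.dkr.ecr.<region>.amazonaws.com/service:<tag> ."),
    ("push",      "docker push <account>.dkr.ecr.<region>.amazonaws.com/service:<tag>"),
    ("manifests", "render Kubernetes Deployment/Service/HPA manifests"),
    ("deploy",    "kubectl apply -f manifests/"),
]

def run(dry_run: bool = True) -> list:
    """Walk the stages in order; with dry_run=True just record each stage name."""
    executed = []
    for name, command in PIPELINE:
        if not dry_run:
            raise NotImplementedError("a real runner would execute: " + command)
        executed.append(name)
    return executed
```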

    Refinement of definitions and standards to be used on the infrastructure

    Image Hardening

    Security is critical for the institution, as it is for any company. An exclusive Docker image was therefore created that had no known vulnerabilities and did not allow applications to escalate privileges; this image is used as the base for the micro-services. During this process, the institution’s security area ran successive penetration tests until the image reported no known vulnerabilities.

    AWS EKS configurations

    To use the EKS clusters more productively, additional configurations were applied to them:

    • Kyverno: a tool that allows creating policies in the cluster to enforce security compliance and good practices
    • Metrics Server: installed in order to work with the Horizontal Pod Autoscaler for the micro-services later
    • X-Ray: enabled on the cluster to allow better tracing of micro-service usage
    • Cluster Autoscaler: configured to provide elastic, dynamic scaling of the cluster
    • AWS App Mesh: a proof of concept of the AWS App Mesh service was carried out using some specific micro-services

    Defining Kubernetes Objects

    In Deployment:

    • Resource limits: to avoid overcommitting the cluster, the first rule every micro-service must fulfill is defining the memory and CPU resources both for the start of the Pod and for its maximum growth. The client’s micro-services were categorized according to their usage (Low, Medium, High), and each category has default values for these parameters.
    • Readiness probes: to avoid loss of service while new versions of micro-services are deployed, each micro-service must pass a readiness check before receiving traffic in the cluster.
    • Liveness probes: each micro-service to be deployed must have a liveness check configured that verifies the behavior of the micro-service.
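The category-to-defaults mapping described above can be represented as a simple lookup. The concrete CPU and memory numbers here are illustrative assumptions, not the client's real values.

```python
# Hypothetical default resource settings per micro-service category.
RESOURCE_DEFAULTS = {
    "low":    {"requests": {"cpu": "100m", "memory": "128Mi"},
               "limits":   {"cpu": "250m", "memory": "256Mi"}},
    "medium": {"requests": {"cpu": "250m", "memory": "256Mi"},
               "limits":   {"cpu": "500m", "memory": "512Mi"}},
    "high":   {"requests": {"cpu": "500m", "memory": "512Mi"},
               "limits":   {"cpu": "1",    "memory": "1Gi"}},
}

def resources_for(category: str) -> dict:
    """Return the default Kubernetes resources block for a category."""
    return RESOURCE_DEFAULTS[category.lower()]
```

A manifest generator would inject the returned block into each Deployment's container spec.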


    Two types of Kubernetes Services were defined:

    • ClusterIP: For all micro-services that only use communication with other micro-services within the cluster and do not expose APIs to external clients or users.
    • NodePort: To be used by services that expose APIs to external clients or users, these services are later exposed via a Network Load Balancer and API Gateway.

    ConfigMap / Secrets

    Micro-services must keep their customizable settings in Kubernetes ConfigMaps or Secrets.

    Horizontal Pod Autoscaler (HPA) 

    Each micro-service deployed in the EKS cluster must use an HPA that defines the minimum and maximum number of replicas it requires.

    The client’s micro-services were categorized according to their use (Low, Medium, High) and each category has a default value of replicas to use.
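As a hedged illustration, a category's replica defaults might translate into an HPA manifest like the following; the name, replica counts, and CPU target are assumptions, not the client's real configuration.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service
  minReplicas: 2                   # assumed "Medium" category default
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # assumed scaling threshold
```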

    Stress and Load Testing for AWS and On-Premise Infrastructure

    One of the great challenges of this type of hybrid architecture, where the backend and business core are on-premises while the frontend and logic layers run in a dynamic, elastic cloud, is defining how elastic the architecture can be without affecting the on-premises and legacy services related to the solution.

    To address this challenge, load and stress tests were carried out on the environment, simulating both normal and peak business loads. Monitoring was performed across all layers of the complete solution: at the AWS level (CloudFront, API Gateway, NLB, EKS, Redis, RDS) and at the on-premises level (ESB, legacy systems, networks, and links).

    As a result of the various tests, it was possible to define the minimum and maximum elasticity limits in AWS (number of workers, replicas, and instances, instance types, among others) and on-premises (number of workers, bandwidth, etc.).



      Navigating the labyrinth of hybrid solutions requires more than just technical know-how; it mandates a visionary strategy, a well-defined roadmap, and a commitment to iterative validation.
      Our client’s success story underscores the paramount importance of careful planning complemented by consistent execution. A roadmap, while serving as a guiding light, ensures that the course is clear, milestones are defined, and potential challenges are anticipated. But, as with all plans, it’s only as good as its execution. The client’s commitment to stick to the roadmap, while allowing flexibility for real-time adjustments, was a testament to their strategic acumen.
      However, sticking to the roadmap isn’t just about meeting technical specifications or ensuring the system performs under duress. In today’s dynamic digital era, users’ interactions with applications are continually evolving. With each new feature introduced and every change in user behavior, the equilibrium of a hybrid system is tested. Our client understood that the stability of such a system doesn’t just rely on its technical backbone but also on the real-world dynamic brought in by its users.
      Continuous validation became their mantra. It wasn’t enough to assess the system’s performance in isolation. Instead, they constantly gauged how new features and shifting user patterns influenced the overall health of the hybrid solution. This holistic approach ensured that they didn’t just create a robust technical solution, but a responsive and resilient ecosystem that truly understood and adapted to its users.
      In essence, our client’s journey offers valuable insights: A well-charted roadmap, when paired with continuous validation, can drive hybrid solutions to unprecedented heights, accommodating both the technological and human facets of the digital landscape.

      Terraform Cloud: Hands-On Workshop

      Join other local industry professionals for an overview of the HashiCorp solution suite and hands-on workshops on Terraform Cloud and Vault security.

      Increase productivity

      Mitigate risk | Reduce cost.





      Terraform Cloud: Hands-On Workshop

      Developers have ever-greater control over IT spending thanks to on-demand cloud consumption models. With cloud infrastructure you pay for what you use, but also for what you provision and do not use. If companies lack a process for continuous governance and enforcement of best practices, cloud waste can spiral out of control.

      Join us in this workshop and learn how to use HashiCorp Terraform Cloud and infrastructure-as-code principles for cloud cost control. After an overview of how to reduce unnecessary spending, you will be guided through a hands-on workshop on provisioning infrastructure with Terraform Cloud.

      This is an intermediate workshop; basic experience with Terraform OSS is recommended.


      9:00 - 9:30 AM (COT)
      Arrivals + Breakfast
      9:30 - 10:00 AM (COT)
      Welcome and Presentation: Minimizing Cloud Waste + Increasing Productivity
      10:00 AM - 12:00 PM (COT)
      Terraform Cloud Hands-On Lab
      12:00 - 12:15 PM (COT)
      Closing Remarks + Q&A
      12:15 - 1:30 PM (COT)

      Vault Security + Zero Trust: Hands-On Workshop

      A hands-on workshop on identity-based, zero-trust security. During this workshop, participants will learn about the HashiCorp security model, which is based on the principle of identity-based access and security. For any machine or user to do anything, it must authenticate who or what it is, and its identity and policies define what it is allowed to do.

      After an overview of zero-trust security, participants will go through a hands-on HashiCorp Vault workshop. HashiCorp Vault allows companies to centrally store, access, and distribute dynamic secrets such as tokens, passwords, certificates, and encryption keys in any public or private cloud environment.

      This is an intermediate workshop; basic experience with Terraform OSS is recommended.


      12:15 - 1:30 PM (COT)
      Arrival + Lunch
      1:30 - 2:00 PM (COT)
      Welcome and Zero Trust, Identity-Based Security Overview
      2:00 - 5:00 PM (COT)
      Vault Hands-On Lab
      5:00 - 5:15 PM (COT)
      Q&A + Closing Remarks







      AWS IMMERSION DAYS is a free workshop of approximately 4 hours, guided by 3HTP professionals certified as AWS architects. We will review the phases and services that allow migrating to a secure, scalable environment such as AWS.


      AWS Immersion Days allows AWS partners in the Advanced and Premier consulting tiers to deliver workshops to customers, with content and tools developed by AWS solutions architects. These workshops include presentations, hands-on labs, and other customized assets that help customers understand AWS's value proposition.



      Learn the concepts of container building and orchestration, and take part in hands-on workshops guided through the AWS EKS service.


      Meet the 3HTP team that makes up the course's instructor faculty

      Katty Jaramillo


      Lead Instructor | AWS Cloud Architect | Docker Certified Associate

      Alain Díaz


      Coordinator | AWS Cloud Architect | SMB Leader

      Daniel Muñoz


      AWS Cloud Architect | Instructor | Docker Certified Associate

      Julio Purca


      AWS Cloud Architect | Instructor | Docker Certified Associate


      Learn which dimensions must be considered to modernize and migrate applications to AWS, which services and alternatives the AWS Cloud provides, and what the costs of this type of project are.


      As the journey to the cloud accelerates, organizations have been looking for ways to speed up adoption with a prescriptive approach to application modernization. In this talk we will address the strategy for modernizing applications developed in monolithic on-premises environments toward AWS. We will review the phases and services that allow migrating to a secure, scalable environment such as AWS.








      The technical talk will be held at the Amazon Web Services offices in Santiago, Chile.

      • 9:00-9:30 | Welcome breakfast, AWS offices.
      • 9:30-9:45 | Introduction of the technical talk and Advanced Partner, AWS - 3HTP.
      • 9:45-10:30 | PART I - Architecture Assessment.
      • 10:30-11:00 | Break
      • 11:00-12:30 | PART II - Modernization.

      A successful application modernization strategy starts with the business need in mind and then focuses on the technologies.

      Vijay Thumma, Global Practice Manager, AWS Professional Services, 2020



      Let's evaluate an AWS Cloud migration project

      A practical exercise based on a real scenario, to estimate the value of a cloud migration project:


      • Technology managers and leaders.
      • IT architecture leads.
      • Operations and Infrastructure managers.
      • Development and QA leads.
      • Finance and procurement roles.

      3HTP Cloud Services SPEAKERS

      Renzo Disi

      Director, 3HTP Cloud Services

      Ivan Camilo Pedraza

      DevOps Lead, 3HTP Cloud Services

      3HTP Cloud Services has created the Pijama Lecture Party initiative, an innovative space for learning about technology through videos, articles, and slide decks. You will see IT modernization strategies, branded and open-source software products, and services associated with our pillars: Bridge to Cloud, Born2Cloud, and DevOps. Don't miss the chance to have fun and learn; stay tuned for our invitations.



      We will talk about these topics and, together, demystify the path to the cloud.

      The arrival of cloud technologies brings significant challenges in defining the path to cloud adoption, on several fronts, including:


      • Technology managers and leaders
      • IT architecture leads
      • Operations and Infrastructure managers
      • Development and QA leads


      AWS Public Reference Identity Validator

      PROJECT REFERENCE Identity Validator System – Pension Fund Administrator

      The Pension Fund Administrator of Colombia is part of a well-known Colombian holding company and is one of the largest administrators of pension and severance funds in the country, with more than 1.6 million members. This pension fund manager administers three types of funds: severance (unemployment) funds, voluntary pensions, and mandatory pensions.

      In 2011, the company acquired pension fund assets in other countries of the region, and in 2013 the firm completed a merger with a foreign group, adding to its portfolio the management of pension and severance funds, life insurance, and investment administration.

      The technical challenge and business context

      This pension fund manager constantly develops applications to retain its customers and stay at the forefront of the business. It therefore has a large number of applications, grouped into two categories according to the users who use them:

      • Internal applications, used within the company
      • Satellite applications, used mainly by affiliates who perform self-service operations through the different existing channels according to their requirements and/or needs

      In satellite applications, the administrator must allow operations that by their nature require different security mechanisms, such as authentication, identification, and authorization:

      • To achieve authentication, affiliates use the username and password mechanism.
      • The authorization is carried out through a system of roles, profiles, and permissions, all configured depending on the type of affiliate and the accesses they require to carry out their respective operations.
      • The identification of the affiliate is a more complex task, bearing in mind that the objective of this mechanism is to ensure that the user is really who they say they are and that they have not been impersonated.

      This last mechanism, identification, is the core of the problem to be solved, since it must allow the administrator to ensure that affiliates carry out procedures, operations, and/or services in a reliable and safe manner, with the quality they deserve.

      Combining different security factors adds more layers of protection to the authentication procedure, making verification more robust and making intrusion and identity theft by third parties more difficult. This is why strong authentication is introduced: at least two factors are combined to guarantee that the user who authenticates really is who they claim to be. All strong authentication is also multi-factor authentication (MFA), in which the user verifies their identity as many times as there are combined factors; even if one factor fails or is attacked by a cybercriminal, more security barriers remain before the information can be accessed.
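Strong authentication often relies on a one-time-password factor. As a neutral illustration of how such a factor works (not the administrator's actual mechanism, which the source does not specify), a time-based OTP per RFC 6238 can be computed with the Python standard library alone:

```python
import hashlib
import hmac
import struct
import time
from typing import Optional

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time: Optional[float] = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (30-second window)."""
    t = int(time.time() if for_time is None else for_time)
    return hotp(secret, t // step, digits)
```

The code a user types from their device is compared against `totp(secret)` on the server; a mismatch blocks the operation even if the password factor was already compromised.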

      As a result, the “identity validator system” arises: a system offered as a service that performs the affiliate identification process for the administrator and is used by the other systems that require it, so that they can decide whether or not to authorize the execution of a procedure.

      Solution realized: AWS services and architecture

      To identify an affiliate correctly, collecting data is evidently the first step. Based on this data, the best decision must be made about which identification mechanisms should be applied; these mechanisms are then applied, and the affiliate's response is awaited and verified. In parallel, the entire process keeps its respective record of operations and statistics.

      The general architecture of the system is essentially made up of the following components:

      • Satellites: the systems that consume the identity validator's services, since they need to validate the identity of their affiliates before carrying out a procedure.
      • UI: the identity validator's graphical interface, a set of components and libraries developed in React JS that can be used by the satellites and that contain the connection logic toward the identity validator's services.
      • API Gateway: contains the endpoints that the identity validator system exposes.
      • Traceability in Splunk: components responsible for recording the messages the identity validator system exchanges (externally and internally).
      • Completeness: component responsible for making the necessary calls to services external to the identity validator system, extracting the client information needed to decide which mechanism will be applied.
      • Validate Pass: component responsible for removing, from the mechanisms to be applied to the client, those that have already been validated, taking into account a series of configurable criteria.
      • Mechanism Manager: in charge of executing each mechanism and carrying out the validation by communicating with third-party services and interpreting and validating their responses.
      • Rule Manager: in charge of deciding which mechanisms will be applied to the client.

      Architecture flow

      The identity validator system is a system made up internally by several micro-services that interact with each other. The general flow of the identity validator system consists of a request for validation of the mechanism for a client which travels through the different micro-services that make up the system. The following is the identity validator system message flow architecture.

      The image shows, in a way similar to the logical architecture, the internal architecture of the micro-services that make up the system, using AWS SQS queues as an intermediate communication channel, and the data flow of a request performed against it. The flow shown is a functional flow, meaning the request is not canceled.

      The flow is described below:

      1. The identity validator system receives a validation request from one of the configured channels; it validates the data sent against a defined structure and the request ID.
      2. The request is authenticated according to the satellite making it.
      3. The information is registered in Redis, where the system waits for the response for the request ID (synchronization simulation).
      4. The system determines whether it is an uninitiated request and validates that the transaction and the channel exist.
      5. The process of completing the request data begins.
      6. The completeness services external to the identity validator system are called to extract the required information about the affiliate, which will be used to decide which identification mechanism should be applied.
      7. The completeness data is sent to the RuleProcessor micro-service through a fan-out scheme using SNS and SQS; this service orchestrates the rules that determine the list of mechanisms to apply to the client.
      8. The mechanisms are determined from the data extracted in Completeness, plus the data of the initial request itself, taking into account a series of rules that the list of mechanisms must satisfy.
      9. The validations the client has already passed within a given time window are determined.
      10. The necessary queries are made to complete the required information.
      11. The completeness data and the list of mechanisms are sent to the ExecuteMechanism micro-service, which searches the list for the first mechanism that has not been validated and calls the service external to the identity validator system that starts that validation mechanism.
      12. The collected data, plus the response from initiating the first non-validated mechanism, is sent to SendResponse, which stores the entire request in the database for subsequent requests.
      13. The data is pushed into Redis, where RestRequest is waiting in order to send the request's response.
      14. A validation request for the initiated mechanism is started; the system keeps reading the answer from Redis.
      15. It is validated that the start request executed correctly and that the request is valid.
      16. The mechanism is sent for validation, calling the validation service corresponding to the initiated mechanism.
      17. It is verified that the validation was successful, and the mechanism is marked as valid in the list of mechanisms.
      18. The response for the next non-validated mechanism is sent.
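The selection step performed by ExecuteMechanism, picking the first mechanism in the ordered list that has not yet been validated, can be sketched as follows. The mechanism names are hypothetical examples, not the administrator's actual mechanisms.

```python
def first_pending(mechanisms: list) -> dict:
    """Return the first mechanism whose 'validated' flag is False, or None."""
    for mechanism in mechanisms:
        if not mechanism["validated"]:
            return mechanism
    return None

# Example plan with hypothetical mechanism names.
plan = [
    {"name": "otp_sms",            "validated": True},
    {"name": "security_questions", "validated": False},
    {"name": "facial_biometrics",  "validated": False},
]
```

In the real flow, the returned mechanism is the one whose external validation service is invoked next; when nothing is pending, the request is fully validated.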

      Low Level Architecture 

      A lower-level view of the identity validator system architecture shows the complexity of the system and the number of components that intervene and interact with the request information as it travels from one micro-service to another, each one enriching it and modifying its state.

      What are the benefits of this solution for the client?

      As part of the implemented solution, the client gained greater security in self-management processes and in operations that required verifying the identity of the person requesting them. Consumption of the system was implemented in the satellite applications to decide whether or not to allow an operation, which brought greater securitization of operations and prevents spoofing to a high degree. With this, the institution has gained greater prestige and confidence from its affiliates, who know their identity can be verified when they operate with its day-to-day services and products.




      Introduction to Core AWS Services

      November 19, 2021
      1:00 PM COL


      AWS Immersion Days allow AWS Partners in the Advanced and Premier Consulting tiers to deliver workshops to clients with content and tools developed by AWS solutions architects. These workshops include presentations, hands-on labs, and other customized assets that help customers understand AWS's value proposition.

      AWS IMMERSION DAYS is a free workshop lasting approximately 4 hours, guided by 3HTP professionals certified as AWS architects.



      This IMMERSION DAY offers an overview of the advantages of cloud computing and the possibilities Amazon Web Services opens up through its different services.

      Introduction to AWS

      A high-level introduction to the AWS cloud. This topic covers:

      • AWS benefits
      • Pricing philosophy
      • Global infrastructure
      • EC2 instances
      • Virtual Private Cloud (VPC) overview
      • and more…


      Overview of AWS Documentation, Blogs, Quickstarts, and Solutions.

      This content covers:

      • EC2
      • RDS
      • S3
      • Elastic Load Balancer (ELB)
      • Auto Scaling Group


      Meet the members of the 3HTP team of instructors who teach IMMERSION DAY.


      Alain Díaz


      AWS Architect

      Daniel Muñoz



      AWS Architect

      Katty Jaramillo


      Cloud Architect




      AWS offers a completely free tool called the AWS Cloud Adoption Readiness Tool (CART), which, through a set of simple questions, evaluates the status of your organization and delivers results highlighting the key points of improvement for starting a cloud adoption process. 3HTP puts an architect at your disposal to accompany you in completing and interpreting the CART and in establishing the important guidelines for your path to the cloud.


      Contact us to schedule a session with a 3HTP architect who will help you fill out the CART, interpret the results, and outline your strategy to adopt AWS Cloud services.

      AWS Cloud Adoption Readiness Tool (CART)

      If you have 3 to 5 minutes, learn more about the CART:



      On this page we provide an introduction to the AWS Cloud Adoption Readiness Tool (CART), which allows you to assess your organization's readiness to adopt Amazon Web Services cloud services. CART is a powerful tool that will help you successfully approach the use of AWS cloud technologies. 3HTP offers you the following possibilities:

      Schedule an appointment with a 3HTP AWS Architect who will accompany you in the process of filling in and interpreting the results:


      If you would like to do it yourself send us your information. You will receive an email with detailed instructions on how to do it:


      The path to cloud services, the Bridge2Cloud as we call it at 3HTP, has become a necessity for companies in search of immediate resource provisioning, elastic scalability, and high availability.

      However, one of the main barriers to migrating to the cloud is a lack of visibility into the organization's current state when considering cloud adoption.

      • How to evaluate?
      • Where to start?
      • What areas should be considered?
      • How to set priorities?
      • What elements should be considered?

      These are some of the questions that remain unanswered when we want to start a cloud adoption plan.

      AWS Cloud Adoption Readiness Tool

      You answer a questionnaire that, through simple questions, evaluates the current state of the organization. The tool's objective is to provide a guide that helps organizations build a migration strategy to the Amazon Web Services cloud. The application allows us to:

      AWS Cloud Adoption Readiness Tool - CART

      The CART provides an assessment of how prepared you are to adopt cloud services, through 16 questions designed around AWS best practices.

      It is useful for companies of any size and any sector, giving you the possibility of detailing your state of readiness to adopt cloud services.

      How is the CART structured?

      The evaluation questionnaire is made up of two types of questions.

      This survey and its evaluation report detail your preparation for migration to the cloud through 16 questions grouped into six perspectives.

      CART is a completely free tool that AWS makes available to all users.



      "Customers migrating to AWS can experience a 51% reduction in operations costs, a 62% increase in IT staff productivity, and a 94% reduction in downtime."