Improving user experience: a banking solution

The high impact of COVID-19 on financial companies was felt by everyone. The lockdown forced customers to change their habits, and they quickly turned to web portals, most of which were clearly not prepared to receive high workloads. Some companies have still not managed to adapt to the change, while others reacted quickly by redirecting their efforts. For this, our client started a digital transformation, adopting the cloud so that its platforms would be stable, scalable, and secure, responding to the demands of its clients while avoiding performance problems. Our client also added new functionality that allows end users to carry out procedures that, before the pandemic, had to take place in physical branches.

One of the companies that managed to adapt and focus its efforts on digital transformation is our client, a renowned Chilean financial institution with more than 50 years of history and more than one million users.

Our client worked to respond quickly to the contingency produced by the pandemic and thereby retain its customers, delivering a new Portal that enhances the user experience of the business services provided through its web and mobile platforms.

To face this challenge, the Amazon Web Services (AWS) cloud platform was selected, together with the advice and accompaniment of a team of specialized professionals to guide the process of adopting the cloud and implementing hybrid architectures.

Projects Realized

The Client, together with its Architecture, Development, Security, and Infrastructure areas, saw in AWS an ally for the construction of a new Portal, taking advantage of cloud benefits such as elasticity, high availability, connectivity, and cost management.

The project contemplated the implementation of a frontend (web and mobile) together with a layer of micro-services integrated with its on-premise systems by consuming services exposed in its ESB, which in turn accesses its legacy systems, thus forming a hybrid architecture.

Within the framework of this project, 3HTP Cloud Services actively participated in the advice, definitions, and technical implementation of the infrastructure, and in support of the development and automation teams, taking as reference the five pillars of the AWS Well-Architected Framework.

3HTP Cloud Services participation focused on the following activities:

  • Validation of architecture proposed by the client 
  • Infrastructure as code (IaC) project provisioning on AWS 
  • Automation and CI/CD integration for infrastructure and micro-services
  • Refinement of infrastructure, definitions, and standards for Operation and monitoring
  • Stress and Load Testing for AWS and On-Premise Infrastructure 


The client achieved several relevant benefits, among the most outstanding of which we can mention:

  • Automation, management, and deployment of the infrastructure and application components that run on it, allowing the client to accelerate and strengthen the life cycle of the solution.
  • Generation of volatile environments in an automated way as a result of the previous point.
  • Improved infrastructure to support high load requirements, for both productive and non-productive environments. The appropriate sizing was defined based on the results and conclusions of the load and stress tests carried out in the AWS and On-Premise environments, across the different components that make up the hybrid system.
  • Significant cost reduction through efficient use of the different AWS services for non-productive environments (for example, AWS Spot instances and Aurora Serverless), as a consequence of the recommendations, findings, and good practices applied during the project.
  • The institution was able to meet its governance, security, scalability, continuous delivery, and continuous deployment (CI/CD) objectives, as well as interaction with its on-premise infrastructure using the AWS cloud.
  • Growth and acquisition of technical experience of the different client work teams involved in the life cycle of the solution.

Services Performed by 3HTP Cloud Services: Initial Architecture Validation

The institution already had a first cloud-adoption architecture for its client portal; therefore, as a multidisciplinary team, we began with a diagnosis of the current situation and of the proposal made by the client. From this diagnosis and evaluation, the following architecture recommendations were obtained:

  • Separation of architectures for productive and non-productive environments
  • Use of infrastructure as code to create volatile environments by project, by business unit, etc.
  • CI/CD implementation to automate the creation, management, and deployment of both infrastructure and micro-services

Productive Environment Architecture

  • This architecture is based on the use of three Availability Zones (AZs); additionally, On-Demand instances are used for the AWS EKS workers, and Reserved Instances for the database and cache, with 24×7 high availability.

The number of instances to use for the Redis cluster was also defined.

Productive Diagram

Non-Productive Environment Architecture

Considering that non-production environments do not require 24/7 use, but do need an architecture at least similar to production's, an adjusted architecture was defined that allows the different components to run in high availability while minimizing costs. For this, the following was defined:

  • Reduction of availability zones for non-productive environments, leaving two Availability Zones (AZs)
  • Use of Spot Instances to minimize AWS EKS worker costs
  • Scheduled shutdown and startup of resources so they run only during business hours
  • Use of Aurora Serverless

The instances to be used were defined considering that there are only two availability zones; the number of instances for non-production environments is four.

Non-production environments diagram

Infrastructure as Code

In order to create the architectures dynamically, and so that environments could be volatile over time, it was defined that the infrastructure must be created by means of code. Terraform was defined as the primary tool to achieve this objective.

As a result, two fully parameterized Terraform projects were created, capable of creating the architectures shown in the previous point in a matter of minutes. Each execution of these projects requires an S3 bucket to store the state files created by Terraform.

Additionally, these projects are executed from Jenkins Pipelines, so the creation of a new environment is completely automated.
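As a minimal sketch of this automation (the bucket name, state-key layout, and variable names below are hypothetical, not the client's actual configuration), a pipeline stage can drive Terraform like this, keeping one state file per environment in the S3 bucket:

```python
import subprocess

def terraform_commands(env: str, state_bucket: str, dry_run: bool = True):
    """Build (and optionally run) the Terraform invocations for one environment.

    Each environment keeps its state under its own key in the S3 bucket,
    so several volatile environments can coexist.
    """
    backend = [
        f"-backend-config=bucket={state_bucket}",
        f"-backend-config=key={env}/terraform.tfstate",
    ]
    commands = [
        ["terraform", "init", *backend],
        ["terraform", "plan", f"-var=environment={env}", "-out=tfplan"],
        ["terraform", "apply", "-auto-approve", "tfplan"],
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)
    return commands
```

A Jenkins stage would call `terraform_commands(env, bucket, dry_run=False)`; with `dry_run=True` the function only returns the command list, which keeps the logic easy to test.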

Automation and CI/CD Integration for infrastructure and micro-services

 Micro-services Deployment in EKS

We helped the financial institution deploy the micro-services associated with its business solution to the Kubernetes cluster (AWS EKS). For this, several definitions were made so that these micro-services could be deployed in an automated way, thus complying with the complete DevOps process (CI and CD).

Deployment Pipeline

 A Jenkins pipeline was created to automatically deploy the micro-services to the EKS cluster.

In summary, the pipeline executes the following steps:

  1. Get micro-service code from Bitbucket
  2. Compile code
  3. Create a new image with the package generated in the compilation
  4. Push image to AWS ECR
  5. Create Kubernetes manifests
  6. Apply manifests in EKS
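The six steps above can be sketched as the command sequence such a pipeline would run (the repository URL, the Maven build, and the manifest-rendering script are hypothetical placeholders for the client's actual tooling):

```python
def pipeline_steps(service: str, registry: str, tag: str) -> list:
    """Return the commands for the six pipeline stages, in order (dry run)."""
    image = f"{registry}/{service}:{tag}"
    return [
        # 1. Get micro-service code from Bitbucket (hypothetical URL)
        ["git", "clone", f"https://bitbucket.example.com/scm/{service}.git"],
        # 2. Compile code (assuming a Maven build)
        ["mvn", "-B", "package"],
        # 3. Create a new image with the generated package
        ["docker", "build", "-t", image, "."],
        # 4. Push the image to AWS ECR
        ["docker", "push", image],
        # 5. Create Kubernetes manifests (hypothetical rendering script)
        ["python", "render_manifests.py", service, image],
        # 6. Apply the manifests in EKS
        ["kubectl", "apply", "-f", f"manifests/{service}/"],
    ]
```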

Refinement, definitions, and standards to be used on the infrastructure

Image Hardening

For the institution, as for any company, security is critical. For this, an exclusive Docker image was created that had no known vulnerabilities and did not allow privilege escalation by applications; this image is used as the base for the micro-services. During this process, the institution's Security Area carried out successive PenTests until the image no longer reported any vulnerabilities known at the time.

AWS EKS configurations

In order to use the EKS clusters more productively, additional configurations were made on them:

  • Use of Kyverno: a tool that allows various policies to be created in the cluster to enforce security compliance and good practices
  • Metrics Server installation: this component is installed in order to work with the Horizontal Pod Autoscaler for the micro-services later
  • X-Ray: the use of X-Ray on the cluster is enabled in order to have better tracing of micro-service usage
  • Cluster Autoscaler: this component is configured in order to have elastic and dynamic scaling of the cluster
  • AWS App Mesh: a proof of concept of the AWS App Mesh service was carried out, using some specific micro-services for this test

Defining Kubernetes Objects

In Deployments:

  • Use of resource limits: in order to avoid overflows in the cluster, the first rule every micro-service must fulfill is the definition of its memory and CPU resources, both for the start of the Pod and for its maximum growth. The client's micro-services were categorized according to their use (Low, Medium, High), and each category has default values for these parameters.
  • Use of readiness probes: it is necessary to avoid loss of service during the deployment of new versions of micro-services; that is why, before a micro-service receives load in the cluster, a test of it must pass.
  • Use of liveness probes: each micro-service to be deployed must have a liveness test configured that checks the behavior of the micro-service.
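As an illustration of those per-category defaults, the container section of a Deployment could be generated like this (the resource values, probe paths, and port are hypothetical; the client's real defaults were defined per category):

```python
# Hypothetical per-category defaults: requests = Pod start, limits = max growth.
CATEGORY_RESOURCES = {
    "low":    {"requests": {"cpu": "100m", "memory": "128Mi"},
               "limits":   {"cpu": "250m", "memory": "256Mi"}},
    "medium": {"requests": {"cpu": "250m", "memory": "256Mi"},
               "limits":   {"cpu": "500m", "memory": "512Mi"}},
    "high":   {"requests": {"cpu": "500m", "memory": "512Mi"},
               "limits":   {"cpu": "1",    "memory": "1Gi"}},
}

def container_spec(name: str, image: str, category: str) -> dict:
    """Container section of a Deployment with resources and both probes."""
    return {
        "name": name,
        "image": image,
        "resources": CATEGORY_RESOURCES[category],
        "readinessProbe": {"httpGet": {"path": "/health/ready", "port": 8080},
                           "initialDelaySeconds": 10},
        "livenessProbe":  {"httpGet": {"path": "/health/live", "port": 8080},
                           "periodSeconds": 30},
    }
```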


The use of 2 types of Kubernetes Services was defined:

  • ClusterIP: For all micro-services that only use communication with other micro-services within the cluster and do not expose APIs to external clients or users.
  • NodePort: To be used by services that expose APIs to external clients or users, these services are later exposed via a Network Load Balancer and API Gateway.
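A sketch of how the Service manifest follows from that rule (the port number and label selector are illustrative assumptions):

```python
def service_manifest(name: str, port: int, exposes_external_api: bool) -> dict:
    """Build a Kubernetes Service: ClusterIP for internal-only micro-services,
    NodePort when the API is exposed externally (behind an NLB and API Gateway)."""
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": name},
        "spec": {
            "type": "NodePort" if exposes_external_api else "ClusterIP",
            "selector": {"app": name},
            "ports": [{"port": port, "targetPort": port}],
        },
    }
```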

ConfigMap / Secrets

Micro-services should keep their customizable settings in Kubernetes ConfigMaps or Secrets.

Horizontal Pod Autoscaler (HPA) 

Each micro-service that needs to be deployed in the EKS cluster requires the use of HPA in order to define the minimum and maximum number of replicas required of it.

The client’s micro-services were categorized according to their use (Low, Medium, High) and each category has a default value of replicas to use.
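The category-to-replicas rule can be sketched as an HPA manifest generator (the replica ranges and CPU target below are hypothetical, not the client's actual values):

```python
# Hypothetical (min, max) replica ranges per usage category.
HPA_REPLICAS = {"low": (2, 4), "medium": (2, 8), "high": (3, 15)}

def hpa_manifest(deployment: str, category: str, cpu_target: int = 70) -> dict:
    """HorizontalPodAutoscaler manifest scaling a Deployment on CPU usage."""
    min_r, max_r = HPA_REPLICAS[category]
    return {
        "apiVersion": "autoscaling/v2",
        "kind": "HorizontalPodAutoscaler",
        "metadata": {"name": deployment},
        "spec": {
            "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment",
                               "name": deployment},
            "minReplicas": min_r,
            "maxReplicas": max_r,
            "metrics": [{"type": "Resource",
                         "resource": {"name": "cpu",
                                      "target": {"type": "Utilization",
                                                 "averageUtilization": cpu_target}}}],
        },
    }
```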

Stress and Load Testing for AWS and On-Premise Infrastructure

One of the great challenges of this type of hybrid architecture, where the backend and core of the business are on-premise and the frontend and logic layers are in dynamic, elastic clouds, is to define to what extent the architecture can be elastic without affecting the on-premise and legacy services related to the solution.

To solve this challenge, load and stress tests were carried out on the environment, simulating peak and normal business loads. Monitoring was performed in the different layers related to the complete solution, at the AWS level (CloudFront, API Gateway, NLB, EKS, Redis, RDS) and at the on-premise level (ESB, legacy systems, networks, and links).

As a result of the various tests carried out, it was possible to define the minimum and maximum elasticity limits in AWS (number of workers, number of replicas, number of instances, instance types, among others) and at the on-premise level (number of workers, bandwidth, etc.).


At present it is very common to see this type of project, which contemplates the development of components in cloud environments that interact with on-premise services or components, thus forming a hybrid architecture. This takes advantage of everything the cloud provides while accessing the business logic and data contained in on-premise legacy systems, and generates a technical and governance challenge to ensure adequate performance and operation.

As we all know, in the cloud almost everything is scalable, dynamic, and elastic; on-premise legacy systems, however, are by their nature less elastic and resilient. That is why various aspects must be taken into consideration, and a multidisciplinary approach is required (architects specialized in cloud and traditional middleware, network specialists, developers, and specialists in load testing in hybrid environments, among others) in order to obtain the best definitions for each of the aspects involved in this type of scenario and achieve a successful project. It is important to apply good practices and never lose sight of the fact that the solution spans two worlds that can be very different, and it requires a consolidated (non-isolated) vision to get the best out of each of them.

AWS Public Reference Identity Validator

PROJECT REFERENCE Identity Validator System – Pension Fund Administrator

The Pension Fund Administrator of Colombia is part of a well-known Colombian holding company and is one of the largest administrators of pension and severance funds in the country, with more than 1.6 million members. This pension fund manager manages three types of funds: unemployment insurance, voluntary pensions, and mandatory pensions.

In 2011, the company acquired pension fund assets in other countries of the region, and in 2013 the firm completed a merger with a foreign group, adding life insurance and investment administration to its portfolio of pension and severance funds.

The technical difficulty and business impact

Currently, this pension fund manager constantly develops applications to retain its customers and stay at the forefront of the business. It therefore has a large number of applications, which are grouped according to the users who use them into two groups:

  • Internal applications, used within the company
  • Satellite-type applications, used mainly by affiliates who carry out self-management operations in the different existing channels according to their requirements and/or needs

In satellite-type applications, the administrator must allow operations that by their nature require different security mechanisms, such as authentication, identification, and authorization:

  • To achieve authentication, affiliates use the username and password mechanism.
  • The authorization is carried out through a system of roles, profiles, and permissions, all configured depending on the type of affiliate and the accesses they require to carry out their respective operations.
  • The identification of the affiliate is a more complex task, bearing in mind that the objective of this mechanism is to ensure that the user is really who they say they are and that they have not been impersonated.

This last identification mechanism is the core of the problem to be solved since it must allow the administrator to ensure that the affiliates carry out the procedures, operations, and/or use of services in a reliable and safe manner with the quality they deserve.

Now, the combination of different security factors adds more layers of security to the authentication procedure, making the verification more robust and intrusion and identity theft by third parties more difficult. This is why strong authentication was introduced: at least two factors are combined to guarantee that the user who authenticates is really who they claim to be. All strong authentication is also multi-factor authentication (MFA), in which the user verifies their identity as many times as there are factors; even if one of the factors fails or is attacked by a cybercriminal, there are more security barriers before the cybercriminal can access the information.

As a result of this, the "identity validator system" arises: a system, offered as a service, that performs the process of identifying affiliates for the administrator and is used by the other systems that require it, so that they can decide whether or not to authorize the execution of a procedure.

Solution realized, AWS services, Architecture 

To achieve correct identification of the affiliate, data collection is clearly the first step. Based on this data, the best decision must be made as to which identification mechanisms should be applied; those mechanisms are then applied, and the affiliate's response is awaited and verified. In parallel, the entire process records its respective operations and statistics.

The general architecture of the system is essentially made up of the following components:

  • Satellites: the applications that consume the services of the identity validator system, since they need to validate the identity of their affiliates before carrying out a procedure.
  • UI: the identity validator system's graphical interface; a set of components and libraries developed in React JS that can be used by the satellites and that contain the connection logic towards the identity validator system services.
  • API Gateway: contains the endpoints that the identity validator system exposes.
  • Traceability in Splunk: components responsible for recording the messages that the identity validator system exchanges (externally and internally).
  • Completeness: component responsible for making the necessary calls to services external to the identity validator system that extract the client information needed to decide which mechanism will be applied.
  • Validate Pass: component responsible for removing, from the mechanisms to be applied to the client, those that have already been validated, taking into account a series of configurable criteria.
  • Mechanism Manager: in charge of executing the mechanism and carrying out the validation, communicating with third-party services and interpreting and validating their responses.
  • Rule Manager: in charge of deciding which mechanisms will be applied to the client.

Architecture flow

The identity validator system is a system made up internally by several micro-services that interact with each other. The general flow of the identity validator system consists of a request for validation of the mechanism for a client which travels through the different micro-services that make up the system. The following is the identity validator system message flow architecture.

The image shows, in line with the logical architecture of the identity validator system, the internal architecture of the micro-services that make up the system, using AWS SQS queues as an intermediate communication channel, and the data flow of a request performed against it. The flow in the image is a functional flow, meaning the request is not canceled.

The flow is described below:

  1. The identity validator system receives a validation request from one of the configured channels, and validates the data sent against a defined structure and request ID.
  2. The request is authenticated according to the satellite that is making it.
  3. The information is registered in Redis, where the system waits for the response for the request ID (synchronization simulation).
  4. It is determined whether it is an uninitiated request, and it is validated that the transaction and the channel exist.
  5. The process of completing the request data begins.
  6. The completeness services external to the identity validator system are called to extract the required affiliate information, which will be used to decide which identification mechanism should be applied.
  7. The completeness data is sent to the RuleProcessor micro-service through a fan-out scheme using SNS and SQS; this micro-service orchestrates the rules that determine the list of mechanisms to apply to the client.
  8. The list of mechanisms to apply is determined from the data extracted in Completeness, plus the data of the initial request itself, taking into account a series of rules that must be met.
  9. The validations that the client has passed within a given time are determined.
  10. The necessary queries are made to complete the required information.
  11. The completeness data and the list of mechanisms to apply are sent to the ExecuteMechanism micro-service, which searches the list for the first mechanism that has not been validated and calls the service external to the identity validator system that starts that validation mechanism.
  12. The collected data, plus the response from initiating the first non-validated mechanism, is sent to SendResponse, which stores the entire request in the database for subsequent requests.
  13. The data is pushed into Redis, where RestRequest is waiting for it in order to send the request response.
  14. A validation request for the initiated mechanism is started; the system keeps reading the answer from Redis.
  15. It is validated that the start request was executed correctly and that the request is valid.
  16. The mechanism is sent for validation, calling the validation service corresponding to the initiated mechanism.
  17. It is verified that the validation was successful, and the mechanism is marked as valid in the list of mechanisms.
  18. The mechanism validation response is sent.
Low Level Architecture 

A lower-level architecture of the identity validator system shows the complexity of the system and the number of components that intervene and interact with the request information, which travels from one micro-service to another, each one enriching it and modifying its state.

What are the benefits of this solution for the client?

As part of the implemented solution, the client obtained greater security in the self-management processes and operations that required verifying the identity of the person requesting them. Consumption of the system was implemented in the satellites to decide whether or not to allow operations, which brought greater security to operations and prevents spoofing to a high degree. With this, the client has gained greater prestige and confidence from affiliates, who know that there are ways to verify their identity when operating with its day-to-day services and products.


With a total of 131 participants (Colombia, Chile, and Peru), 3HTP concluded its first AWS IMMERSION DAYS, held during the months of August and September. The acknowledgments and recommendations confirmed that the strategy outlined and the effort made paid off. We want to share the excellent results, challenges met, and experience gained in conducting these workshops.

A simple idea was the beginning of the results we can see today. AWS proposed to 3HTP the opportunity to give an IMMERSION DAY on the Core Services of Amazon Web Services; however, it was considered very positive to hear the opinion of the customers, and a survey was carried out that showed that 70% preferred the Kubernetes topic.

LinkedIn survey results.

What differentiated these IMMERSION DAYS from those carried out previously?


  • Adapting an activity originally planned to be carried out in person over a long duration to a remote format.
  • Achieving the interest, active participation, and permanence of the participants throughout the activity.
  • Achieving continuous feedback from the participants throughout the activity to look for areas of improvement.

A methodology was designed for the delivery of the AWS IMMERSION DAY workshops to take an activity that was initially developed in-person to a 100% online modality.

Different steps were established that allowed each of these elements to be overcome and that sought to maintain the quality of the workshop and the satisfaction of the participants with the topic and the level of depth of the content.

  • Custom scheduling logistics.
  • Presentation of introductory concepts in a general session for Participants.
  • Division of work teams by instructor (7 to 10 people maximum) for personalized follow-up.
  • Progress validation through checkpoints by Instructors.
  • Progress control by the IMMERSION DAY coordinator asking the participants of the groups.
  • Survey at the end of the session to get early feedback.

The objective was to give one edition of the AWS KUBERNETES IMMERSION DAY; however, the call attracted 350 people interested in the subject, so it was necessary to schedule several editions, including two workshops directed at and exclusive to two companies in Colombia (Grupo AVAL and AFP Protección).


The objective of the KUBERNETES IMMERSION DAY is for participants to learn the concepts of containerization and orchestration and to interact with guided hands-on workshops on the AWS EKS service.



A total of 5 AWS KUBERNETES IMMERSION DAYS Workshops were held:

  • COLOMBIA: 24 participants, 27 August 2020
  • GRUPO AVAL: 26 participants, 10 September 2020
  • AFP PROTECCIÓN: 37 participants, 17 September 2020
  • CHILE: 20 participants, 29 September 2020
  • PERÚ: 24 participants, 1 October 2020


In the workshops held, 100% of the participants stated that their expectations were fully or partially met.

Fulfillment of participant expectations.

It is important to emphasize that the time for completion is one of the elements to take into account during the practical exercises. At this point, it is very favorable that the instructors keep their attention on the progress status of each participant.

In a relevant way, the satisfaction of the participants regarding the knowledge of the instructors and their follow-up during the workshops stands out, an element that was taken into account in the applied methodology.

Percentage of participant satisfaction for each item.


“Excellent activity, the instructor attentive to all doubts and demonstrating extensive knowledge of the exposed tools. I am pleasantly satisfied and I recommend them 100%.”

Participant of IMMERSION DAY Colombia

“Excellent space and methodology, combining concept with practice. Keep explaining the commands like Jonathan did, because then one follows the lab understanding what is being done. It was very useful for advancing my study plan of these concepts, and it generated a lot of value and progress in understanding them. THANK YOU SO MUCH!”

Participant of IMMERSION DAY Colombia

“Perhaps the workshop could be done in two parts so that it is not so exhausting, but the workshop is excellent.”

Participant of IMMERSION DAY Perú

“Excellent Workshop, very dynamic, very clear, I was attentive and entertaining all the time, I learned new concepts. Thank you.”

Participant of IMMERSION DAY AFP Protección


Experiences were obtained from each workshop held, as well as from the comments and suggestions of the participants. These elements helped us to see what needs to be improved for the next IMMERSION DAYS:

  • Have students share their screen if they are lagging behind, to ensure they can keep up with the others.
  • Reinforce and guarantee that the steps of the initial setup are completed, since the success of the other laboratories depends on it.
  • Profile the audience to know their level of knowledge prior to the IMMERSION DAY, and thus form groups with similar levels of knowledge to personalize the content.


3HTP is an AWS Certified Partner authorized to teach IMMERSION DAYS, with Certified Architects who will help you get to know and master the Amazon Web Services architecture. We can organize an IMMERSION DAY workshop just for your company's team.



AFP Protección S.A., a subcompany of the Colombian holding company Grupo de Inversiones Suramericana, is the second largest pension and severance fund manager in the country, with nearly 1.6 million affiliates.

Logo Protección Colombia

PROTECCIÓN started its application implementation project with Docker technology at the beginning of 2017, at the time oriented to totally on-premise infrastructure, which 3HTP accompanied through the administration of all its middleware platforms.

In 2018, PROTECCIÓN began digital transformation plans aimed at discovering and implementing cloud strategies, and therefore undertook an analysis of the leading providers of this service in the market. At the same time, it began a bidding process to find container management services in the cloud from the most important providers in the market, and it was there that 3HTP offered PROTECCIÓN an analysis of the AWS Amazon Elastic Container Service (ECS) container management service.

Through the cooperative work between AWS and 3HTP, the client was offered a proof of concept to show the functionalities and benefits of using the ECS container management services; in turn, as a good strategy for the client, it also began to show the compatibility, description, and integration with other cloud services, which PROTECCIÓN could take into account when evaluating its cloud service providers. Despite the fact that the client was already quite interested in the service delivered by another cloud provider, the hard technical work carried out in implementing a test application demonstrated that AWS ECS would deliver a higher-level solution with greater impact on its expectations for functionality and implementation.

PROTECCIÓN, finally convinced of the solution, granted the service tender to 3HTP-AWS and designated two applications relevant to its operation to be migrated from containers deployed in on-premise environments to the AWS cloud.

After the project was awarded, and at the request of one of the team leaders, an evaluation began with the intention of expanding the scope of the project in terms of management technology, administration, and portability between clouds for the cases requested by PROTECCIÓN; from there, the option of implementing Kubernetes with the AWS Amazon Elastic Kubernetes Service (EKS) was proposed.