There are many reasons to move a big application to micro-services. This article is not about why we switched to a micro-services oriented architecture, but about how we chose to do it.
Let me first describe what we initially had in production. It worked fine, but it no longer suited our internal company structure, which had been shifting over time toward micro-project/micro-service oriented teams:
- A big monolithic application built on top of .NET;
- An Angular application for customers and admins;
- Different teams worked on separate parts of the monolith;
- To keep teams' work separate, we decided that each team works on its own project and then integrates the compiled .NET DLL into the monolithic application;
- Deployment was done by a single team which was also responsible for the production environment;
- Scalability was assured by launching or terminating virtual servers based on metrics such as traffic load, CPU load, and memory load;
- Availability during deployments was assured by replacing instances in production while always keeping at least a defined minimum number of instances in service.
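For context, the server-level scaling and rolling replacement described above can be sketched as a CloudFormation fragment. This is a hypothetical reconstruction, not our actual template: all resource names, sizes, and thresholds are illustrative.

```yaml
# Hypothetical sketch of the original server-level scaling setup.
Resources:
  MonolithAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"            # minimum number of instances always kept in service
      MaxSize: "10"
      DesiredCapacity: "4"
      LaunchConfigurationName: !Ref MonolithLaunchConfig   # defined elsewhere
      TargetGroupARNs:
        - !Ref MonolithTargetGroup                         # defined elsewhere

  ScaleOnCpu:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref MonolithAutoScalingGroup
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 60.0     # launch/terminate servers to hold average CPU near 60%
```

Because `MinSize` is enforced by the Auto Scaling group, instances replaced during a deployment never drop the fleet below the defined minimum.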
This article explains how to turn that monolith into a full-featured micro-services architecture using Docker containers and Amazon Web Services.
We tried to avoid locking ourselves into Amazon-specific resources, while still using their hosting and container orchestration features, which are among the most powerful in the field. This way we can always switch to another hosting provider if we want to, but we chose to stay with Amazon for its stability.
Some of the Amazon services we are going to use are:
- Amazon EC2 instances;
- Amazon ECS for orchestrating the containers;
- Amazon ECR as a private Docker image registry;
- AWS CloudFormation for declaring resource stacks;
- The Application Load Balancer to balance traffic to containers;
- And others, which will be explained later in the article.
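To make these pieces concrete, here is a minimal CloudFormation sketch of how they fit together: an ECS service runs containers from an image stored in ECR, registered behind an Application Load Balancer target group. This is a simplified illustration, not a production template; every name, value, and the ECR image URI below is made up.

```yaml
Resources:
  AppCluster:
    Type: AWS::ECS::Cluster

  AppTaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: web
          # Image pulled from a private ECR registry (illustrative URI).
          Image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest
          Memory: 512
          PortMappings:
            - ContainerPort: 80

  AppService:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref AppCluster
      TaskDefinition: !Ref AppTaskDefinition
      DesiredCount: 2
      LoadBalancers:
        - ContainerName: web
          ContainerPort: 80
          TargetGroupArn: !Ref AppTargetGroup   # ALB target group, defined elsewhere
```

The key design point is that the load balancer targets containers, not servers: ECS registers each running task with the target group, so traffic follows the containers wherever the orchestrator places them.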
We also wanted to keep the infrastructure scalable based on required resources and traffic load, but this time by scaling containers instead of scaling servers, as we did before. This is obviously much faster: starting a container takes seconds, while starting a server takes minutes.
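The container-level scaling just mentioned can be expressed with Application Auto Scaling acting on the ECS service's desired count. Again, a hedged sketch with illustrative names and thresholds:

```yaml
Resources:
  ServiceScalingTarget:
    Type: AWS::ApplicationAutoScaling::ScalableTarget
    Properties:
      ServiceNamespace: ecs
      ScalableDimension: ecs:service:DesiredCount
      # Format is "service/<cluster-name>/<service-name>"; names are illustrative.
      ResourceId: service/app-cluster/app-service
      MinCapacity: 2
      MaxCapacity: 20
      RoleARN: !GetAtt AutoScalingRole.Arn   # IAM role, defined elsewhere

  ServiceCpuScalingPolicy:
    Type: AWS::ApplicationAutoScaling::ScalingPolicy
    Properties:
      PolicyName: cpu-target-tracking
      PolicyType: TargetTrackingScaling
      ScalingTargetId: !Ref ServiceScalingTarget
      TargetTrackingScalingPolicyConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ECSServiceAverageCPUUtilization
        TargetValue: 60.0   # add/remove containers to hold average CPU near 60%
```

Compared with the server-level policy, this one changes only the number of running containers, which react in seconds, while the underlying EC2 fleet can scale on its own, slower schedule.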
I am going to split this article into several chapters that explain, step by step, how to achieve the above.