We have started to move a monolithic service-oriented architecture (SOA) application towards micro-services, and our first goal is to use container-based solutions for delivering our cloud services.
As Docker is the most widespread and stable product in this field, we will start with Docker containers.
Docker is now supported on both Windows 10 and Windows Server 2016, so we have started testing how the application deploys to a Docker container.
First things first, we deployed the application in a Docker container on a Windows 10 development PC. It works, except that we sometimes get memory exceptions, most likely caused by the Hyper-V virtualization layer that Docker uses on Windows 10.
We investigated the issue on the Docker image's GitHub repository, where the developers say that Docker is more stable on Windows Server 2016 and the memory errors are not reproduced there, so we tested it on Windows Server 2016 as well.
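As a rough sketch, a Windows container image for the application could be defined like this (the base image tag and paths are assumptions for illustration, not our actual setup):

```dockerfile
# Assumed base image: pick the ASP.NET / Windows Server Core tag that matches your host build
FROM microsoft/aspnet:4.6.2-windowsservercore

# Copy the published application into the IIS web root (hypothetical output folder)
COPY ./publish/ /inetpub/wwwroot

EXPOSE 80
```

Building and running it is then a matter of `docker build -t our-app .` followed by `docker run -d -p 80:80 our-app`.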
Most likely the Windows 10 issues will be fixed in the future as well.
Move each sub-application to a Docker container
The final goal is to switch each service to Docker containers instead of per-product/per-team virtual machines in production. The advantages of containers over full virtual machines are numerous and well documented (e.g.: https://apiumhub.com/tech-blog-barcelona/top-benefits-using-docker/): from cost savings (multiple containers sharing the same machines, with better memory management) to simpler maintenance and deployment workflows for each team.
Hosting the micro-services
We investigated which micro-service infrastructure would be the most convenient, stable, easy to implement and cheapest.
We basically need something that is easy to migrate to, has good container-based management flows, and makes it easy to train each micro-service owner to deploy their code to it.
We came to the conclusion that the most convenient solution that fulfills our requirements is Amazon EC2 Container Service (Amazon ECS).
We have taken into consideration our current infrastructure, what will be possible to transition to, and what will not ramp up our costs. Using Amazon ECS and Amazon ECR as middleware for micro-services costs nothing in itself; as AWS states, we only pay for the EC2 instances and the load balancer, as we already do now.
Another benefit is the possibility to attach additional services later if we want/need/can afford them (e.g.: Amazon API Gateway, Amazon CloudWatch, Amazon Elasticsearch, and so on).
It is also the easiest solution to implement, and we have examples such as Netflix and Gilt, who migrated to similar infrastructures. The difference is that they adopted more and more Amazon services, which we might not do; for several of them I'm considering good open-source products instead (e.g.: for logging, tracing, monitoring, security, etc.).
The request endpoint will be an Application Load Balancer, which will route requests to different target groups with different instance types (Windows, Linux).
Each target group will have multiple EC2 instances of the same type, on which production Docker containers will be deployed and managed by Amazon EC2 Container Service.
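To make this concrete, here is a minimal sketch of an ECS task definition for one such container (the names, the ECR registry URL and the resource sizes are hypothetical):

```json
{
  "family": "rest-api",
  "containerDefinitions": [
    {
      "name": "rest-api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/rest-api:latest",
      "cpu": 1024,
      "memory": 2048,
      "portMappings": [
        { "containerPort": 80, "hostPort": 0 }
      ]
    }
  ]
}
```

Setting `hostPort` to 0 asks ECS for dynamic port mapping, which lets the ALB target group route traffic to several containers running on the same EC2 instance.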
Initially we will only have one ALB target group with a minimum of 2 EC2 instances.
A target group will have an Auto Scaling group which will spin instances up/down based on the alarms we set on the instances and/or on the micro-services running on them (CPU, latency alarms, etc.).
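For example, a scale-up rule could be sketched as a CloudWatch alarm wired to an Auto Scaling policy. This CloudFormation fragment is only an illustration; the resource names and thresholds are assumptions:

```json
{
  "ScaleUpPolicy": {
    "Type": "AWS::AutoScaling::ScalingPolicy",
    "Properties": {
      "AutoScalingGroupName": { "Ref": "AppAutoScalingGroup" },
      "AdjustmentType": "ChangeInCapacity",
      "ScalingAdjustment": 1,
      "Cooldown": "300"
    }
  },
  "CpuHighAlarm": {
    "Type": "AWS::CloudWatch::Alarm",
    "Properties": {
      "Namespace": "AWS/EC2",
      "MetricName": "CPUUtilization",
      "Statistic": "Average",
      "Period": 300,
      "EvaluationPeriods": 2,
      "Threshold": 75,
      "ComparisonOperator": "GreaterThanThreshold",
      "Dimensions": [
        { "Name": "AutoScalingGroupName", "Value": { "Ref": "AppAutoScalingGroup" } }
      ],
      "AlarmActions": [ { "Ref": "ScaleUpPolicy" } ]
    }
  }
}
```

The alarm fires after CPU stays above 75% for two consecutive 5-minute periods and triggers the policy, which adds one instance to the group.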
For the initial proposal we only need one target group of t2.large instances, on which the Docker containers for each micro-service will be deployed.
- The initial infrastructure considers having only one Docker container, which will run our REST API application in production.
- The 2nd phase of the transition is to start switching products to micro-services deployed in new Docker containers on the same infrastructure.
- Finally, our application should be fully split into micro-services.