Service instance per container pattern
Need(s)
For throughput and availability, each service is deployed as a set of instances. How should they be packaged and deployed?
- We need to be polyglot in programming languages, frameworks, and framework versions. Some services use a Java stack and others a JavaScript one (Angular). Some services need Java 8 and newer ones Java 9.
- Service must be independently deployable and scalable in all environments (DEV, TEST, PRE-PROD, PROD).
- Service instances need to be isolated from one another.
- We need to be able to quickly build and deploy a service in all environments (DEV, TEST, PRE-PROD, PROD).
- We must deploy the application as cost-effectively as possible.
- The question often arises: "it worked in the development environment, so why doesn't it work in production?" We need equivalent runtimes (OS + services) across environments (DEV, TEST, PRE-PROD, PROD).
- We need a flexible sandbox so that developers can run services locally on their development workstations.
Service instance per container pattern definition
The service instance per container pattern packages the service as a container image and deploys each service instance as a container. Containerization, also called container-based virtualization and application containerization, is an OS-level virtualization method for deploying and running distributed applications without launching an entire VM for each application. Instead, multiple isolated systems, called containers, run on a single control host and share a single kernel.
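As a concrete sketch, the service's image can be described in a Dockerfile (the service name, base image, and port here are hypothetical):

```dockerfile
# Image for a hypothetical Java service.
FROM openjdk:8-jre-alpine

# Copy the service's executable JAR into the image.
COPY target/order-service.jar /app/order-service.jar

# Port the service listens on.
EXPOSE 8080

# One service instance runs per container.
ENTRYPOINT ["java", "-jar", "/app/order-service.jar"]
```

The image is built once with `docker build -t order-service .` and each instance started with `docker run -d -p 8080:8080 order-service`; scaling out means running more containers from the same image.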
While often used interchangeably, microservices and containers are not the same thing. Microservices are an architectural approach to software development. Containers are encapsulated, individually deployable components running as isolated instances, provisioned with the minimal resources required to do the job. A microservice can run in a container or as a process, but there are some very good reasons to use containers for easing both the development and operations sides of microservices-based applications.
Service instance per container advantages
Isolation and speed to scale: Processes start fast and dynamically share resources such as RAM. However, running one or more microservices per process can create "noisy neighbors." A poorly coded microservice running as a process can potentially compromise the integrity of the entire machine. Running microservices inside a VM provides the necessary isolation, but at the cost of scalability, due to the lengthy boot times of the VM's embedded operating system (OS). Containers, on the other hand, boot in seconds, thanks to OS-level virtualization where calls for OS resources are made via API. Because they package the service code, runtime, dependencies, and system libraries together with their own view of operating system constructs, containers offer the isolation microservices require at the speed needed to scale.
Simplify operations: One of the benefits of microservices is the flexibility of choosing the best programming language and tech stack for each service. While great for development, it can quickly become an operational nightmare when it comes time to deploy all these different applications. By packaging microservices in containers, your ops team need only know how to deploy containers, and nothing about the different types of applications running inside them. You can think of containers as a bridge between dev and ops. You can also avoid potential service failures due to missing dependencies or mismatched versions on the host system, as the code, frameworks, and everything the service needs is packaged together in an immutable environment. By running one service instance per container, it’s possible to tie system telemetry (CPU usage, memory, etc.) to the service itself. Containers further simplify operations by shielding developers from the need to concern themselves with machine and OS details. If your infrastructure team decides to switch the Linux distribution on the host, the application would not be affected as containers run on any Linux distro.
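To illustrate how ops can deploy heterogeneous stacks uniformly, a docker-compose file (service names, images, and limits are illustrative) describes a Java service and an Angular front end the same way, with per-container resource limits so that CPU and memory telemetry maps directly to each service instance:

```yaml
version: "3.8"
services:
  order-service:                # Java stack
    image: registry.example.com/order-service:1.4.2
    ports:
      - "8080:8080"
    deploy:
      resources:
        limits:
          cpus: "0.50"          # telemetry is per container = per service instance
          memory: 256M
  catalog-ui:                   # JavaScript (Angular) stack
    image: registry.example.com/catalog-ui:2.0.0
    ports:
      - "8081:80"
```

From the ops side, both services are just containers; the differing language stacks inside them are invisible to deployment.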
Improves productivity: Containerization brings many productivity benefits for both dev and ops.
- Consistent development environments for the entire team. All developers use the same virtualized OS, same system libraries, same language runtime, no matter what host OS they are using (even Windows if you can believe it).
- The development and test environments are exactly the same as the production environment, meaning you can deploy and it will "just work".
- If we’re having a hard time building (that is, compiling) something, we can build it inside the container. This primarily applies to developers using macOS and Windows.
- We only need a container runtime to develop. We don’t need to install a bunch of language environments on our machine. Want to run a Ruby script but don’t have Ruby installed? Run it in a Ruby Docker image.
- We can use multiple language versions without resorting to version-manager workarounds for each language (Python, Ruby, Java, Node). Want to run a Python program in Python 3, but only have Python 2 installed? Run it in a Python 3 image. Want to compile your Java program with Java 9 instead of the Java 8 that’s installed on your machine? Compile it in a Java 9 container image.
- Deployment is easy. If it runs in your container, it will run on your server just the same. Just package up your code and deploy it on a server with the same image or push a new image with your code in it and run that new image.
- We can still use our favorite editor/IDE as we normally do. No need to run a VM in VirtualBox, SSH in, and develop from the shell just to build/run on a Linux box.
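The bullets above can be tried with one-off containers; the image tags and file names are only examples:

```
# Run a Ruby script without installing Ruby locally.
docker run --rm -v "$PWD":/work -w /work ruby:2.7 ruby script.rb

# Run a Python program under Python 3 on a machine that only has Python 2.
docker run --rm -v "$PWD":/work -w /work python:3 python main.py

# Compile with Java 9 while the host has only Java 8.
docker run --rm -v "$PWD":/work -w /work openjdk:9 javac Main.java
```

The `--rm` flag removes the container when the command finishes, and mounting the current directory (`-v`) lets the containerized toolchain work directly on local source files.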
Continuous Delivery: Containerization has become a deployment standard in microservice architectures. It gives dev and ops a standard, common process on the path to continuous delivery.
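A minimal delivery flow under this pattern might look like the following (the registry URL and tag are hypothetical):

```
# Build and publish the image once; the same immutable artifact
# is promoted through every environment.
docker build -t registry.example.com/order-service:1.4.2 .
docker push registry.example.com/order-service:1.4.2

# Deploy the identical image to TEST, PRE-PROD, and PROD;
# only the environment-specific configuration changes.
docker run -d --env-file test.env registry.example.com/order-service:1.4.2
```

Because the image is built once and never rebuilt per environment, the "works in dev, fails in prod" class of problems is largely eliminated.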