A container cluster is a group of containers working together to provide a result. Some of the popular container clustering tools are listed below, and these tools behave like an orchestrator.
- Docker Swarm
- Kubernetes
Before going deep into these tools, we should first get a general sense of them: why do they even exist? What problems are they meant to solve? So, we will try to understand a few of the features of these orchestrator tools.
To run our workload in “Highly Available” mode
“Highly available” mode refers to a configuration in which the orchestrator is designed to ensure that the services it manages are always available and able to handle requests, even if individual nodes or containers fail.
In the figure above, some services are dedicated to certain containers, but unexpectedly "Host-2" fails. We want all the services to remain highly available, so here our "orchestrator" creates a new "Host-4" and redirects that service to it. Hence, we can say our containers should be replicated and distributed across multiple machines, through which we can achieve high availability.
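The rescheduling idea above can be sketched as a toy Python function (this is illustrative only, not a real orchestrator API; the host and replica names are made up):

```python
# Toy sketch: replicas of a service are spread across hosts; when a host
# fails, its replicas are rescheduled onto a newly provisioned host so
# the service stays available.

def reschedule(placement, failed_host, new_host):
    """Move every replica from the failed host onto the new host."""
    moved = placement.pop(failed_host, [])
    placement.setdefault(new_host, []).extend(moved)
    return placement

placement = {
    "host-1": ["web-replica-1"],
    "host-2": ["web-replica-2"],  # this host is about to fail
    "host-3": ["web-replica-3"],
}
placement = reschedule(placement, failed_host="host-2", new_host="host-4")
print(placement["host-4"])  # the replica lives on, on a fresh host
```

A real orchestrator runs this kind of reconciliation continuously, not just once.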
Better service discovery between the workloads
Service discovery is a process that enables microservices or other workloads to locate and communicate with each other over a network.
One common approach to service discovery is to use a central service registry, where each workload registers itself and provides information about its location and the services it offers. Other workloads can then query the registry to find the location of the services they need. This approach can be implemented using a tool like Consul or etcd. Some examples of tools designed for this job of "service discovery" are Netflix Eureka, Kubernetes Service discovery, etc.
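The register-then-query flow can be sketched as a toy in-memory registry in Python (real systems like Consul, etcd, or Eureka do this over the network with health checks; the service names and addresses below are made up):

```python
# Toy in-memory service registry: workloads register their address, and
# other workloads query the registry instead of hard-coding locations.

registry = {}

def register(service, address):
    """A workload announces where it can be reached."""
    registry.setdefault(service, []).append(address)

def discover(service):
    """Look up all known addresses for a service."""
    return registry.get(service, [])

register("payments", "10.0.0.5:8080")
register("payments", "10.0.0.6:8080")  # a second replica of the same service
print(discover("payments"))
```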
Load balancing our workloads
Load balancing is the process of distributing incoming requests across multiple servers. This lightens the load on each server, as user requests can be spread across a cluster of servers instead of being directed to a single one. Load balancing also helps improve reliability and redundancy: if one server fails, requests are simply redirected towards another working server.
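A minimal way to spread requests evenly is round-robin, sketched here in Python (one of several strategies; the server names are illustrative):

```python
import itertools

# Toy round-robin load balancer: each incoming request is handed to the
# next server in the pool, cycling back to the start.

servers = ["server-1", "server-2", "server-3"]
pool = itertools.cycle(servers)

def route(request):
    """Assign the request to the next server in rotation."""
    return next(pool)

assignments = [route(f"req-{i}") for i in range(6)]
print(assignments)
```

Real balancers also weigh servers by capacity and skip unhealthy ones, but the rotation idea is the same.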
Fault Tolerance for our workload
This is the ability of a cloud-based system to continue functioning correctly even when one or more of its components fail. To understand this, we can refer to Figure 2. Suppose server #1 and server #2 fail for some reason, and assume that backup servers are present. In this case the load balancer will route all requests to those backup servers until server #1 and server #2 come back up.
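That failover behaviour can be sketched as a toy routing function in Python (the health table and server names are made up for illustration):

```python
# Toy failover routing: prefer a healthy primary server, and fall back
# to a backup server only when all primaries are down.

health = {"server-1": False, "server-2": False, "backup-1": True}

def route(primaries, backups):
    """Return the first healthy server, primaries before backups."""
    for server in primaries + backups:
        if health.get(server):
            return server
    raise RuntimeError("no healthy servers available")

target = route(primaries=["server-1", "server-2"], backups=["backup-1"])
print(target)  # both primaries are down, so the backup takes the load
```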
Self healing of our workload
To understand this feature, in Figure 3 we can see that the containers marked 1, 2, 3, and 4 are running fine. But due to some fault, container 3 fails to take the load. In this case our container orchestrator restarts that container into a healthy running state.
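The self-healing loop can be sketched in a few lines of Python (a conceptual stand-in; a real orchestrator would actually restart the container process):

```python
# Toy self-healing loop: any container that is not in a "running" state
# is restarted back into one.

containers = {1: "running", 2: "running", 3: "failed", 4: "running"}

def heal(state):
    """Restart every container that failed its health check."""
    for cid, status in state.items():
        if status != "running":
            state[cid] = "running"  # stand-in for a real restart
    return state

heal(containers)
print(containers)
```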
Seamlessly scale our workload containers up and down
These features allow you to define the desired number of replicas for a given container or service, and the orchestrator will automatically scale the containers to meet that desired state. This can be understood through the figure given below:
Note: This scaling is at "Container-level", not at "Workload-level". I have already discussed this in my previous blog.
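The desired-versus-actual comparison at the heart of scaling can be sketched as a toy reconcile step in Python (replica names are illustrative, not a real API):

```python
# Toy reconcile step: compare the desired replica count with the actual
# running replicas and scale up or down to match.

def reconcile(running, desired):
    """Return a replica list matching the desired count."""
    if len(running) < desired:
        # scale up: start new replicas to fill the gap
        running = running + [f"replica-{i}" for i in range(len(running) + 1, desired + 1)]
    elif len(running) > desired:
        running = running[:desired]  # scale down: stop the extras
    return running

up = reconcile(["replica-1", "replica-2"], desired=4)   # scale up to 4
down = reconcile(up, desired=1)                         # scale back down to 1
print(up, down)
```

An orchestrator runs this comparison continuously, so replicas converge to the desired state after every change or failure.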
Scheduling Workloads is easy
Scheduling workloads in a container cluster involves organizing and allocating the resources of the cluster, such as CPU, memory, and storage, to the containers that make up our application.
Platforms like Docker Swarm and Kubernetes allow us to define the resources that our application needs, such as CPU, memory, and storage, and to specify the desired number of replicas of each container. The orchestration platform will then automatically schedule the containers on available resources, ensuring that they are placed in a way that maximizes resource utilization and minimizes the risk of overloading any one node.
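A very simplified version of that placement decision can be sketched in Python: pick the node with the most free CPU, and refuse placements that would overload it (real schedulers weigh many more factors; the node names and requests here are made up):

```python
# Toy scheduler: place each container on the node with the most free
# CPU, refusing placements that would not fit anywhere.

nodes = {"node-1": 4.0, "node-2": 2.0}  # free CPU cores per node

def schedule(container, cpu_request):
    """Pick the node with the most free CPU and reserve the request."""
    node = max(nodes, key=nodes.get)
    if nodes[node] < cpu_request:
        raise RuntimeError(f"no node can fit {container}")
    nodes[node] -= cpu_request
    return node

placements = {c: schedule(c, 1.5) for c in ["web", "api", "db"]}
print(placements)
```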
Rollout and rollback(revert) releases
Rolling out and rolling back releases in a container cluster involves deploying and managing updates to the containers that make up our application.
When we roll out a release, we are deploying a new version of our application, which may include updates to the code, configuration, or dependencies of our containers.
Rolling back a release involves returning to a previous version of the application, either because the update caused problems or because we need to revert to a known good state.
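The rollout/rollback pair can be sketched as a toy release history in Python (version strings are illustrative; orchestrators track revisions in much the same append-and-revert fashion):

```python
# Toy release history: rolling out appends a new version; rolling back
# drops it and returns to the previous known-good version.

history = ["v1.0"]

def rollout(version):
    """Deploy a new version of the application."""
    history.append(version)
    return history[-1]

def rollback():
    """Revert to the previous release, keeping at least one version."""
    if len(history) > 1:
        history.pop()  # discard the bad release
    return history[-1]

rollout("v1.1")
current = rollback()  # v1.1 caused problems, so revert
print(current)
```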
Isolation of workloads and resources
Isolation of workloads and resources in a container cluster refers to the process of separating and protecting different applications or processes from one another, ensuring that they do not interfere with each other or compete for shared resources. This can be important for a number of reasons, including security, stability, and resource utilization. This isolation can take many forms, such as namespace-based isolation, endpoint-based isolation, etc.
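One facet of namespace-based isolation, resource quotas, can be sketched as a toy admission check in Python (the namespace names and quota numbers are made up):

```python
# Toy namespace quotas: each namespace draws from its own CPU budget,
# so one team's workloads cannot starve another's.

quotas = {"team-a": 4, "team-b": 4}  # free CPU cores per namespace

def admit(namespace, cpu_request):
    """Admit the workload only if its namespace quota can cover it."""
    if quotas[namespace] < cpu_request:
        return False  # rejected: would exceed this namespace's quota
    quotas[namespace] -= cpu_request
    return True

ok = admit("team-a", 3)
denied = admit("team-a", 3)  # team-a's quota is now nearly exhausted
other = admit("team-b", 3)   # team-b is unaffected
print(ok, denied, other)
```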
In this article I have discussed some of the general features of a container cluster, or orchestrator. Which of these features matter can depend on the use case and requirements in our organization.
- Figma to design figures.
- Shh!! I do take help of ChatGPT.