
What is container orchestration?


The past several years have seen the rise of applications in which all the code, libraries, and even parts of the operating system are packaged in containers, such as Docker containers. However, running a production application means more than simply creating a container and running it on Docker Engine. It requires container orchestration: the process of coordinating, managing, and automating containers at scale.

In this guide, we’ll explain how a container orchestrator works, survey popular platforms like Kubernetes and Docker Swarm, and show you how to plan your application architecture.

Key highlights:

  • Container orchestration automates the deployment, scaling, and health management of containers.

  • Tools like Kubernetes and Docker Swarm help teams operate containers at scale in production.

  • Orchestration supports high availability, fault tolerance, and more efficient resource use.

  • Mirantis provides a robust orchestration platform that supports Kubernetes and Docker Swarm, offering flexibility across cloud services and on-prem environments.

How Does Container Orchestration Work?

Containerizing an application makes it easier to run and scale in diverse environments, because Docker Engine acts as the application's conceptual "home." However, containerization doesn't solve all of the problems involved in running a production workload; in some ways, it introduces new ones.

A non-containerized application assumes that it will be installed and run manually or delivered via a virtual machine. A containerized application, by contrast, has to be placed, started, and supplied with resources automatically. That automation is why you need a quality container orchestrator: a tool that can manage placement, scaling, and recovery across your environment.

Container orchestration tools perform the following tasks:

  1. Determine what resources, such as compute nodes and storage, are available

  2. Determine the best node (or nodes) on which to run specific containers

  3. Allocate resources such as storage and networking

  4. Start one or more copies of the desired containers, based on redundancy requirements

  5. Monitor the containers, and replace any that are no longer functional
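Docker Swarm, for instance, collapses several of these steps into a single command. This is a sketch that assumes a Swarm-mode cluster is already running; the service name is illustrative:

```shell
# Ask the orchestrator for 3 replicas of an nginx service; Swarm
# chooses nodes, allocates networking, and replaces failed replicas.
docker service create --name web --replicas 3 --publish 8080:80 nginx

# Check where the replicas were scheduled
docker service ps web
```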

Multiple container orchestration tools exist, and they don't all handle objects in the same way.

Benefits of Container Orchestration

Container orchestration makes it easier for teams to deploy and manage applications made up of many containers. It takes care of the hard parts of running distributed systems so developers and operators can focus more on building features and delivering value. 

Key benefits include:

  • Simplified Operations: Orchestrators automate tasks like deployment, scaling, and recovery, reducing the need for manual intervention.

  • Improved Reliability: Health checks, automatic restarts, and rescheduling help keep services running even when things go wrong.

  • Faster DevOps Workflows: Declarative configs and repeatable patterns enable continuous delivery and speed up the path from code to production.

  • Resource Efficiency: Smart scheduling places containers on the right hosts, improving hardware usage across your environment.

  • Consistency Across Environments: Orchestration ensures the same configurations can be applied to dev, staging, and production with minimal drift.

  • Better Visibility and Control: Built-in monitoring, logging, and policy enforcement help teams understand what’s happening and keep systems secure.

Container Orchestration Examples and Use Cases

Container orchestration is widely used across industries to support modern application delivery, especially in large-scale and fast-moving environments. It helps teams deploy and manage complex systems more reliably and efficiently. Here are some real-world enterprise use cases:

Microservices Architectures

Enterprises break monolithic apps into microservices and use orchestration to deploy and manage them as independent, scalable containers. 

Example: At Vaudoise Insurance, Mirantis Kubernetes Engine helped the team quickly ship microservices, starting with Swarm for fast wins and advancing to Kubernetes for scale, speeding on-prem adoption and delivery.

Hybrid and Multicloud Deployments

Orchestrators make it easier to run containerized workloads consistently across data centers, cloud providers, and edge environments. 

Example: Société Générale integrated Mirantis Kubernetes Engine with existing NetApp storage and began moving stateful legacy apps into containers, hardening security while operating across environments.

CI/CD Pipelines in DevOps Workflows

Build, test, and deploy pipelines that use orchestration to spin up clean, disposable environments for faster delivery cycles. 

Example: Nebul’s sovereign AI cloud uses Mirantis k0rdent AI to automate the AI stack so teams can provision fresh environments and move from build to deploy faster, all under strict EU data-sovereignty requirements.

AI/ML Model Training and Serving

Data science teams use orchestrators to run GPU-accelerated training jobs and deploy models for inference in production environments. 

Example: Nebul delivers GPU-accelerated AI supercomputing on a sovereign cloud powered by Mirantis k0rdent AI, linking GPUs to compliant training and production inference.

Real-Time Data Processing

Streaming platforms like Kafka and Flink are often run on Kubernetes to process large volumes of data with high availability and scalability. 

Example: An edge project monitoring coral reefs used k0s and NATS on Raspberry Pi clusters to stream sensor data in real time from remote buoys, proving resilient, low-cost processing far from the data center.

Digital Transformation at Scale

Global enterprises orchestrate apps and services across thousands of containers to modernize legacy systems and roll out new digital experiences. 

Example: Société Générale’s cloud-native transformation standardized on Mirantis Kubernetes Engine, migrating legacy workloads and continually enhancing clusters with stronger security and shared observability.

How to Plan for the Orchestration of Containers

In an ideal situation, your application should not be dependent on which container orchestration platform you're using. Instead, you should be able to orchestrate your containers using any platform as long as you configure that platform correctly.

All of this relies, again, on knowing the architecture of your application so that you can implement it outside of the application itself. For example, let's say we're building an e-commerce site.

We have a database, web server, and payment gateway, all of which communicate over a network. We also have all of the various passwords needed to allow them to talk to each other. The compute, network, storage, and secrets are all resources that need to be handled by the container orchestration software, but how that happens depends on the solution that you choose.

Types of Container Orchestration Platforms

Because different environments require different levels of coordination, the market has produced multiple container orchestration systems over the last several years, including open source solutions. While each container orchestrator does the same basic job of automating container management, they work differently and were designed for different user scenarios.

Docker Container Orchestration with Swarm

To the engineers at Docker, orchestration was a capability to be provided as a first-class citizen; as such, Swarm is included with Docker itself. Enabling Swarm mode is straightforward, as is adding nodes.
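For example (the join token and manager address below are placeholders; the real values are printed by the init command):

```shell
# On the machine that will become the first manager node
docker swarm init

# docker swarm init prints a join command containing a real token;
# run it on each additional node to add it to the cluster
docker swarm join --token <worker-token> <manager-ip>:2377
```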

Docker orchestration via Swarm enables developers to define applications in a single file, such as:

version: "3.7"
services:
  database:
    image: dockersamples/atsea_db
    ports:
      - "5432"
    environment:
      POSTGRES_USER: gordonuser
      POSTGRES_DB_PASSWORD_FILE: /run/secrets/postgres-password
      POSTGRES_DB: atsea
      PGDATA: /var/lib/postgresql/data/pgdata
    networks:
      - atsea-net
    secrets:
      - domain-key
      - postgres-password
    deploy:
      placement:
        constraints:
          - 'node.role == worker'

  appserver:
    image: dockersamples/atsea_app
    ports:
      - "8080"
    networks:
      - atsea-net
    environment:
      METADATA: proxy-handles-tls
    deploy:
      labels:
        com.docker.lb.hosts: atsea.docker-ee-stable.cna.mirantis.cloud
        com.docker.lb.port: 8080
        com.docker.lb.network: atsea-net
        com.docker.lb.ssl_cert: wildcard_docker-ee-stable_crt
        com.docker.lb.ssl_key: wildcard_docker-ee-stable_key
        com.docker.lb.redirects: http://atsea.docker-ee-stable.cna.mirantis.cloud,https://atsea.docker-ee-stable.cna.mirantis.cloud
        com.libkompose.expose.namespace.selector: "app.kubernetes.io/name:ingress-nginx"
      replicas: 2
      update_config:
        parallelism: 2
        failure_action: rollback
      placement:
        constraints:
          - 'node.role == worker'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    secrets:
      - domain-key
      - postgres-password

  payment_gateway:
    image: cna0/atsea_gateway
    secrets:
      - staging-token
    networks:
      - atsea-net
    deploy:
      update_config:
        failure_action: rollback
      placement:
        constraints:
          - 'node.role == worker'

networks:
  atsea-net:
    name: atsea-net

secrets:
  domain-key:
    name: wildcard_docker-ee-stable_key
    file: ./wildcards.docker-ee-stable.key
  domain-crt:
    name: wildcard_docker-ee-stable_crt
    file: ./wildcards.docker-ee-stable.crt
  staging-token:
    name: staging_token
    file: ./staging_fake_secret.txt
  postgres-password:
    name: postgres_password
    file: ./postgres_password.txt

In this example, we have three services: the database, the application server, and the payment gateway, all of which include their own particular configurations.  These configurations also refer to objects such as networks and secrets, which are defined independently.
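Assuming the file above is saved as docker-compose.yml, the whole application can be deployed to a Swarm cluster with a single command:

```shell
# Deploy the stack defined in the compose file above
docker stack deploy -c docker-compose.yml atsea

# Watch the services start and see how many replicas are running
docker stack services atsea
```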

The advantage of Swarm is its small learning curve, and the fact that developers can run their applications on a laptop in the same environment they will use in production. The disadvantage is that it doesn't support as many features as Kubernetes.

Kubernetes Orchestration

While Swarm, the Docker orchestrator, is still widely used in many contexts, the acknowledged champion of container orchestration is Kubernetes. Like Swarm, Kubernetes enables developers to create resources such as groups of replicas, networking, and storage, but it does so in a completely different way.

For one thing, Kubernetes is a separate piece of software; in order to use it, you must either install a distribution locally or have access to an existing cluster. For another, the entire architecture of applications and how they're created is totally different from Swarm. For example, the application we created in the earlier example would look like this:

apiVersion: v1
data:
  staging-token: c3RhZ2luZw0K
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: staging-token
  name: staging-token
type: Opaque
---
apiVersion: v1
data:
  postgres-password: cXdhcG9sMTMNCg==
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: postgres-password
  name: postgres-password
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.version: 1.21.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: payment-gateway
  name: payment-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: payment-gateway
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.version: 1.21.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.network/atsea-net: "true"
        io.kompose.service: payment-gateway
    spec:
      containers:
        - image: cna0/atsea_gateway
          name: payment-gateway
          resources: {}
          volumeMounts:
            - mountPath: /run/secrets/staging-token
              name: staging-token
      nodeSelector:
        node-role.kubernetes.io/worker: "true"
      restartPolicy: Always
      volumes:
        - name: staging-token
          secret:
            items:
              - key: staging-token
                path: staging-token
            secretName: staging-token
status: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: ingress-appserver
spec:
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              app.kubernetes.io/name: ingress-nginx
        - podSelector: {}
  podSelector:
    matchLabels:
      io.kompose.network/atsea-net: "true"
  policyTypes:
    - Ingress
---
apiVersion: v1
data:
  domain-key: 
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: domain-key
  name: domain-key
type: Opaque
---
apiVersion: v1
data:
  domain-crt: 
kind: Secret
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: domain-crt
  name: domain-crt
type: Opaque
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.version: 1.21.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: database
  name: database
spec:
  ports:
    - name: "5432"
      port: 5432
      targetPort: 5432
  selector:
    io.kompose.service: database
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.version: 1.21.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: database
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: database
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.version: 1.21.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.network/atsea-net: "true"
        io.kompose.service: database
    spec:
      containers:
        - env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_DB
              value: atsea
            - name: POSTGRES_DB_PASSWORD_FILE
              value: /run/secrets/postgres-password
            - name: POSTGRES_USER
              value: gordonuser
          image: dockersamples/atsea_db
          name: database
          ports:
            - containerPort: 5432
          resources: {}
          volumeMounts:
            - mountPath: /run/secrets/domain-key
              name: domain-key
            - mountPath: /run/secrets/postgres-password
              name: postgres-password
      nodeSelector:
        node-role.kubernetes.io/worker: "true"
      restartPolicy: Always
      volumes:
        - name: domain-key
          secret:
            items:
              - key: domain-key
                path: domain-key
            secretName: domain-key
        - name: postgres-password
          secret:
            items:
              - key: postgres-password
                path: postgres-password
            secretName: postgres-password
status: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  creationTimestamp: null
  name: atsea-net
spec:
  ingress:
    - from:
        - podSelector:
            matchLabels:
              io.kompose.network/atsea-net: "true"
  podSelector:
    matchLabels:
      io.kompose.network/atsea-net: "true"
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.version: 1.21.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: appserver
  name: appserver
spec:
  ports:
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: appserver
status:
  loadBalancer: {}
---
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    io.kompose.network/atsea-net: "true"
    io.kompose.service: appserver
  name: appserver
spec:
  containers:
    - env:
        - name: METADATA
          value: proxy-handles-tls
      image: dockersamples/atsea_app
      name: appserver
      ports:
        - containerPort: 8080
      resources: {}
      volumeMounts:
        - mountPath: /run/secrets/domain-key
          name: domain-key
        - mountPath: /run/secrets/postgres-password
          name: postgres-password
  nodeSelector:
    node-role.kubernetes.io/worker: "true"
  restartPolicy: OnFailure
  volumes:
    - name: domain-key
      secret:
        items:
          - key: domain-key
            path: domain-key
        secretName: domain-key
    - name: postgres-password
      secret:
        items:
          - key: postgres-password
            path: postgres-password
        secretName: postgres-password
status: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kompose.version: 1.21.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: appserver
  name: appserver
spec:
  rules:
    - host: atsea.docker-ee-stable.cna.mirantis.cloud
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: appserver
                port:
                  number: 8080
  tls:
    - hosts:
        - atsea.docker-ee-stable.cna.mirantis.cloud
      secretName: tls
status:
  loadBalancer: {}

The application is the same; it's simply defined in a different way. The web application server, the database, and the payment gateway are all still created, just with a different structure. In addition, the supporting resources, such as networks and secrets, must be created explicitly.

The additional complexity does bring a number of benefits, however. Kubernetes is a much more full-featured container orchestration solution than Swarm, and can be appropriate in both small and large environments.
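Assuming the manifests above are saved to a single file (the filename here is illustrative), deploying them to a cluster looks like this:

```shell
# Apply every object defined in the manifest file
kubectl apply -f atsea.yaml

# Verify that the workloads were created and scheduled
kubectl get deployments,pods,services
```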

Where to Find Your Container Orchestrator

Not only are there different types of orchestration platforms, but you can also find them in different places, depending on your situation.

Local Desktop/Laptop

Most developers work on their desktop or laptop machine, so it's convenient if the target container orchestration platform is available at that level. For Swarm users, the process is straightforward as it’s already part of Docker and just needs to be enabled. 

For Kubernetes, the developer needs to take an additional step to install Kubernetes on their machine, but several tools make this possible, such as kubeadm, minikube, and kind.
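minikube and kind are two popular local-cluster tools; either can bring up a development cluster in a command or two:

```shell
# Option 1: a single-node local cluster with minikube
minikube start

# Option 2: a disposable cluster running in Docker containers with kind
kind create cluster --name dev

# Either way, verify that the cluster is reachable
kubectl get nodes
```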

Mirantis Kubernetes Engine (formerly Docker Enterprise) is an option that supports Kubernetes and Swarm container orchestrator solutions.

Internal Network

Once the developer is ready to deploy, if the application will live in an on-premises data center, users typically won't need to install a cluster, because administrators will already have installed one. Instead, they connect using the connection information given to them.

Administrators can deploy a number of different cluster types; for example, enterprise-grade Docker Swarm clusters and Kubernetes clusters can both be deployed by Mirantis Container Cloud, a multi-cloud container platform. 

AWS

Businesses that run their infrastructure on Amazon Web Services have a number of choices. For example, you can install Mirantis Kubernetes Engine (MKE) on Amazon EC2 compute instances, or you can use Mirantis Container Cloud to deploy clusters directly on AWS. You also have the option of using managed container services such as Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).

Google

Choices for Google Cloud are similar; you can install a container management platform such as MKE, or you can use Google Kubernetes Engine (GKE) to spin up and manage clusters using Google's hardware and software, and their API.

Azure

The situation is the same for Azure Cloud: you must choose between deploying a distribution such as MKE on compute nodes, providing Swarm and Kubernetes capabilities, or using the Azure Kubernetes Service (AKS) to provide Kubernetes clusters to your users.

What Are Container Orchestration Tools?

A container orchestration tool is software that helps manage containers across many machines. Instead of setting up and running each container by hand, these tools make it easier to deploy, schedule, and keep everything running smoothly. They’re used in environments where lots of containers need to work together, often across clusters of servers.

These tools are especially useful for teams working with microservices or cloud-native applications, where automation and scalability are important.

How Do Container Orchestration Tools Work?

Container orchestration tools like Kubernetes and Docker Swarm automate the complex work of running containerized applications across many servers. They make sure containers are placed correctly, stay healthy, and scale as needed. Here's how they handle the key tasks involved:

Define Application Configurations Using YAML or JSON

You begin by writing a configuration file in YAML or JSON that describes how your application should run. This file defines the desired state, including:

  • Which container images to use

  • How many replicas to run

  • CPU and memory requirements

  • Network ports, volumes, and environment variables

The orchestrator uses this file as the single source of truth. By standardizing configuration in a clear, machine-readable format, you reduce human error, ensure repeatable deployments, and give both technical teams and business stakeholders confidence that applications will behave the same way in every environment.
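For example, a minimal Kubernetes Deployment captures all of these details declaratively (the names and values here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                       # illustrative workload name
spec:
  replicas: 3                     # how many copies to run
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25       # which container image to use
          ports:
            - containerPort: 80   # network port the app listens on
          resources:
            requests:
              cpu: 100m           # CPU and memory requirements
              memory: 128Mi
```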

Automatically Schedule and Deploy Containers to Optimal Hosts

Once the configuration is submitted, the orchestrator figures out where to place each container across the cluster. It looks at:

  • Available compute resources on each node

  • Placement preferences and anti-affinity rules

  • Specialized hardware or storage needs

This ensures containers are deployed efficiently and with the right level of redundancy. Optimal placement maximizes hardware utilization, reduces latency, and prevents service interruptions, keeping performance high and costs under control, which directly impacts customer experience and operational margins.
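In Kubernetes, for instance, these placement rules are expressed directly in the workload spec. This fragment (the disktype and app labels are illustrative) requires SSD nodes and spreads replicas across hosts:

```yaml
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd              # only schedule onto nodes labeled as SSD
      affinity:
        podAntiAffinity:           # keep replicas on different hosts
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              topologyKey: kubernetes.io/hostname
```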

Manage the Full Container Lifecycle Across Environments

Orchestration tools keep your containers running properly from start to finish. They manage:

  • Starting and stopping containers as needed

  • Automatically restarting failed or unhealthy containers

  • Handling rescheduling if a server goes down

This gives your app high availability across development, test, and production environments. Continuous lifecycle management protects uptime, prevents revenue loss from outages, and ensures teams can release features without jeopardizing system stability.
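In Kubernetes, for example, much of this lifecycle behavior is declared on the pod itself (a fragment; the health endpoint and names are illustrative):

```yaml
spec:
  restartPolicy: Always            # restart containers that exit
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:               # if this check fails repeatedly,
        httpGet:                   # the kubelet restarts the container
          path: /healthz           # illustrative health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```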

Configure Networking, Logging, and Health Monitoring

A complete system also needs visibility and communication. Orchestrators provide built-in support for:

  • Secure, cluster-wide networking between services

  • Centralized logging and metrics collection

  • Automatic health checks and self-healing behavior

These features make it easier to operate and troubleshoot containerized applications at scale. Strong observability and secure communication allow teams to spot and resolve issues before they affect customers, reducing downtime risk and safeguarding business reputation.
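With Kubernetes, for instance, the basics are a kubectl command away (the web workload name is illustrative):

```shell
# Stream logs from a workload's containers
kubectl logs -f deployment/web

# Review scheduling, probe, and restart events across the cluster
kubectl get events --sort-by=.metadata.creationTimestamp
```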

Apply Repeatable Patterns for Scaling, Updates, and Consistency

Orchestration tools help you grow and update applications with minimal risk by offering:

  • Declarative scaling for handling more traffic

  • Rolling updates with built-in rollback support

  • Environment consistency using reusable templates

These patterns promote safer deployments and help teams move faster with confidence. Consistent, low-risk change management allows the business to respond quickly to market demand while minimizing the risk of costly downtime or failed releases.
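In Kubernetes, for example, each of these patterns maps to a short, repeatable command (the workload and image names are illustrative):

```shell
# Scale out declaratively to handle more traffic
kubectl scale deployment/web --replicas=5

# Roll out a new image version and watch its progress
kubectl set image deployment/web web=nginx:1.26
kubectl rollout status deployment/web

# Roll back if the new version misbehaves
kubectl rollout undo deployment/web
```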

Mirantis Simplifies Container Orchestration for Enterprise

The best way to get started with a container orchestrator is simply to pick a system and try it out. You can build a cluster yourself with a tool like kubeadm, or you can make it easy on yourself and install a full system such as Mirantis Kubernetes Engine, which provides you with multiple container orchestration options.

Book a demo today and see how Mirantis can help your enterprise streamline container orchestration.

Nick Chase

Director of Technical Marketing

Mirantis simplifies Kubernetes.

From the world’s most popular Kubernetes IDE to fully managed services and training, we can help you at every step of your K8s journey.

Connect with a Mirantis expert to learn how we can help you.
