
From Metal-to-Model™


A platform architecture that was always heading here

When we first sketched “from metal to model” in the margins, it wasn’t a motto. It was a working hypothesis: that Kubernetes’ most basic behaviors – declarative, composable configuration and continuous reconciliation – could scale beyond pods and services. They could govern infrastructure, clusters, services, applications, pipelines, AI models, and even entire organizations’ IT estates. Public cloud, private cloud, bare metal, edge, IoT, same pattern. Same loop.

k0rdent was born from the idea that you can stop wrestling with cloud and infrastructure complexity. You can have just one system, built on open standards, that abstracts both infrastructure and automation and provides a single point of control.

Standards like Cluster API let Kubernetes manage clouds and bare metal directly. From a workload’s perspective, one Kubernetes cluster looks a lot like another. Many of the brittle dependencies that slow teams down – across environments, providers, and domains – can be pushed below the waterline.

Even better: if you express every moving part of a distributed, global infrastructure as a declaratively defined Kubernetes object, a management cluster can reconcile all of it, exactly the way Kubernetes already reconciles pods. A template for a cluster, a model pipeline, an inference endpoint: these are all objects, subject to the same reconciliation loop. If someone changes something manually and breaks policy, the system notices. It fixes it.
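That reconciliation pattern can be sketched in a few lines of Python. This is a toy illustration of the loop described above – declared state, observed state, converge on drift – not k0rdent’s actual implementation, and the object names are invented for the example:

```python
# Toy reconciliation loop: desired state is declared; the controller
# detects drift in observed state and converges it back to spec.
# Object names and fields are illustrative, not a real API.

desired = {
    "cluster/prod-eu": {"nodes": 5, "version": "1.30"},
    "pipeline/train-llm": {"schedule": "nightly", "gpu": "a100"},
}

observed = {
    "cluster/prod-eu": {"nodes": 3, "version": "1.30"},  # manual change broke policy
    "pipeline/train-llm": {"schedule": "nightly", "gpu": "a100"},
}

def reconcile(desired, observed):
    """Compare each object's observed state to its spec; fix any drift."""
    actions = []
    for name, spec in desired.items():
        actual = observed.get(name)
        if actual != spec:
            actions.append((name, actual, spec))
            observed[name] = dict(spec)  # "the system notices. It fixes it."
    return actions

for name, before, after in reconcile(desired, observed):
    print(f"{name}: {before} -> {after}")
```

Whether the object is a pod, a cluster, or a model pipeline, the loop is the same: only the spec and the actuation differ.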

Do this, and the Kubernetes API becomes the system of record. The source of truth. Not just for infrastructure and workloads, but for cost, policy, telemetry, and control. A single point of control for global IT operations on every infrastructure.

That single point of control is the ideal place to plug in AI. Everything is expressed as structured data. Everything is traceable. AI-enhanced automation and observability don’t need a separate layer; they run on the same substrate – delivering AI services for the business and, soon, running the substrate itself. At this point it feels pretty obvious to say that AI will live everywhere in modern IT stacks: all infrastructure is AI infrastructure.

That’s what we started building. And now the shape is clear.

A platform, not a product

k0rdent is a platform, a unified object model for running AI and application workloads in production, across infrastructure, runtimes, pipelines, and policy. It represents every layer declaratively. It reconciles real-world behavior to a described state using Kubernetes-native control patterns. The same way a pod gets rescheduled when it fails, a cluster gets rebuilt, a pipeline gets re-run, a policy gets enforced.

It’s not built from one-off integrations or provisioning scripts. It’s a single system, where:

  • Bare metal, GPUs, and VMs are schedulable and lifecycle-managed

  • Environments, pipelines, models, and services are defined and managed declaratively

  • Policy, cost controls, telemetry, and security are applied and enforced continuously

Kubernetes everywhere, but not just clusters

At its root, k0rdent treats Kubernetes as a general-purpose automation plane, not just a container orchestrator. In fact, k0rdent can use it to run control planes themselves, deployed as workloads inside “mothership” clusters. Worker nodes can run anywhere: public cloud, private cloud, bare metal on customer premises, or at the near or far edge. The control and compute planes are decoupled. Cluster definitions live in Git, enforced by CRDs. There’s no need to manage thousands of dedicated control-plane nodes.

This pattern supports multi-tenancy, policy-as-code, GitOps, and centralized visibility, at the scale of thousands of clusters, without operational drag.

Above the platform: The factory

Where ‘metal to model’ becomes fully visible is with Mirantis k0rdent AI. It’s a tool for building AI-optimized clouds (service provider ‘Neoclouds,’ enterprise AI data centers, AI edge clusters, etc.) on GPU-equipped, high-performance compute. And on top of those clouds, for building AI Factories: NVIDIA’s term for a deep stack of GPU hardware and software that lets organizations work with ML and AI models, pipeline them to developers, harness them to build applications, push those apps to production, and feed learnings and metrics back to the head end – an AI flywheel for continuous improvement.

One of many remarkable aspects of Mirantis k0rdent AI is that it preserves k0rdent’s essential strategy of abstraction (Kubernetes managing and hiding infrastructure) while also permitting the vertical connectivity and accountability AI apps need: an AI app won’t work – or, just as bad, won’t work well or economically – unless something manages which container is talking to which (real, virtualized, or time-sliced) GPU core. This is ‘metal to model’ for real.

Mirantis k0rdent AI does it all: model pipelines, inference services, GPU environments, training workflows, telemetry, versioning – all defined as objects. Pipelines aren’t hand-built or brittle; they’re declarative, composable, and portable, built to be promoted across dev/stage/prod without reinventing the wheel.

You can track a model from training data to inference endpoint. You can see which GPU trained it. You can correlate usage to cost, availability to policy, and promotion to version.
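Because every artifact is an object that references the objects that produced it, that kind of traceability is just a walk over structured data. Here’s a toy sketch of the idea – the object names, fields, and `lineage` helper are all hypothetical, invented for illustration:

```python
# Toy lineage graph: each record points ("from") at the object that
# produced it, so provenance is a simple walk back through references.
# All names and fields here are hypothetical.

objects = {
    "dataset/webtext-v2":  {"kind": "Dataset", "from": None},
    "run/train-0042":      {"kind": "TrainingRun", "from": "dataset/webtext-v2",
                            "gpu": "node-7/gpu-0"},
    "model/chat-1.3":      {"kind": "Model", "from": "run/train-0042"},
    "endpoint/chat-prod":  {"kind": "InferenceEndpoint", "from": "model/chat-1.3"},
}

def lineage(name):
    """Walk 'from' references back from an endpoint to its training data."""
    chain = []
    while name is not None:
        chain.append(name)
        name = objects[name]["from"]
    return chain

print(lineage("endpoint/chat-prod"))
# Which GPU trained the model behind the endpoint? Follow the chain
# to the TrainingRun record and read its 'gpu' field.
```

The same walk answers cost, policy, and version questions: attach the relevant fields to the records and correlate along the chain.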

Virtual machines too

Of course, k0rdent can also deploy and manage conventional clusters, which aren’t going away. In fact, it can deploy Mirantis Kubernetes Engine (MKE) 4k clusters, which are high-value, secure, and enterprise-ready. Nor does ‘metal to model’ leave VMs (remember them?) out of the picture. k0rdent can deploy Mirantis k0rdent Virtualization on a child cluster running on bare metal, so operators can run VMs alongside containers, managing them with a remarkably simple, cloud-like web UI. In a Mirantis k0rdent Virtualization environment, VMs and containers share the same control plane and networking, making it easy to modernize monoliths into microservices gradually, at your own pace.

Need full-on infrastructure-as-a-service? (Like if you’re replacing a big stack of VMware?) k0rdent also provides optional support for Mirantis OpenStack for Kubernetes (MOSK), so customers can provision full Infrastructure-as-a-Service (VMs, networks, volumes) on bare metal using the same declarative patterns. (And then, incidentally, deploy k0rdent child clusters on your MOSK – but you’d already figured that out.)

From metal to model was never just a slogan. It was k0rdent’s architectural direction from day one.

Dominic Wilde

SVP of Marketing
