What if Kubernetes had a compiler?

Platform teams want to give developers self-service. Developers want to ship without waiting on tickets. AI can generate a hundred YAML files in seconds. But who verifies them? Who checks that the Service selector matches the Deployment? That the RBAC rule grants the right permissions? That everything in production actually has its dependencies ready?

A compiler does. It catches mistakes before they reach your cluster. Developers move fast. The compiler keeps them safe. This is kix.

Step 1

Start with a single resource

A Deployment in YAML. Then the same thing in kix. Nix is just JSON with different punctuation: equals signs instead of colons, semicolons instead of commas, and unquoted keys. Same fields, same structure. No new concepts to learn, just a slightly different syntax. Full syntax guide →

YAML (25 lines)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  labels:
    app: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
      - name: api-server
        image: myregistry/api:1.4.2
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: 256Mi
            cpu: 250m
kix (same thing)
deployment = scope.mkResource {
  apiVersion = "apps/v1";
  kind = "Deployment";
  name = "api-server";
  labels = { app = "api-server"; };
  spec = {
    replicas = 3;
    selector.matchLabels = { app = "api-server"; };
    template = {
      metadata.labels = { app = "api-server"; };
      spec.containers = [{
        name = "api-server";
        image = "myregistry/api:1.4.2";
        ports = [{ containerPort = 8080; }];
        resources.requests = {
          memory = "256Mi";
          cpu = "250m";
        };
      }];
    };
  };
};
Nothing magical yet. Same fields. Same structure. But now it's data inside a programming language — not a text template.
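A small taste of what that buys: the label that appears three times in the YAML can be factored out with an ordinary let binding. This is a minimal sketch in plain Nix, reusing the Deployment above:

```nix
# plain Nix: factor out the thrice-repeated label with a let binding
let
  app = "api-server";
in scope.mkResource {
  apiVersion = "apps/v1";
  kind = "Deployment";
  name = app;
  labels = { inherit app; };                      # { app = "api-server"; }
  spec = {
    replicas = 3;
    selector.matchLabels = { inherit app; };      # always in sync
    template.metadata.labels = { inherit app; };  # always in sync
  };
}
```

Rename the app once, and every label follows.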

Already have YAML/JSON files? Nix imports them natively with builtins.fromJSON and builtins.fromYAML. Migrate at your own pace. See how →
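For example, an existing manifest can be wrapped without rewriting a line. A sketch using standard Nix builtins (the exported filename here is hypothetical):

```nix
# hypothetical file exported from an existing cluster, e.g.:
#   kubectl get deployment api-server -o json > deployment.json
deployment = scope.mkResource
  (builtins.fromJSON (builtins.readFile ./deployment.json));
```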
Step 2

Add a Service. Get a dependency graph for free.

In YAML, you repeat app: api-server in the Deployment's labels, the selector, the template, and again in the Service. Get one wrong → silent failure at runtime. In kix, you reference the Deployment directly.

deployment = scope.mkResource {
  kind = "Deployment";
  name = "api-server";
  spec.selector.matchLabels = { app = "api-server"; };
  # ... template, containers, etc ...
};

service = scope.mkResource {
  kind = "Service";
  name = "api-server";
  spec = {
    selector = self.deployment.out.selector;   # ← reference, not copy-paste
    ports = [{ port = 80; targetPort = 8080; }];
  };
};
self.deployment.out.selector gives you the Deployment's matchLabels — guaranteed to match. And because you referenced the Deployment, the compiler now knows: Service depends on Deployment. No annotations, no config files. Just a reference.
The compiler builds a dependency graph automatically:

Service/api-server
└── Deployment/api-server        via .out.selector

Every reference creates an edge. This graph is the foundation for everything that follows: diff, rollback, wave deployments.
Step 3

Wire up a ServiceAccount and ConfigMap. Still just references.

Every reference is tracked. The compiler knows exactly what depends on what.

serviceAccount = scope.mkResource {
  kind = "ServiceAccount";
  name = "api-server";
};

configmap = scope.mkResource {
  kind = "ConfigMap";
  name = "api-server";
  data.DATABASE_HOST = "postgres.default.svc";
};

deployment = scope.mkResource {
  kind = "Deployment";
  name = "api-server";
  spec.template.spec = {
    serviceAccountName = self.serviceAccount.out.name;   # ← tracked
    containers = [{
      image = "myregistry/api:1.4.2";
      volumeMounts = [{ name = "config"; mountPath = "/etc/app"; }];
    }];
    volumes = [{
      name = "config";
      configMap.name = self.configmap.out.name;             # ← tracked
    }];
  };
};
Service/api-server
└── Deployment/api-server
    ├── ServiceAccount/api-server   via .out.name in serviceAccountName
    └── ConfigMap/api-server        via .out.name in volumes
Every .out.name carries a hidden fingerprint. Under the hood, each resource is a content-addressed hash. When you write self.configmap.out.name, the string "api-server" arrives with the ConfigMap's hash attached. The compiler sees this and records the dependency. Change the ConfigMap's content → its hash changes → the Deployment's hash changes too. Same input, same output. Always.
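Conceptually, the fingerprint behaves like a hash over a resource's own fields plus the fingerprints of everything it references. A simplified model in plain Nix, not kix's actual implementation:

```nix
# simplified model of content addressing (illustrative only)
fingerprint = resource: depHashes:
  builtins.hashString "sha256" (builtins.toJSON {
    body = resource;     # the resource's own fields
    deps = depHashes;    # fingerprints of everything it references
  });
# change a dependency → its hash changes → this hash changes too
```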
Step 4

Use the DSL where it helps. Skip it where it doesn't.

kix doesn't force a DSL on you. You can write raw K8s fields forever. But when you want to, helpers remove boilerplate without hiding what's happening.

Example: RBAC permissions

YAML — 24 lines for basic permissions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-server
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - endpoints
      - pods
      - secrets
      - services
    verbs: [get, list, watch]
  - apiGroups: [""]
    resources: [events]
    verbs: [create, patch]
  - apiGroups: [networking.k8s.io]
    resources: [ingresses]
    verbs: [get, list, watch]
---
# + another 15 lines for ClusterRoleBinding...
kix RBAC DSL — same result
# Declare resource groups once
net = scope.rbac.group "networking.k8s.io" ["ingresses"];

# Pipe-friendly: resources |> intent
roles = {
  "api-server" = grant [
    ([core.configmaps core.endpoints
      core.pods core.secrets
      core.services]            |> watch)
    (core.events                |> emit)
    (net.ingresses              |> watch)
  ];
};

# Generates ClusterRole + ClusterRoleBinding
# with correct subjects, role refs, everything.
watch = get + list + watch. emit = create + patch. One word instead of a list of verbs you have to remember. grant merges rules automatically — no duplicate entries. But you can always drop down to raw fields when you need to.
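Dropping down looks exactly like the YAML. For instance, the first rule from the ClusterRole above, written as raw fields (a sketch; the field names are standard rbac.authorization.k8s.io/v1):

```nix
# the first rule from the YAML above, as raw fields
rules = [{
  apiGroups = [ "" ];
  resources = [ "configmaps" "endpoints" "pods" "secrets" "services" ];
  verbs = [ "get" "list" "watch" ];
}];
```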
Step 5

Your app needs a database. Ask for it.

In kix, a package declares what it needs as function arguments. The cluster evaluator resolves them automatically. No hardcoded service names. No values.yaml. Just say what you need.

# Package signature — these are the package's dependencies
{ scope, lib, postgres, valkey }:

# Inside the build function, use them naturally:
configmap = scope.mkResource {
  kind = "ConfigMap";
  data = {
    DATABASE_HOST = postgres.out.fqdn;      # "postgres.myns.svc.cluster.local"
    CACHE_HOST    = valkey.out.fqdn;        # "valkey.myns.svc.cluster.local"
  };
};
# In the cluster definition, just place them in the same namespace:
instances.my-app-namespace = {
  my-app  = { package = packages.python-app; };
  postgres = { package = packages.postgres;  };
  valkey   = { package = packages.valkey;    };
};

# The compiler sees that python-app needs "postgres" and "valkey".
# It finds exactly one of each in the same namespace.
# Injects them automatically. Done.
No wiring code. The app says "I need postgres." The cluster has a postgres in the same namespace. The compiler connects them. The dependency is tracked via the hash graph. Change the postgres config → the app's ConfigMap hash changes → the diff shows exactly what's affected.
What about a generic cache? A package can ask for a concrete dependency with a generic fallback:

{ scope, lib, valkey, cache ? valkey }:

"I need a cache. I built it with valkey. But if the cluster provides something called 'cache', I'll use that instead." If nothing called 'cache' exists, it falls back to whatever valkey resolved to. The cluster operator can swap in redis with an alias — no code change in the package.
Step 6

Substitution is explicit and compiler-checked

Your app depends on valkey. Your cluster runs redis instead. The cluster operator declares the substitution explicitly — and the compiler verifies that the substitute actually provides everything the app needs. If it doesn't, you get a compile error. Not a 3am page.

# Cluster operator explicitly declares: "redis substitutes for valkey here"
instances.cache-namespace = {
  my-redis = {
    package = packages.redis;
    aliases = [ "valkey" ];           # ← explicit opt-in by the operator
  };
};

# OR: override just one specific instance's dependency
instances.app-namespace = {
  my-app = {
    package = packages.python-app;
    deps.valkey = ref.cache-namespace.my-redis;    # ← explicit override
  };
};
The compiler checks every field the app actually uses. If the app accesses valkey.out.fqdn, the substitute must expose .out.fqdn. If it accesses valkey.out.sentinelPort and redis doesn't have that → compile error. Substitution is never implicit. The operator opts in, and the compiler verifies the contract.
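One way to picture the check, in illustrative plain Nix (not kix's source; the usedFields list here is hypothetical): every out field the package actually reads must exist on the substitute.

```nix
# illustrative sketch of the contract check
usedFields = [ "fqdn" "port" ];   # hypothetical: fields the app reads from valkey.out
satisfies = substitute:
  builtins.all (f: builtins.hasAttr f substitute.out) usedFields;
# a substitute missing any used field fails the check → compile error
```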
Step 7

Teams write independent modules. The compiler merges them.

Each team owns a file. They declare what they need. The platform team sets defaults and policies. Nobody coordinates manually. The compiler reconciles everything.

Platform team (sets options & defaults)
# platform.nix
options = {
  cluster.domain = mkOption { type = str; };
  tenants = mkOption { ... };
  monitoring.enabled = mkOption { ... };
};

config = {
  # Provide infrastructure that apps auto-discover:
  instances.kube-system = {
    coredns = { ... };
    cert-manager = { ... };
    network-policies = {
      config.tenants = config.tenants;
      # ^ aggregated from ALL modules
    };
  };
  instances.nginx-system = {
    ingress-public = { ... };
    ingress-private = { ... };
  };
};
App team (declares what they need)
# shop.nix
config = {
  # Contribute to shared options:
  tenants.acme.namespaces = [
    "shop-prod"
    "shop-stage"
  ];

  # Declare instances — infra is auto-wired:
  instances.shop-prod = {
    my-app = {
      package = packages.python-app;
      config = {
        image.tag = "v2.8.1";
        replicas = 16;
        ingress.hosts = [
          { host = "app.example.com"; }
        ];
        # No clusterIssuer — DI wires it
        # No storageClass — DI wires it
        # No ingressClass — DI wires it
      };
    };
    postgres = { package = packages.postgres; };
    valkey = { package = packages.valkey; };
  };
};
The app team never mentions issuer names, storage classes, or ingress controllers. DI resolves them from whatever the platform team provides. Change the TLS issuer cluster-wide? Edit one line in platform.nix. Every app updates.
Tenant isolation for free. The app team writes tenants.acme.namespaces = [ "webshop-prod" "webshop-stage" ]. Another team writes tenants.acme.namespaces = [ "dashboard-prod" ]. The module system merges the lists. The network-policies package reads the merged result and generates Cilium policies that allow traffic between all namespaces in the tenant. No team coordinated. The compiler did it.
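Under the module system's merge semantics this is plain list concatenation. A sketch of the two contributions and the merged result, using the namespace names from the example above:

```nix
# webshop team's module
{ tenants.acme.namespaces = [ "webshop-prod" "webshop-stage" ]; }

# dashboard team's module
{ tenants.acme.namespaces = [ "dashboard-prod" ]; }

# merged by the module system (list options concatenate):
# tenants.acme.namespaces =
#   [ "webshop-prod" "webshop-stage" "dashboard-prod" ]
```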
Step 8

The full cluster is 39 lines.

Each team owns their module. The cluster definition just imports them.

kix.buildCluster {
  name = "prod-eu1";
  modules = [
    ./platform.nix          # CoreDNS, ingress, cert-manager, storage, network policies
    ./operators.nix         # Postgres operator, Cilium, OpenEBS
    ./backup.nix            # Velero backup schedules
    ./monitoring.nix        # Prometheus, Grafana, Loki
    ./registry.nix          # Container registry
    ./webshop.nix           # Webshop app: prod + stage + demo
    ./dashboard.nix         # Dashboard: prod + stage
    ./services.nix          # Internal services
    {
      cluster.domain = "prod-eu1.example.com";
      monitoring.enabled = true;
    }
  ];
}
This is real. This cluster runs in production. 10 modules. 27 packages. 200+ Kubernetes resources. All type-checked, all dependency-tracked, all diffable. From this 39-line root.
Step 9

The output: a content-addressed resource graph

When you compile the cluster, every resource becomes a content-addressed hash. Every reference between resources is a hash dependency. The entire cluster is a single root hash that depends on everything.

Activation/prod-eu1                        ← root hash: sha256:9f3a...c7e2
├── PackageInstance/coredns
│   ├── Service/coredns
│   │   └── Deployment/coredns
│   │       ├── ConfigMap/coredns
│   │       │   └── Service/ingress-private   ← cross-namespace dep (via DNS rewrite)
│   │       └── ServiceAccount/coredns
│   ├── ClusterRole/coredns
│   └── ClusterRoleBinding/coredns
│
├── PackageInstance/ingress-public
│   ├── Service/ingress-public
│   │   └── Deployment/ingress-public
│   ├── IngressClass/public
│   └── ClusterRole + ClusterRoleBinding
│
├── PackageInstance/webshop-prod
│   ├── Service/webshop-backend → Deployment → ConfigMap
│   │   ├── postgres.out.fqdn
│   │   └── valkey.out.fqdn
│   ├── HPA/webshop-backend
│   ├── Ingress/webshop → clusterIssuer.out.withCert
│   └── ... (workers, cron, queue)
│
└── ... 200+ resources, all hash-linked
This graph is the key to everything. Because every resource is a hash and every dependency is tracked:
  • Change one field → only affected hashes change
  • Compare two root hashes → see exactly what differs (deterministic diff)
  • Roll back → point to the previous root hash (everything it needs is still there)
  • Deploy in waves → the graph tells you the correct order
  • Detect drift → recompute hashes, compare with what's running
Step 10

The workflow: compile, diff, apply

Three commands. No surprises.

$ kix build prod-eu1                  # 1. Compile: merge modules, resolve DI,
                                      #    type-check, validate cross-references,
                                      #    generate manifests

$ kix diff prod-eu1                   # 2. Diff: compare against live cluster
                                      #    Shows exactly what will change:

  identical:    187
  modified:       4   ← you review these
  kix-only:       2   ← will be created
  cluster-only:   1   ← not managed by kix (orphan)

  ~ Deployment/coredns
    image: coredns:1.12.3 → coredns:1.13.1
  ~ Service/ingress-public
    externalTrafficPolicy: Local → Cluster

$ kix deploy prod-eu1                 # 3. Deploy: applies verified changes,
                                      #    ensures cluster health throughout
Review in CI. Approve in PR. Deploy with confidence. The diff is deterministic — same code always produces the same diff. AI generated the code? The compiler verified it. The diff shows exactly what changed. The hash graph guarantees nothing else is affected.
Rollback? Point to the previous root hash. Because the entire cluster state is content-addressed, the old graph is still there — every resource, every dependency, every version. Roll back the root hash and you roll back everything it depends on. Consistently.
Note: kix diff and kix deploy are under active development. kix deploy goes beyond kubectl apply — it understands the dependency graph, deploys in the correct wave order, monitors cluster health during rollout, and can halt or roll back if something goes wrong mid-deploy.
Towards AI

Infrastructure that's ready for AI

kix doesn't use AI. It doesn't require AI. It works perfectly fine with humans writing every line. But it creates something that didn't exist before: a verification layer for Kubernetes configuration.

Today, if an AI agent generates a hundred Kubernetes resources, there's nothing between "looks plausible" and "hits your cluster." No tool checks that selectors match, that RBAC rules are correct, that cross-references resolve. You review YAML by eye and hope.

A compiler changes that equation. Whether a human writes the config or an AI does, the compiler catches the same classes of errors. That's the path toward safely using AI for infrastructure — not by trusting the AI, but by verifying its output the same way you'd verify anyone's output.

What you get

  • Write: Standard K8s fields. No new concepts. DSL where it helps.
  • Reference: .out.name, .out.selector, .out.fqdn. Auto-tracked dependencies.
  • Inject: Declare what you need. The compiler finds it. Aliases for substitution.
  • Compose: Teams write independent modules. The compiler merges and type-checks.
  • Compile: Type errors, missing deps, selector mismatches → caught at build time.
  • Diff: Deterministic diff against the live cluster. Review before deploy.
  • Deploy: Wave ordering, health monitoring, automatic rollback. Not just "apply."
  • Trust AI: The compiler is the verification layer. AI writes. Compiler checks. You review.