Platform teams want to give developers self-service. Developers want to ship
without waiting on tickets. AI can generate a hundred YAML files in seconds.
But who verifies them? Who checks that the Service selector matches the Deployment?
That the RBAC rule grants the right permissions? That everything in production actually has what it depends on?
A compiler does. It catches mistakes before they reach your cluster.
Developers move fast. The compiler keeps them safe. This is kix.
A Deployment in YAML. Then the same thing in kix.
Nix is just JSON with different punctuation: colons become =, commas become ;,
and keys are unquoted. Same fields, same structure.
No new concepts to learn, just a slightly different syntax.
Full syntax guide →
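As a minimal illustration of that punctuation mapping (a hypothetical fragment, not part of the example below):

```nix
# JSON:  { "replicas": 3, "labels": { "app": "api" } }
# Nix — colons become =, commas become ;, keys lose their quotes:
{ replicas = 3; labels = { app = "api"; }; }
```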
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  labels:
    app: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api-server
          image: myregistry/api:1.4.2
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 256Mi
              cpu: 250m
deployment = scope.mkResource {
  apiVersion = "apps/v1";
  kind = "Deployment";
  name = "api-server";
  labels = { app = "api-server"; };
  spec = {
    replicas = 3;
    selector.matchLabels = { app = "api-server"; };
    template = {
      metadata.labels = { app = "api-server"; };
      spec.containers = [{
        name = "api-server";
        image = "myregistry/api:1.4.2";
        ports = [{ containerPort = 8080; }];
        resources.requests = {
          memory = "256Mi";
          cpu = "250m";
        };
      }];
    };
  };
};
Already have manifests? Import them with builtins.fromJSON and builtins.fromYAML.
Migrate at your own pace. See how →
In YAML, you repeat app: api-server in the Deployment's labels,
the selector, the template, and again in the Service. Get one wrong → silent failure at runtime.
In kix, you reference the Deployment directly.
deployment = scope.mkResource {
  kind = "Deployment";
  name = "api-server";
  spec.selector.matchLabels = { app = "api-server"; };
  # ... template, containers, etc ...
};

service = scope.mkResource {
  kind = "Service";
  name = "api-server";
  spec = {
    selector = self.deployment.out.selector; # ← reference, not copy-paste
    ports = [{ port = 80; targetPort = 8080; }];
  };
};
Every reference is tracked. The compiler knows exactly what depends on what.
serviceAccount = scope.mkResource {
  kind = "ServiceAccount";
  name = "api-server";
};

configmap = scope.mkResource {
  kind = "ConfigMap";
  name = "api-server";
  data.DATABASE_HOST = "postgres.default.svc";
};

deployment = scope.mkResource {
  kind = "Deployment";
  name = "api-server";
  spec.template.spec = {
    serviceAccountName = self.serviceAccount.out.name; # ← tracked
    containers = [{
      image = "myregistry/api:1.4.2";
      volumeMounts = [{ name = "config"; mountPath = "/etc/app"; }];
    }];
    volumes = [{
      name = "config";
      configMap.name = self.configmap.out.name; # ← tracked
    }];
  };
};
When the Deployment reads self.configmap.out.name, the string "api-server" arrives with the
ConfigMap's hash attached. The compiler sees this and records the dependency.
Change the ConfigMap's content → its hash changes → the Deployment's
hash changes too. Same input, same output. Always.
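That cascade can be sketched in a few lines of Python. This is a toy model of content-addressed hashing, not kix's actual implementation; the function name and hash length are invented for illustration:

```python
import hashlib
import json

def resource_hash(resource: dict, dep_hashes: list[str]) -> str:
    # A resource's hash covers its own content plus the hashes of
    # everything it references, so a change anywhere upstream
    # changes every downstream hash.
    payload = json.dumps({"self": resource, "deps": sorted(dep_hashes)},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

cm = {"kind": "ConfigMap", "data": {"DATABASE_HOST": "postgres.default.svc"}}
deploy = {"kind": "Deployment", "configMapRef": "api-server"}

d1 = resource_hash(deploy, [resource_hash(cm, [])])

# Edit the ConfigMap: its hash changes, and so does the Deployment's,
# even though the Deployment's own fields are untouched.
cm["data"]["DATABASE_HOST"] = "postgres.prod.svc"
d2 = resource_hash(deploy, [resource_hash(cm, [])])
assert d1 != d2
```

The same determinism property holds in the other direction: hash the same inputs twice and you get the same output, which is what makes diffing against a live cluster meaningful.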
kix doesn't force a DSL on you. You can write raw K8s fields forever. But when you want to, helpers remove boilerplate without hiding what's happening.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: api-server
rules:
  - apiGroups: [""]
    resources:
      - configmaps
      - endpoints
      - pods
      - secrets
      - services
    verbs: [get, list, watch]
  - apiGroups: [""]
    resources: [events]
    verbs: [create, patch]
  - apiGroups: [networking.k8s.io]
    resources: [ingresses]
    verbs: [get, list, watch]
---
# + another 15 lines for ClusterRoleBinding...
# Declare resource groups once
net = scope.rbac.group "networking.k8s.io" ["ingresses"];

# Pipe-friendly: resources |> intent
roles = {
  "api-server" = grant [
    ([core.configmaps core.endpoints core.pods core.secrets core.services] |> watch)
    (core.events |> emit)
    (net.ingresses |> watch)
  ];
};

# Generates ClusterRole + ClusterRoleBinding
# with correct subjects, role refs, everything.
grant merges rules automatically — no duplicate entries.
But you can always drop down to raw fields when you need to.
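The merge grant performs can be sketched in Python. This is illustrative only: the grouping key and the merge_rules name are assumptions about the general technique, not kix internals:

```python
from collections import defaultdict

def merge_rules(rules: list[dict]) -> list[dict]:
    # Group rules by (apiGroups, verbs) and union their resource
    # lists, so two grants with the same group and verbs collapse
    # into a single rule with no duplicate entries.
    merged = defaultdict(set)
    for r in rules:
        key = (tuple(r["apiGroups"]), tuple(r["verbs"]))
        merged[key].update(r["resources"])
    return [{"apiGroups": list(groups), "resources": sorted(res), "verbs": list(verbs)}
            for (groups, verbs), res in merged.items()]

rules = [
    {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]},
    {"apiGroups": [""], "resources": ["secrets"], "verbs": ["get", "list", "watch"]},
]
print(merge_rules(rules))
# [{'apiGroups': [''], 'resources': ['pods', 'secrets'], 'verbs': ['get', 'list', 'watch']}]
```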
In kix, a package declares what it needs as function arguments. The cluster evaluator resolves them automatically. No hardcoded service names. No values.yaml. Just say what you need.
# Package signature — these are the package's dependencies
{ scope, lib, postgres, valkey }:

# Inside the build function, use them naturally:
configmap = scope.mkResource {
  kind = "ConfigMap";
  data = {
    DATABASE_HOST = postgres.out.fqdn; # "postgres.myns.svc.cluster.local"
    CACHE_HOST = valkey.out.fqdn;      # "valkey.myns.svc.cluster.local"
  };
};
# In the cluster definition, just place them in the same namespace:
instances.my-app-namespace = {
  my-app = { package = packages.python-app; };
  postgres = { package = packages.postgres; };
  valkey = { package = packages.valkey; };
};

# The compiler sees that python-app needs "postgres" and "valkey".
# It finds exactly one of each in the same namespace.
# Injects them automatically. Done.
{ scope, lib, valkey, cache ? valkey }:
Your app depends on valkey. Your cluster runs redis instead. The cluster operator declares the substitution explicitly — and the compiler verifies that the substitute actually provides everything the app needs. If it doesn't, you get a compile error. Not a 3am page.
# Cluster operator explicitly declares: "redis substitutes for valkey here"
instances.cache-namespace = {
  my-redis = {
    package = packages.redis;
    aliases = [ "valkey" ]; # ← explicit opt-in by the operator
  };
};

# OR: override just one specific instance's dependency
instances.app-namespace = {
  my-app = {
    package = packages.python-app;
    deps.valkey = ref.cache-namespace.my-redis; # ← explicit override
  };
};
If the app only reads valkey.out.fqdn,
the substitute must expose .out.fqdn. If it accesses
valkey.out.sentinelPort and redis doesn't have that →
compile error. Substitution is never implicit. The operator opts in,
and the compiler verifies the contract.
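Structurally, the check is simple to model. A Python toy under stated assumptions: the check_substitution name is invented, and in real kix the set of required outputs would be derived from what the package actually reads:

```python
def check_substitution(needs: set[str], substitute_outputs: set[str]) -> None:
    # Verify that every .out attribute the app reads exists on the
    # substitute; anything missing is surfaced at "compile" time.
    missing = needs - substitute_outputs
    if missing:
        raise TypeError(f"substitute missing outputs: {sorted(missing)}")

redis_outputs = {"fqdn", "port"}  # what the redis package exposes

# App reads only valkey.out.fqdn — redis satisfies the contract:
check_substitution({"fqdn"}, redis_outputs)

# App also reads valkey.out.sentinelPort — the substitution fails:
try:
    check_substitution({"fqdn", "sentinelPort"}, redis_outputs)
except TypeError as e:
    print(e)  # substitute missing outputs: ['sentinelPort']
```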
Each team owns a file. They declare what they need. The platform team sets defaults and policies. Nobody coordinates manually. The compiler reconciles everything.
# platform.nix
options = {
  cluster.domain = mkOption { type = str; };
  tenants = mkOption { ... };
  monitoring.enabled = mkOption { ... };
};

config = {
  # Provide infrastructure that apps auto-discover:
  instances.kube-system = {
    coredns = { ... };
    cert-manager = { ... };
    network-policies = {
      config.tenants = config.tenants;
      # ^ aggregated from ALL modules
    };
  };
  instances.nginx-system = {
    ingress-public = { ... };
    ingress-private = { ... };
  };
};
# shop.nix
config = {
  # Contribute to shared options:
  tenants.acme.namespaces = [ "shop-prod" "shop-stage" ];

  # Declare instances — infra is auto-wired:
  instances.shop-prod = {
    my-app = {
      package = packages.python-app;
      config = {
        image.tag = "v2.8.1";
        replicas = 16;
        ingress.hosts = [ { host = "app.example.com"; } ];
        # No clusterIssuer — DI wires it
        # No storageClass — DI wires it
        # No ingressClass — DI wires it
      };
    };
    postgres = { package = packages.postgres; };
    valkey = { package = packages.valkey; };
  };
};
One team writes tenants.acme.namespaces = [ "webshop-prod" "webshop-stage" ].
Another team writes tenants.acme.namespaces = [ "dashboard-prod" ].
The module system merges the lists. The network-policies package reads the merged
result and generates Cilium policies that allow traffic between all namespaces in the tenant.
No team coordinated. The compiler did it.
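The merge itself is ordinary option merging. A minimal Python sketch of its shape (this is not the Nix module system, and merge_modules is an invented name; list options here simply concatenate):

```python
def merge_modules(modules: list[dict]) -> dict:
    # Each module contributes lists under dotted option paths;
    # the merged config is the concatenation per path.
    merged: dict[str, list] = {}
    for mod in modules:
        for path, values in mod.items():
            merged.setdefault(path, []).extend(values)
    return merged

shop = {"tenants.acme.namespaces": ["webshop-prod", "webshop-stage"]}
dash = {"tenants.acme.namespaces": ["dashboard-prod"]}

print(merge_modules([shop, dash])["tenants.acme.namespaces"])
# ['webshop-prod', 'webshop-stage', 'dashboard-prod']
```

The network-policies package then reads that merged list, exactly as a Nix module would read a merged option value.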
Each team owns their module. The cluster definition just imports them.
kix.buildCluster {
  name = "prod-eu1";
  modules = [
    ./platform.nix   # CoreDNS, ingress, cert-manager, storage, network policies
    ./operators.nix  # Postgres operator, Cilium, OpenEBS
    ./backup.nix     # Velero backup schedules
    ./monitoring.nix # Prometheus, Grafana, Loki
    ./registry.nix   # Container registry
    ./webshop.nix    # Webshop app: prod + stage + demo
    ./dashboard.nix  # Dashboard: prod + stage
    ./services.nix   # Internal services
    {
      cluster.domain = "prod-eu1.example.com";
      monitoring.enabled = true;
    }
  ];
}
When you compile the cluster, every resource becomes a content-addressed hash. Every reference between resources is a hash dependency. The entire cluster is a single root hash that depends on everything.
Three commands. No surprises.
$ kix build prod-eu1   # 1. Compile: merge modules, resolve DI,
                       #    type-check, validate cross-references,
                       #    generate manifests

$ kix diff prod-eu1    # 2. Diff: compare against live cluster
                       #    Shows exactly what will change:

  identical:    187
  modified:       4   ← you review these
  kix-only:       2   ← will be created
  cluster-only:   1   ← not managed by kix (orphan)

  ~ Deployment/coredns
      image: coredns:1.12.3 → coredns:1.13.1
  ~ Service/ingress-public
      externalTrafficPolicy: Local → Cluster

$ kix deploy prod-eu1  # 3. Deploy: applies verified changes,
                       #    ensures cluster health throughout
kix diff and kix deploy are under active development.
kix deploy goes beyond kubectl apply — it understands the dependency
graph, deploys in the correct wave order, monitors cluster health during rollout,
and can halt or roll back if something goes wrong mid-deploy.
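Wave ordering over the dependency graph can be sketched as a layered topological sort. Illustrative Python only; kix's real deploy logic handles health checks and rollback on top of this, and deploy_waves is an invented name:

```python
def deploy_waves(deps: dict[str, set[str]]) -> list[list[str]]:
    # deps maps each resource to the resources it depends on.
    # Each wave contains resources whose dependencies have all
    # been deployed in earlier waves.
    remaining, done, waves = dict(deps), set(), []
    while remaining:
        wave = sorted(r for r, d in remaining.items() if d <= done)
        if not wave:
            raise ValueError("dependency cycle")
        waves.append(wave)
        done.update(wave)
        for r in wave:
            del remaining[r]
    return waves

deps = {
    "ServiceAccount/api": set(),
    "ConfigMap/api": set(),
    "Deployment/api": {"ServiceAccount/api", "ConfigMap/api"},
    "Service/api": {"Deployment/api"},
}
print(deploy_waves(deps))
# [['ConfigMap/api', 'ServiceAccount/api'], ['Deployment/api'], ['Service/api']]
```

Because every reference is already a tracked hash dependency, this graph comes for free from compilation; no annotations or sync-wave labels are needed.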
kix doesn't use AI. It doesn't require AI. It works perfectly fine with humans writing every line. But it creates something that didn't exist before: a verification layer for Kubernetes configuration.
Today, if an AI agent generates a hundred Kubernetes resources, there's nothing between "looks plausible" and "hits your cluster." No tool checks that selectors match, that RBAC rules are correct, that cross-references resolve. You review YAML by eye and hope.
A compiler changes that equation. Whether a human writes the config or an AI does, the compiler catches the same classes of errors. That's the path toward safely using AI for infrastructure — not by trusting the AI, but by verifying its output the same way you'd verify anyone's output.