Things the compiler catches, prevents, or enables. All from real code.
Every mistake below would silently pass in Helm/Kustomize and surface as a runtime failure. kix catches them before anything reaches the cluster.
# Someone sets replicas to a string
config.replicaCount = "three";

error: The option 'replicaCount' is of type int but a string was provided.
in webshop.nix, line 42
# Two modules set the same option to different values
# module A: config.replicaCount = 3;
# module B: config.replicaCount = 5;

error: The option 'replicaCount' has conflicting definitions
in module A (3) and module B (5).

# Fix: one uses mkDefault (overridable default)
# or mkForce (platform policy override)
# Two postgres instances in the same namespace
instances.my-namespace = {
  app     = { package = packages.python-app; };  # needs "postgres"
  pg-main = { package = packages.postgres; aliases = [ "postgres" ]; };
  pg-read = { package = packages.postgres; aliases = [ "postgres" ]; };
};

error: ambiguous auto-resolution for 'postgres' in namespace 'my-namespace':
multiple matches. Use explicit 'deps' to disambiguate.
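The resolution rule behind this error is simple to model: a dependency name matches an instance either by key or by alias, and anything other than exactly one match is an error. A minimal Python sketch of that idea (an illustrative model, not kix's actual resolver):

```python
def resolve(dep_name, namespace_instances):
    """Model of alias-based auto-resolution: a dependency name matches an
    instance by its key or by one of its aliases; zero or multiple matches
    are errors."""
    matches = [
        key for key, inst in namespace_instances.items()
        if key == dep_name or dep_name in inst.get("aliases", [])
    ]
    if len(matches) > 1:
        raise ValueError(
            f"ambiguous auto-resolution for '{dep_name}': "
            f"multiple matches {sorted(matches)}. Use explicit 'deps'."
        )
    if not matches:
        raise KeyError(f"no '{dep_name}' found")
    return matches[0]

# The situation from the example above: two instances alias "postgres".
instances = {
    "app":     {},
    "pg-main": {"aliases": ["postgres"]},
    "pg-read": {"aliases": ["postgres"]},
}
```

Resolving `"pg-main"` succeeds (unique key match); resolving `"postgres"` raises, because both `pg-main` and `pg-read` claim the alias.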
# Package accesses .out.sentinelPort on a redis that doesn't have it
# (alias was set: redis can substitute for valkey)

error: attribute 'sentinelPort' missing
at packages/python-app/default.nix:87
in: cache.out.sentinelPort

# Signal: this redis can't substitute for valkey here.
# The operator knows to either use real valkey or remove the sentinel code.
# Package requires postgres (no default)
{ scope, lib, postgres }:   # ← no "? null" fallback

# No postgres instance in namespace or cluster
error: no 'postgres' found in cluster for namespace 'my-namespace'

# Fix: add a postgres instance, or change the arg to `postgres ? null`
# Package checks that the requested postgres version is supported
config.postgres.version = "11";

error: PostgreSQL 11 reached end-of-life on 2023-11-09.
Supported versions: 14, 15, 16, 17.
in packages/postgres/default.nix, check for option 'postgres.version'

# Package authors define these checks in their option schemas:
# version = mkOption { type = types.str; check = v: elem v supportedVersions; };
# Package detects a deprecated field and warns at compile time

warning: 'ingress.host' is deprecated since ingress-nginx v1.12.
Use 'ingress.hosts' (list) instead.
This will become an error in the next major kix release.
in webshop.nix, line 38

# The build succeeds but the warning is loud. No silent breakage later.
# kubeconform validates against the target K8s API version

error: resource HorizontalPodAutoscaler/webshop-backend uses apiVersion
'autoscaling/v2beta2', which was removed in Kubernetes 1.26.
Use 'autoscaling/v2' instead.
target cluster version: 1.31

# Catches API deprecations BEFORE you deploy to a new K8s version.
# Upgrade your cluster's K8s version with confidence.
# Package option has a check function
config.replicaCount = -3;

error: The value of option 'replicaCount' fails the check:
value must be >= 0 (got: -3).
in packages/coredns/default.nix, option 'replicaCount'
# PVC is in namespace "other-ns", but immich is in "immich"
instances.other-ns = {
  storage = { package = packages."persistent-volume-claim"; };
};
instances.immich = {
  immich = { package = packages.immich; };
  # DI resolves "storage" to other-ns/storage
};

error: instance 'immich' (namespace 'immich') depends on 'storage'
(namespace 'other-ns') which declares meta.scope = "namespace"
— cross-namespace reference not allowed

# PVCs can't be mounted cross-namespace. The compiler knows this because
# the PVC package declares meta.scope = "namespace". Move the PVC into
# the immich namespace to fix.
# Immich validates that the injected PVC is large enough
instances.immich = {
  storage = {
    package = packages."persistent-volume-claim";
    config.size = "10Gi";
  };
  immich = { package = packages.immich; };
};

warning: immich 'immich' library storage is 10Gi —
recommended minimum is 50Gi for photo libraries

# The PVC package exposes out.size. The immich package reads it
# and warns if undersized. Build succeeds, but the warning is loud.
Delete a Deployment and K8s recreates it. Delete a PVC and the photos are gone. kix treats stateful resources differently — they're first-class instances in the cluster definition, not hidden inside app packages.
# PVCs are explicit instances — visible to the cluster operator
instances.immich = {
  storage = {
    package = packages."persistent-volume-claim";
    config = {
      size = "500Gi";
      accessModes = [ "ReadWriteOnce" ];
    };
  };
  immich = {
    package = packages.immich;
    # "storage" resolves via DI — immich mounts it at /usr/src/app/upload
    # The dependency chain: immich → PVC → StorageClass
  };
};

# The PVC package declares metadata:
meta = {
  lifecycle = "stateful";  # → kix-diff applies protection rules
  scope = "namespace";     # → cross-namespace mounts blocked at build time
};
The module system has a priority mechanism. Platform teams set defaults that app teams can override. But they can also force values that can't be overridden.
# Platform team sets a default (priority 1000 — overridable)
config.instances.*.*.config.resources.limits.memory = mkDefault "512Mi";

# App team overrides it (normal priority — wins over mkDefault)
config.instances.webshop-prod.app.config.resources.limits.memory = "1Gi";
# → Result: "1Gi" ✓

# Security team forces a policy (priority 50 — wins over everything)
config.instances.*.*.config.securityContext.runAsNonRoot = mkForce true;

# App team tries to override it
config.instances.webshop-prod.app.config.securityContext.runAsNonRoot = false;
# → Result: true (mkForce wins)
The override hierarchy: mkDefault (overridable default) < normal value (team config) < mkForce (enforced policy). It's the same mechanism NixOS uses to configure an entire Linux distribution.
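The resolution rule is easy to model: every definition carries a numeric priority, the lowest number wins, and conflicting values at the winning priority are an error. A minimal Python sketch, using the conventional NixOS priority numbers (1000 for mkDefault, 100 for plain values, 50 for mkForce); this is an illustration of the rule, not the module system itself:

```python
MK_DEFAULT, NORMAL, MK_FORCE = 1000, 100, 50  # lower number wins

def merge(definitions):
    """definitions: list of (priority, value) pairs for one option.
    The lowest priority wins; two different values at the winning
    priority is a conflict error."""
    best = min(p for p, _ in definitions)
    winners = {v for p, v in definitions if p == best}
    if len(winners) > 1:
        raise ValueError(f"conflicting definitions at priority {best}: {winners}")
    return winners.pop()
```

With this model, a plain `"1Gi"` beats `mkDefault "512Mi"`, `mkForce true` beats a plain `false`, and two plain values that disagree reproduce the "conflicting definitions" error shown earlier.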
Teams contribute to shared options. The module system merges them automatically. No central coordination file that everyone has to edit.
# webshop.nix
tenants.acme.namespaces = [ "webshop-prod" "webshop-stage" "webshop-demo" ];
priorities.webshop-prod.value = 1000;
priorities.webshop-stage.value = 500;

# dashboard.nix
tenants.acme.namespaces = [ "dashboard-prod" "dashboard-stage" ];
priorities.dashboard-prod.value = 1000;

# monitoring.nix
monitoring.enabled = true;
# tenants.acme.namespaces =
#   [ "webshop-prod" "webshop-stage" "webshop-demo"
#     "dashboard-prod" "dashboard-stage" ]
# → fed to network-policies package
# → generates Cilium policy allowing
#   traffic between ALL these namespaces

# priorities = {
#   webshop-prod.value = 1000;
#   webshop-stage.value = 500;
#   dashboard-prod.value = 1000;
# }
# → fed to priority-classes package
# → generates PriorityClass resources
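The merge behavior shown above follows a simple rule: list options concatenate, attribute-set options union, and scalars must agree. A rough Python sketch of that rule (an illustrative model, not the module system's implementation):

```python
def merge_module_configs(modules):
    """Merge per-module option fragments: lists concatenate,
    dicts merge recursively, scalars must not conflict."""
    merged = {}
    for mod in modules:
        for key, value in mod.items():
            if key not in merged:
                merged[key] = value
            elif isinstance(value, list) and isinstance(merged[key], list):
                merged[key] = merged[key] + value
            elif isinstance(value, dict) and isinstance(merged[key], dict):
                merged[key] = merge_module_configs([merged[key], value])
            elif merged[key] != value:
                raise ValueError(f"conflicting definitions for '{key}'")
    return merged

# Fragments from webshop.nix and dashboard.nix, as Python dicts:
webshop = {
    "namespaces": ["webshop-prod", "webshop-stage", "webshop-demo"],
    "priorities": {"webshop-prod": 1000, "webshop-stage": 500},
}
dashboard = {
    "namespaces": ["dashboard-prod", "dashboard-stage"],
    "priorities": {"dashboard-prod": 1000},
}
```

Merging the two fragments yields the combined namespace list and priority map that the network-policies and priority-classes packages consume.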
Infrastructure packages can expose helper functions through .out.
Consuming packages pipe resources through them. No manual annotation plumbing.
# The cluster-issuer package exposes an args transformer:
# withCert adds the cert-manager annotation to mkResource input args
cluster-issuer.out.withCert = args: args // {
  annotations = (args.annotations or {}) // {
    "cert-manager.io/cluster-issuer" = self.out.name;
  };
};

# An app package pipes mkResource args through it:
ingress = scope.mkResource ({
  kind = "Ingress";
  spec.rules = ...;
} |> clusterIssuer.out.withCert);

# If there's no cluster-issuer in the cluster (DI returns null):
ingress = scope.mkResource ({
  kind = "Ingress";
  spec.rules = ...;
} |> (if clusterIssuer != null then clusterIssuer.out.withCert else lib.id));
# ← no TLS annotation, no crash
Two forms of .out: transformers like withCert for complex cases, and plain values like storageClasses.out.storageClassName or ingressNginx.out.ingressClassName when a single field is enough. The dependency is tracked automatically either way.
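The transformer pattern itself is ordinary function application over an args dictionary. A Python rendering of the same idea (a model of the pattern, not the kix API; the issuer name "letsencrypt" is a made-up example):

```python
def with_cert(issuer_name):
    """Return a transformer that adds the cert-manager annotation
    to a resource-args dict, preserving existing annotations."""
    def transform(args):
        annotations = {
            **args.get("annotations", {}),
            "cert-manager.io/cluster-issuer": issuer_name,
        }
        return {**args, "annotations": annotations}
    return transform

def identity(args):
    """Fallback when no cluster-issuer exists (DI returned null)."""
    return args

def mk_resource(args, transformer=identity):
    """Stand-in for scope.mkResource: pipe args through the transformer."""
    return transformer(args)
```

Piping through `with_cert(...)` adds the annotation; falling back to `identity` produces the resource untouched, mirroring the `lib.id` branch above.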
Run two ingress-nginx controllers in the same namespace. Each gets its own ServiceAccount, ConfigMap, Deployment, Service, RBAC. No name collisions. No manual deconfliction.
instances.nginx-system = {
ingress-public = {
package = packages.ingress-nginx;
config = {
replicaCount = 3;
serviceType = "LoadBalancer";
ingressClassName = "public";
};
};
ingress-private = {
package = packages.ingress-nginx;
config = {
replicaCount = 2;
serviceType = "ClusterIP";
ingressClassName = "private";
config."whitelist-source-range" = "10.68.0.0/14";
};
};
};
# The package uses scope.instanceName to prefix all resource names:
# ingress-public → public-controller (Deployment), public (IngressClass), ...
# ingress-private → private-controller (Deployment), private (IngressClass), ...
# Same package, different config, separate resources.
Don't want to rewrite everything? kix can wrap Helm charts. The chart's resources become kix resources with full dependency tracking.
# Use an existing Helm chart as a kix package
cert-manager = {
  package = kix.fromHelmChart {
    chart = kix.fetchHelmChart {
      repo = "https://charts.jetstack.io";
      name = "cert-manager";
      version = "v1.17.2";
      hash = "sha256-abc123...";  # ← content hash, no silent upgrades
    };
    values = {
      installCRDs = true;
      dns01RecursiveNameservers = "185.12.64.1:53";
    };
  };
};

# The Helm chart's resources are now part of the kix dependency graph.
# Other packages can reference them via .out.name, .out.fqdn, etc.
# kix strips Helm-specific labels/annotations automatically.
Your cluster already has a database, a cache, a cert-manager. You don't want kix to redeploy them. Import them: kix knows they exist, other packages depend on them via DI, but kix doesn't touch them.
# The database is managed by another team. Don't redeploy it.
instances.database = {
  postgres = {
    package = kix.mkImport { from = packages."stackgres-cluster"; };
  };
};

# Your app depends on postgres normally — DI resolves to the import.
instances.my-app = {
  app = { package = packages.my-app; };  # gets postgres.out.fqdn via DI
};

# kix runs the postgres package in "import mode" — just enough to
# derive out.name, out.fqdn, out.port. No manifests. No store paths.
# Only a lightweight marker CR is deployed for visibility.
# Your first kix cluster definition:
instances = {
  cache.valkey  = { package = kix.mkImport { ... }; };
  db.postgres   = { package = kix.mkImport { ... }; };
  infra.ingress = { package = kix.mkImport { ... }; };

  # Only this is kix-managed:
  app.my-app = { package = packages.my-app; };
};
# Replace imports with real packages, one at a time:
instances = {
  cache.valkey  = { package = packages.valkey; };
  db.postgres   = { package = kix.mkImport { ... }; };
  infra.ingress = { package = packages.ingress-nginx; };
  app.my-app    = { package = packages.my-app; };
};

# Same DI, same out shape. Downstream consumers
# don't change. Just swap the line.
The out contract is identical between imports and real packages — downstream code never changes.
prod, staging, demo — same structure, different values. One function. Three environments. Consistent by construction.
# Define the service stack as instance sets (functions)
appStack = { env, ... }: {
  app = {
    package = packages.python-app;
    config = {
      image.tag = env.config.imageTag;
      replicas = env.config.replicas;
    };
  };
};

dataStack = { env, ... }: {
  postgres = {
    package = packages.postgres;
    config = { storage.size = env.config.pgStorage; };
  };
  valkey = { package = packages.valkey; };
};

# Stamp out three environments with kix.mkEnv
instances =
  kix.mkEnv "prod" {
    config = { imageTag = "v26.12.4"; replicas = 16; pgStorage = "400Gi"; };
  } { app = appStack; data = dataStack; }
  // kix.mkEnv "stage" {
    config = { imageTag = "v26.12.5-rc1"; replicas = 4; pgStorage = "100Gi"; };
  } { app = appStack; data = dataStack; }
  // kix.mkEnv "demo" {
    config = { imageTag = "v26.12.4"; replicas = 2; pgStorage = "50Gi"; };
  } { app = appStack; data = dataStack; };

# Produces namespaces: prod-app, prod-data, stage-app, stage-data, demo-app, demo-data
# All DI-wired independently. Change the stack → all envs update.
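The stamping mechanism reduces to: apply each stack function to the environment's config and key the result under "<env>-<stack>". A rough Python model of that idea (illustrative only; the real mkEnv is a Nix function):

```python
def mk_env(name, env, stacks):
    """Instantiate each stack function with the env's config and
    place the result in the namespace '<env>-<stack>'."""
    return {f"{name}-{stack}": build(env) for stack, build in stacks.items()}

# Simplified stand-ins for appStack / dataStack:
app_stack  = lambda env: {"app": {"replicas": env["replicas"]}}
data_stack = lambda env: {"postgres": {"storage": env["pgStorage"]}}
stacks = {"app": app_stack, "data": data_stack}

# Stamp out three environments; dict-merge mirrors the Nix `//` operator.
instances = {
    **mk_env("prod",  {"replicas": 16, "pgStorage": "400Gi"}, stacks),
    **mk_env("stage", {"replicas": 4,  "pgStorage": "100Gi"}, stacks),
    **mk_env("demo",  {"replicas": 2,  "pgStorage": "50Gi"},  stacks),
}
```

The result is six namespaces from two stack definitions; editing a stack function changes every environment at once.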
The compiler doesn't just check individual resources. It validates relationships between them — the things that are impossible to catch in YAML linting.
# The compiler validates all of these automatically:
✓ ServiceAccount references resolve     (Deployment → SA exists)
✓ RoleBinding → Role/ClusterRole        (binding target exists)
✓ ConfigMap volume refs                 (mounted ConfigMaps exist)
✓ Service selectors match pod labels    (selector → template labels)
✓ HPA scaleTargetRef                    (HPA → Deployment/StatefulSet exists)
✓ No duplicate (kind, namespace, name)  (collision detection)
✓ kubeconform schema validation         (against K8s API schemas)
✓ Managed-by labels present             (every resource tagged)
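The selector check, for instance, is a subset test: every key/value pair in the Service's selector must appear in the Deployment's pod-template labels. Sketched in Python over plain manifest dicts (illustrative, not the compiler's code):

```python
def selector_matches(service, deployment):
    """A Service selects a Deployment's pods iff every selector
    key/value pair appears in the pod template's labels."""
    selector = service["spec"]["selector"]
    labels = deployment["spec"]["template"]["metadata"]["labels"]
    return all(labels.get(k) == v for k, v in selector.items())

# Example manifests (names are hypothetical):
svc = {"spec": {"selector": {"app": "webshop"}}}
dep_ok = {"spec": {"template": {"metadata":
          {"labels": {"app": "webshop", "tier": "web"}}}}}
dep_bad = {"spec": {"template": {"metadata":
           {"labels": {"app": "webshop-backend"}}}}}
```

Extra labels on the pod template are fine (`dep_ok`); a mismatched value means the Service silently selects nothing at runtime, which is exactly the 3am failure the compiler catches at build time.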
Would you let Claude restructure your Helm charts? Let Copilot add RBAC across three environments? With YAML, there's nothing between "looks right" and "hits production." With kix, the compiler is the gatekeeper.
kubectl apply succeeds.
And at 3am you discover the Service selector doesn't match, the ClusterRoleBinding
points to a Role that doesn't exist, and staging is talking to the production database.
There's nothing between "AI generated it" and "it's running in your cluster."
With kix, every mistake an AI can make has a safety net:
# A concrete example:
# You say:   "Add Paperless with postgres, valkey, and a 200Gi PVC to my cluster"
# AI writes: a kix package + cluster wiring (maybe 80 lines)
# You run:

$ nix build .#cluster-prod
error: attribute 'sentinelPort' missing
at packages/paperless/default.nix:47
in: cache.out.sentinelPort

# AI used valkey's sentinel API but the cluster runs plain redis.
# Caught at compile time. No deploy. No outage. Fix and rebuild.

$ nix build .#cluster-prod
✓ built

$ kix diff cluster-prod
+ Namespace/paperless
+ PersistentVolumeClaim/paperless-storage   200Gi, zfs-bulk
+ Deployment/paperless
+ Service/paperless
+ Ingress/paperless                         docs.example.com
+ StatefulSet/paperless-postgres
+ StatefulSet/paperless-valkey
identical: 187

# You see exactly what the AI created. Nothing else changed.
# Review the diff. Deploy with confidence. Roll back if needed.