From compiled cluster to deployed, verified, rollback-ready infrastructure.
When kix compiles your cluster, every K8s resource becomes a content-addressed path in the Nix store. The hash depends on the resource's own content AND on the hash of every resource it references. This is the same mechanism that makes Nix packages reproducible.
```
# Each resource becomes a store path:
/nix/store/abc123...-Deployment-kube-system-coredns
/nix/store/def456...-Service-kube-system-coredns
/nix/store/ghi789...-ConfigMap-kube-system-coredns

# The Service references the Deployment (via .out.selector).
# So the Service's hash INCLUDES the Deployment's hash.
# Change the Deployment → its hash changes → the Service's hash changes too.

# The entire cluster is ONE root hash:
/nix/store/fff000...-Activation-prod-eu1

# This hash depends on every resource in the cluster.
# Same code → same hash. Different code → different hash. Always.
```
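The reference-transitive hashing can be sketched in a few lines. This is a toy model, not kix's actual store-path derivation: a resource's hash covers its own content plus the hashes of everything it references, so an upstream change propagates to every downstream hash.

```python
import hashlib
import json

def resource_hash(name, content, deps=()):
    """Hash a resource's name and content together with the hashes of
    everything it references, so upstream changes propagate downstream."""
    payload = json.dumps(
        {"name": name, "content": content, "deps": sorted(deps)},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# The Service depends on the Deployment's hash:
deployment = resource_hash("coredns-deploy", {"image": "coredns/coredns:1.12.3"})
service    = resource_hash("coredns-svc", {"port": 53}, deps=[deployment])

# Bump the Deployment's image → its hash changes → so does the Service's:
deployment2 = resource_hash("coredns-deploy", {"image": "coredns/coredns:1.12.4"})
service2    = resource_hash("coredns-svc", {"port": 53}, deps=[deployment2])
assert service != service2
```

The same construction, applied all the way up, is what makes the cluster's root hash depend on every resource beneath it.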
Before you apply anything, you can see exactly what will change. The diff compares compiled manifests against what's actually running. Same input code always produces the same diff.
```
# Diff compiled vs live
$ kix diff prod-eu1
═══════════════════════════════════════════════════
kix-diff: ./result vs cluster-dump/
═══════════════════════════════════════════════════
Summary:
  identical:    187
  modified:       4
  kix-only:       2  (will be created)
  cluster-only:   1  (not managed by kix)

~ Deployment/coredns @ kube-system
    $.spec.replicas:
      kix:     3
      cluster: 2
    $.spec.template.spec.containers[0].image:
      kix:     coredns/coredns:1.12.3
      cluster: coredns/coredns:1.11.1

+ CiliumClusterwideNetworkPolicy/allow-within-tenant-acme (new — will be created)
- ConfigMap/legacy-override @ kube-system (cluster-only — not managed by kix, possible manual leftover)
```
Someone ran kubectl edit at 2am. A Helm upgrade changed a value you
didn't expect. The cluster state has drifted from what your code says it should be.
kix-diff finds this automatically.
```
# Run kix diff on a schedule (CI cron, or even locally):
$ kix diff prod-eu1

~ Deployment/coredns
    $.spec.replicas:
      kix:     3
      cluster: 2   ← someone scaled it down manually

# The compiled output says 3 replicas. The cluster has 2.
# This is drift. You see it immediately.
# Run kix deploy to fix it. Or update the code to match.
```
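The core of drift detection is a recursive comparison of compiled manifests against live cluster state. A minimal sketch (assuming plain nested dicts, not kix's real diff engine):

```python
def find_drift(compiled, live, path="$"):
    """Recursively compare compiled manifests to live cluster state,
    yielding (json-path, compiled-value, live-value) for each mismatch."""
    if isinstance(compiled, dict) and isinstance(live, dict):
        for key in compiled:
            yield from find_drift(compiled[key], live.get(key), f"{path}.{key}")
    elif compiled != live:
        yield (path, compiled, live)

compiled = {"spec": {"replicas": 3, "image": "coredns/coredns:1.12.3"}}
live     = {"spec": {"replicas": 2, "image": "coredns/coredns:1.12.3"}}

for path, want, got in find_drift(compiled, live):
    print(f"{path}: kix: {want}  cluster: {got}")
# → $.spec.replicas: kix: 3  cluster: 2
```

Because the compiled side is deterministic, the only thing that can make this report non-empty is a change in the cluster itself.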
Every compilation produces a root hash. The previous compilation's root hash is still in the Nix store (or trivially reproducible from the git commit). Rolling back means pointing to the old hash. Every resource it depends on is still there.
The dependency graph knows the correct order. Namespaces before instances. CRDs before custom resources. Services before consumers. Deploy in waves without manual ordering.
Because resources are content-addressed, two versions can coexist. Resources that didn't change share the same hash — they're literally the same resource, not a copy.
Update one service's image. Only that Deployment's hash changes. Everything else (databases, caches, infrastructure) keeps the same hash. Deploy the new version alongside the old. Route a fraction of traffic. The unchanged resources are shared — not duplicated.
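The sharing property is easy to demonstrate with content hashes alone. In this toy model (hypothetical resource names, not kix's store format), bumping one image leaves every other resource's hash untouched:

```python
import hashlib
import json

def h(obj):
    """Content hash of a resource spec."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()[:12]

v1 = {"api": {"image": "api:1.0"}, "db": {"image": "pg:16"}, "cache": {"image": "redis:7"}}
v2 = {**v1, "api": {"image": "api:1.1"}}   # only the api image changes

h1 = {name: h(spec) for name, spec in v1.items()}
h2 = {name: h(spec) for name, spec in v2.items()}

changed = [name for name in h1 if h1[name] != h2[name]]
print(changed)  # → ['api']
```

Identical hash means identical resource: the db and cache entries in both versions are the same store object, so running v1 and v2 side by side duplicates nothing.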
A PR changes the api-server image. Compile the PR's branch. Compare hashes: only the api-server Deployment and its consumers changed. Deploy those as a preview environment. Reuse all unchanged infrastructure (same hashes = same resources). Review on real infrastructure, not a mock.
Git commit → CI compiles → new root hash → diff posted to PR → merge → apply. That's the entire workflow.
```
# .github/workflows/kix.yml (sketch)
on:
  pull_request:
    jobs:
      diff:
        steps:
          - kix build prod-eu1
          - kix diff prod-eu1 > diff.md
          - gh pr comment --body "$(cat diff.md)"
  push:
    branches: [main]
    jobs:
      deploy:
        steps:
          - kix deploy prod-eu1

# That's it. The compiler does the verification.
# The diff gives the review artifact.
# The hash gives the audit trail.
```
kix deploy understands the dependency graph, deploys in wave order,
monitors health during rollout, and halts if something goes wrong.
The verification happened at compile time. The deploy is an informed, safe operation.
kix diff and kix deploy are under active development.
The compiler and resource graph are production-ready today. The CLI tooling for
diffing and deploying is being built to take full advantage of the graph —
wave ordering, health checks, automatic rollback, and more.
Every input — Helm charts, container image references, external configs, even the kix library itself — is locked with a hash. The flake.lock file pins everything. Rebuild the same commit in 6 months and get bit-for-bit identical output.
fetchHelmChart requires a content hash. If the upstream
chart changes, the build fails until you explicitly update the hash.
No silent upgrades.
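The fail-closed check behind that behavior is simple: refuse to use fetched content unless it matches the pinned hash. A minimal sketch (assumed helper, not fetchHelmChart's real code path):

```python
import hashlib

def verify_pinned(data: bytes, pinned_sha256: str) -> bytes:
    """Return the fetched bytes only if they match the pinned hash;
    otherwise fail the build."""
    actual = hashlib.sha256(data).hexdigest()
    if actual != pinned_sha256:
        raise ValueError(f"hash mismatch: expected {pinned_sha256}, got {actual}")
    return data

chart = b"apiVersion: v2\nname: coredns\n"
pin = hashlib.sha256(chart).hexdigest()

verify_pinned(chart, pin)  # bytes match the pin: build proceeds
try:
    verify_pinned(chart + b"tampered", pin)
except ValueError:
    print("build fails until the hash is explicitly updated")
```

An upstream republish of the same chart version with different bytes is indistinguishable from tampering here, which is exactly the point.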
Every flake input (kixpkgs, nixpkgs, external modules) is pinned by
commit hash in flake.lock. Update explicitly with
nix flake update. Review the diff.
No Terraform state files. No Helm release secrets. No ArgoCD sync state. The compiled output depends only on the code in the repo. Clone, build, get the same result.
git log + root hash = complete audit trail. "What was deployed on March 15?" → check out that commit, build, compare hashes. The hash either matches (same state) or doesn't (drifted).