The deployment pipeline

From compiled cluster to deployed, verified, rollback-ready infrastructure.

Every resource is a hash

When kix compiles your cluster, every K8s resource becomes a content-addressed hash in the Nix store. The hash depends on the resource's content AND on every resource it references. This is the same mechanism that makes Nix packages reproducible.

# Each resource becomes a store path:
/nix/store/abc123...-Deployment-kube-system-coredns
/nix/store/def456...-Service-kube-system-coredns
/nix/store/ghi789...-ConfigMap-kube-system-coredns

# The Service references the Deployment (via .out.selector).
# So the Service's hash INCLUDES the Deployment's hash.
# Change the Deployment → its hash changes → the Service's hash changes too.

# The entire cluster is ONE root hash:
/nix/store/fff000...-Activation-prod-eu1
# This hash depends on every resource in the cluster.
# Same code → same hash. Different code → different hash. Always.
The hash IS the version. No version numbers. No tags. No "latest." The hash is a mathematical guarantee that this exact cluster state was produced from this exact code. Bit-for-bit reproducible. Verifiable by anyone.
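
This propagation works like a Merkle tree: a resource's hash covers its own content plus the hashes of everything it references. A minimal Python sketch of the idea (the `resource_hash` helper, the resource shapes, and the 12-character digests are illustrative, not kix's actual store-path scheme):

```python
import hashlib
import json

def resource_hash(resource: dict, dep_hashes: list[str]) -> str:
    """Hash a resource's canonical content together with the hashes
    of every resource it references (Merkle-style propagation)."""
    h = hashlib.sha256(json.dumps(resource, sort_keys=True).encode())
    for dep in sorted(dep_hashes):  # order-independent over references
        h.update(dep.encode())
    return h.hexdigest()[:12]

deployment = {"kind": "Deployment", "name": "coredns", "image": "coredns/coredns:1.12.3"}
dep_hash = resource_hash(deployment, [])

# The Service references the Deployment, so its hash includes dep_hash.
service = {"kind": "Service", "name": "coredns", "selector": "coredns"}
svc_hash = resource_hash(service, [dep_hash])

# Change the Deployment -> its hash changes -> the Service's hash changes too,
# even though the Service's own content is untouched.
deployment["image"] = "coredns/coredns:1.11.1"
new_dep_hash = resource_hash(deployment, [])
new_svc_hash = resource_hash(service, [new_dep_hash])
assert dep_hash != new_dep_hash and svc_hash != new_svc_hash
```

Because the root hash sits at the top of this tree, any change anywhere in the cluster changes it, and no change leaves it untouched.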

Deterministic diff against the live cluster

Before you apply anything, you can see exactly what will change. The diff compares compiled manifests against what's actually running. Same input code always produces the same diff.

# Diff compiled vs live
$ kix diff prod-eu1

═══════════════════════════════════════════════════
 kix-diff: ./result vs cluster-dump/
═══════════════════════════════════════════════════

 Summary:
   identical:    187
   modified:       4
   kix-only:       2    (will be created)
   cluster-only:   1    (not managed by kix)

 ~ Deployment/coredns @ kube-system
   $.spec.replicas:
     kix:     3
     cluster: 2
   $.spec.template.spec.containers[0].image:
     kix:     coredns/coredns:1.12.3
     cluster: coredns/coredns:1.11.1

 + CiliumClusterwideNetworkPolicy/allow-within-tenant-acme
   (new — will be created)

 - ConfigMap/legacy-override @ kube-system
   (cluster-only — not managed by kix, possible manual leftover)
Post this diff as a PR comment. Every reviewer sees exactly what will change. Not "here's the code diff" (which doesn't tell you what happens to the cluster), but "here's the cluster diff" (which tells you everything).
This is why AI-generated infra becomes safe. An AI agent writes 15 new resources? The diff shows exactly those 15 resources and nothing else. No "I wonder what else it touched." No "did it change something in another namespace?" The diff is complete and deterministic. Review it like you'd review a NixOS config change — audit before apply, rollback if needed.
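
The core of such a diff is a pure function of its two inputs. A sketch in Python, with resources keyed by Kind/name and bucketed like the summary above (an illustration of the idea, not kix-diff's implementation):

```python
def diff_manifests(compiled: dict, live: dict) -> dict:
    """Classify resources (keyed by Kind/name) into the four buckets the
    summary reports. A pure function: same inputs, same diff, every time."""
    both = compiled.keys() & live.keys()
    return {
        "identical":    sorted(k for k in both if compiled[k] == live[k]),
        "modified":     sorted(k for k in both if compiled[k] != live[k]),
        "kix_only":     sorted(compiled.keys() - live.keys()),  # will be created
        "cluster_only": sorted(live.keys() - compiled.keys()),  # not managed by kix
    }

compiled = {"Deployment/coredns": {"replicas": 3}, "Service/coredns": {"port": 53}}
live     = {"Deployment/coredns": {"replicas": 2}, "ConfigMap/legacy":  {"flag": "on"}}
d = diff_manifests(compiled, live)
assert d["modified"]     == ["Deployment/coredns"]
assert d["kix_only"]     == ["Service/coredns"]
assert d["cluster_only"] == ["ConfigMap/legacy"]
```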

Detect drift before it bites you

Someone ran kubectl edit at 2am. A Helm upgrade changed a value you didn't expect. The cluster state has drifted from what your code says it should be. kix-diff finds this automatically.

# Run kix diff on a schedule (CI cron, or even locally):
$ kix diff prod-eu1

 ~ Deployment/coredns
   $.spec.replicas:
     kix:     3
     cluster: 2    ← someone scaled it down manually

# The compiled output says 3 replicas. The cluster has 2.
# This is drift. You see it immediately.
# Run kix deploy to fix it. Or update the code to match.
Because the output is deterministic, drift detection is just "compile, diff, alert if anything is modified." If the code hasn't changed and the diff isn't empty, something mutated the cluster outside of kix.
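
In code, the scheduled check reduces to a few lines (a sketch assuming manifests keyed by Kind/name, not the real kix CLI):

```python
def drifted(compiled: dict, live: dict) -> list[str]:
    """Resources whose live state no longer matches the compiled output.
    Run on a schedule; alert if the list is non-empty while the code
    hasn't changed since the last deploy."""
    return sorted(k for k in compiled if k in live and live[k] != compiled[k])

compiled = {"Deployment/coredns": {"replicas": 3}}
live     = {"Deployment/coredns": {"replicas": 2}}  # scaled down by hand
assert drifted(compiled, live) == ["Deployment/coredns"]
assert drifted(compiled, compiled) == []  # no drift when states match
```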

Perfect rollbacks

Every compilation produces a root hash. The previous compilation's root hash is still in the Nix store (or trivially reproducible from the git commit). Rolling back means pointing to the old hash. Every resource it depends on is still there.

# Current deployment:
Activation/prod-eu1   hash: sha256:9f3a...c7e2
├── Deployment/coredns  (image: 1.12.3)
├── Deployment/webshop  (image: v26.12.4)
└── ... 200+ resources

# Previous deployment (still in Nix store or reproduced from git):
Activation/prod-eu1   hash: sha256:4b1d...e8f0
├── Deployment/coredns  (image: 1.11.1)
├── Deployment/webshop  (image: v26.12.3)
└── ... 200+ resources

# Rollback = apply the old hash's manifests.
# The diff shows exactly what "rolling back" means:
#   coredns: 1.12.3 → 1.11.1
#   webshop: v26.12.4 → v26.12.3
# No guessing. No "what state was production in last Tuesday?"
Rollback is not an undo button. It's applying a known-good state. The hash graph guarantees that every dependency of that state is present and correct. You're not rolling back individual resources — you're applying a complete, verified cluster state that already worked before.
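
Under toy assumptions (a hypothetical `store` mapping root hashes to complete manifest sets), the rollback plan is nothing more than a diff between two complete states:

```python
# Toy store: root hash -> complete manifest set for that compilation.
store = {
    "9f3a": {"Deployment/coredns": "1.12.3", "Deployment/webshop": "v26.12.4"},
    "4b1d": {"Deployment/coredns": "1.11.1", "Deployment/webshop": "v26.12.3"},
}

def rollback_plan(current_root: str, previous_root: str) -> dict:
    """Rolling back = applying the previous root's complete manifest set.
    The plan is just the diff between the two states: no guessing."""
    cur, prev = store[current_root], store[previous_root]
    return {k: (cur.get(k), v) for k, v in prev.items() if cur.get(k) != v}

plan = rollback_plan("9f3a", "4b1d")
assert plan == {
    "Deployment/coredns": ("1.12.3", "1.11.1"),
    "Deployment/webshop": ("v26.12.4", "v26.12.3"),
}
```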

Wave deployments from the dependency graph

The dependency graph knows the correct order. Namespaces before instances. CRDs before custom resources. Services before consumers. Deploy in waves without manual ordering.

Wave 0: Namespaces, CRDs, PriorityClasses, StorageClasses   (cluster-scoped foundations)
Wave 1: ServiceAccounts, ConfigMaps, Secrets, ClusterRoles  (no dependencies on other resources)
Wave 2: Operators (StackGres, Cilium, cert-manager)         (need ServiceAccounts and CRDs from Waves 0-1)
Wave 3: Databases, caches (StatefulSets)                    (need operators from Wave 2)
Wave 4: Applications (Deployments, Services, Ingresses)     (need databases and caches from Wave 3)
Wave 5: Monitoring (ServiceMonitors, PrometheusRules)       (need services from Wave 4 to scrape)

The graph computes this automatically from the hash dependencies. No manual wave annotations. No deploy-order config files.
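
Grouping a dependency graph into waves is a topological sort taken level by level. A sketch with Python's standard `graphlib` (the resource names and edges below are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical edges: resource -> resources it depends on.
deps = {
    "Namespace/acme": [],
    "CRD/sgclusters": [],
    "ServiceAccount/operator": ["Namespace/acme"],
    "Operator/stackgres": ["ServiceAccount/operator", "CRD/sgclusters"],
    "SGCluster/main-db": ["Operator/stackgres"],
    "Deployment/api": ["SGCluster/main-db"],
    "ServiceMonitor/api": ["Deployment/api"],
}

def waves(deps: dict[str, list[str]]) -> list[list[str]]:
    """Group resources into deploy waves: everything in wave N depends
    only on resources in earlier waves. No manual annotations needed."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    out = []
    while ts.is_active():
        ready = sorted(ts.get_ready())  # all nodes whose deps are satisfied
        out.append(ready)
        ts.done(*ready)
    return out

assert waves(deps)[0] == ["CRD/sgclusters", "Namespace/acme"]  # foundations first
assert waves(deps)[-1] == ["ServiceMonitor/api"]               # monitoring last
```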

Canary rollouts and PR-as-deployment

Because resources are content-addressed, two versions can coexist. Resources that didn't change share the same hash — they're literally the same resource, not a copy.

Canary rollout

Update one service's image. Only that Deployment's hash changes. Everything else (databases, caches, infrastructure) keeps the same hash. Deploy the new version alongside the old. Route a fraction of traffic. The unchanged resources are shared — not duplicated.

PR as deployment

A PR changes the api-server image. Compile the PR's branch. Compare hashes: only the api-server Deployment and its consumers changed. Deploy those as a preview environment. Reuse all unchanged infrastructure (same hashes = same resources). Review on real infrastructure, not a mock.

The key insight: same hash = same resource. If your PR doesn't change the database config, the database hash is identical to production's. You can safely point the PR environment at the production database (or a clone) without deploying a separate one. The hash tells you what's shared and what's different.
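
Computing what a PR actually changed is a hash comparison (a sketch; the resource keys and hash values are made up):

```python
def changed_resources(prod: dict[str, str], pr: dict[str, str]) -> list[str]:
    """Same hash = same resource. A preview environment only needs to
    deploy resources whose hash differs from production's; everything
    else is shared, not copied."""
    return sorted(k for k, h in pr.items() if prod.get(k) != h)

prod = {"Deployment/api-server": "abc1", "Deployment/postgres": "def2", "Service/api": "ghi3"}
pr   = {"Deployment/api-server": "zzz9", "Deployment/postgres": "def2", "Service/api": "ghi3"}
assert changed_resources(prod, pr) == ["Deployment/api-server"]
```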

Trivial GitOps

Git commit → CI compiles → new root hash → diff posted to PR → merge → apply. That's the entire workflow.

# .github/workflows/kix.yml (sketch)
on:
  pull_request:
  push:
    branches: [main]

jobs:
  diff:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - run: kix build prod-eu1
      - run: kix diff prod-eu1 > diff.md
      - run: gh pr comment --body "$(cat diff.md)"

  deploy:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - run: kix deploy prod-eu1

# That's it. The compiler does the verification.
# The diff gives the review artifact.
# The hash gives the audit trail.
No ArgoCD. No Flux. No sync controllers. kix deploy understands the dependency graph, deploys in wave order, monitors health during rollout, and halts if something goes wrong. The verification happened at compile time. The deploy is an informed, safe operation.
Note: kix diff and kix deploy are under active development. The compiler and resource graph are production-ready today. The CLI tooling for diffing and deploying is being built to take full advantage of the graph — wave ordering, health checks, automatic rollback, and more.

Locked inputs. Reproducible forever.

Every input — Helm charts, container image references, external configs, even the kix library itself — is locked with a hash. The flake.lock file pins everything. Rebuild the same commit in 6 months and get bit-for-bit identical output.

Locked Helm charts

fetchHelmChart requires a content hash. If the upstream chart changes, the build fails until you explicitly update the hash. No silent upgrades.
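
The principle, stripped of Nix specifics, fits in a few lines of Python (a generic sketch of content locking, not kix's `fetchHelmChart`):

```python
import hashlib

def fetch_locked(fetch, expected_sha256: str) -> bytes:
    """Content-locked fetch: fail loudly unless the downloaded artifact
    matches the pinned hash. Upstream changes can never sneak in silently."""
    data = fetch()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:
        raise RuntimeError(f"hash mismatch: expected {expected_sha256}, got {actual}")
    return data

chart = b"chart tarball bytes"
pinned = hashlib.sha256(chart).hexdigest()
assert fetch_locked(lambda: chart, pinned) == chart  # matches the lock
```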

Locked dependencies

Every flake input (kixpkgs, nixpkgs, external modules) is pinned by commit hash in flake.lock. Update explicitly with nix flake update. Review the diff.

No hidden state

No Terraform state files. No Helm release secrets. No ArgoCD sync state. The compiled output depends only on the code in the repo. Clone, build, get the same result.

Audit trail

git log + root hash = complete audit trail. "What was deployed on March 15?" → check out that commit, build, compare hashes. The hash either matches (same state) or doesn't (drifted).

The full picture

Code        Teams write independent modules with typed options.
Compile     The compiler merges, type-checks, resolves dependencies, validates.
Hash        Every resource is content-addressed. The entire cluster is one root hash.
Diff        Compare compiled output against the live cluster. Deterministic.
Review      Post the diff to a PR. Approve with confidence.
Deploy      kix deploy: wave-ordered, health-monitored, halt-on-failure.
Verify      Re-diff to detect drift. Hashes prove the state matches the code.
Roll back   Deploy the previous root hash. Every dependency is guaranteed present.
Reproduce   Same commit → same hash → same cluster state. Always. Forever.
This is what a compiler for Kubernetes looks like.
Verified infrastructure. Not hoped-for infrastructure.

Let the AI write the code. The compiler verifies it. The diff proves it. The hash guarantees rollback. That's infrastructure you can trust.