RudderVirt

# Mental Model

# A build must be repeatable

If you've ever built an image for a class, you know how tedious the process is. By the time you need to rebuild it for next semester, half the context is gone. We hit this over and over, and decided there had to be a better way. That's why we require every image we build to be scripted and repeatable. Every module is an automated build, and every build must be reproducible from its YAML alone.

If you can't rebuild the same module by re-running its configuration months later, the module is a liability. Upstream packages get patched, base images move, an ISO checksum changes, a config drifts — and now the working artifact you have is the only one that exists, with no path back to it. Re-running the configuration must always be the answer.

Practical consequences of this principle, in roughly this order of importance:

  • No manual steps in the canonical path. Although we have a handbuild provisioner that pauses for interactive VNC access, treat it as a last resort: anything you can script with a shell or file provisioner should be. When handbuild is unavoidable (UI-only configuration, one-off vendor installers), the step's instructions must enumerate every click, keypress, and field value so the next person can reproduce the module without guessing.
  • Pin everything you can. Pin ISO checksums. Pin cloud-image URLs to specific dated paths instead of `latest`. Pin package versions for anything you can't tolerate breaking. `apt-get install nginx` is fine for "the latest nginx is OK"; pin a version when it isn't.
  • Treat the configuration as the source of truth. Every file the build needs (Autounattend, preseed, drivers, scripts, certs) goes in files or behind a stable URL. No "remember to upload X first."
  • Idempotent provisioners. A retried step shouldn't break the build. See Best practices.
  • Cleanup before snapshot. Captured state should be the same every time, not "whatever happened to be in /tmp and /var/log when we shut down."
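The points above can be sketched in a build fragment. This is illustrative only: the key names (`vms`, `source`, `iso_checksum`, `provisioners`, `shell`) are assumptions for the sake of the example, not RudderVirt's actual schema, and the checksum is a placeholder you would replace with the real value.

```yaml
# Illustrative sketch — key names are assumptions, not the real schema.
vms:
  - name: web
    source:
      # Pin the checksum so a silently re-rolled ISO fails the build loudly
      # instead of producing a subtly different module.
      iso_url: https://mirror.example.org/debian-12.5.0-amd64-netinst.iso
      iso_checksum: "sha256:<pinned checksum>"
    provisioners:
      # Idempotent: safe to re-run on a retried step. The version pin guards
      # against an upstream nginx release changing behavior between rebuilds.
      - shell: |
          apt-get update
          apt-get install -y nginx=1.22.1-9
      # Cleanup before snapshot, so captured state is identical every time —
      # not "whatever happened to be in /tmp and /var/log at shutdown".
      - shell: |
          apt-get clean
          rm -rf /tmp/* /var/log/*.log
          truncate -s 0 /etc/machine-id
```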

Why this matters: when something does break — a Windows Update changes a registry key your app depended on, a Debian point release ships a new kernel, a CVE forces a package upgrade — you want to fix it by editing the configuration and rebuilding, not by spelunking through a 9-month-old VM trying to remember what you did to it.

# Modules, builds, clones

Three terms used throughout this guide:

  • Module — the unit you're building. A collection of one or more VMs that get built, snapshotted, and cloned as a group. Most modules have exactly one VM; multi-VM modules let you ship something like a client/server pair, or a small lab, as a single unit.
  • Build — the configuration that defines a module. Running a build produces the module's template VMs.
  • Clone — a running copy of a module. A clone reproduces every template VM in the module exactly: same disks, same network topology, same relationships between VMs.

# What a build is

A build is a recipe for producing a module. You declare:

  • The VMs in the module (one or more).
  • Each VM's source disk (a cloud image URL, an ISO + blank disk, or a previous build's output).
  • Optional files any VM needs during the build (scripts, certs, unattend XML, drivers).
  • An ordered list of provisioners that run inside each VM (shell scripts, file uploads, reboots).
  • An optional network topology shared across the module's VMs.
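Put together, a two-VM build might look something like the sketch below. Every key name here (`network`, `vms`, `cloud_image`, `files`, `provisioners`, `reboot`) is an assumption made for illustration, not documentation of the real schema, and the URLs and paths are placeholders.

```yaml
# Hypothetical two-VM module: a client/server pair on a shared network.
name: netsec-lab
network:
  subnet: 10.0.50.0/24            # topology shared across the module's VMs
vms:
  - name: server
    source:
      # Dated path, not "latest" — the same URL yields the same base image.
      cloud_image: https://cloud-images.example.org/debian-12-generic-amd64-20240601.qcow2
    files:
      - src: files/server-setup.sh  # shipped with the build, not uploaded by hand
        dest: /root/server-setup.sh
    provisioners:                   # ordered; each runs inside the VM
      - shell: bash /root/server-setup.sh
      - reboot: {}
  - name: client
    source:
      cloud_image: https://cloud-images.example.org/debian-12-generic-amd64-20240601.qcow2
    provisioners:
      - shell: apt-get update && apt-get install -y curl
```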

When you submit the configuration, the build boots every VM, runs each one's provisioners, shuts them all down, and snapshots their disks.

```mermaid
flowchart LR
  A[Build<br/>configuration] --> B[Boot VMs from source]
  B --> C[Run provisioners<br/>over SSH]
  C --> D[Shut down VMs]
  D --> E[Snapshot each disk]
  E --> F[Template module:<br/>halted VMs ready to clone]
```

A successful build leaves behind a module of halted template VMs — one for each VM you declared. A clone copies the whole module: every template VM comes up as a running instance with the same configuration. From the build author's perspective, your job ends when the build reaches Succeeded and you can re-run the same configuration tomorrow and get an equivalent module.