# Under the Hood
You don't need any of this to write a build. It's here so the names you'll run into in upstream documentation, log output, and community forums make sense.
```mermaid
flowchart TB
Config[Your build configuration] --> KubeVirt
subgraph Runtime[Runtime]
KubeVirt[KubeVirt<br/>VM lifecycle on Kubernetes] --> QEMU[QEMU<br/>+ KVM hardware acceleration]
CDI[CDI<br/>imports source disks/ISOs] --> KubeVirt
OVMF[OVMF<br/>UEFI firmware] --> QEMU
end
subgraph Network[Networking]
KubeOVN[KubeOVN<br/>VPCs, subnets, egress]
end
subgraph Storage[Storage]
CSI[CSI driver<br/>+ volume snapshots]
end
Runtime --> Network
Runtime --> Storage
```
# What actually runs the VMs
- QEMU/KVM is the hypervisor. Every VM is a QEMU process backed by the host kernel's KVM module for hardware-accelerated virtualization. The disks, NICs, BIOS/UEFI, floppy drive, and CD-ROM you configure are all QEMU concepts under the hood.
- KubeVirt makes QEMU manageable on Kubernetes. Each VM runs inside a `virt-launcher` pod and is wired into the cluster's networking, storage, and scheduling. When you set `efiFirmware`, `disks[].bus`, or NIC `model`, you're configuring KubeVirt's view of the QEMU machine.
- CDI (Containerized Data Importer) handles `source.url`, `source.containerDisk`, and ISO imports. It downloads, converts, and stages the source disk so the VM can boot from it.
- virtio is the family of paravirtualized drivers that QEMU/KVM uses for fast disk and network I/O. `bus: virtio` and NIC `model: virtio` both rely on the guest having virtio drivers loaded — every modern Linux kernel ships them; Windows needs `vioscsi`/`netkvm` side-loaded during install.
- OVMF is the open-source UEFI firmware QEMU uses. When you set `efiFirmware`, OVMF is what runs before the OS boot loader.
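To make the mapping concrete, here is a sketch of how those pieces might appear together in one build configuration. Only `efiFirmware`, `disks[].bus`, `source.url`, and NIC `model` are taken from the text above; the other field names and values are illustrative placeholders, not a definitive schema.

```yaml
# Hypothetical build configuration sketch — top-level layout is assumed.
name: example-vm              # placeholder name
efiFirmware: true             # boots through OVMF instead of legacy BIOS
disks:
  - source:
      url: https://example.com/base.qcow2   # downloaded and staged by CDI
    bus: virtio               # paravirtualized disk; guest needs virtio drivers
nics:
  - model: virtio             # paravirtualized NIC (netkvm driver on Windows)
```

Each of these keys ends up as a property of the QEMU process that KubeVirt launches inside the VM's `virt-launcher` pod.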
# Where the configuration model came from
The build configuration borrows liberally from the last MIT-licensed release of HashiCorp Packer:
- The provisioner concept (`shell`, `file`, `windows-update`) and the order-of-operations model are direct analogs.
- `bootCommand` uses the same `<enter>`/`<wait5>`/`<leftCtrlOn>` token vocabulary Packer uses for VNC keystroke automation.
- `httpDirectory` mirrors Packer's HTTP server for serving preseed/kickstart/Autounattend files during early boot.
- `executeCommand` with `{{ .Command }}` templating works the same way.
- Communicator/SSH options follow Packer's naming.
If you've written a Packer template before, much of this guide will feel familiar — and Packer's docs are often the best place to find things like Autounattend XML examples, preseed/kickstart snippets, and boot-command timing tricks.
What's different: there's no separate "builder" concept (Packer's QEMU/VirtualBox/AWS builders) — every build runs on KubeVirt. There's no post-processor pipeline. And the multi-VM, layered, networked module model is richer than what Packer offers natively.
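The `bootCommand`/`httpDirectory` pairing described above can be sketched like this — a hypothetical Debian preseed boot, where the filename, timing tokens, and the `{{ .HTTPIP }}`/`{{ .HTTPPort }}` template variables are assumptions borrowed from Packer's conventions, not guaranteed by this project:

```yaml
# Sketch only: exact token timing and template variables may differ.
httpDirectory: ./http          # preseed.cfg is served from here during early boot
bootCommand:
  - "<wait5><esc><wait>"       # interrupt the installer's boot menu
  - "auto url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg"
  - "<enter>"                  # start the unattended install
```

The tokens are typed over VNC one keystroke at a time, which is why `<wait>` pauses matter: the installer has to be ready to receive them.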
# How VMs talk to each other
- KubeOVN is the SDN that provides the VPCs, subnets, and egress gateways you declare in `network`. It's what makes inter-VM connectivity inside a module work, and what isolates one module's network from another.
- NAT egress for `internet: true` VPCs is provided by a small egress gateway pod that NATs traffic out to the cluster's external network. When you toggle internet off, that pod is scaled to zero.
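As a rough illustration, a module's `network` block might look like the following. Only the `network` key and `internet: true` appear in the text above; the subnet fields are hypothetical placeholders for whatever this project's actual schema defines:

```yaml
# Hypothetical network sketch — subnet field names are illustrative.
network:
  internet: true            # keeps the NAT egress gateway pod running
  subnets:
    - name: app             # placeholder subnet name
      cidr: 10.10.0.0/24    # range KubeOVN allocates VM IPs from
```

Flipping `internet` to `false` would scale the egress gateway to zero, cutting outbound traffic while leaving inter-VM connectivity inside the VPC intact.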