Over the last few months I’ve made it a goal to remove manual steps from my home lab’s setup and really embrace the idea of immutable infrastructure. No more servers lying around that are constantly logged into for making major changes. If something isn’t working, delete the whole server immediately and reprovision it with minimal steps. I’m writing all of this down as a sort of tutorial, not to follow from end to end, but to give inspiration for what is truly possible in a home lab environment with some off-the-shelf tools.

Prerequisites

Before we get started, there are a few things we will need to kick off this journey. Each blog post builds upon the last, so if there is a tool mentioned that you don’t recognize, check out the #ImmutableInfrastructureJourney tag on this blog.

This lab depends on a single Proxmox server, which should be installed on its own dedicated hardware. Instructions on how to install Proxmox can be found at Proxmox’s getting started page. I am currently using version 7.2-11, so everything here is known to work on that version.

You will also need to download HashiCorp’s Packer from HashiCorp’s getting started page.

For reference, everything I am doing will be done within WSL on Windows, but it should work on any system supported by Packer. The Proxmox node I’ll be working on is installed on an Intel NUC with 32GB of memory, although a smaller machine would work as well.

Building a VM template with Packer

One of the things I often found myself working on more than I liked was installing Ubuntu from scratch and then getting all of my tools and services installed and configured. It was honestly more headache than it was worth, and so I found myself never updating to the latest version of Ubuntu until the one I was running was a year or two past EOL. When update time did come, there was always some detail that I had left behind and was lost to time.

This is where Packer comes into play. Packer is an open source tool by HashiCorp that builds VM images automatically from a single source file. Packer essentially automates the creation of my golden VM templates, taking a VM from first boot to finished template by interacting directly with it as if a user were seated at the console; those templates are then used to spin up subsequent VMs. It’s pretty magical to watch all of this automation do, right in front of my eyes, what I had spent so much time configuring on my own.

Create a Packer file

We will create a file which defines the VM we would like to use as the template for the rest of the VMs we will be creating. It will contain all of our boot instructions as well as any configuration specific to my environment. Today I’ll be using “Ubuntu 22.04 Server” as my base image.

In a new folder, create a file named ubuntu-2204-server.pkr.hcl. Open this file in an editor of your choice and let’s work on defining everything we need, starting with the plugins we will be using. Packer doesn’t come out of the box knowing how to interact with Proxmox, but it does know how to download a plugin that does.

# ubuntu-2204-server.pkr.hcl

packer {
  required_plugins {
    proxmox = {
      version = ">= 1.1.1"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}

Now, below the packer stanza, I’ll add my NUC as a source for the proxmox-iso builder. This block contains the information on how to connect to Proxmox, where to store our ISO image, the hardware details for provisioning the VM, and the keys to press as the VM boots for the first time.

Let’s start by first adding the connection details.

# ubuntu-2204-server.pkr.hcl

variable "proxmox_username" {
  type = string
}
variable "proxmox_password" {
  type = string
  sensitive = true
}

source "proxmox-iso" "intel-nuc" {
  proxmox_url              = "https://nuc-proxmox.local:8006/api2/json"
  insecure_skip_tls_verify = true
  username                 = var.proxmox_username
  password                 = var.proxmox_password
  node                     = "intel-nuc"
  task_timeout             = "10m"
}
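The two variables above can be supplied on the command line with -var, but Packer will also read environment variables prefixed with PKR_VAR_, or any *.auto.pkrvars.hcl file sitting next to the template. A sketch of the latter approach, with placeholder values (note that Proxmox API usernames include a realm suffix such as @pam):

```hcl
# credentials.auto.pkrvars.hcl
# Loaded automatically by `packer build`; keep this file out of version control.
# The values below are placeholders for illustration only.
proxmox_username = "root@pam"           # Proxmox API users carry a realm suffix
proxmox_password = "a-secret-password"
```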

Next, just below task_timeout, I’ll add some information on which ISO to use and where to store it, and define how much CPU and memory to use when creating the template. This doesn’t have to be much, as you can always change the CPU and memory when you create instances of this template later. I’ll also add some additional hardware devices like network and storage here.

# ubuntu-2204-server.pkr.hcl

source "proxmox-iso" "intel-nuc" {
  # [...]

  iso_url          = "https://releases.ubuntu.com/22.04.1/ubuntu-22.04.1-live-server-amd64.iso"
  iso_checksum     = "sha256:10f19c5b2b8d6db711582e0e27f5116296c34fe4b313ba45f9b201a5007056cb"
  iso_storage_pool = "local"
  unmount_iso      = true  # Unmounts the ISO after installation completes

  memory = 2048
  cores  = 4
  os     = "l26"

  network_adapters {
    model  = "virtio"
    bridge = "vmbr0"
  }
  disks {
    type              = "scsi"
    disk_size         = "64G"
    storage_pool      = "local-lvm"
    storage_pool_type = "lvm"
  }
}

Cloud-init

Ubuntu, like many other OSes designed to run in cloud environments, supports unattended installs, meaning I don’t actually have to be at the computer to finish the initial installation. Since we want this process to be as hands-off as possible so that we can easily repeat it when needed, we are going to make use of cloud-init here. The way it works: when Ubuntu boots up for the first time, a user, or Packer in this case, can type a few commands into the bootloader and tell Ubuntu, “I would like you to install using the configuration I’m going to provide on a CD named cidata.” Ubuntu then loads all of the configuration from that CD and performs all sorts of tasks on behalf of the user. These can be as straightforward as installing Ubuntu and nothing else, or customized enough to automatically install and run entire playbooks with no intervention from the user.

For our use case we only want to upgrade all of our packages to the latest versions and install a few packages which will help Proxmox be aware of what is going on inside the VM when it launches.

Create a new folder named cidata and create two files called user-data and meta-data. Open up user-data in an editor and add the cloud-config to the file.
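The folder layout can be created from the shell. The meta-data file can stay empty for our purposes, since we aren’t providing any instance metadata:

```shell
# Create the cloud-init seed directory next to the Packer file.
mkdir -p cidata
# user-data will hold the autoinstall config; meta-data can be left empty.
touch cidata/user-data cidata/meta-data
```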

#cloud-config
autoinstall:
  version: 1
  refresh-installer:
    update: true
  locale: en_US.UTF-8
  keyboard:
    layout: us
  identity:
    hostname: ubuntu-server
    password: "$6$QnLQVguRtS.t38.F$qybKEFbxqQRvbntU5dhLgVauc/FIAw4RUqpL7RcP5WHRrjOurA40M.24FzGB9hPgtD28B93cxkrYe6.ky63I8."
    username: ubuntu
  ssh:
    allow-pw: false
    install-server: true
    authorized-keys:
      - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ6z4PxtPlUS4aOvY8/XBJDCUr0juwIKVXrClcJkI6QH britt@Helios"

  package_update: true
  package_upgrade: true
  packages:
    - qemu-guest-agent
    - cloud-init

Walking through the file from the top down: #cloud-config lets cloud-init know that this isn’t a shell script, which can also be used to provision but is extremely brittle and not recommended. The autoinstall key declares the autoinstall version, and refresh-installer updates the Ubuntu installer to its latest version before continuing. The next few lines set the locale and keyboard layout. Then we get to identity, which holds the default computer name, username, and password for the install. Make sure that you use a password hash generated with openssl passwd -6. The ssh section defines which keys you would like to use for ssh with the default user. This should be a key you have access to, since Packer will use this user later on to finish the provisioning steps and reset the VM so that cloud-init runs again when this template is cloned.
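The hashed password in the identity block can be generated like so (the plaintext here is a placeholder for illustration):

```shell
# Generate a SHA-512 crypt hash suitable for the autoinstall identity block.
# "correct-horse" is an example password; substitute your own.
openssl passwd -6 'correct-horse'
```

Paste the resulting $6$... string into the password field.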

The last part of the file is all about packages. I recommend keeping this list as small as possible so the template stays versatile. For Proxmox you will want to at least install qemu-guest-agent, which is how Proxmox learns what IP the VM has when it boots. This also helps out with automatically provisioning VMs in the future.

Unattended Installations Using Packer

Now that we have the contents for the cloud-init CD created we need to mount this CD to the template using the additional_iso_files stanza.

# ubuntu-2204-server.pkr.hcl

source "proxmox-iso" "intel-nuc" {
  # [...]

  additional_iso_files {
    cd_files         = ["./cidata/*"]
    cd_label         = "cidata"
    unmount          = true
    iso_storage_pool = "local"
  }
  qemu_agent   = true

  ssh_username = "ubuntu"
  ssh_private_key_file = "~/.ssh/id_ed25519"
  ssh_timeout = "25m"
}

Also in this code is qemu_agent, which lets Proxmox know that this VM will have qemu-guest-agent installed. Packer uses the agent to discover the IP address of the VM, then connects to it using the ssh details in the following ssh fields.
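If you don’t already have an ed25519 key pair, one can be generated with ssh-keygen; the public half goes into the authorized-keys list in cidata/user-data and the private half is what ssh_private_key_file should point at. A sketch, where the file path, comment, and empty passphrase are just for illustration:

```shell
# Generate an ed25519 key pair; -N "" sets an empty passphrase so Packer
# can use the key non-interactively. The path is illustrative and must
# match whatever ssh_private_key_file is set to.
ssh-keygen -t ed25519 -f ./packer_ed25519 -N "" -C "packer@homelab"
# The public key to paste into cidata/user-data:
cat ./packer_ed25519.pub
```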

This next part takes some time to get right; it’s best to find the proper commands to give GRUB, the bootloader, online. These commands let Ubuntu know that you want to automatically perform an unattended installation.

# ubuntu-2204-server.pkr.hcl

source "proxmox-iso" "intel-nuc" {
  # [...]

  boot_wait = "15s"
  boot_command = [
    "<esc><wait>",
    "<esc><wait>",
    "c<wait>",
    "set gfxpayload=keep",
    "<enter><wait>",
    "linux /casper/vmlinuz quiet<wait>",
    " autoinstall<wait>",
    " ds=nocloud;<wait>",
    "<enter><wait>",
    "initrd /casper/initrd",
    "<enter><wait>",
    "boot<enter><wait>",
  ]
}

Finally we give a name to the template on this Proxmox node.

# ubuntu-2204-server.pkr.hcl

source "proxmox-iso" "intel-nuc" {
  # [...]

  template_name = join("-", [
    "ubuntu",
    "2204",
    "base",
    formatdate("YYYYMMDD-hhmm", timestamp()),
  ])
}
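The join and formatdate expressions produce a timestamped name such as ubuntu-2204-base-20221105-0830. A rough shell equivalent, just to illustrate the format (note that Packer’s hh token is 12-hour time; use HH in the HCL if you want 24-hour):

```shell
# Rough shell equivalent of the template_name expression, for illustration.
# %I is 12-hour time, matching Packer's "hh" formatdate token.
NAME="ubuntu-2204-base-$(date +%Y%m%d-%I%M)"
echo "$NAME"
```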

Build and Prepare to Templatize

Now we’ve used cloud-init to automatically install and configure the basics of Ubuntu. The catch with cloud-init is that many of its modules only run when a VM first boots. That is no good for clones created from a template that has already run cloud-init, so after the initial install we have to run some additional steps to reset cloud-init’s state, making it think it is running for the first time on each clone.

Add a build stanza to the Packer file that targets the source to build from, along with some additional steps telling Packer to ssh into the VM and run a cleanup script.

# ubuntu-2204-server.pkr.hcl

source "proxmox-iso" "intel-nuc" {
  # [...]
}

build {
  name = "ubuntu-x86_64"
  sources = [
    "source.proxmox-iso.intel-nuc",
  ]

  # Clean up the machine for cloud-init
  provisioner "shell" {
    execute_command = "echo 'ubuntu' | {{ .Vars }} sudo -S -E sh -eux '{{ .Path }}'"
    inline = [
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
      "sudo rm /etc/ssh/ssh_host_*",
      "sudo truncate -s 0 /etc/machine-id",
      "sudo apt -y autoremove --purge",
      "sudo apt -y clean",
      "sudo apt -y autoclean",
      "sudo cloud-init clean",
      "sudo rm -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg",
      "sudo sync"
    ]
  }
}

At this point we are ready to run Packer and build a template VM. The entire Packer file is below.

# ubuntu-2204-server.pkr.hcl

packer {
  required_plugins {
    proxmox = {
      version = ">= 1.1.1"
      source  = "github.com/hashicorp/proxmox"
    }
  }
}

variable "proxmox_username" {
  type = string
}
variable "proxmox_password" {
  type = string
  sensitive = true
}

source "proxmox-iso" "intel-nuc" {
  proxmox_url              = "https://nuc-proxmox.local:8006/api2/json"
  insecure_skip_tls_verify = true
  username                 = var.proxmox_username
  password                 = var.proxmox_password
  node                     = "intel-nuc"
  task_timeout             = "10m"

  iso_url          = "https://releases.ubuntu.com/22.04.1/ubuntu-22.04.1-live-server-amd64.iso"
  iso_checksum     = "sha256:10f19c5b2b8d6db711582e0e27f5116296c34fe4b313ba45f9b201a5007056cb"
  iso_storage_pool = "local"
  unmount_iso      = true  # Unmounts the ISO after installation completes

  memory = 2048
  cores  = 4
  os     = "l26"

  network_adapters {
    model  = "virtio"
    bridge = "vmbr0"
  }
  disks {
    type              = "scsi"
    disk_size         = "64G"
    storage_pool      = "local-lvm"
    storage_pool_type = "lvm"
  }

  additional_iso_files {
    cd_files         = ["./cidata/*"]
    cd_label         = "cidata"
    unmount          = true
    iso_storage_pool = "local"
  }
  qemu_agent   = true

  ssh_username = "ubuntu"
  ssh_private_key_file = "~/.ssh/id_ed25519"
  ssh_timeout = "25m"

  boot_wait = "15s"
  boot_command = [
    "<esc><wait>",
    "<esc><wait>",
    "c<wait>",
    "set gfxpayload=keep",
    "<enter><wait>",
    "linux /casper/vmlinuz quiet<wait>",
    " autoinstall<wait>",
    " ds=nocloud;<wait>",
    "<enter><wait>",
    "initrd /casper/initrd",
    "<enter><wait>",
    "boot<enter><wait>",
  ]

  template_name = join("-", [
    "ubuntu",
    "2204",
    "base",
    formatdate("YYYYMMDD-hhmm", timestamp()),
  ])
}

build {
  name = "ubuntu-x86_64"
  sources = [
    "source.proxmox-iso.intel-nuc",
  ]

  # Clean up the machine for cloud-init
  provisioner "shell" {
    execute_command = "echo 'ubuntu' | {{ .Vars }} sudo -S -E sh -eux '{{ .Path }}'"
    inline = [
      "while [ ! -f /var/lib/cloud/instance/boot-finished ]; do echo 'Waiting for cloud-init...'; sleep 1; done",
      "sudo rm /etc/ssh/ssh_host_*",
      "sudo truncate -s 0 /etc/machine-id",
      "sudo apt -y autoremove --purge",
      "sudo apt -y clean",
      "sudo apt -y autoclean",
      "sudo cloud-init clean",
      "sudo rm -f /etc/cloud/cloud.cfg.d/subiquity-disable-cloudinit-networking.cfg",
      "sudo sync"
    ]
  }
}

First run packer init ./ubuntu-2204-server.pkr.hcl once to download the Proxmox plugin, then run packer build ./ubuntu-2204-server.pkr.hcl and Packer will start the process of installing Ubuntu within Proxmox. If you pop on over to Proxmox you will see a brand new VM spin up and start typing in all of the keys that were specified in boot_command, sending Ubuntu into unattended installation mode. After about 5-10 minutes Packer will ssh into the VM and reset cloud-init’s state so that the VM can be turned into a template and used again with cloud-init for your more specialized configs.

Now we have an entirely reproducible VM template that doesn’t take up a whole VM’s worth of space. And all of it can be stored in code and even updated automatically within a CI pipeline.

In the next post we will talk about how to use this new VM template and the power of Terraform to automatically create VMs configured with the tools and programs you need.