Using Terraform to Deploy on Premise

By David Dixon
November 24, 2024

Part 1: Prepping the image for auto install

Lately I have been thinking about how to leverage a common framework to deploy to both on premise and cloud providers for a multi-tenancy deployment approach. Organizations are increasingly pivoting to cloud providers such as AWS, GCP, or Azure. At the same time, organizations may have sunk cost in their on premise hardware, or a need to keep it running in parallel. In this series we'll look at deploying AMX to both on premise and cloud providers.

One of the core elements of how we will deploy is the creation of a custom ISO. The custom ISO will allow us to auto-provision the infrastructure without the need for user interaction. We will pass in information regarding the network configuration, disk partitioning, ssh key, keyboard layout, packages, and user credentials. This approach saves time and cost versus manually inputting each piece of information. Let's take a look at the process overview image below.

ISO BUILD overview

We install the dependencies used to build the ISO in step 1. Then we stage the build environment in step 2. Next, we unpack the source ISO for modification in step 3. Step 4 modifies the GRUB boot menu to add our new auto-install option. In step 5 we add the custom, user-defined data mentioned in the paragraph above. Step 6 generates the new ISO. And finally, we test our ISO using Terraform + PowerShell in step 7.

I am using Ubuntu 22.04 for my ISO, but feel free to choose whichever release works for you. You can download it here if you would like: https://releases.ubuntu.com/jammy/ubuntu-22.04.5-live-server-amd64.iso. Ok, now that we have an overview, let's get going on the ISO creation!

1. ISO Dependency Installation

We will be installing the dependencies used to create the custom ISO image. We'll use the following:

  • 7z will be used to unpack the source ISO:

                  sudo apt install p7zip-full p7zip -y

  • xorriso will be used to build the modified ISO:

                  sudo apt install xorriso -y

    We also use wget to pull the ISO if you don't have a web browser available. Next, we are going to set up the build environment for our ISO.

    2. Set up the Build Environment

    The build environment is where we will be unpacking our source ISO. This should be fairly straightforward:
                
                  mkdir 22_04-auto-ISO
                  cd 22_04-auto-ISO
                  mkdir source-files
                  wget https://releases.ubuntu.com/jammy/ubuntu-22.04.5-live-server-amd64.iso
    
                
              


    Nice! If you'd like, you can verify the download first (see below), then we'll move on to unpacking the ISO.
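
    Verifying is optional, but it catches a corrupted download before you spend time building. The SHA256SUMS file is published alongside the ISO on releases.ubuntu.com, so from the 22_04-auto-ISO directory:

                  wget https://releases.ubuntu.com/jammy/SHA256SUMS
                  sha256sum -c SHA256SUMS --ignore-missing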

    3. Unpack and partition the ISO

    The Ubuntu 22.04 ISO layout will have separate partition space for Master Boot Record (MBR), install root image, and the Extensible Firmware Interface (EFI). To unpack we will use 7zip.

                
                  7z -y x ubuntu-22.04.5-live-server-amd64.iso -osource-files
                
              


    In the "source-files" directory the ISO files are extracted. Additionally, the source-files directory has a folder named '[BOOT]' which will contain the "1-Boot-NoEmul.img 2-Boot-NoEmul.img" files respectively. These MBR and UEFI files will be important when generating the new ISO. Let's update the files and move them to a staging area.

                
                  mv 'source-files/[BOOT]' ./BOOT
                
              


    4. Update the boot menu

    We want to add a boot option for our new ISO, so we will be editing the ISO grub.cfg file. I use vim, but using your editor of choice, modify the "source-files/boot/grub/grub.cfg" file. Inject the following statement above the existing menu entries:

                
                  menuentry "Ubuntu Server 22.04 Auto Installation" {
                    set gfxpayload=keep
                    linux   /casper/vmlinuz quiet autoinstall ds=nocloud\;s=/cdrom/server/  ---
                    initrd  /casper/initrd
                }
                
              
    Note the menuentry line: the text in quotes can be whatever you want it to be. For simplicity, I have named mine "Ubuntu Server 22.04 Auto Installation". The entry above does the following:

  • set gfxpayload=keep
    Ensures the graphics settings are maintained once the boot process begins.

  • quiet
    Decreases the verbosity during the boot process. This helps avoid showing detailed logs on startup.

  • autoinstall
    Instructs the installer to perform an automatic install using a predefined config. We will get to our user-data in the sections below.

  • ds=nocloud\;s=/cdrom/server/
    In conjunction with the autoinstall directive, this specifies the path to the cloud-init configuration. The source files will live in the ISO's "server" directory.

  • initrd /casper/initrd
    The initial RAM disk (initrd) that gets loaded into memory to set up the environment before the system boots. This pre-boot environment contains the drivers and initialization scripts.

    Ok, now let us add the directory for the user-data and meta-data files that will be used to perform the auto installation. Jump on the shell and then:
                
                  mkdir source-files/server
                
              
    If you want to, you can add more folders to hold alternative user-data configs, with extra grub menu entries pointing to those directories; a quick sketch of that follows.
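
    For illustration only (the "server-minimal" name here is hypothetical and not used elsewhere in this build), an alternative config directory and a second menu entry in grub.cfg might look like this:

                  mkdir source-files/server-minimal

                  menuentry "Ubuntu Server 22.04 Auto Installation (minimal)" {
                    set gfxpayload=keep
                    # points at the alternative directory holding its own user-data/meta-data
                    linux   /casper/vmlinuz quiet autoinstall ds=nocloud\;s=/cdrom/server-minimal/  ---
                    initrd  /casper/initrd
                  }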

    5. Create the Custom autoinstall user-data files

    Great, we have made it this far! Let's jump into the core of the auto-installation config. The user-data file can be thought of as an answer file. Note that the user-data file uses YAML to provide directives. If you would like, you can check out the format and other examples in the Cloud Init docs. For this post, I have provided a sample user-data file that we will use to get going. Feel free to add to this, or modify it to suit your needs.

                
                  #cloud-config
                  autoinstall:
                    version: 1
                    # Remove interactive sections to avoid user input
                    storage:
                      layout:
                        name: lvm
                        match:
                          size: largest
                    locale: en_US.UTF-8
                    keyboard:
                      layout: us
                    identity:
                      hostname: amx-000
                      #Change ME! I use: openssl passwd -6
                      password: $6$gK6xB150l.......
                      username: ubuntu
                    ssh:
                      allow-pw: true
                      install-server: true
                    apt:
                      primary:
                        - arches: [default]
                          uri: http://us.archive.ubuntu.com/ubuntu/
                    packages:
                      - build-essential
                      - network-manager
                      - dkms
                      - emacs-nox
                    package_update: true
                    package_upgrade: true
                    late-commands:
                      # Changing from networkd to NetworkManager
                      # move existing config out of the way
                      - find /target/etc/netplan/ -name "*.yaml" -exec sh -c 'mv "$1" "$1-orig"' _ {} \;
                      # Create a new netplan and enable it
                      - |
                        cat <<EOF | sudo tee /target/etc/netplan/01-netcfg.yaml
                        network:
                          version: 2
                          renderer: NetworkManager
                        EOF
                      - curtin in-target --target /target netplan generate
                      - curtin in-target --target /target netplan apply
                      - curtin in-target --target /target systemctl enable NetworkManager.service
                      # Install NVIDIA driver (with apt-get flags)
                      - curtin in-target -- apt-get -y install --no-install-recommends nvidia-driver-520
                
              
    You will need to update the password at minimum. You can use openssl passwd -6 to generate the credential. The rest of the example you're free to modify as needed. This configuration file will allow us to perform an installation without requiring user input during the Distro installation.
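
    As a quick example, the hash for the password field can be generated like this (the password shown is just a placeholder; passing it on the command line lands in your shell history, so you can also run the command with no argument to be prompted interactively):

                  openssl passwd -6 'MySecretPassword'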

    You may want to consider using a YAML linter to check the validity and structure if you choose to modify the file. A free online YAML linter is available at YAMLLINT.
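
    Once the file looks good, place it in the server directory we created in step 4. The NoCloud datasource also expects a meta-data file to be present, and an empty one is enough here. Assuming you saved the config above as user-data in the 22_04-auto-ISO directory:

                  cp user-data source-files/server/user-data
                  touch source-files/server/meta-data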

    6. Generate the New ISO

    If your YAML checked out (assuming you modified it), then let's generate the ISO! To do this we are going to use xorriso. Run the following from inside the source-files directory, since the paths are relative to it:

              
                xorriso -as mkisofs -r \
                  -V 'Ubuntu 22.04 LTS AUTO (EFIBIOS)' \
                  -o ../ubuntu-22.04-autoinstall.iso \
                  --grub2-mbr ../BOOT/1-Boot-NoEmul.img \
                  -partition_offset 16 \
                  --mbr-force-bootable \
                  -append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b ../BOOT/2-Boot-NoEmul.img \
                  -appended_part_as_gpt \
                  -iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 \
                  -c '/boot.catalog' \
                  -b '/boot/grub/i386-pc/eltorito.img' \
                  -no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info \
                  -eltorito-alt-boot \
                  -e '--interval:appended_partition_2:::' \
                  -no-emul-boot \
                  .


    In the above we add the extracted partitions from step 3 back into our new ISO.
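
    As an optional sanity check (still from inside source-files), you can list the partition table of the freshly built image; you should see the BIOS and EFI pieces that were appended:

                  fdisk -l ../ubuntu-22.04-autoinstall.iso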



    7. Deploy / Test the ISO

    Now, create a test VM in Hyper-V (or whatever target you prefer) that you can use to test the newly created ISO. I won't go through all of the specifics of manual testing since each environment may be different.

    In the next part we will use this ISO along with Terraform and PowerShell to automate the deployment to Hyper-V.

    Part 2: Deploying our ISO

    Great, at this point you should have an ISO that is all ready to be deployed. Like I said in the title, we're going to be deploying on premise using Hyper-V and Terraform. Since our target hypervisor is Hyper-V, we'll write some PowerShell to do the bulk of the lifting, then wrap the PowerShell with Terraform so we can spin up quickly. Sound good? Ok, let's jump into our PowerShell.



    We'll start with the variables that define our VM parameters. Feel free to modify these to fit your paths, hardware resources, and networking.

              
                #Creates or tears down a Hyper-V virtual machine based on the parameters below.
                #Update the values marked with a changeme comment to match your environment.
                
                #Allow different values to be passed when running the script
                param (
                    [string]$action
                )
                
                # VM parameters
                $vmName = "CustomAMXUbuntuVM"  #changeme
                $vmPath = "D:\Virtual Machines\AMX_TEST" #changeme
                $vhdPath = "$vmPath\$vmName.vhdx"
                $image = "D:\OS\ubuntu-22.04-autoinstall.iso" #changeme
                $vmswitch = "QLogic BCM5709C Gigabit Ethernet (NDIS VBD Client) #3 - Virtual Switch" #changeme
                $cpu = 1 #changeme
                $ram = 6GB #changeme
                $vhdSize = 80GB #changeme
                
                # Handle actions
                if ($action -eq "create") {
                    # Create VM
                    Write-Output "Creating VM: $vmName"
                    New-VM -Name $vmName -Path $vmPath
                    Set-VM -Name $vmName -ProcessorCount $cpu -MemoryStartupBytes $ram
                    New-VHD -Path $vhdPath -SizeBytes $vhdSize
                    Add-VMHardDiskDrive -VMName $vmName -Path $vhdPath
                    Set-VMDvdDrive -VMName $vmName -Path $image
                    Connect-VMNetworkAdapter -VMName $vmName -SwitchName $vmswitch
                    Start-VM -Name $vmName
                    Write-Output "VM $vmName created and started successfully."
                
                } elseif ($action -eq "destroy") {
                    # Destroy VM
                    Write-Output "Destroying VM: $vmName"
                    if (Get-VM -Name $vmName -ErrorAction SilentlyContinue) {
                        Stop-VM -Name $vmName -Force
                        Remove-VM -Name $vmName -Force
                        if (Test-Path -Path $vhdPath) {
                            Remove-Item -Path $vhdPath -Force
                        }
                        Write-Output "VM $vmName and associated resources have been removed."
                    } else {
                        Write-Output "VM $vmName does not exist. Nothing to destroy."
                    }
                
                } else {
                    Write-Output "Invalid action specified. Use 'create' or 'destroy'."
                    exit 1
                }
              
            
    Ok, if that doesn't make sense and you need a reference, I've put the full script up on GitHub.
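
    Before wiring this into Terraform, you can exercise the script by hand from an elevated PowerShell prompt. The path below matches the one used in the Terraform file later; adjust it to wherever you saved the script:

                # create the VM and boot it from the autoinstall ISO
                cd D:\Terraform_create_VM
                .\create_vm.ps1 -action create

                # tear it back down once you are done testing
                .\create_vm.ps1 -action destroy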

    Terraform Wrapper
    At this point you should have a functioning ISO along with a PowerShell script to automate the creation of the VM. The last thing we'll do is set up Terraform. This will make it easy for us to spin up new instances both on premise and in the cloud (we'll cover the cloud side in other tutorials) in parallel.

    I am not going to go into detail about installing Terraform on our Windows machine. Essentially, just pull the binary and update your environment variables. You can also check out the HashiCorp download page here.

    Create your Terraform file using the editor or IDE of your choosing. For this demo I call mine "main.tf". Once you have the file created, open it with your editor/IDE. The configuration passes one of two actions to the -action parameter of the PowerShell script, either create or destroy. See below:
              
    #TF file used with create_vm.ps1 on Hyper-V.
    #Passes either the create or destroy action to the create_vm.ps1 script.
    resource "null_resource" "ubuntu_amx_vm" {
      # Provisioner to create the VM (runs when the resource is created)
      provisioner "local-exec" {
        command = "PowerShell -File D:\\Terraform_create_VM\\create_vm.ps1 -action create"
      }

      # Provisioner to destroy the VM (runs on terraform destroy)
      provisioner "local-exec" {
        when    = destroy
        command = "PowerShell -File D:\\Terraform_create_VM\\create_vm.ps1 -action destroy"
      }
    }
              
            


    Deploy to Hyper-V

    At this point you should have the following things set up:
    1. Autoinstall ISO
    2. PowerShell script
    3. Terraform file (whatever.tf)

    Ok, now we are going to deploy to Hyper-V. I am not going into detail on how to set up Hyper-V, but you will need this Windows feature added for whatever version of Windows you're running.

    For the new Terraform configuration we need to set up the working directory where we will operate. You will be executing this on the server where Hyper-V is installed. To do this run the command:
              
                terraform init
              
            


    This will initialize the working directory with the necessary files and directories it needs to function. It will download any needed plugins, though we're not using AWS, Azure, etc. just yet. It will also prepare the state management and check for existing state files.
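
    Optionally, before applying, you can have Terraform check the configuration for syntax errors:

                terraform validate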

    Now run the following command to build your VM:
              
                terraform apply
              
            


    Type "yes" when prompted. Now you should see something similar to the output below:

    Check the Deployment and Teardown

    If all jobs completed without error, you should see your new VM initializing in Hyper-V. Double click the instance in the Hyper-V manager.

    Great job! We should have a resource that is building by itself using an unattended method. What happens if we want to remove the resource? Well, from the directory with the .tf file we created, we can issue the following:

                  
                    terraform destroy
                  
                


    Remember when we said there are two actions that can be passed? Destroy will remove the resource that was created earlier. Type yes to confirm the removal of this Hyper-V resource. Check out the similar output from my PowerShell prompt:



    I hope you enjoyed this post. I'll be adding some functionality and expanding the scope of the terraform deployment in the future!

    “Before anything else, preparation is the key to success.” — Alexander Graham Bell