Proxmox root migration (a report)

Published: 2022-08-04, Revised: 2024-02-13




TL;DR Migration is a tedious process because it depends on a number of custom setup parameters. Understandably, the Proxmox docs provide only basic information on how to migrate the hypervisor itself.

Warning

This is a report on my experiences migrating from RAID and Proxmox 6.4 to ZFS and Proxmox 7.0. Skip what is not relevant to you.

Background#

If there is neither a hardware nor a software change, restoring Proxmox is easy: Copy all the files back.

Most often, however, something will have changed. For me this was:

  1. Migration to ZFS. This can happen in steps. I first added two ZFS pools, data_ssd (VM disks) and data_hdd (data pool); see the sketch after this list. What was left was migrating the Proxmox root partition to a mirrored ZFS pool, which required a new installation.1

  2. Migrating Proxmox from 6.4 to 7.0. This can be done either by migrating to a fresh installation or by an in-place upgrade.2
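
As an illustration of step 1, creating a mirrored data pool looks roughly like this (pool name and disk IDs are placeholders; ashift=12 assumes 4K-sector drives):

zpool create -o ashift=12 data_ssd mirror \
    /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B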

This is a fairly standard setup, so I decided to do these two steps in parallel.

Preparations#

Follow the "New installation" checklist.2

While it is not advised, there is a small list of packages that I have installed directly on Proxmox itself. These are basic tools; see the apt commands under "Install custom apt packages" below.

I keep a note of these packages, but I also went through the list of all installed packages to see whether I missed any:

apt list --installed
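
To narrow this down to packages that were installed explicitly (rather than as dependencies), apt-mark helps:

apt-mark showmanual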

Just to verify, also see what the check tool says:

pve6to7
pve6to7 --full

Create Backups#

This is independent of the migration and basically follows the 3-2-1 rule.3

  1. Create LXC Dumps
  2. Create ZFS Snapshots of VM volumes and data disks
  3. Transfer/Send snapshots and backups offsite (a sketch of steps 1-3 follows this list)
  4. Create a Proxmox config backup
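
A rough sketch of steps 1-3; the container ID, snapshot names, target storage and the offsite host are placeholders:

# 1. dump an LXC
vzdump 100 --mode snapshot --compress zstd --storage backups
# 2. recursive snapshots of the data pools
zfs snapshot -r tank_ssd@pre-migration
zfs snapshot -r tank_hdd@pre-migration
# 3. send a snapshot offsite
zfs send -R tank_hdd@pre-migration | ssh root@backuphost zfs receive -F backup_pool/tank_hdd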

For creating config backups, I used two approaches.

First, a backup script from DerDanilo will create a comprehensive *.tar.gz of all relevant files.4

./prox_config_backup.sh

Note

Almost all files you need are in /etc, so creating a compressed archive of this folder directly will likely do as well.
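
For example (the archive name is just an illustration):

tar -czf /root/etc-backup-$(date +%F).tar.gz /etc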

Second, I have a local git repository on Proxmox to selectively track changes to all relevant files, synced with my private GitLab. The main purpose here is not backup, but monitoring and annotating changes, and general reproducibility.

A visual example: GitLab config screenshot.

The setup is straightforward:

cd /root
mkdir -p config/proxmox
git init
git remote add origin git@gitlab.local.mytld.com:configs/proxmox.git
nano filelist.txt

Add the files to be tracked. My list:

filelist.txt
## container configs
/etc/pve/lxc/100.conf
/etc/pve/lxc/101.conf
/etc/pve/lxc/102.conf
/etc/pve/lxc/103.conf
/etc/pve/lxc/104.conf
/etc/pve/lxc/105.conf
/etc/pve/lxc/106.conf
/etc/pve/lxc/107.conf
/etc/pve/lxc/108.conf
/etc/pve/lxc/109.conf
/etc/pve/lxc/110.conf
/etc/pve/lxc/120.conf

## storage config
/etc/pve/storage.cfg

## network
/etc/resolv.conf
/etc/network/interfaces

## user config
/etc/pve/user.cfg

## datacenter
/etc/pve/datacenter.cfg

## nodes
/etc/pve/nodes/monkey/config

## special files
/etc/pve/.vmlist
/etc/pve/.version
/etc/pve/.members
/etc/subuid
/etc/subgid
/etc/timezone
/etc/hostname

## telegraf mod
/etc/pve/status.cfg
/etc/telegraf/telegraf.conf
/etc/sudoers

## docker mod
/etc/modules-load.d/modules.conf

## apt sources
/etc/apt/sources.list.d/influxdb.list
/etc/apt/sources.list.d/pve-enterprise.list

## root/user config
/root/.bashrc

## apcupsd mod
/etc/default/apcupsd
/etc/apcupsd/apcupsd.conf

## postfix mod
/etc/postfix/main.cf
/etc/postfix/smtp_header_checks

## tools
/root/drive_check.sh
/root/prox_config_backup.sh

Note

Files under /etc/pve cannot be tracked in place, because Proxmox stores these configuration files in a database-backed file system (pmxcfs).11

The following script will read filelist.txt and copy all files, replicating full paths in the repository.

nano get-files.sh
#!/bin/bash

################################################################################
#
# Shell script to copy important configuration files from the current
# environment to this repository.
#
################################################################################

# Exit as soon as a command fails
set -e

# Accessing an unset variable will yield an error
set -u

# Full path to repo directory
REPO_PATH="/root/config/proxmox/"

# copy all files from filelist.txt, excluding 
# comments, recreate all paths on target directory
grep -e '^[^#]' "$REPO_PATH/filelist.txt" | \
    xargs cp --parents --target-directory "$REPO_PATH"

echo "Completed."

Update and commit files:

chmod +x get-files.sh
bash get-files.sh
git add .
git commit -m "Initial commit"
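
Then push to the GitLab remote (assuming the default branch is called master):

git push -u origin master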

For the migration itself, I used a spare SSD as a working drive, which was not part of my ZFS pool.

Note

It is easy to get confused by drive letters when you have 20+ drives, and letters may change between reboots. I suggest using the stable disk IDs from /dev/disk/by-id/.

Copy either the git files or the compressed config backups4 to the temporary drive:

mkdir /tmp/migration
MIGRATIONDRIVE=/dev/disk/by-id/ata-Samsung_SSD_840_EVO_1TB_S1D9NEAD808121E-part1
MIGRATIONPATH=/tmp/migration
mount -v $MIGRATIONDRIVE $MIGRATIONPATH
# git
cp -r /root/config/proxmox $MIGRATIONPATH
cd $MIGRATIONPATH && ls -alh
# compressed backup
PMBACKUP=proxmox_backup_root.local.mytld.com_2022-01-22.08.03.27.tar.gz
cp $PMBACKUP $MIGRATIONPATH
tar -zxvf $PMBACKUP
cd var/tmp/proxmox-DeXUKUc3
tar -xvf proxmoxetc.2022-01-22.08.03.27.tar
tar -xvf proxmoxroot.2022-01-22.08.03.27.tar
tar -xvf proxmoxpve.2022-01-22.08.03.27.tar
# unmount
umount $MIGRATIONPATH

Migration#

Shut down all guests#

Prepare ZFS Export#

If ZFS is used, pools should be exported for the duration of the migration.

Unmount and export the pools:

zfs unmount tank_hdd
zfs unmount tank_ssd
zpool export tank_hdd
zpool export tank_ssd

Optionally, test import:

zpool import tank_hdd
zpool import tank_ssd
zpool status

Export again for migration.

Note

Just to be on the safe side, it is also a good precaution to disconnect all data drives. For me this meant shutting down the external JBOD.

Hardware Migration#

Now it was time to migrate the RAID1 to a ZFS mirror.

First, I attached the two SSDs to my HBA, but Proxmox would not boot from them. Then I realized that it is apparently recommended to attach the root pool directly to the SATA/SAS ports on the mainboard.5

Note

If, for some reason, you want to use the HBA for booting, do not erase the "Boot Services" on your HBA (or flash it back).

Prepare Proxmox 7 Iso USB#

Flash the Proxmox ISO to a USB drive. See the docs6 and the download section.7

I used Etcher (installed via Chocolatey with choco install etcher).
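
On Linux, dd is an alternative (ISO name and target device are placeholders; triple-check the device before writing):

dd bs=1M conv=fdatasync if=./proxmox-ve_7.0-1.iso of=/dev/sdX status=progress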

Proxmox installation#

I used the IPMI interface (iKVM/HTML5 console) for the installation.8

Select the disks for the mirrored ZFS root pool.

Configuration#

The strategy here was to go step by step, in the correct order.

Note

Below, configurations are restored selectively. An alternative is to restore configurations through the Proxmox Cluster File System database, see a note at the end.

First, connect via ssh, using the password set during installation.

ssh root@192.168.10.42

Verify:

zpool status
lsblk

lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0 465.8G  0 part
└─sda9   8:9    0     8M  0 part
sdb      8:16   0 465.8G  0 disk
├─sdb1   8:17   0 465.8G  0 part
└─sdb9   8:25   0     8M  0 part
...

Looks good.

Import ZFS pools

zpool import tank_hdd
zpool import tank_ssd
zpool status

Restore configs#

Mount migration drive

mkdir /tmp/migration
MIGRATIONDRIVE=/dev/disk/by-id/ata-Samsung_SSD_840_EVO_1TB_S1D9NEAD808121E-part1
MIGRATIONPATH=/tmp/migration
mount -v $MIGRATIONDRIVE $MIGRATIONPATH
cd $MIGRATIONPATH && ls -alh
# set the path for the migration steps below
BAKPATH=$MIGRATIONPATH/var/tmp/proxmox-DeXUKUc3
cd $BAKPATH

Restore home folder:

SSH would not work afterwards, until I realized that authorized_keys is a symlink to /etc/pve/priv/authorized_keys.

rm ~/.ssh/authorized_keys
mv ~/.ssh.bak/authorized_keys ~/.ssh/authorized_keys

Restore storage configuration#

Compare files (byobu):

nano /etc/pve/storage.cfg 
nano $BAKPATH/etc/pve/storage.cfg 
# update selectively, if necessary
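
A diff makes the comparison quicker than opening both files side by side; the same approach works for the other compare/merge steps below:

diff -u /etc/pve/storage.cfg $BAKPATH/etc/pve/storage.cfg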

Install custom apt packages#

apt update
apt-get install sudo byobu iotop git apcupsd
apt-get install lm-sensors
sudo sensors-detect # answer all yes
# for postfix:
apt-get install libsasl2-modules
apt install postfix-pcre
...

Restore network configuration#

I have two network cards, eno1 and eno2. The first is used for the Management subnet, the second for the Service subnet(s), based on tagged VLAN traffic. I added the eno2 network configuration here, since eno1 was already set up during the Proxmox installation.
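
My actual interfaces file comes from the backup; as a generic sketch, a VLAN-aware bridge on eno2 would look roughly like this (bridge name and VLAN range are examples):

auto eno2
iface eno2 inet manual

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094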

Compare/check/merge network interfaces

nano /etc/network/interfaces
nano $BAKPATH/etc/network/interfaces

Compare/check/merge user cfg

nano /etc/pve/user.cfg
nano $BAKPATH/etc/pve/user.cfg

Restore further customization#

Postfix:

cp -a $BAKPATH/etc/postfix/main.cf /etc/postfix/main.cf
cp -a $BAKPATH/etc/postfix/smtp_header_checks /etc/postfix/smtp_header_checks

Create "data user":

For historical reasons, files on my data pool are owned by a specific user (samba_user - although I don't use samba anymore). This user is mapped to www-data inside LXCs.

Example: User Mappings

For the sake of completeness, this is the UID mapping I use in my (unprivileged) LXCs to mount data files owned by samba_user (1005) on the host as www-data (33) inside the LXCs.

lxc.idmap: u 0 100000 33
lxc.idmap: g 0 100000 33
lxc.idmap: u 33 1005 1
lxc.idmap: g 33 1005 1
lxc.idmap: u 34 100034 65502
lxc.idmap: g 34 100034 65502

In order to allow the use of these settings, add this line

root:1005:1
to both /etc/subgid and /etc/subuid on the host.
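
In shell form:

echo "root:1005:1" >> /etc/subuid
echo "root:1005:1" >> /etc/subgid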

To preserve these LXC UID mappings, I wanted to restore this user with matching UID/GID.

groupadd -g 1005 samba_user
useradd -u 1005 -g 1005 samba_user

apcupsd:

/etc/default/apcupsd # type yes
/etc/apcupsd/apcupsd.conf # copy from backup
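
The second file can be copied straight from the backup:

cp -a $BAKPATH/etc/apcupsd/apcupsd.conf /etc/apcupsd/apcupsd.conf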

Restore remaining files#

These remaining files either did not change or needed only a small manual edit.

Container configs:

cp -a $BAKPATH/etc/pve/lxc/* /etc/pve/lxc/

Network/DNS:

/etc/resolv.conf # (not changed)

Datacenter:

/etc/pve/datacenter.cfg # change keyboard to de

Nodes config:

/etc/pve/nodes/monkey/config # Add ACME config

Special files:

/etc/pve/.vmlist # not necessary, automatically generated
/etc/pve/.version # not necessary, automatically generated
/etc/pve/.members # (not changed)
/etc/subuid # manual merge (add one line, for LXC uid mapping)
/etc/subgid # manual merge (add one line, for LXC gid mapping)
/etc/timezone # (not changed)
/etc/hostname # (not changed)

Apt sources:

/etc/apt/sources.list.d/pve-enterprise.list # not changed (or newer)

Verify#

Restore/Reissue subscription#

New hardware requires updating the subscription key9, if you have one.

Update#

Check#

SSL Setup#

I have a split-brain DNS setup, meaning that my services (Proxmox, etc.) can only be reached from the internal subnet, through a local DNS server. Proxmox gets SSL certificates through ACME, configured with the DNS challenge (DNS API credentials) for my A record/TLD.

Restore acme.conf

cp $BAKPATH/etc/pve/priv/acme/default /etc/pve/priv/acme
cp $BAKPATH/etc/pve/priv/acme/plugins.cfg /etc/pve/priv/acme

Configs appear in the GUI: check.

Check Certificate Management.

reboot

Go to Node/Certificates/ACME.

Prepare VM start#

ZFS unlock/mount:

zfs mount -l tank_hdd/data
zfs mount -l tank_ssd/lxc

zpool list
> NAME       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
> rpool      232G  1.43G   231G        -         -     0%     0%  1.00x    ONLINE  -
> tank_hdd  43.7T  5.74T  37.9T        -         -     0%    13%  1.00x    ONLINE  -
> tank_ssd   464G  69.3G   395G        -         -    11%    14%  1.00x    ONLINE  -

Looks good.

dmesg -wHT
nothing unusual

Check whether bypassing backup and restore10 applies to you:

pvesm add zfspool encrypted_zfs -pool tank_ssd/lxc
encrypted_zfs appears in the storage list

Conclusion#

Any customization makes migration, backup and restore more tedious. I was not even sure whether I should publish my notes, since the process is highly custom. But maybe someone can benefit from this, as a starting point.

Additions#

Restoring the pmxcfs database file#

A friendly user12 pointed out that it is also possible to restore the pmxcfs database file directly, which backs all configuration files in /etc/pve11:

  1. Install a new Proxmox server with the same name
  2. systemctl stop pve-cluster
  3. cp config.db.bak /var/lib/pve-cluster/config.db
  4. reboot

This is also described as a shortcut in the Proxmox docs11, but I have not tried it so far and thus cannot report how well it works. Comments?


  1. Installation as ZFS Root File System pve.proxmox.com/pve-docs 

  2. Upgrade from 6.x to 7.0 pve.proxmox.com/wiki/ 

  3. "3 copies of the data, stored on 2 different types of storage media, and one copy should be kept offsite" 3-2-1 rule.) 

  4. Proxmox backup script. github.com/DerDanilo/proxmox-stuff 

  5. "Because your boot drive should be connected to the SATA interface and only data drives should be connected to the SAS controller." truenas.com/community 

  6. Proxmox docs: Installation media pve.proxmox.com/wiki 

  7. ISO Images Download proxmox.com/en/downloads 

  8. Proxmox Installation pve.proxmox.com/wiki 

  9. Updating the Proxmox subscription key shop.proxmox.com 

  10. Bypassing backup and restore when upgrading pve.proxmox.com/wiki 

  11. Proxmox Cluster File System (pmxcfs) pve.proxmox.com/wiki 

  12. User spirit: How to backup Proxmox configuration files forum.proxmox.com