Moving Debian 10 from Hyper-V to ESXi
I moved a Debian 10 VM from Hyper-V to ESXi using the VMware vCenter Converter with the P2V or remote Linux machine option. It was challenging because Debian is not a supported distribution.
I'm shrinking my home network, moving everything onto ESXi, and dumping a lot of stuff. I'm building all new VMs for most things, but there were two that I didn't want to rebuild. I tried using the VMware vCenter Converter but kept getting the error The destination does not support EFI firmware. My Google-fu failed me. The only thing I could find was "you can't" and "it's not supported". I don't accept that.
A quick outline of the issue: the converter tool doesn't recognize Debian as a distribution, so when it queries the ESXi host to see if EFI is supported, it requests it as a linuxOs and otherDistro. There are also some log entries stating Unrecognized guest OS id 'debian10-64'. Falling back to 'otherlinux'. You can see it all in the log files for the worker; those files are stored at C:\ProgramData\VMware\VMware vCenter Converter Standalone\logs.
The solution I came up with to migrate my systems was this:
- Fake the converter into thinking it is Ubuntu. This got it to clone but it failed to reconfigure the grub bootloader. It did copy the VM over, so that was good.
- Fix the bootloader so it will boot. This is where the converter died.
- Fix the network interfaces since the interface names changed.
I'll go over this all step by step. You will need a Debian live CD image; I would recommend the one that does not have a GUI. Other images may work, but this guide may need some adjusting.

Before we do anything, back up your current system so you can restore it if needed. We only make one change to your original host; the converter may make others. I don't think it does, but I can't be certain.
On to the meat of this post and actually doing the work.
Fake the converter to think it's Ubuntu and not Debian
The file the converter looks at to determine the OS is /etc/os-release. We will be messing with that file, so make a copy and store it in root's home directory. This is the only change we make to the original host.
sudo cp /etc/os-release /root
Now edit the file. It is only modifiable by root, so you'll need to sudo it.
sudo vi /etc/os-release
The contents of my file are this:
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
Change the ID line to ubuntu and the VERSION_ID line to 20.04 and save the file. The result is this:
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="20.04"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=ubuntu
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
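If you would rather make the change non-interactively, a sed one-liner like this should do it (a sketch, assuming your file matches the stock Debian 10 contents shown above):
sudo sed -i -e 's/^ID=debian/ID=ubuntu/' -e 's/^VERSION_ID="10"/VERSION_ID="20.04"/' /etc/os-release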
Get current disk UUIDs
The disk UUIDs change during the conversion. You will need your current ones for this whole process to work; you'll use them at the end to figure out which values to replace.
ls -l /dev/disk/by-uuid/
The output will be something like this:
lrwxrwxrwx 1 root root 10 Feb 22 19:30 87389f7c-3716-408b-9e72-04361a61d6da -> ../../dm-0
lrwxrwxrwx 1 root root 10 Feb 22 19:30 8da7a68a-b172-4e5a-a866-533e15a62d66 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Feb 22 19:48 9900-ACED -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 22 19:48 e9a3efb0-5073-4983-ace9-2d8e88979340 -> ../../sda2
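If you prefer, blkid shows the same UUIDs along with the filesystem types, which can make matching things up later a little easier:
sudo blkid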
Get mounts for chroot environment
We need to know what is mounted where. Run this on your source machine:
sudo mount | grep "^/dev"
The output should be something like this:
/dev/mapper/counterstrike--vg-root on / type ext4 (rw,relatime,errors=remount-ro)
/dev/sda2 on /boot type ext2 (rw,relatime,stripe=4)
/dev/sda1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
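Since you will need both of these outputs later, while the original VM is shut down, it is worth stashing them somewhere off the machine. A quick sketch (the destination host is just a placeholder):
ls -l /dev/disk/by-uuid/ > /tmp/source-disk-info.txt
sudo mount | grep "^/dev" >> /tmp/source-disk-info.txt
scp /tmp/source-disk-info.txt user@some-other-box:   # replace with a real host, or just keep a screenshot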
Convert to ESXi
Using the VMware vCenter Converter, import or convert the VM or physical server to the ESXi host.
Note: during the conversion I had to change a few options in the Helper VM network section for it to connect to my source machine. I had to disable IPv6 and add my DNS suffixes on the DNS tab. Sometimes even that would not work, and I had to reference the source machine by IP address.
The conversion will most likely fail at the very end with the error Unable to reconfigure the destination virtual machine. This is fine. Do not worry, we will fix it; the problem is the grub boot loader.
Fix the boot loader
The next part of the process is fixing the boot loader.
Start by booting your new VM from the Debian live image.
Note: We will be dealing with some long random values, so I recommend starting an SSH session into your live environment and doing this work from there. I cover that in the post linked below.
Next, we will chroot into your environment. This may be different based on your original system; mine used an LVM volume group. If needed, I cover how to activate an LVM volume in another post linked below. You can tell whether you have one by looking at the mount path for /: if it is under /dev/mapper/, you will likely need to deal with LVM volumes.
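As a quick reference (the linked post has the details), activating an LVM volume group from the live environment usually looks something like this, assuming a volume group named counterstrike-vg like mine (you may need to install lvm2 on the live image first):
sudo vgscan
sudo vgchange -ay counterstrike-vg   # use your own volume group name
sudo lvs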
Based on the output of the mount command from earlier, start mounting things into the /mnt directory. You need your root filesystem and everything up to the /boot/efi mount.
Note: Do not mount them at the root /. If you do, you will most likely break something in your new virtual machine and will need to re-convert it.
Following from my example above, we need to mount /, /boot and /boot/efi.
sudo mount /dev/mapper/counterstrike--vg-root /mnt
sudo mount /dev/sda2 /mnt/boot
sudo mount /dev/sda1 /mnt/boot/efi
Now that we have your filesystems mounted, we need to mount some virtual filesystems so grub will install.
for i in /dev /dev/pts /proc /sys /sys/firmware/efi/efivars /run; do sudo mount -B $i /mnt$i; done
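Before chrooting, it does not hurt to confirm everything landed where expected; findmnt (part of util-linux, so it should be on the live image) will show the tree under /mnt:
findmnt -R /mnt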
Now chroot into your system.
sudo chroot /mnt
Now that we are in here, we need to update the UUIDs of the filesystems in the fstab file.
Get the new ones with ls -la /dev/disk/by-uuid. The output will be like the one from your source machine; mine looks like this:
total 0
drwxr-xr-x 2 root root 140 Feb 22 21:11 .
drwxr-xr-x 8 root root 160 Feb 22 20:58 ..
lrwxrwxrwx 1 root root 9 Feb 22 20:58 2021-02-06-12-03-45-00 -> ../../sr0
lrwxrwxrwx 1 root root 10 Feb 22 20:58 6034-0171 -> ../../sda1
lrwxrwxrwx 1 root root 10 Feb 22 20:58 6ecda273-f712-4fef-ba81-d0f9e5c843a2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Feb 22 21:11 7c3bcedb-e16a-4735-b7c2-82c6e685d1e2 -> ../../dm-1
lrwxrwxrwx 1 root root 10 Feb 22 21:11 8205da38-1d33-4361-a66b-ddbda205e997 -> ../../dm-0
Now that we have both the old UUIDs and the new ones, we can update the fstab file. Open the fstab file in your editor; I use vi, so that is what I will show.
vi /etc/fstab
We will now update the entries here to the new values. The UUIDs are the big nasty-looking strings of letters and numbers. The key to matching them up is the device they point to. For example, my original sda1 was 9900-ACED and is now 6034-0171; sda2 was e9a3efb0-5073-4983-ace9-2d8e88979340 and is now 6ecda273-f712-4fef-ba81-d0f9e5c843a2.
Originally the contents of my fstab file were this:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/counterstrike--vg-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda2 during installation
UUID=e9a3efb0-5073-4983-ace9-2d8e88979340 /boot ext2 defaults 0 2
# /boot/efi was on /dev/sda1 during installation
UUID=9900-ACED /boot/efi vfat umask=0077 0 1
/dev/mapper/counterstrike--vg-swap_1 none swap sw 0 0
Now it is this:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/counterstrike--vg-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda2 during installation
UUID=6ecda273-f712-4fef-ba81-d0f9e5c843a2 /boot ext2 defaults 0 2
# /boot/efi was on /dev/sda1 during installation
UUID=6034-0171 /boot/efi vfat umask=0077 0 1
/dev/mapper/counterstrike--vg-swap_1 none swap sw 0 0
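If you would rather not hand-edit the file, the same swap can be done with sed from inside the chroot (these are my example UUIDs; substitute your own old and new pairs):
sed -i 's/e9a3efb0-5073-4983-ace9-2d8e88979340/6ecda273-f712-4fef-ba81-d0f9e5c843a2/g' /etc/fstab   # old UUID -> new UUID
sed -i 's/9900-ACED/6034-0171/g' /etc/fstab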
Save and quit your editor; in vi you can press the escape key and then type :wq.
Now we need to update the grub.cfg file.
vi /boot/efi/EFI/debian/grub.cfg
Change that UUID to the correct one. In my case it was e9a3efb0-5073-4983-ace9-2d8e88979340 and is now 6ecda273-f712-4fef-ba81-d0f9e5c843a2.
The contents were:
search.fs_uuid e9a3efb0-5073-4983-ace9-2d8e88979340 root hd0,gpt2
set prefix=($root)'/grub'
configfile $prefix/grub.cfg
And is now:
search.fs_uuid 6ecda273-f712-4fef-ba81-d0f9e5c843a2 root hd0,gpt2
set prefix=($root)'/grub'
configfile $prefix/grub.cfg
Now that the basics for grub itself have been fixed, we need to set the os-release file back to what it was.
cp /root/os-release /etc
After you have put the release file back, reinstall grub.
apt-get install --reinstall grub-efi
grub-install
update-grub
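If you want to confirm that an EFI boot entry was actually written, efibootmgr (normally pulled in alongside grub-efi) can list the firmware entries; this works inside the chroot because we bind-mounted efivars earlier:
efibootmgr -v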
Update network interfaces
With grub updated and reconfigured, you need to update your network configuration since the interface names have changed.
You can get the new interface names by using ip a. The output will look like this:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:c2:6e:88 brd ff:ff:ff:ff:ff:ff
inet ###.###.###.###/24 brd 172.16.40.255 scope global ens32
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fec2:6e88/64 scope link
valid_lft forever preferred_lft forever
Make note of the interface names. In this case, I only have one, ens32.
You can configure the interfaces in a number of different ways; the default that I'm aware of is the /etc/network/interfaces file.
vi /etc/network/interfaces
My old file was this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug eth0
iface eth0 inet static
address ###.###.###.###/24
gateway ###.###.###.###
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers ###.###.###.###
dns-search example.com
The new file is this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
allow-hotplug ens32
iface ens32 inet static
address ###.###.###.###/24
gateway ###.###.###.###
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers ###.###.###.###
dns-search example.com
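If you suspect the old interface name is referenced anywhere else (firewall scripts, custom services, and so on), a quick grep from inside the chroot will turn up stragglers; eth0 here is just my old name:
grep -r eth0 /etc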
Done!
Exit the chroot, reboot, and you should be done.
exit
sudo reboot
Links

