Homelab 📖 20 min read

ESXi Went Paid, So I Moved Everything to Proxmox

When Broadcom killed the free ESXi license, I had to move 11 VMs somewhere. I'd been running ESXi on a Dell OptiPlex 7050 for three years with zero complaints. Then one morning I'm reading about the licensing changes and doing the math on what VMware wants per socket now. No thanks. I tested Proxmox, XCP-ng, and just running everything in Docker. Here's what happened.

🛠️ Before You Start

💻 Hardware: Mini PC (2+ cores, 8GB+ RAM) or used server, SSD recommended
📦 Software: Proxmox VE 8.x ISO (based on Debian 12)
⏱️ Estimated Time: 1-3 hours

Migration scorecard:

  • Proxmox — Migration effort: Low (qm import worked for most VMs). Feature gap from ESXi: Minor (live migration is clunkier, no VCSA equivalent). Would I recommend: Yes, grudgingly.
  • XCP-ng — Migration effort: Medium (needed disk format conversion). Feature gap from ESXi: Moderate (Xen Orchestra is nice but the community moves slow). Would I recommend: Maybe, if you like Xen.
  • Docker-only — Migration effort: High (had to containerize everything). Feature gap from ESXi: Large (no real VM support, no Windows guests). Would I recommend: Only if all your stuff is Linux containers.

I ran all three for about two weeks each before committing to Proxmox. It wasn't love at first sight.

If you're coming from ESXi, your hardware probably already meets these requirements. For anyone starting fresh:

           Minimum                       Recommended
  CPU      64-bit with VT-x/AMD-V        4+ cores, Intel VT-d/AMD-Vi
  RAM      4GB                           32GB+ (more is better)
  Storage  32GB boot + separate data     NVMe for VMs, HDD for data
  Network  1 Gbps                        2.5 Gbps or multiple NICs

Popular hardware choices:

  • Intel N100 mini PCs: ~$150, 4 cores, low power, great for starters
  • Used Dell Optiplex: ~$100, often 6+ cores, tons of RAM slots
  • Lenovo ThinkCentre Tiny: Space-efficient, very quiet

What I miss from ESXi:

The vSphere thick client was ugly but it was fast. Proxmox's web UI is fine for daily stuff, but when I need to do something across multiple nodes I really miss having VCSA-style centralized management. Proxmox has cluster view, sure, but managing 3+ nodes without a subscription feels like they want you to pay up. Which, fair enough, but ESXi Free never pulled that.

vMotion just worked. Proxmox has live migration and it technically works, but it's slower and I've had it fail twice on VMs with USB passthrough. Had to shut down, migrate cold, bring back up. Not the end of the world, but annoying when you've been spoiled.

The VMDK ecosystem. Every appliance vendor shipped OVA files that dropped right into ESXi. Proxmox can import them, but it's always a conversion step, and sometimes the virtual hardware doesn't map cleanly. I lost an afternoon to a pfSense OVA that wouldn't boot until I switched the disk controller.
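For the curious, the import dance usually looks something like this. The file names and VM ID below are examples, but qm importovf and qm importdisk are the actual Proxmox tools for the job:

```shell
# An OVA is just a tar archive: an OVF manifest plus one or more VMDKs
tar -xvf appliance.ova

# Import the whole appliance from its OVF manifest
# (creates VM 105 and converts the disks onto local-lvm)
qm importovf 105 appliance.ovf local-lvm

# Or import a single VMDK into an existing VM as an unused disk,
# then attach it via Hardware -> Unused Disk -> Edit
qm importdisk 105 disk1.vmdk local-lvm
```

If the imported VM won't boot, try re-attaching the disk on a different controller (SATA instead of VirtIO SCSI) before digging any deeper; that was the fix for my pfSense appliance.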

Installation

Download and Create USB

  1. Download ISO from Proxmox.com/downloads
  2. Flash to USB with Rufus (Windows) or Balena Etcher (any OS)
  3. Boot from USB

Install Wizard

  1. Accept license agreement
  2. Select target disk (will be erased)
  3. Set country/timezone
  4. Set root password and email
  5. Configure network:
    • Hostname: pve.local
    • IP: Static IP on your LAN (e.g., 192.168.1.100)
    • Gateway: Your router IP
    • DNS: Your router or 1.1.1.1

Installation takes 5-10 minutes. Faster than ESXi's installer, honestly. Reboot when done.

Access Web Interface

From any browser: https://192.168.1.100:8006

Login: root with your password. Ignore the subscription warning (click OK, configure silent removal later).

Post-Install Configuration

Remove Subscription Nag

Proxmox is free but nags you about subscriptions every time you log in. Ironic — I left VMware because of licensing nonsense and the first thing Proxmox does is ask me to pay. At least you can disable it:

Bash (SSH to Proxmox)
# Comments out the "No valid subscription" dialog in the web UI's JavaScript.
# Updates to proxmox-widget-toolkit restore the file, so re-run this after upgrades.
sed -Ezi.bak "s/(Ext\.Msg\.show\(\{[^}]+?title\s*:\s*gettext\('No valid sub)/void\(\{ \/\/\1/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy

Enable Community Repository

For updates without subscription:

Bash
# Disable enterprise repo
mv /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.bak

# Add no-subscription repo
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt upgrade -y

Creating Your First VM


Upload ISO

  1. Datacenter → Storage → local → ISO Images
  2. Upload or Download from URL (paste Ubuntu/Debian ISO URL)

Create VM

  1. Click "Create VM" button (top right)
  2. General: Name it (e.g., "Ubuntu-server"), pick VM ID (100+)
  3. OS: Select your uploaded ISO
  4. System: Default is fine (QEMU agent: enable if installing Linux)
  5. Disks: Set size (32GB minimum for Linux), enable "Discard" for SSDs
  6. CPU: 2+ cores, type: host (best performance)
  7. Memory: 2048MB minimum, more for heavier use
  8. Network: Default bridge is fine
  9. Finish and start VM

Click Console to see the VM's display. Install the OS normally.
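Everything the wizard does can also be scripted with qm, which is handy once you're creating VMs regularly. A rough CLI equivalent of the steps above (the VM ID, ISO file name, and sizes are examples):

```shell
qm create 100 \
  --name ubuntu-server \
  --ostype l26 \
  --cores 2 --cpu host \
  --memory 2048 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32,discard=on \
  --ide2 local:iso/ubuntu-24.04-live-server-amd64.iso,media=cdrom \
  --net0 virtio,bridge=vmbr0 \
  --agent enabled=1 \
  --boot order='scsi0;ide2'
qm start 100
```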

LXC Containers (Lightweight Alternative)

This is the one area where Proxmox is genuinely better than ESXi was. ESXi had no LXC equivalent — everything was a full VM. In Proxmox, I moved six of my Linux services out of VMs and into LXC containers. Pi-hole went from using 1GB in a VM to 64MB in a container. XCP-ng doesn't have anything like this either — their container story is basically "run Docker inside a VM," which defeats the purpose. For anything Linux-based that doesn't need its own kernel, LXC is the right call.

  • Start in 2 seconds vs 30+ for VMs
  • Use 50-100MB RAM for basic services
  • Share host kernel (more efficient)

Download Container Template

  1. Datacenter → Storage → local → CT Templates
  2. Click "Templates" button
  3. Download Ubuntu or Debian (most compatible)

Create Container

  1. Click "Create CT"
  2. General: Name, set root password
  3. Template: Select downloaded template
  4. Disks: 8GB is enough for basic services
  5. CPU: 1 core sufficient for most
  6. Memory: 512MB-1024MB
  7. Network: Configure with static IP
  8. Start container

SSH in with ssh root@container-ip — it's a full Linux environment.
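As with VMs, there's a CLI equivalent: pct. A sketch of the steps above — the template file name, container ID, and addresses are examples (list available templates with pveam available):

```shell
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname pihole \
  --cores 1 --memory 512 --swap 512 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.53/24,gw=192.168.1.1 \
  --unprivileged 1 \
  --password 'change-me'
pct start 200
```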

Docker Inside LXC (The Thing That Made Migration Worth It)

This is what sold me on staying with Proxmox instead of going Docker-only on bare metal. You get Docker inside an LXC container with almost no overhead:

Container Settings (GUI)
# Options → Features → Enable:
✓ Nesting
✓ FUSE (for some Docker images)
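The same flags can be set from the shell; 200 is an example container ID:

```shell
pct set 200 --features nesting=1,fuse=1
pct reboot 200   # features only take effect on a fresh start
```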


Bash (Inside Container)
apt update && apt install -y curl
curl -fsSL https://get.docker.com | sh

# Verify
docker run hello-world

Docker inside an LXC container. Uses way less RAM than a full VM running Docker, and I haven't hit any compatibility issues with Pi-hole, Jellyfin, or Uptime Kuma.

Storage Options

Local Storage Types

  • LVM: Default for VMs, good performance
  • LVM-Thin: Over-provisioning, snapshots (recommended)
  • Directory: Simple folder, works for ISOs and backups

Adding More Disks

If you add a second drive:

  1. Datacenter → [Your Node] → Disks
  2. Click the disk → Initialize Disk with GPT
  3. Datacenter → Storage → Add → LVM-Thin
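From the shell, the same steps look like this. /dev/sdb is a placeholder; confirm the device name with lsblk first, because these commands wipe it:

```shell
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -l 95%FREE -T data/datapool
pvesm add lvmthin data-thin --vgname data --thinpool datapool
```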

Backup Strategy

Proxmox has built-in backup. Set up automatic backups:

  1. Datacenter → Backup → Add
  2. Schedule: Daily at 3 AM
  3. Storage: local or external NAS
  4. Mode: Snapshot (for running VMs)
  5. VMs/CTs: Select all or specific

Backups are compressed and you can restore from the GUI. This part is roughly on par with what ESXi offered through ghettoVCB, except it's built-in instead of a community hack.
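The scheduler is a front-end for vzdump, which you can also run by hand. A one-off snapshot-mode backup (the IDs are examples):

```shell
# Back up VM 100 and CT 200 while they run, compressed with zstd
vzdump 100 200 --mode snapshot --compress zstd --storage local
```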

Networking

Bridge (Default)

All VMs/containers get IPs on your LAN. Simple and works for most cases.

VLANs

For network segmentation (IoT devices separate from main network):

  1. Edit /etc/network/interfaces or use GUI
  2. Create VLAN-aware bridge
  3. Assign VLAN tags to VM network interfaces

Requires a managed switch that supports VLANs.
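For reference, a VLAN-aware bridge in /etc/network/interfaces looks roughly like this (enp1s0 and the addresses are placeholders):

```
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.100/24
    gateway 192.168.1.1
    bridge-ports enp1s0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

After that, each VM's network interface just gets a VLAN Tag in its device settings.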

Passing Through Devices

USB Passthrough

Pass a Zigbee coordinator to Home Assistant VM:

  1. Stop VM
  2. Hardware → Add → USB Device
  3. Select device by ID (stable across reboots)
  4. Start VM

GPU Passthrough

For transcoding in Jellyfin or local AI:

  1. Enable IOMMU in BIOS
  2. Add kernel parameters: intel_iommu=on (plus iommu=pt is common; on AMD boards the IOMMU is typically enabled by default on recent kernels)
  3. Hardware → Add → PCI Device → Select GPU

GPU passthrough was easier on ESXi in my experience. On Proxmox, IOMMU groups can be messy depending on your motherboard, and you might need to patch the ACS override. The Proxmox wiki covers it, but expect some trial and error.
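On a GRUB-booted install, the kernel-parameter step means editing /etc/default/grub and regenerating the config (systemd-boot installs edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

```shell
# In /etc/default/grub, extend the default kernel command line, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
update-grub
reboot

# After the reboot, confirm the IOMMU is active and inspect the groups
dmesg | grep -e DMAR -e IOMMU
find /sys/kernel/iommu_groups/ -type l
```

If your GPU shares an IOMMU group with devices you can't pass through, that's when the ACS override patching mentioned above comes into play.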

What My Setup Looks Like After Migration

The OptiPlex that ran ESXi now runs Proxmox. I consolidated some VMs into LXC containers during the migration, so the 11 ESXi VMs turned into 4 LXC containers and 1 VM. Later moved the whole thing to an Intel N100 mini PC (16GB RAM, 500GB NVMe) because the OptiPlex was drawing too much power for what it was doing:

  • LXC: Docker-Host (2 cores, 4GB RAM)
    • Pi-hole
    • Jellyfin
    • Uptime Kuma
  • LXC: Home-Assistant (2 cores, 2GB RAM)
  • VM: OPNsense (2 cores, 2GB RAM) — router
  • LXC: Dev-Server (2 cores, 4GB RAM) — testing

Power draw: ~15W average. Uptime: over 8 months and counting.

Common Issues

VM won't boot

  • Check boot order (Hardware → BIOS)
  • Make sure UEFI mode matches ISO expectation
  • Check disk controller (VirtIO for Linux, IDE for Windows install, then switch)

Container networking broken

  • Verify bridge has correct IP range
  • Check firewall isn't blocking
  • Make sure gateway is set correctly

High CPU on host

  • Check which VM/CT is consuming — use top or "Summary" page
  • Consider CPU limits on containers
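Capping a noisy container from the shell, with 105 as an example ID:

```shell
# Limit CT 105 to the equivalent of one full core, without changing its core count
pct set 105 --cpulimit 1
```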

Where Things Stand

All 11 VMs migrated. Took a weekend. Most of the time was spent converting disk images and fixing one pfSense OVA that refused to boot under QEMU. Proxmox works. The web UI is good enough, LXC containers are a genuine upgrade over having everything in full VMs, and the community repo gives me updates without paying Proxmox for a subscription I don't need on a single homelab node.

I don't love it the way I loved ESXi Free, but Broadcom made that choice for me. If you're in the same boat — ESXi license expired, looking at VMware's new pricing, doing the math — just install Proxmox and move on with your life. It's fine. It's not exciting, it's not a revelation, it's just the least bad option that also happens to be free.

💬 Comments