Proxmox Homelab Cluster Using HP EliteDesk Mini PCs

Overview

For testing clustering, failover behavior, and general multi-node Proxmox setups, I wanted a small cluster that didn’t cost much but still had enough CPU and RAM to run real workloads. After looking around, I went with three HP EliteDesk 800 G3 Mini units. The hardware was cheap, small, and capable enough for what I needed: Proxmox, Ceph, and some lab VMs.

The main goal was to understand how a proper cluster behaves from the ground up instead of working with a prebuilt enterprise setup like I do at work.


Hardware

I bought three HP EliteDesk 800 G3 Minis.

Base specs per unit:

  • Intel Core i5-6500 (4 cores, 3.2 GHz base clock)
  • 16 GB DDR4
  • 256 GB SSD

Cost:

  • $212.50 for all three units (roughly $71 per node)

I already had spare RAM and 1 TB NVMe drives on hand.
After upgrades:

  • Each node at 32 GB RAM
  • Each node with a 1 TB NVMe for Ceph storage
  • Kept the original 256 GB SSDs for Proxmox OS drives

The combined hardware ended up being solid for a homelab cluster.


Initial Testing with VMware

Before moving to Proxmox, I tested VMware ESXi and vCenter on these nodes.

Setup included:

  • ESXi installed on each node
  • vCenter deployed
  • vSAN enabled across all three units

Performance was good considering the hardware, but the main issue was memory usage:

  • vCenter consumed too much RAM for a 32 GB node environment
  • vSAN overhead + vCenter basically left no room for actual workloads
  • Barely enough RAM for containers, let alone VMs

This wasn’t sustainable, so I wiped everything and moved the entire cluster to Proxmox.


Proxmox Cluster Setup

I installed Proxmox VE on all three units, using the 256 GB SSDs as OS drives.

After installation:

  • Joined all three nodes into a Proxmox cluster
  • Added each node’s 1 TB NVMe as a Ceph OSD and created a shared storage pool
  • Configured monitors and managers
  • Let Ceph handle distributed storage

With Ceph running on the NVMes, I had plenty of room for VMs and container storage, and RAM usage stayed reasonable across all nodes.
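For reference, the setup roughly followed the command sketch below. This is a minimal outline, not a transcript of my exact session; the cluster name, IP address, network range, and device path are placeholders, and older Proxmox releases use slightly different subcommand names (e.g. createmon instead of mon create).

    # On the first node: create the cluster
    pvecm create homelab

    # On each remaining node: join the cluster (IP of an existing node)
    pvecm add 192.168.1.11

    # Confirm all three nodes are in and quorum is healthy
    pvecm status

    # Install and initialize Ceph (cluster network is a placeholder)
    pveceph install
    pveceph init --network 192.168.1.0/24

    # One monitor and one manager per node
    pveceph mon create
    pveceph mgr create

    # Add each node's 1 TB NVMe as an OSD (device path may differ)
    pveceph osd create /dev/nvme0n1

    # Create a replicated pool for VM and container disks
    pveceph pool create vmpool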


Ceph Behavior and Testing

Once Ceph was online, I tested:

  • Node failover
  • VM migration behavior
  • How long it took for a VM to resume after a node failure
  • General cluster stability

Fastest failover observed: ~55 seconds from node loss until the VM was back up and operational on another node.

For homelab testing, this was more than enough to understand how quorum, monitors, OSDs, and placement groups behave during failures or maintenance.
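The failover and migration tests boiled down to a handful of commands like the ones sketched here. The VM ID (100) and target node name (pve2) are placeholders; the actual VMs and nodes I used varied between tests.

    # Watch Ceph health and OSD layout while pulling a node offline
    ceph -s
    ceph osd tree

    # Put a test VM under HA management so it restarts elsewhere on node failure
    ha-manager add vm:100 --state started
    ha-manager status

    # Live-migrate a running VM between nodes (shared Ceph storage makes this fast)
    qm migrate 100 pve2 --online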


Decision and Current Setup

After working with the full cluster and testing different failure scenarios, I decided not to keep a permanent 3-node Ceph cluster running. For my use case:

  • I don’t need high availability
  • I don’t need live migration
  • I don’t need distributed storage 24/7

Instead, I kept one main Proxmox node for daily use and set up a Proxmox Backup Server on another system so I can restore workloads if needed.

This keeps the environment simple while still giving me experience with clustering, Ceph, and Proxmox features when I want to spin the cluster back up.
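Hooking the main node up to the backup server is straightforward; a rough sketch is below. The storage ID, hostname, datastore name, fingerprint, and backup timestamp are all placeholders for illustration.

    # Register the Proxmox Backup Server as a storage target on the main node
    pvesm add pbs pbs-store --server pbs.lan --datastore homelab \
        --username root@pam --password <secret> --fingerprint <PBS fingerprint>

    # Back up a VM to it
    vzdump 100 --storage pbs-store --mode snapshot

    # List available backups and restore one when a workload needs to come back
    pvesm list pbs-store
    qmrestore pbs-store:backup/vm/100/<timestamp> 100 --storage local-lvm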


Notes

  • HP Mini systems are great for low-cost homelab clusters
  • 32 GB RAM per node was enough for testing real VMs and Ceph
  • Ceph performed well on the 1 TB NVMes, even over 1 Gbps networking
  • Failover testing gave a good understanding of how enterprise clusters behave without using production hardware
  • Switching to a single Proxmox node + backup server reduces power usage and complexity while still supporting everything I need