TSF – Comprehensive IT solutions for SMB businesses | HCM

High Availability with Ceph Failover Test on Proxmox

In this video, we demonstrate how to achieve High Availability in Proxmox VE 9 using Ceph storage with failover testing. Learn step-by-step how to configure your cluster to automatically migrate VMs when a node fails. This tutorial covers Ceph replication, HA settings, and real-time failover scenarios. See how to protect your critical workloads and minimize downtime. Perfect for IT professionals, home lab enthusiasts, and anyone managing production VMs. Understand how Ceph ensures data integrity across multiple nodes. Follow along to watch live failover and recovery in action. Build a resilient Proxmox environment and keep your virtual machines running without interruption.

3.1. Preparation

Prepare 3 PVE nodes. Each node has 3 disks (disk 1 holds the PVE OS; the remaining 2 disks are used for Ceph).
• pve01: disks 2 and 3: 30 GB : 192.168.16.200
• pve02: disks 2 and 3: 40 GB : 192.168.16.201
• pve03: disks 2 and 3: 45 GB : 192.168.16.202

#1. Since this lab runs the PVE nodes as nested VMs on Proxmox, set a serial for each Ceph disk by appending a serial= option to its disk line in the VM config (a physical hard drive already has a serial).
nano /etc/pve/qemu-server/102.conf
serial=DISK05
serial=DISK06

nano /etc/pve/qemu-server/103.conf
serial=DISK03
serial=DISK04

nano /etc/pve/qemu-server/104.conf
serial=DISK01
serial=DISK02
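
For reference, the serial is not a standalone line: it is appended as an option on the disk entry itself inside the VM config. A minimal sketch of what one entry in 102.conf might look like (the storage name, bus, and image names here are assumptions for illustration):

scsi1: local-lvm:vm-102-disk-1,serial=DISK05,size=30G
scsi2: local-lvm:vm-102-disk-2,serial=DISK06,size=30G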

#2. The pve01 node hosts a Windows 10 VM (the VM we will later protect with HA).
#3. Make sure all PVE nodes have the same (synchronized) time:
timedatectl status
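
If a node's clock is off, enabling NTP synchronization is usually enough. A small sketch, assuming systemd-timesyncd or chrony is present on the node:

timedatectl set-ntp true
timedatectl status    # check that "System clock synchronized: yes"
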
#4. Prepare the disks for the Ceph OSDs. List the disks first so you do not wipe the wrong one:
lsblk
fdisk -l
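
If a Ceph disk still carries an old partition table or filesystem signature, OSD creation may fail. A hedged sketch of cleaning it, assuming the two Ceph disks show up as /dev/sdb and /dev/sdc (verify with lsblk first, this is destructive):

# WARNING: wipes all signatures on the named disks - run only on the empty Ceph disks
wipefs -a /dev/sdb
wipefs -a /dev/sdc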

3.2. Install Ceph


Step 1: Create cluster 3 nodes

On pve01, create the cluster with the name tsf:
pvecm create tsf

Copy pve01's IP address and hostname and add them to the /etc/hosts file on pve02 and pve03:
192.168.16.200 pve01zfs.tsf.id.vn pve01zfs

On pve02 and pve03, join the cluster:
pvecm add pve01zfs.tsf.id.vn
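
After joining, confirm that all three nodes see each other and that the cluster has quorum; both are standard pvecm subcommands:

pvecm status    # expect "Quorate: Yes" with 3 votes
pvecm nodes     # should list all three nodes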

Step 2: Install Ceph

In the GUI, go to Datacenter → Ceph → Install Ceph.

Install Ceph the same way on the remaining two PVE nodes.
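
The same installation and initialization can also be done from the shell. A minimal sketch, assuming the no-subscription repository and 192.168.16.0/24 as the Ceph network (adjust to your environment):

pveceph install --repository no-subscription    # run on every node
pveceph init --network 192.168.16.0/24          # run once, on the first node only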

Step 3: Create Ceph mon

Add a monitor (mon) on each node.
Add a manager (mgr).
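
The equivalent shell commands are standard pveceph subcommands:

pveceph mon create    # run on each node that should host a monitor
pveceph mgr create    # at least one manager; additional managers act as standby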

Step 4: Create Ceph OSDs

Create an OSD on each node for each of its two Ceph disks.
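
From the shell this is one pveceph call per disk. A sketch assuming the Ceph disks appear as /dev/sdb and /dev/sdc on every node:

pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
ceph osd tree    # afterwards, verify all 6 OSDs are up and in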

Step 5: Create the Ceph pool

Create the pool only once, on a single node; it is shared across the whole cluster.
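
A CLI sketch of creating a replicated pool and registering it as a PVE storage; the pool name ceph-pool and the 3/2 replication values are assumptions matching Ceph's defaults:

pveceph pool create ceph-pool --size 3 --min_size 2 --add_storages 1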

3.3. Create Ceph HA

Step 1: Move the VM disk to Ceph storage

👉 Important notes:
• Moving the disk takes time, depending on its size.

• The VM can stay powered ON during the move (online move is OK).
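
The move can also be started from the shell. A minimal sketch, assuming the Windows VM has VMID 100, its disk is scsi0, and the Ceph pool was added as storage ceph-pool (all three names are assumptions):

qm disk move 100 scsi0 ceph-pool --delete 1    # --delete removes the source copy after a successful move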

Step 2: Add the VM to HA

Add the VM as an HA resource.

Add an HA preference (node-affinity) rule; a CLI sketch follows after the priority list below.

HA resource: select the HA-enabled VM.
Priority:
• pve01 = 3
• pve02 = 2
• pve03 = 1
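
The HA resource itself can be added from the shell with ha-manager; the node-affinity rule with the priorities above is easiest to configure in the GUI. The VMID 100 below is an assumption:

ha-manager add vm:100 --state started
ha-manager status    # the VM should now appear as an HA-managed service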

3.4. Simulate an HA failover test


Power off pve01 to simulate a node failure. The HA manager should recover the VM on pve02, the next node by priority.
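
While pve01 is down, the recovery can be watched from one of the surviving nodes:

ha-manager status    # the service should be fenced and then restarted on pve02
pvecm status         # the cluster should stay quorate with 2 of 3 votes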