P22 - How To Disk Passthrough Guide on Proxmox 9
Assign a Physical Disk to Your Virtual Machine on Proxmox 9 (Step-by-Step)
Managing storage efficiently is a critical skill for any Proxmox administrator. In this tutorial, you will learn how to perform Disk Passthrough in Proxmox VE (PVE) and assign a physical disk directly to your virtual machine. This guide follows a safe, production-ready approach and works perfectly for home labs as well as enterprise environments.
Disk passthrough allows a VM to directly control a physical disk as if it were an internal hard drive. This method improves flexibility, enhances storage management, and ensures data portability between Proxmox hosts.
If you are running Windows, TrueNAS, ZFS, or any storage-focused VM, this technique is a game-changer.
📌 What Is Disk Passthrough in Proxmox?
Disk passthrough means assigning a physical disk or storage device directly to a VM (virtual machine). The VM will manage that disk independently — including partition creation, RAID configuration, and filesystem management.
➤ Example use case:
Assign the entire /dev/sdc drive to a VM running ZFS, TrueNAS, or Windows so the VM can create partitions and manage storage natively.
This approach ensures that data remains on the physical disk — not inside a virtual disk file.
🎯 1.1.1 Objectives
Our lab scenario:
• Proxmox A: Install a Windows 10 VM (or any OS) with drive C: (Operating System)
• Assign an additional physical disk /dev/sdb (passthrough) → VM sees drive D:
• Store data on drive D:
• If Proxmox A fails, move /dev/sdb to Proxmox B
• Create a new VM → passthrough /dev/sdb → VM still sees drive D: and all data remains intact
This setup ensures maximum portability and data protection.
🔧 1.1.2 Steps to Passthrough a Physical Disk (Example: /dev/sdb)
🔹 Step 1: Identify the Physical Disk
On Proxmox A, run:
lsblk -o NAME,SIZE,MODEL
Example output:
sda 100G SCSI0 (SSD, Proxmox OS)
sdb 70G SATA0 (HDD, disk to pass through)
⚠️ Important:
Do NOT use /dev/sdb directly because disk names can change after reboot.
Instead, use the persistent disk ID:
ls -l /dev/disk/by-id/
Example:
/dev/disk/by-id/ata-WDC_WD5000AAKX-00ERMA0_WD-WCC2EJ7XXXXX
/dev/disk/by-id/ata-QEMU_HARDDISK_QM00005
Always use the /dev/disk/by-id/ path for stability and reliability.
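To make the lookup explicit, the mapping from a kernel device name to its stable by-id path can be scripted. This is a minimal sketch: `find_disk_id` is a helper name introduced here (not a Proxmox command), and it assumes udev has populated `/dev/disk/by-id/` as it does on a standard Proxmox host.

```shell
# Sketch: resolve the persistent by-id path for a device node.
# find_disk_id is a hypothetical helper; it assumes udev has
# populated /dev/disk/by-id (standard on a Proxmox host).
find_disk_id() {
  dev="$(readlink -f "$1")"
  for id in /dev/disk/by-id/*; do
    [ -e "$id" ] || continue                     # glob matched nothing
    if [ "$(readlink -f "$id")" = "$dev" ]; then
      echo "$id"                                 # print the stable path
      return 0
    fi
  done
  return 1                                       # no by-id link found
}

# Usage on the Proxmox host:
# find_disk_id /dev/sdb
```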
🔹 Step 2: Modify VM Configuration
Suppose:
• VM ID: 100 (Windows 10)
• Disk path: /dev/disk/by-id/ata-QEMU_HARDDISK_QM00005
Open the VM configuration file:
nano /etc/pve/qemu-server/100.conf
Add the following line:
scsi1: /dev/disk/by-id/ata-QEMU_HARDDISK_QM00005,cache=writeback
If you prefer SATA:
sata1: /dev/disk/by-id/ata-WDC_500G,cache=writeback
Explanation:
• cache=writeback improves performance (similar to virtual disk behavior)
• Use cache=none if maximum data integrity is required
• SCSI is recommended
• SATA is used when compatibility with older operating systems is required
• IDE is slow
• VirtIO requires Windows driver installation
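The configuration line can also be built from the by-id path and, as an alternative to editing the file by hand, written with Proxmox's `qm` CLI (which validates the syntax). A small sketch, using the example disk ID and VM 100 from above:

```shell
# Build the passthrough entry from the stable by-id path.
DISK_ID="/dev/disk/by-id/ata-QEMU_HARDDISK_QM00005"
CONF_LINE="scsi1: ${DISK_ID},cache=writeback"

# This is exactly the line to append to /etc/pve/qemu-server/100.conf:
echo "$CONF_LINE"

# Alternative on the Proxmox host: let qm write and validate the entry.
#   qm set 100 --scsi1 ${DISK_ID},cache=writeback
```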
🔹 Step 3: Restart the Virtual Machine
Shut down and start the VM so the new configuration is applied. After it boots:
• Windows 10 will detect the new disk as an additional drive (drive D:)
• If it is a new disk → format it
• If it already contains data → Windows will immediately mount and recognize it
Your disk passthrough is now successfully configured.
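The restart can also be driven from the host with the `qm` CLI. Note that a reboot initiated inside the guest OS does not restart the QEMU process, so a full stop/start from the host is the reliable way to attach the new disk:

```shell
# Apply the new configuration with a full stop/start of VM 100.
# (A reboot triggered inside the guest does not restart QEMU,
# so the newly added disk would not appear.)
qm shutdown 100 && qm start 100

# Confirm the passthrough entry is active in the VM config:
qm config 100 | grep '^scsi1:'
```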
🔄 1.1.3 Restore Passthrough Disk on Another Proxmox Host
If Proxmox A fails:
1. Remove the passthrough disk (/dev/sdb) from machine A
2. Connect it to Proxmox B
3. Verify the disk ID using:
lsblk
ls /dev/disk/by-id/
4. Create a new Windows VM (or restore from backup if available)
5. Edit the new VM's .conf file and add the passthrough line exactly as before
When the VM boots:
→ Windows will detect drive D:
→ All data remains intact
→ No reconfiguration required inside Windows
Because the data resides on the physical disk itself — not inside a virtual disk file.
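On Proxmox B the recovery condenses to a few commands. This is a sketch: VM ID 101 is hypothetical, and the by-id path is the same example ID used earlier (the by-id name travels with the disk, not the host):

```shell
# On Proxmox B, after physically attaching the disk:
ls -l /dev/disk/by-id/    # same by-id name as on Proxmox A

# Attach the disk to a freshly created VM (ID 101 is hypothetical):
qm set 101 --scsi1 /dev/disk/by-id/ata-QEMU_HARDDISK_QM00005,cache=writeback
qm start 101
```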
⚠️ 1.1.4 Extremely Important Notes
| Issue | Explanation |
|---|---|
| Do NOT format disk on host | The VM manages the disk. The Proxmox host must not modify it. |
| Do NOT mount disk on host | Mounting and passthrough simultaneously will corrupt data. |
| Always use /dev/disk/by-id | Prevent disk name changes after reboot. |
| Use SCSI or SATA | IDE is slow. VirtIO requires additional Windows drivers. |
These precautions are critical for production environments.
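The "never mount on the host" rule can be enforced with a quick check before starting the VM. A minimal sketch, scanning `/proc/mounts`; `assert_unmounted` is a helper name introduced here, not a Proxmox command:

```shell
# Sketch: refuse passthrough if the disk (or any of its partitions)
# is currently mounted on the Proxmox host.
# assert_unmounted is a hypothetical helper, not a Proxmox command.
assert_unmounted() {
  dev="$(readlink -f "$1")"
  # /proc/mounts lists one mounted filesystem per line, device first;
  # a prefix match also catches partitions such as /dev/sdb1.
  if grep -q "^${dev}" /proc/mounts; then
    echo "ERROR: ${dev} (or a partition of it) is mounted on the host" >&2
    return 1
  fi
  echo "OK: ${dev} is not mounted on the host"
}

# Usage: assert_unmounted /dev/sdb && qm start 100
```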
🧪 Quick Verification After Passthrough
Inside the VM:
• Open Disk Management
• Check for drive D:
• If it is the first use → format it
• If it contains existing data → it is mounted and usable immediately
🚀 Final Thoughts
Disk Passthrough in Proxmox 9 provides powerful flexibility for managing physical storage within virtual machines. It is ideal for:
• TrueNAS deployments
• ZFS storage VMs
• Database servers
• File servers
• Production workloads
• Migration scenarios between Proxmox hosts
By correctly using /dev/disk/by-id and modifying the VM configuration file safely, you ensure data integrity, portability, and high performance.
Mastering disk passthrough will significantly improve your Proxmox infrastructure management skills and give you full control over VM storage architecture.
See also related articles:
• P21 – How to Schedule Automatic Shutdown and Startup of VMs in Proxmox VE
• P15 – Backup and Restore VM in Proxmox VE
• P14 – How to Remove Cluster Group Safely on Proxmox