NVIDIA Tesla V100 Proxmox Guest VM Pass-Through#
Pass-Through PCI-Device#
By default, the Proxmox host claims the GPU, as you can see from the nouveau kernel driver being active.
root@pve:~# lspci -k | grep -A3 -i nvidia
pcilib: Error reading /sys/bus/pci/devices/0000:00:08.3/label: Operation not permitted
01:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1)
Subsystem: NVIDIA Corporation Device 1249
Kernel driver in use: nouveau
Kernel modules: nvidiafb, nouveau
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
DeviceName: Realtek
Subsystem: Intel Corporation Device 0000
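Before blacklisting anything, it is worth confirming that the IOMMU is actually enabled, since VFIO passthrough depends on it. A quick sketch (assuming an Intel system; AMD uses amd_iommu=on accordingly):

```shell
# Confirm the IOMMU is active; on Intel this typically requires
# intel_iommu=on on the kernel command line (amd_iommu=on for AMD).
# (|| true: purely informational, no match is not fatal)
dmesg | grep -i -e DMAR -e IOMMU || true

# List IOMMU groups; the GPU should sit in a group that can be
# passed through as a whole.
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#*/iommu_groups/}; g=${g%%/*}
  printf 'IOMMU group %s: %s\n' "$g" "${d##*/}"
done
```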
We now have to blacklist the GPU drivers so that the card is not claimed by Proxmox itself and can be passed through to my virtual machine.
cat << EOF > /etc/modprobe.d/blacklist-nvidia.conf
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist nvidia_drm
blacklist nvidia_modeset
EOF
Now I will determine the PCI ID of the V100.
root@pve:~# lspci -nn | grep -i nvidia
01:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] [10de:1db5] (rev a1)
It is 10de:1db5.
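For scripting purposes, the vendor:device pair can also be extracted programmatically. A small sketch; the sample line is hard-coded here to mirror the output above, but in practice it would come from `lspci -nn -s 01:00.0`:

```shell
# Sample `lspci -nn` output line (hard-coded for illustration).
line='01:00.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] [10de:1db5] (rev a1)'

# The device ID is the [xxxx:xxxx] bracket pair: vendor 10de, device 1db5.
# The class code [0302] is not matched, as it contains no colon.
id=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tr -d '[]')
echo "$id"   # → 10de:1db5
```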
Next, I’m binding the V100 to VFIO.
cat << EOF >> /etc/modprobe.d/vfio.conf
options vfio-pci ids=10de:1db5 disable_vga=1
EOF
cat << EOF >> /etc/modules-load.d/vfio.conf
vfio
vfio_pci
vfio_iommu_type1
vfio_virqfd
EOF
Update the initramfs so the changes take effect on the next boot.
update-initramfs -u
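On hosts that boot via proxmox-boot-tool (e.g. ZFS on root), the refreshed initramfs additionally needs to be synced to the boot partitions:

```shell
# Only needed on systems managed by proxmox-boot-tool; harmless to check:
# `proxmox-boot-tool status` shows whether it is in use.
proxmox-boot-tool refresh
```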
Finally, reboot the host and check the driver status again.
root@pve:~# lspci -k | grep -A3 -i nvidia
pcilib: Error reading /sys/bus/pci/devices/0000:00:08.3/label: Operation not permitted
01:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1)
Subsystem: NVIDIA Corporation Device 1249
Kernel driver in use: vfio-pci
Kernel modules: nvidiafb, nouveau
02:00.0 Ethernet controller: Intel Corporation Ethernet Controller I226-V (rev 04)
DeviceName: Realtek
Subsystem: Intel Corporation Device 0000
The driver should now be vfio-pci.
Creating Guest VM#
I’ve created an Ubuntu 25.10 VM with an assigned ID of 100.
agent: 1
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 8
cpu: host
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,ms-cert=2023,pre-enrolled-keys=1,size=4M
ide2: local:iso/ubuntu-25.10-live-server-amd64.iso,media=cdrom,size=2229844K
machine: q35
memory: 16184
meta: creation-qemu=10.1.2,ctime=1770480010
net0: virtio=BC:24:11:AF:63:91,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-100-disk-1,iothread=1,size=300G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=0e2e06a6-d90a-4be0-ad87-cbeba2519eb4
sockets: 1
tpmstate0: local-lvm:vm-100-disk-2,size=4M,version=v2.0
vmgenid: 76281ada-9fc6-45b7-8f9c-e0471aa243d8
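For reference, a roughly comparable VM could also be created from the CLI with qm; a sketch only, with the storage names (local-lvm, local), the ISO path, and the VM name being assumptions from my setup:

```shell
# Sketch: create a comparable VM via the Proxmox CLI. Storage names,
# ISO path, and VM name are assumptions; adjust to your environment.
qm create 100 \
  --name ubuntuv100 \
  --machine q35 --bios ovmf --ostype l26 \
  --cpu host --cores 8 --sockets 1 --memory 16184 \
  --scsihw virtio-scsi-single \
  --scsi0 local-lvm:300,iothread=1,ssd=1 \
  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
  --ide2 local:iso/ubuntu-25.10-live-server-amd64.iso,media=cdrom \
  --net0 virtio,bridge=vmbr0,firewall=1 \
  --boot order='scsi0;ide2;net0' \
  --agent 1 --onboot 1
```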
After the installation is complete, I assign the vfio-pci device to the VM from within the Proxmox host. Since I haven’t added any other PCI device, it is the primary one.
cat << EOF >> /etc/pve/qemu-server/100.conf
hostpci0: 0000:01:00.0,pcie=1
EOF
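Equivalently, instead of appending to the config file directly, the device can be attached with qm set:

```shell
# Attach the GPU as the first (primary) PCI passthrough device.
qm set 100 --hostpci0 0000:01:00.0,pcie=1
```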
Now I open a console to the VM and check whether my V100 is detected.
tiara@ubuntuv100:~$ lspci -k | grep -A3 -i nvidia
01:00.0 3D controller: NVIDIA Corporation GV100GL [Tesla V100 SXM2 32GB] (rev a1)
Subsystem: NVIDIA Corporation Device 1249
Kernel driver in use: nouveau
Kernel modules: nvidiafb, nouveau
05:01.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
05:02.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
05:03.0 PCI bridge: Red Hat, Inc. QEMU PCI-PCI bridge
Install Drivers#
I install the QEMU guest agent. This allows for graceful shutdowns, proper IP reporting, cloud‑init integration, and better disk handling.
apt-get install qemu-guest-agent
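Depending on the image, the service may not start automatically; it can be enabled and started in one go:

```shell
# Enable the guest agent now and on every boot.
sudo systemctl enable --now qemu-guest-agent
```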
I check the official NVIDIA driver page (https://www.nvidia.com/en-us/drivers) and search for the Tesla V100 on Linux to determine the current major version of the driver.
Note
As of February 7th, 2026, the latest driver version is 590.48.01, which was released on December 22nd, 2025.
Even though NVIDIA lists this as the latest driver, I was unable to get the 590 major release to work. Instead, I opted for the 550 release.
Note
If Secure Boot is enabled, make sure you are able to complete the MOK enrollment process. Otherwise, disable Secure Boot.
apt-get install nvidia-driver-550
Check that the driver module is loaded correctly:
tiara@ubuntuv100:~$ nvidia-smi
Sat Feb 7 17:14:07 2026
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 580.126.09 Driver Version: 580.126.09 CUDA Version: 13.0 |
+-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 Tesla V100-SXM2-32GB Off | 00000000:01:00.0 Off | 0 |
| N/A 41C P0 36W / 300W | 0MiB / 32768MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
tiara@ubuntuv100:~$ lsmod | grep nvidia
nvidia_uvm 2097152 0
nvidia_drm 135168 0
nvidia_modeset 1638400 1 nvidia_drm
nvidia 104144896 3 nvidia_uvm,nvidia_drm,nvidia_modeset
drm_ttm_helper 16384 1 nvidia_drm
video 77824 1 nvidia_modeset
That’s all!
To make the GPU available to Docker containers, install the NVIDIA Container Toolkit.
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
sudo apt update
sudo apt install -y nvidia-container-toolkit
Then I reconfigure Docker:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
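As a final end-to-end check, nvidia-smi can be run from inside a container; the CUDA image tag here is just an example and may need updating:

```shell
# Run nvidia-smi inside a CUDA container; the GPU table from above
# should appear again, this time reported from within the container.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```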