This is an automated archive.
The original was posted on /r/proxmox by /u/MasterGeek427 on 2023-08-07 05:42:16+00:00.
Hey. It seems like a lot of people are struggling with this, so I just wanted to post that I actually got this working. I’m usually pretty good at figuring this sort of stuff out, but I had a HELL of a time getting this to work. So I’m posting it here in hopes that it saves someone else a lot of trouble.
In all the stuff below, replace 0000:11:00 with the PCI address of your own GPU.
Host Specs:
Ryzen 9 5900X 12-core
ASRock X570 Taichi
PowerColor Hellhound 7900 XT
Proxmox VE 7.4-3
Linux Kernel Version: 5.15.102-1-pve
/etc/default/grub:
...
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt video=vesafb:off video=efifb:off video=vesa:off video=simplefb:off pcie_acs_override=downstream,multifunction nofb nomodeset"
GRUB\_CMDLINE\_LINUX=""
...
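One step the rest of this assumes (standard Debian/Proxmox housekeeping, not something from my setup specifically): the new kernel command line only takes effect after you regenerate the GRUB config and reboot, and the modprobe.d changes below want a rebuilt initramfs:

```shell
# Standard steps after editing /etc/default/grub and /etc/modprobe.d/*.
# If your host boots via systemd-boot instead of GRUB (common on ZFS
# installs), run `proxmox-boot-tool refresh` instead of update-grub.
update-grub
update-initramfs -u -k all
reboot
```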
/etc/modprobe.d/blacklist.conf:
blacklist amdgpu
blacklist radeon
blacklist nouveau
blacklist nvidia
blacklist i40evf
/etc/modprobe.d/iommu_unsafe_interrupts.conf:
options vfio_iommu_type1 allow_unsafe_interrupts=1
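Once the host is back up with IOMMU on, it's worth checking that the GPU landed in its own IOMMU group — isolating groups is what the pcie_acs_override option is there to force. A quick sketch I use for that (none of this is from my config files above):

```shell
# List every IOMMU group and the devices in it. Empty output means IOMMU
# isn't enabled yet; the GPU sharing a group with unrelated devices is
# the situation pcie_acs_override works around.
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue                 # glob matched nothing
    group=${dev#/sys/kernel/iommu_groups/}    # strip the path prefix...
    group=${group%%/*}                        # ...leaving the group number
    printf 'group %s: %s\n' "$group" "$(basename "$dev")"
done
```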
/etc/modprobe.d/kvm.conf:
options kvm ignore_msrs=1
VM Configuration File:
agent: 1
bios: ovmf
boot: order=scsi0;ide0
cores: 24
cpu: kvm64,flags=+hv-tlbflush;+aes
efidisk0: local-lvm:vm-103-disk-0,efitype=4m,pre-enrolled-keys=1,size=528K
hostpci1: 0000:11:00,pcie=1,romfile=Navi31.rom,x-vga=1
machine: pc-q35-7.1
memory: 16384
meta: creation-qemu=7.1.0,ctime=1679808619
name: windows-htpc
numa: 0
onboot: 1
ostype: win11
scsi0: local-lvm:vm-103-disk-1,discard=on,iothread=1,size=500G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=84c4a615-f580-462f-b74f-68593855603c
sockets: 1
startup: order=3
tpmstate0: local-lvm:vm-103-disk-2,size=4M,version=v2.0
usb0: host=1997:2433
vga: none
vmgenid: e2ee329f-806d-4dee-9602-e62ab35192e1
Get the Navi31.rom file using any method. The easiest is to start the Windows VM with the GPU passed through but with the Display setting set to “Standard VGA”. The GPU will fail to initialize, but you can use the Proxmox web console to install GPU-Z and pull the ROM off the GPU. Then copy the file over to the Proxmox host and put it in the /usr/share/kvm/ directory. You need to set Display back to “none” after you get the ROM file.
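If you'd rather not go through Windows and GPU-Z, some people dump the ROM from sysfs on the host instead. I haven't tried this on this card, and on some GPUs the rom file reads back empty unless the card has POSTed, so treat it as a fallback sketch:

```shell
# Untested alternative: dump the vBIOS via the kernel's sysfs interface.
GPU=/sys/bus/pci/devices/0000:11:00.0
echo 1 > "$GPU/rom"                         # make the ROM readable
cat "$GPU/rom" > /usr/share/kvm/Navi31.rom  # dump it where Proxmox looks
echo 0 > "$GPU/rom"                         # lock it again
```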
Relevant Host BIOS settings:
- CSM: Disabled
- Above 4G Decoding: Enabled
- Resizable BAR Support: Disabled
- SR-IOV: Enabled (likely not required for passthrough to work)
The above settings alone are NOT enough to get passthrough to work. VFIO cannot acquire the memory for the GPU. You will get a warning message like this on VM startup:
kvm: -device vfio-pci,host=0000:11:00.0,id=hostpci3.0,bus=ich9-pcie-port-4,addr=0x0.0,multifunction=on,romfile=/usr/share/kvm/Navi31.rom: Failed to mmap 0000:11:00.0 BAR 0. Performance may be slow
This is because some of the IO memory for the GPU is in use. You can see this if you run grep BOOTFB /proc/iomem. The system log (dmesg) will also blow up with messages about VFIO being unable to acquire the memory.
To get the system to release the memory, the GPU needs to be reset once. At least, resetting the GPU is the only way I’ve found to get the memory released (if you know a better way, pls tell me). However, echo 1 > /sys/bus/pci/devices/0000:11:00.0/reset just throws an error message. So we must reset the GPU in an unusual way:
echo 1 > /sys/bus/pci/devices/0000:11:00.0/remove
echo 1 > /sys/bus/pci/rescan
That’s right. I completely remove the GPU from the PCI bus, then tell the PCI bus to rescan its devices so it picks the GPU up again. This also causes the GPU to no longer be recognized as the primary boot GPU, since cat /sys/bus/pci/devices/0000:11:00.0/boot_vga now returns 0. The system therefore treats it as a secondary GPU even though there are no other discrete or integrated GPUs in the system. You only need to do this once when the host boots up. After that, passing the GPU through to the guest Just Works.
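To confirm the memory actually got released after the remove/rescan, here's a quick check of my own (not part of the original procedure):

```shell
# After the remove/rescan, the BOOTFB reservation should be gone from
# /proc/iomem and boot_vga should read 0 instead of 1.
if grep -q BOOTFB /proc/iomem; then
    echo "BOOTFB still reserved - GPU memory not released yet"
else
    echo "BOOTFB gone - framebuffer memory released"
fi
vga=/sys/bus/pci/devices/0000:11:00.0/boot_vga
if [ -r "$vga" ]; then
    cat "$vga"    # 1 = still the primary boot GPU, 0 = released
fi
```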
I add the ‘remove’ and ‘rescan’ commands to /etc/rc.local to get the commands to run on boot:
/etc/rc.local:

#!/bin/bash
echo 1 > /sys/bus/pci/devices/0000:11:00.0/remove
echo 1 > /sys/bus/pci/rescan
Then you must run chmod +x /etc/rc.local or it won’t work.
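Heads up: on some installs /etc/rc.local never runs at all (systemd only executes it when the rc-local compat unit is active). A small oneshot unit does the same job; the unit name here is my own invention, and Before=pve-guests.service orders it ahead of Proxmox’s VM autostart:

```
# /etc/systemd/system/gpu-rescan.service  (hypothetical name)
[Unit]
Description=Remove and rescan the passthrough GPU once at boot
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 1 > /sys/bus/pci/devices/0000:11:00.0/remove'
ExecStart=/bin/sh -c 'echo 1 > /sys/bus/pci/rescan'

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable gpu-rescan.service.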
You might need to install the AMD Adrenalin software through the Proxmox web console with Display temporarily set to “Standard VGA” (I haven’t tested whether Windows can use the passed-through GPU without the AMD drivers installed). You might also need to configure a startup delay for the VM to make sure rc.local executes first. In my case a bunch of other VMs start up before it, so it’s not necessary for my setup.
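Another way to sidestep the startup-ordering question entirely (I haven't tested this myself, but the hook interface is standard Proxmox): a hookscript can run the remove/rescan right before this specific VM starts. Proxmox calls the script with the VM ID and a phase name; the filename below is my own:

```shell
#!/bin/bash
# Hypothetical Proxmox hookscript: do the remove/rescan in the pre-start
# phase instead of rc.local. Save as /var/lib/vz/snippets/gpu-prestart.sh,
# chmod +x it, then attach it with:
#   qm set 103 --hookscript local:snippets/gpu-prestart.sh
vmid="$1"
phase="$2"
if [ "$phase" = "pre-start" ]; then
    dev=/sys/bus/pci/devices/0000:11:00.0
    if [ -e "$dev/remove" ]; then   # skip if the device path isn't there
        echo 1 > "$dev/remove"
        echo 1 > /sys/bus/pci/rescan
    fi
fi
```

Re-running the remove/rescan before every start should be harmless since the memory is only held the first time, but again, untested.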
There is no reset bug, so rebooting the VM works.
I haven’t done any stability testing yet, but everything seems to be working smoothly so far.
Hope this helps. Cheers!