Qemu/KVM and Virt Manager. I have three VMs that I pass my GPU to: a Hackintosh, a Windows 10, and a Windows 7.
I hope you air gap that Windows 7 VM
I never found a way to share a public folder with Virt Manager, though, and I need to move files between host and guest. How would you go about it?
I go to the host folder I want to transfer files from and run `python3 -m http.server`. Then (I can't remember if I use `ip a` to find the IP address of the host or if I used mDNS) I use the guest web browser to download files.
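In case it helps anyone, a minimal sketch of that workflow (the folder, port, and host IP below are placeholders, not from the post):

```shell
# On the host: serve the current folder over HTTP on port 8000
# (http.server is in the Python 3 standard library, no setup needed)
cd ~/shared && python3 -m http.server 8000

# On the guest: use the host IP you found with `ip a`, then fetch a file
curl -O http://192.168.122.1:8000/some-file.iso
```

192.168.122.1 happens to be the usual gateway address of libvirt's default NAT network as seen from a guest; yours may differ.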
And here I have just been using Samba.
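For reference, a Samba share can be as small as this in smb.conf (the share name and path here are made-up examples):

```
# /etc/samba/smb.conf — minimal share for moving files to/from a VM
[vmshare]
   path = /home/user/vmshare
   read only = no
   guest ok = yes
```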
Install the qemu guest agent in the VM. For Linux and Windows guests you'll even be able to drag and drop.
Do you have two GPUs or do you fully switch to the VM while passed through?
I have two GPUs - an RX 550 hooked to the monitors and a 580 for the VMs. Until recently, once the VM shut down, the 580 was able to return to Linux and be used again via PRIME - no reset bug. It randomly stopped working, and I've tried to debug the problem to little avail.
I actually may have seen the same issue recently. Have you tried adding `initcall_blacklist=simpledrm_platform_driver_init` to your kernel launch params?

I'll have to try that. What I have tried so far is running a different kernel version and making sure my driver blacklists are correct (I found that the GPU shouldn't ever bind to snd_hda_intel; it briefly was bound again, but even after fixing that, I still had the problem).
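For the snd_hda_intel part, the usual approach is a softdep in modprobe.d so vfio-pci claims the card before the audio driver loads. The PCI IDs below are what an RX 580 typically reports (1002:67df video, 1002:aaf0 HDMI audio); check yours with `lspci -nn`:

```
# /etc/modprobe.d/vfio.conf
# Bind both functions of the passthrough GPU to vfio-pci at boot
options vfio-pci ids=1002:67df,1002:aaf0
# Ensure snd_hda_intel loads only after vfio-pci has claimed the audio function
softdep snd_hda_intel pre: vfio-pci
```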
For me, I have Intel integrated + AMD discrete graphics. When I tried to set DRI_PRIME to 0 it complained that 0 was invalid; when I set it to 2 it said the value had to be less than the number of GPUs detected (2). After digging in, I noticed my cards in /dev/dri/by-path were card1 and card2 rather than card0 and card1 like everyone online said they should be. Searching for that, I found a few threads like this one that mentioned simpledrm was enabled by default in 6.4.8, which apparently broke some kind of enumeration with AMD GPUs. I don't really understand why, but setting that param made my cards number correctly, and PRIME selection works again.

Huh. My issue seems different, but I'll still test that flag to see if it changes anything. My problem looks like the device doesn't return to the host after VM shutdown, possibly because of the reset bug (based on what I see in dmesg), which I hadn't encountered in about a year of GPU passthrough VM usage.
Ahh, yeah, if it's specifically when coming back from a VM, that sounds different. Maybe the vfio_pci driver isn't getting swapped back to the real one? I barely know how it works, though; I'm sure you've checked everything.
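If you want to rule that out, something like this shows which driver owns the card after VM shutdown and hands it back manually via sysfs (the PCI address 01:00.0 is a placeholder; find yours with `lspci`):

```shell
# Which kernel driver currently owns the GPU? (address is an example)
lspci -nnk -s 01:00.0

# If it's still vfio-pci, unbind it and rebind to amdgpu:
echo 0000:01:00.0 | sudo tee /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:01:00.0 | sudo tee /sys/bus/pci/drivers/amdgpu/bind
```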