With the latest Intel Hades Canyon now being able to run ESXi, a number of folks have been interested in taking advantage of the integrated GPU (iGPU) that is included in the system. There are two models of the Hades Canyon: the NUC8i7HNK, which is the lower-end system with the Radeon RX Vega M GL, and the NUC8i7HVK, which is the higher-end system with the Radeon RX Vega M GH.

One of the first things I had attempted after getting ESXi working on the Hades Canyon was to enable passthrough of the iGPU to a Windows guest OS, but all of my attempts resulted in PSOD'ing the ESXi host once the AMD drivers from Intel started installing. A few days ago, one of my readers, Chris78, shared an update: he was able to prevent the ESXi host from PSOD'ing by adding a VM Advanced Setting, but he was still having issues where the Windows guest OS would now BSOD.

This sounded promising, and I figured it would not hurt to give it a try. To my surprise, I was able to successfully pass through the iGPU to both a Windows 10 and a Windows Server 2019 system in my limited testing. After reporting the success back to Chris78, who was still having issues even after using the same settings I had used, his conclusion was that there may be a difference between the HNK and HVK models, with the latter having the BSOD issues. For now, it seems like the iGPU can only be passed through if you have the NUC8i7HNK model.

Step 1 - Create either a Windows 10 or Windows Server 2016/2019 VM using the vSphere UI (H5 or Embedded Host Client); I used all the defaults. You can definitely change the vCPU, memory, and disk capacity, but you will need to use BIOS firmware and an E1000E network adapter. If you switch to EFI or VMXNET3, the VM seems to crash when powering it on after attaching the iGPU.

Note: I am using vHW13 because this system is running ESXi 6.5 Update 2, which I need to keep around for some other testing, but this should also work on the latest ESXi 6.7 Update 1.

Step 2 - Add the following VM Advanced Setting, 0 = False, by navigating to VM Options -> Advanced -> Edit Configuration in the vSphere UI.

Step 3 - Install Windows 10 or Windows Server 2016/2019, including VMware Tools, and apply the latest Microsoft updates.

Step 4 - Download and install the Radeon RX Vega M Graphics Driver for your Windows OS.

Step 5 - If you require the use of Intel Quick Sync transcoding, you will also need to install the Intel HD Graphics Driver for your Windows OS.

Here is a screenshot of iGPU passthrough to a Windows 10 system:

Here is a screenshot of iGPU passthrough to a Windows Server 2019 system:

Hello!! Thank you for all the input, Chris78. I have great news! I was finally able to pull this through today, on a NUC8i7HVK. With the current Fedora KVM/libvirt/CPU setup I am using, the GPU passes through and all drivers install without errors or crashing the guest. I also tried with a fresh Windows 10 installation and really had no need to modify the Vega M drivers; thus, I had no need to disable driver signing at the OS level either. But yes, the iGD and Vega M drivers must go hand in hand (FYI, I used the dch_win64_25.20. driver for the iGD and GFX_Radeon_Win10_64_18.12.2.exe for the RX Vega M GH with success). That said, I feel I could use any driver, because using these drivers alone is not sufficient; as I said, I had to tweak the KVM and CPU settings using libvirt (on an ACS-patched kernel) on an almost trial-and-error basis. I still have to test whether disabling the ACS overrides continues to allow correct DMA communication with the GPU. Some additional findings: Radeon ReLive and all the AMD tweaks and settings appear to be available, and the GPU output is indeed routed to the external physical monitor, which is the cherry on top of the cake.
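For reference, the firmware, network adapter, and virtual hardware constraints described in Step 1 correspond to entries like the following in the VM's .vmx file. This is a hedged sketch, not taken from the original post: the keys shown are standard .vmx options, but verify them against your own VM's configuration before editing anything by hand.

```
firmware = "bios"                # switching to "efi" crashed the VM on power-on with the iGPU attached
ethernet0.virtualDev = "e1000e"  # VMXNET3 likewise caused crashes with the iGPU attached
virtualHW.version = "13"         # the author used vHW13 for ESXi 6.5 U2 compatibility
```

The same values can be set from the vSphere UI when creating the VM, which is the approach the steps above take; direct .vmx edits are only needed if you are scripting the build.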