Friday, October 7, 2016

qemu kvm qxl spice attempt on debian

apt-get install qemu-kvm libvirt-bin virt-manager qemu-guest-agent
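
Before going further, it's worth checking that the CPU exposes the virtualization extensions and that /dev/kvm exists (standard checks; the egrep one also appears in the forum guide further down):

egrep -c '(vmx|svm)' /proc/cpuinfo
ls -l /dev/kvm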


Installation could not be completed: "internal error: qemu unexpectedly closed the monitor: Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied"

You have to run it as root.

There are some settings in here:
vim /etc/libvirt/qemu.conf

Creating an image in raw format: raw is a good, fast format for reads and writes, and everything suggests there are no tuning options for it, since it's just a raw file:
qemu-img create -f raw windows7kvm.raw 40G
There are other formats such as qcow and qcow2. qcow was extremely slow, so qcow2 came along and fixed a few things; apparently there are data-integrity options between host and guest that make the guest far too slow at disk writes. From what I've read, a qcow2 image created with preallocation=metadata is about as fast as raw:
/usr/bin/qemu-img create -f qcow2 -o preallocation=metadata /export/vmimgs/glacier.qcow2 8G
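
To sanity-check an image after creating it, qemu-img info reports the format and allocation (standard qemu-img subcommand):

qemu-img info /export/vmimgs/glacier.qcow2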

To install Windows you need to point at the ISO (or at a CD-ROM drive containing the installer):
kvm -m 5000 \
  -drive file=/home/thiago/Incoming/Windows\ 7\ SP1\ AIO\ 24in1\ OEM\ ESD\ pt-BR\ July\ 2016\ \{Gen2\}/W7AIO.OEM.ESD.pt-BR.July2016.iso,media=cdrom \
  -drive file=/home/thiago/Concurso/windows7kvm.raw,media=disk,cache=writeback

adduser $USER libvirt
adduser $USER kvm
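
The new groups only take effect at the next login; to pick them up in the current shell without logging out (standard tools):

newgrp kvm
groups   # should now list libvirt and kvm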


Configuration options
qemu-system-x86_64 \
  -vga none \
  -enable-kvm -m 10000 -cpu host -smp 8,cores=4,threads=2,sockets=1 \
  -device ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
  -device vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
  -net nic,macaddr=50:E5:49:57:74:E3 -net bridge,vlan=0 \
  -soundhw hda \
  -boot d \
  -hda /dev/sdb \
  -usb -usbdevice host:09da:000a


To boot the virtual disk we created, use the command below:
-vga qxl to use qxl graphics
-cpu host to use the host's CPU configuration (the main system)
-smp 2 to use 2 cores
-drive file=windows7.raw to use the image we created
-m 5120 is the amount of RAM in MB
-full-screen to start in full screen; to leave full screen, ctrl+alt+f
-soundhw ac97 to use an AC97 sound device; hda for Intel HD Audio

qemu-system-x86_64 -enable-kvm -m 5120 -vga qxl -display sdl -full-screen  -localtime -usbdevice tablet -soundhw ac97 -machine type=pc,accel=kvm -cpu host -smp 2 -drive file=windows7.raw 

To get sound working on Windows without it sounding broken, I switched from hda to ac97; on Windows (the guest) I had to install the AC97 codec drivers and it came out quite well, sound working:
http://www.realtek.com.tw/Downloads/downloadsCheck.aspx?Langid=1&PNid=14&PFid=23&Level=4&Conn=3&DownTypeID=3&GetDown=false

to exit full screen:
ctrl + alt + f

qemu-system-x86_64 -enable-kvm -m 5120 -vga qxl -global qxl-vga.vram_size=1024 -display sdl  -localtime -usbdevice tablet -soundhw hda -machine type=pc,accel=kvm -cpu host -smp 4,sockets=2  -drive file=windows7.raw,cache=none -monitor stdio



The configuration I liked best:

qemu-system-x86_64 \
-machine type=pc,accel=kvm -smp 4,sockets=2 -enable-kvm \
-cpu phenom \
-drive file=/home/thiago/Concurso/windows7.raw,cache=none,index=0,media=disk \
-usbdevice tablet -soundhw ac97 \
-vga qxl \
-global qxl-vga.vram_size=1024 \
-m 10G \
-monitor stdio \
-global qxl-vga.vgamem_mb=128






The qxl adapter has this thing called spice; from what I understood it's a display server. From what I've seen around, it improves guest performance, but there isn't much decent documentation out there, not to mention commands that seem to have been renamed without the tutorials ever being updated.

For example, this command no longer works:
spicec -h 127.0.0.1 -p 5900
It is now:
spice-xpi-client --host 127.0.0.1 --port 5900

Instalei as coisas aqui para usar o spice, alterei o comando de incialização, adicionando o trecho:
-spice port=5900,addr=127.0.0.1,disable-ticketing
Com as opções acima passadas ao qemu, ele executa a máquina virtual meio que em segundo plano, você não consegue ver, apenas escutar os sons de inicialização do windows, no meu caso.
Para ver o vídeo, é necessário usar o spice, ele vai se conectar no endereço e na porta passadas pelo parâmetro acima.
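
Putting it together, a sketch of the full server-side invocation (reusing the image and options from earlier; adjust paths to taste):

qemu-system-x86_64 -enable-kvm -m 5120 -cpu host -smp 2 \
  -vga qxl \
  -spice port=5900,addr=127.0.0.1,disable-ticketing \
  -drive file=windows7.raw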
Running the command below as root opens a new window with the virtual machine running.

spice-xpi-client --host 127.0.0.1 --port 5900

The problem is that for me it was painfully slow, performance far below plain qxl. If you've managed to use it well, please share in the comments!


QEMU is still quite slow, especially on the graphics side, losing badly to VirtualBox. But there's a thing called passthrough, in which KVM uses the hardware directly, with no emulation. Many people say it improves performance a lot, even enough to run games inside the virtual machine.
From what I've seen, passthrough requires two video cards in the PC: one for the host (the main PC) and one for the guest (the virtual PC).
I've also seen that NVidia cards won't let host and guest share the same card. AMD cards can sort of be used by both, but with some hiccups, so the best setup is two cards and two monitors. A single card can work if you disable it on the host and leave it to the guest; if the host has onboard video, for example, the discrete card can go to the guest.



gpu passthrough

find /sys/kernel/iommu_groups/ -type l

If it returns nothing, the IOMMU is either not enabled or not supported.
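
If it comes back empty, enable VT-d/AMD-Vi in the BIOS and turn the IOMMU on at the kernel command line, then reboot (Intel shown, amd_iommu=on on AMD; this is the same grub mechanism the forum guide below uses):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
sudo update-grub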

The default network wouldn't start.
Command: virsh
then, at the virsh prompt: net-start default
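
In full, the virsh session looks like this; net-autostart saves repeating it on every boot (both are standard virsh subcommands):

virsh # net-start default
virsh # net-autostart default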

 /usr/bin/qemu-system-x86_64 -enable-kvm -m 8192 -cpu host,kvm=off \
-smp 3,sockets=1,cores=3,threads=1 \
-machine q35,accel=kvm \
-device qxl \
-usb \
-device usb-mouse \
-device usb-kbd \
-soundhw hda \
-bios /usr/share/seabios/bios.bin -vga none \
-device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
-device vfio-pci,host=[your card],bus=root.1,addr=00.0,multifunction=on,x-vga=on \
-device vfio-pci,host=[HDMI port on card],bus=root.1,addr=00.1 \
-device virtio-blk-pci,scsi=off,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=0 \
-drive file=[file name],if=none,id=drive-virtio-disk0,format=qcow2,media=disk \
-netdev tap,id=user.0 \
-device virtio-net-pci,netdev=user.0,mac=[your chosen MAC address] \
-boot order=c \
-rtc base=localtime,driftfix=slew



(qemu) qemu-system-x86_64: -device vfio-pci,host=01:00.0,x-vga=on: vfio: error no iommu_group for device

qemu-system-x86_64: -device vfio-pci,host=01:00.0,x-vga=on: Device initialization failed
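
That error means the device has no IOMMU group, i.e. the IOMMU is off or unsupported; the same checks from above apply:

find /sys/kernel/iommu_groups/ -type l
dmesg | grep -e DMAR -e IOMMU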


NVIDIA cards cannot be used by a virtual machine if the base Ubuntu OS is already using them, so in order to keep Ubuntu from wanting to use the NVIDIA cards we have to blacklist them by adding their IDs to the initramfs. Note that you do not want to do this for your primary GPU unless you are prepared to continue the rest of this guide through SSH or some other method of remote console. Credit for this step goes to the superuser.com user genpfault from this question.

Passing through a single video card requires detaching it from the host; if you only have one card, the host is then left without video. That may not be quite what you were hoping for.


------------------------

https://ubuntuforums.org/showthread.php?t=2266916

    Posted by forum user redger:
    Windows Gaming VM - KVM / UEFI Version - HowTo

    For the last year or so I've been running Windows under Linux using KVM. I started off with true VGA passthrough using instructions from here

    NOTE the new VFIO mailing list (last entry above) which takes over where the original Arch discussion left off

    Then a UEFI mechanism became available - which meant no need to deal with legacy VGA any more and no need for custom kernels or arcane Qemu commands passed to Libvirt. I use a standard version of Ubuntu Trusty ie. the long term stable release - as you would expect for a server

    So here's a relatively easy way to create a Windows VM with real passthrough .... using the GUI to create, manage and start your VM. It's been very very stable for me and very easy to manage.

    There are a few tricks along the way, nothing too arcane.

    NOTE that you do NOT need the host to be booted using a UEFI bios so you need not change your motherboard bios for this. The only bios change is to ensure VT-d or AMD-Vi is turned on

    It's definitely worth reading Alex's "how to" series before you begin http://vfio.blogspot.com.au/2015_05_01_archive.html

    First off you must have the right hardware. You will need
    1. A CPU which supports IOMMU ie. VT-d on Intel or AMD-Vi on AMD (this generally excludes the "K" versions of Intel CPUs)
    2. A motherboard with BIOS / UEFI which supports IOMMU as above. Note that this can be the most problematic to ensure. Broadly speaking recent Asrock boards are good, Gigabyte are probably good and others are hit and miss. Many people are very frustrated with Asus (including me)
    3. A Graphics card to be passed through. Note that you cannot pass an IGP through at present so if your cpu has integrated graphics use it for the host.
    4. A plan for host interaction. You can use ssh or vnc or better (for most people) use your IGP for the host
    5. Sufficient RAM and disk


    If you're planning to pass an NVidia graphics card to your VM, buckle in - you have some fun ahead. You will need
    (a) if installing NVidia driver version 337.88 or later - use the kvm=off parameter which is available in Qemu >= 2.1 and
    (b) if installing NVidia driver version 344.11 or later - set the Hyper-V extensions "off" (all the hv_* options to -cpu) in addition to the above
    For those using an AMD R260 / R290 (Hawaii) or AMD 7700 (Bonaire) you need to use QEMU 2.3+ in which the "reset problem" was fixed so the guest VM can be restarted without trouble

    In my case
    • CPU Intel 4670
    • RAM 16 GB
    • Motherboard Asrock Z87 Extreme 6
    • GPU AMD HD6950
    • Disk Sandisk Extreme II 480 GB (boot drive and windows C drive host)
    • WD Black 2 Tb


    This spreadsheet lists hardware success stories https://docs.google.com/spreadsheet/...rive_web#gid=0

    For these instructions, you'll also need a UEFI capable graphics card. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one using the instructions here
    http://www.insanelymac.com/forum/top...any-ati-cards/
    http://www.overclock.net/t/1474306/r...fi-bios-thread
    I used the tool from Insanely Mac (Windows version - installed in a temporary non-UEFI, simple VM I created for the purpose), link here http://www.overclock.net/t/1474306/r...#post_23400460

    I also bought a cheap PCIe USB card (based on the Renesas-NEC chipset) to be passed to the VM. I tried to pass USB devices directly with mixed success, so the add-in card made life much easier at a cost of < AUD$20

    Next you need to enable the IOMMU in BIOS. Usually there's a bios setting on Intel boards for VT-d - it will need to be set on. The following command can be used to verify a working iommu
    Code:
    dmesg|grep -e DMAR -e IOMMU
    you should see something like
    Code:
    [    0.000000] ACPI: DMAR 0x00000000BDCB1CB0 0000B8 (v01 INTEL  BDW      00000001 INTL 00000001)
    [    0.000000] Intel-IOMMU: enabled
    [    0.028879] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a
    [    0.028883] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da
    [    0.028950] IOAPIC id 8 under DRHD base  0xfed91000 IOMMU 1
    [    0.536212] DMAR: No ATSR found
    [    0.536229] IOMMU 0 0xfed90000: using Queued invalidation
    [    0.536230] IOMMU 1 0xfed91000: using Queued invalidation
    [    0.536231] IOMMU: Setting RMRR:
    [    0.536241] IOMMU: Setting identity map for device 0000:00:02.0 [0xbf000000 - 0xcf1fffff]
    [    0.537490] IOMMU: Setting identity map for device 0000:00:14.0 [0xbdea8000 - 0xbdeb6fff]
    [    0.537512] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbdea8000 - 0xbdeb6fff]
    [    0.537530] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbdea8000 - 0xbdeb6fff]
    [    0.537543] IOMMU: Prepare 0-16MiB unity mapping for LPC
    [    0.537549] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
    [    2.182790] [drm] DMAR active, disabling use of stolen memory
    And check that the more standard VT-x and AMD-V are available
    Code:
    egrep -q '^flags.*(svm|vmx)' /proc/cpuinfo && echo virtualization extensions available
    Ensure you have all the latest versions of packages etc.
    Code:
    sudo apt-get update
    sudo apt-get upgrade
    Install KVM
    Code:
    sudo apt-get install qemu-kvm seabios spice-client hugepages
    or use this tutorial https://help.ubuntu.com/community/KVM/Installation

    Create a new directory for hugepages, we'll use this later (to improve VM performance)
    Code:
    sudo mkdir /dev/hugepages
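    If /dev/hugepages isn't already a hugetlbfs mount on your system, mount it there (a sketch; add a matching /etc/fstab entry to make it permanent):
    Code:
    sudo mount -t hugetlbfs hugetlbfs /dev/hugepages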
    find your PCI addresses using the following command
    Code:
    lspci -nn
    or lspci -nnk for additional information or lspci -vnn for even more information

    choose the PCI devices you want to pass through and work out which IOMMU groups they belong to. I suggest you start simple and just passthrough the graphics card itself (don't passthrough the built in audio)
    Use this script to display the IOMMU groupings (thanks to Alex Williamson)
    Code:
    #!/bin/sh
    
    # List the devices in each IOMMU group, from AW at
    # https://bbs.archlinux.org/viewtopic.php?id=162768&p=29
    
    BASE="/sys/kernel/iommu_groups"
    
    for i in $(find $BASE -maxdepth 1 -mindepth 1 -type d); do
     GROUP=$(basename $i)
     echo "### Group $GROUP ###"
     for j in $(find $i/devices -type l); do
      DEV=$(basename $j)
      echo -n "    "
      lspci -s $DEV
     done
    done
    Find the groups containing the devices you wish to pass through. All the devices in a single group need to be attached to pci-stub together (except bridges and hubs) – this ensures that there is no cross-talk between VMs ie. a security feature which IOMMUs are designed to support.
    If the grouping is too inconvenient you can apply the ACS patch to your kernel (refer to the Arch discussion linked at the beginning of this post).
    If you find that you have 2 devices in a single IOMMU group which you want to pass to different VMs, you're going to need the ACS patch and an additional grub command line parameter (I encountered this on my Asrock motherboard and so am not running 2 passthrough VMs simultaneously so I don't have to patch the kernel - it would be a maintenance irritation)

    you're ready to change the Grub entries in /etc/default/grub
    in order to enable IOMMU facilities and attach pci devices to pci-stub so they can subsequently be used by vfio. Mine looks like this (at the top) after changes
    Code:
    GRUB_DEFAULT="saved"
    GRUB_SAVEDEFAULT=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pci-stub.ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539"
    GRUB_CMDLINE_LINUX=""
    Update with
    Code:
    sudo update-grub
    NOTE, if you have installed Xen, you may find it has created another default file in /etc/default/grub.d/xen.conf which overrides the selection of the grub default, in my case (when experimenting) I changed it like this
    Code:
    #
    # Uncomment the following variable and set to 0 or 1 to avoid warning.
    #
    #XEN_OVERRIDE_GRUB_DEFAULT=0
    XEN_OVERRIDE_GRUB_DEFAULT=0
    you probably need to blacklist the drivers for the graphics card being passed through (sometimes they grab the card before it's allocated to pci-stub). Change /etc/modprobe.d/blacklist.conf and add the relevant entry. In my case (for amd graphics) I added the following to the end of the file
    Code:
    # To support VGA Passthrough
    blacklist radeon
    those using an NVidia card will need to blacklist Nouveau
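    i.e. in the same blacklist file (assuming the stock module name):
    Code:
    # NVidia cards: keep nouveau away from the passthrough card
    blacklist nouveau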

    Whilst in the above directory you may also wish to modify /etc/modprobe.d/kvm.conf to select appropriate options; in my case I have not enabled any of these, just documented that they exist
    Code:
    # if vfio-pci was built as a module ( default on arch & ubuntu )
    #options vfio_iommu_type1 allow_unsafe_interrupts=1 
    # Some applications like Passmark Performance Test and SiSoftware Sandra crash the VM without this:
    # options kvm ignore_msrs=1
    If using hugepages (recommended for better performance), update sysctl whilst in /etc ie. add the following lines to /etc/sysctl.conf
    Code:
    # Set hugetables / hugepages for KVM single guest needing 6GB RAM
    vm.nr_hugepages = 3200
    Later on we'll refine the use of hugepages. The above figure is set for my system, where hugepages are 2MB each.
    The Windows VM which needs this facility the most is allocated 6GB of ram, so we need 6144 MB, which means 6144 / 2 = 3072 pages, plus some extra for overhead (about 2% ie. 61 additional pages, ≈ 3133) - so with 3200 I have overachieved
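    To apply the new value without rebooting, you can set it through sysctl and check the pool (standard procfs interfaces):
    Code:
    sudo sysctl vm.nr_hugepages=3200
    grep HugePages /proc/meminfo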

    Also update the ulimits in /etc/security/limits.conf (set the limit to an amount sufficient for your VM)
    Code:
               hard    memlock         8388608
    If you haven't included pci-stub in the kernel (see the kernel config recommendations above) then you may need to add the module name to your initramfs, update /etc/initramfs-tools/modules to include the following line
    Code:
    pci-stub
    and “update” your initramfs (use “-c” option to build a new one)
    Code:
    sudo update-initramfs -u
    Note that I usually update initramfs as a matter of course when I update grub – to ensure the two are always synchronised

    now you're about ready to reboot and start creating the VM

    After rebooting, check that the cards to be passed through are assigned to pci-stub using
    Code:
    dmesg | grep pci-stub
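    you should see each stubbed device being claimed, something like (addresses, and possibly the exact wording, will vary by kernel):
    Code:
    pci-stub 0000:01:00.0: claimed by stub
    pci-stub 0000:01:00.1: claimed by stub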
    download the virtio drivers from redhat (Windows will need these to access the vfio devices)
    http://alt.fedoraproject.org/pub/alt...latest/images/ eg. Obtain virtio-win-0.1-94.iso which can be used later as a cd-rom image for the Windows guest
    download the spice drivers for enhanced spice experience on windows from
    http://www.spice-space.org/download.html

    I installed the Ubuntu supplied OVMF so that all the necessary links etc are created but it will NOT work. You may not need to do this, but I felt it was cleaner.
    Code:
    sudo apt-get install ovmf
    You must find the latest OVMF file … preferably download from Gerd Hoffman's site https://www.kraxel.org/repos/jenkins/edk2/ and extract OVMF-pure-efi.fd from the rpm and copy to /usr/share/ovmf and create a "reference" copy as OVMF.fd (take a copy of the Ubuntu version first – just in case).

    install libvirt and virt-manager - this will provide the GUI VM management service
    Code:
    sudo apt-get install libvirt-bin virt-manager
    update the libvirt qemu configuration at /etc/libvirt/qemu.conf -
    • add this if you want to use host audio (not recommended)
      Code:
      nographics_allow_host_audio = 1
    • set this to maintain security
      Code:
      security_require_confined = 1
    • set these to enable qemu to access hardware. You'll need to work out which VFIO items you're going to provide access to (/dev/vfio) and you may or may not want to provide access to "pulse"
      Code:
      cgroup_device_acl = [
          "/dev/null", "/dev/full", "/dev/zero",
          "/dev/random", "/dev/urandom",
          "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
          "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
          "/dev/vfio/1", "/dev/vfio/14", "/dev/vfio/15", "/dev/vfio/16", "/dev/vfio/17",
          "/dev/shm", "/root/.config/pulse", "/dev/snd",
      ]
    • add this to enable access to the hugepages directory we created earlier
      Code:
      hugetlbfs_mount = "/dev/hugepages"
    • maintain the security constraints on VMs - we're running "unprivileged" and then providing specific accesses with the changes above, plus the Apparmor changes below
      Code:
      clear_emulator_capabilities = 1



    Update apparmor to allow libvirt to allocate hugepages, use VFIO and sound. Add the following to the apparmor definition in /etc/apparmor.d/abstractions/libvirt-qemu
    Code:
      # WARNING: this gives the guest direct access to host hardware and specific
      # portions of shared memory. This is required for sound using ALSA with kvm,
      # but may constitute a security risk. If your environment does not require
      # the use of sound in your VMs, feel free to comment out or prepend 'deny' to
      # the rules for files in /dev.
      /{dev,run}/shm r,
     # ================ START Changes ================ #
      /{dev,run}/shm/pulse-shm* rw,
      @{HOME}/.config/puls** rwk,
      @{HOME}/** r,
      # Only necessary if running as root, which we no longer are
      #/root/.config/puls** rwk,
      #/root/.asoundrc r,
      /dev/vfio/* rw,
      /dev/hugepages/libvirt** rw,
      # ================ END Changes ================ #
  /{dev,run}/shm/pulse-shm* r,
  /{dev,run}/shm/pulse-shm* rwk,
      /dev/snd/* rw,
      capability ipc_lock,
      # spice
    Then reload the apparmor definitions so the changes take effect
    Code:
    sudo invoke-rc.d apparmor reload
    Then restart libvirt (just to be sure)
    Code:
    sudo service libvirt-bin stop
    sudo service libvirt-bin start
    Be sure to back these changes up as updates to Apparmor may overwrite them .....

    Start virt-manager (should appear in the menu as "Virtual Machine Manager") and add a connection to QEMU on local host (File/Add Connection), this will give you the ability to create and manage KVM machines

    Now we can start creating a new VM (right click on the "localhost (QEMU)" line in the main screen area and select "New")

    You'll need a copy of Windows 8 or 8.1. It's apparently possible to install Windows 7 but I found it was more trouble than it's worth. DO NOT try to install the Ultimate version - install Professional or Home

    Your graphics card will need to be UEFI capable. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one (see the beginning of this post)

    Define the new VM in virt-manager. Remember to -
    • Select a “dummy” iso to install from … we're going to replace this later
    • Select UEFI as the bios (Advanced Options on last screen)
    • use appropriate names etc


    STOP the install triggered by virt-manager BEFORE it installs anything. You should easily have time to stop the install while the UEFI boot process is initiating.
    Stopping is easy if you're working with virt-manager (the gui) - click on the red "shutdown" icon at the top of the screen (use the virt-viewer “force off” option). You'll probably have to stop it twice – on my system it automatically restarts itself even after a forced stop.

    Create a new copy of the OVMF-pure-efi.fd file you downloaded from Gerd's site and rename it for your new VM eg.
    Code:
    sudo cp /usr/share/ovmf/OVMF-pure-efi.fd /usr/share/ovmf/OVMF-win8-pro.fd
    sudo ln -s /usr/share/ovmf/OVMF-win8-pro.fd /usr/share/qemu/OVMF-win8-pro.fd
    Ensure all the following parameters are set BEFORE attempting an install using
    Code:
    EDITOR=nano virsh edit <your-vm-name>
    ie.

    [Note: the XML fragments in this section were swallowed by the forum software; they are reconstructed below from standard libvirt syntax.]
    • Initial domain definition – to support direct qemu parameters, right at the top of the file
      Code:
      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    • Memory backing
      Code:
      <memoryBacking>
        <hugepages/>
      </memoryBacking>
    • Change the loader in the <os> section ie. the OVMF file name (custom for each VM – create in /usr/share/ovmf and create link in /usr/share/qemu as described above)
      Code:
      <loader>/usr/share/qemu/OVMF_win8_pro.fd</loader>
    • CPU. I chose to define my real CPU type. You can set it to "host"; this seemed the best overall result for me
      Code:
      <cpu mode='custom' match='exact'>
        <model>Haswell</model>
      </cpu>
    • Features (hv_relaxed etc). NVidia drivers don't seem to like these HyperV optimisations and will probably fail if they're encountered, so set them to "off" if using an NVidia card
      Code:
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='off'/>
          <vapic state='off'/>
          <spinlocks state='off'/>
        </hyperv>
      </features>
    • Clock (hv_time), more performance parameters the NVidia drivers won't like
      Code:
      <clock offset='localtime'>
        <timer name='hypervclock' present='no'/> <!-- Not sure about this one -->
      </clock>
    • Change emulator in the <devices> section
      Code:
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
    • Spice access – add to the end, just before </devices>. You don't absolutely need this, but I find it useful to be able to control the keyboard during UEFI initialisation before any USB drivers have been loaded, and it doesn't seem to do any harm
      Code:
      <graphics type='spice' port='5905' autoport='no' listen='127.0.0.1'/>


For those of you using NVidia cards you will also need to
(a) if installing NVidia driver version 337.88 or later - use the kvm=off parameter which is available in Qemu >= 2.1, using the following lines just before </domain> (again, the XML was stripped from the original post; this is the standard libvirt qemu:commandline form)

Code:
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,kvm=off'/>
  </qemu:commandline>
or if using Libvirt >= 1.2.8 you can add
Code:
  <features>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
and
(b) if installing NVidia driver version 344.11 or later - set the Hyper-V extensions "off" (all the hv_* options to -cpu) in addition to the above

Save the changes. Note that virsh may respond with an error ("Failed. Try again? [y,n,f,?]:") .... use the try-again option to return to the editor and fix the problem



For what it's worth, do NOT set USB = USB2 in the VM setup; leave it at the default. USB processing seems to cause a lot of grief, so best to stick with the default. Also, the Logitech G27 wheel will not work correctly via the Renesas USB 3 add-in card but works fine if passed directly from a USB 2 port on the host via host USB passthrough - other USB devices may be similarly afflicted one way or the other (some only work when connected to a passed-through card, some will only work when passed through from the host controller, ymmv)

The install iso should be copied to an install partition as accessible to a VM ie.
  • Create a new partition for this purpose, can be LVM or 'flat' but must reside on a GPT disk not MBR
  • Allocate the new partition to a VM – so it can be formatted. DO NOT format this using host format tools, it must be done from a VM
  • Allocate the new partition to a windows VM
  • Format as FAT32, consider adding the WIN xx bootloader as well. Not necessary but seems cleaner. Also make the partition bootable (set the “active” or bootable flag in the partition table)
  • Copy the install iso to the newly formatted partition. This can be accomplished by passing the iso to the VM used to format the new install partition as a CD-ROM (use virt-manager)
  • Check that the \efi\boot directory exists and contains the same files as \efi\microsoft\boot. If necessary copy files into a new \efi\boot directory. The directory must also contain a copy of bootmgfw.efi named bootx64.efi
  • Check the contents of \sources\ei.cfg to ensure it nominates the correct OS product (best to use “Professional”).
  • It can be beneficial to use Win Toolkit to include the qxl driver (the spice display driver) in the Windows build, although I'm not convinced this is necessary.
  • Exit the VM used to format the install partition


Now you can use the Virt-Manager gui to further modify the VM definition (the manual edits should not be impacted by this; you can easily check at any time with "virsh edit")
  • Add the newly created install partition to your new VM definition as created in #3 above and remove the previously added “dummy” install cd iso. The easiest way to do this is to use virt-viewer “information view” - add hardware. Be sure to add as “IDE disk” and not virtio

also add the following -
  • virtio drivers iso as cd-rom.
  • qxl drivers iso as cd-rom (can be part of the virtio if easier). Note that this probably cannot be used during Windows install since they are unsigned. You'll need to add them later
  • any other files needed, as cd-rom eg. drivers. You can easily create an iso from any files using the Ubuntu disk burning tools eg. k3b
  • Ensure that the “boot menu” option is enabled in the boot order section of virt-viewer
  • Ensure the main partition to be installed to is accessed via virtio (disk bus under the advanced options for the device)
  • Ensure the network adapter is defined as virtio (device model for the NIC)
  • ensure the default graphics and display are spice based. Windows doesn't seem to need additional drivers for these (which is why you should NOT need to build the drivers into the install image).


Run the install. If necessary press the space key as the UEFI boot progresses and select the disk to boot from. Sometimes uefi doesn't find the correct boot disk. You will need to connect via Spice to enable this (spicec -h 127.0.0.1 -p 5905)

You'll need to install "additional drivers" and select the virtio drivers for disk and network access. This makes a significant difference to VM performance AND the install will fail if you've set the VM definition to provide a virtio disk but Windows cannot find a driver

Add the required PCI devices. Only add PCI passthrough devices AFTER the main install is complete

Windows tuning hints
  • Turn off all paging. The host will handle this
  • I tell Windows to optimise for performance. This makes screens look pretty ordinary, but since I only use the VM for gaming and games take direct control of the screen, it doesn't really matter
  • Consider turning on update protection ie. Snapshots you can fall back to if an update fails. Then take the first snapshot directly after the install so you have a restore point


Shut the vm down using the Windows shut-down procedure ie. Normal termination

Add the PCI passthrough cards. In my case I pass
  • Graphics card – primary address (01:00.0)
  • Graphics card - sound address (01:00.1)
  • USB add-in card (Renesas based) to which the following are attached via downstream hubs (on display panels) (02:00.0)
  • Host USB device (Logitech wheel)



Add any USB devices to be passed from the host. In my case there seems to be a problem with USB 3 drivers on the guest (and possibly on the host) so I had to detach the wheel from the add-in card and attach it to a USB 2 port on the host, then pass it through via host usb passthrough – which works well.

Reboot and verify all is working

When the graphics card is working, shut down the VM. Remove the following from the VM definition
  • Spice display
  • qxl graphics
  • console definition
  • serial port definition
  • channel definition


Reboot the VM to verify everything continues to work

In my case I now set up eyefinity and gaming software. The AMD control centre seems a bit flaky and sometimes caused a lock up while trying to establish an eyefinity group. 1 or 2 reboots later (forced shutdown-poweroff from the virt-manager interface) it's all working

No more need to customise the kernel or worry about loss of function on the host graphics (due to VGA-Arbiter patch) !!!
No real performance difference (for me) between UEFI and BIOS …. more stable, easier to manage using libvirt / virt-manager (everything exposed to libvirt & managed there).
You can connect to the VM using “spicec -h 127.0.0.1 -p 5905” and use the host keyboard during bootup should the need arise – before the guest VM loads any drivers ie. before the guest keyboard and mouse are active

here's what my lspci looks like
Code:
lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation 4th Gen Core Processor DRAM Controller [8086:0c00] (rev 06)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
00:01.2 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x4 Controller [8086:0c09] (rev 06)
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06)
00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 05)
00:16.0 Communication controller [0780]: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 [8086:8c3a] (rev 04)
00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-V [8086:153b] (rev 05)
00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 05)
00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 05)
00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 [8086:8c10] (rev d5)
00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 [8086:8c14] (rev d5)
00:1c.3 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 [8086:8c16] (rev d5)
00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 [8086:8c18] (rev d5)
00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 05)
00:1f.0 ISA bridge [0601]: Intel Corporation Z87 Express LPC Controller [8086:8c44] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22] (rev 05)
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950] [1002:6719]
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series] [1002:aa80]
02:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03)
04:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
05:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
I chose to pass the AMD card and its audio controller through along with the Renesas USB controller ie. 01:00.0 to 02:00.0

here's my final libvirt definition
Code:
cat /etc/libvirt/qemu/ssd-win8-uefi.xml

[Note: the XML markup was stripped when this post was archived; the skeleton below is reconstructed from the values that survived, and the <devices> section is truncated in the original.]

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>ssd-win8-uefi</name>
  <uuid>redacted</uuid>
  <memory unit='KiB'>6291456</memory>
  <currentMemory unit='KiB'>6291456</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <loader>/usr/share/qemu/OVMF_ssd_win8_pro.fd</loader>
  </os>
  <features>
    <acpi/>
    <apic/>
    <!-- hyperv flags were here but are not recoverable from the paste -->
  </features>
  <cpu mode='custom' match='exact'>
    <model>Haswell</model>
  </cpu>
  <clock offset='localtime'>
    <!-- timer settings not recoverable from the paste -->
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <!-- disk, network, passthrough and spice devices truncated in the original paste -->
For the last year or so I've been running Windows under Linux using KVM. I started off with true VGA passthrough using instructions from here Then a UEFI mechanism became available - which meant no need to deal with legacy VGA any more and no need for custom kernels or arcane Qemu commands passed to Libvirt. I use a standard version of Ubuntu Trusty ie. the long term stable release - as you would expect for a server So here's a relatively easy way to create A Windows VM with real passthrough .... using the GUI to create, manage and start your VM. It's been very very stable for me and very easy to manage. There are a few tricks along the way, nothing too arcane. NOTE that you do NOT need the host to be booted using a UEFI bios so you need not change your motherboard bios for this. The only bios change is to ensure VT-d or AMD-VI are turned on It's definitely worth reading Alex's "how to" series before you begin http://vfio.blogspot.com.au/2015_05_01_archive.html First off you must have the right hardware. You will need
  1. A CPU which supports IOMMU ie. VT-D in Intel or VI for AMD (this generally excludes the "K" versions of Intel CPUs)
  2. A motherboard with BIOS / UEFI which supports IOMMU as above. Note that this can be the most problematic to ensure. Broadly speaking recent Asrock boards are good, Gigabye are probably good and others are hit and miss. Many people are very frustrated with Asus (including me)
  3. A Graphics card to be passed through. Note that you cannot pass an IGP through at present so if your cpu has integrated graphics use it for the host.
  4. A plan for host interaction. You can use ssh or vnc or better (for most people) use your IGP for the host
  5. Sufficient RAM and disk
If you're planning to pass an NVidia graphics card to your VM, buckle in - you have some fun ahead. You will need (a) if installing NVidia driver version 337.88 or later - use the kvm=off parameter which is available in Qemu >= 2.1 and (b) if installing NVidia driver version 344.11 or later - set the Hyper-V extensions "off" (all the hv_* options to -cpu) in addition to the above For those using an AMD R260 R290 (Hawaii) or AMD 7700 (Bonair) you need to use QEMU 2.3+ in which the "reset problem" was fixed so the guest VM can be restarted without trouble In my case
  • CPU Intel 4670
  • RAM 16 GB
  • Motherboard Asrock Z87 Extreme 6
  • GPU AMD HD6950
  • Disk Sandisk Extreme II 480 GB (boot drive and windows C drive host)
  • WD Black 2 Tb
This spreadsheet lists hardware success stories https://docs.google.com/spreadsheet/...rive_web#gid=0 For these instructionsm, you'll also need a UEFI capable graphics card. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one using the instructions here http://www.insanelymac.com/forum/top...any-ati-cards/ http://www.overclock.net/t/1474306/r...fi-bios-thread I used the tool from Insanely Mac (Windows version - installed in a temporary non-UEFI, simple VM I created for the purpose), link here http://www.overclock.net/t/1474306/r...#post_23400460 I also bought a cheap PCIe USB card (based on the Renesas-NEC chipset) to be passed to the VM. I tried to pass USB devices directly with mixed success, so the add-in card made life much easier at a cost of < AUD$20 Next you need to enable the IOMMU in BIOS. Usually there's a bios setting on Intel boards for VT-d - it will need to be set on. The following command can be used to verify a working iommu
Code:
dmesg|grep -e DMAR -e IOMMU
you should see something like
Code:
[    0.000000] ACPI: DMAR 0x00000000BDCB1CB0 0000B8 (v01 INTEL  BDW      00000001 INTL 00000001)
[    0.000000] Intel-IOMMU: enabled
[    0.028879] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a
[    0.028883] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da
[    0.028950] IOAPIC id 8 under DRHD base  0xfed91000 IOMMU 1
[    0.536212] DMAR: No ATSR found
[    0.536229] IOMMU 0 0xfed90000: using Queued invalidation
[    0.536230] IOMMU 1 0xfed91000: using Queued invalidation
[    0.536231] IOMMU: Setting RMRR:
[    0.536241] IOMMU: Setting identity map for device 0000:00:02.0 [0xbf000000 - 0xcf1fffff]
[    0.537490] IOMMU: Setting identity map for device 0000:00:14.0 [0xbdea8000 - 0xbdeb6fff]
[    0.537512] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbdea8000 - 0xbdeb6fff]
[    0.537530] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbdea8000 - 0xbdeb6fff]
[    0.537543] IOMMU: Prepare 0-16MiB unity mapping for LPC
[    0.537549] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[    2.182790] [drm] DMAR active, disabling use of stolen memory
And check that the more standard VT-x and AMD -v are available
Code:
egrep -q '^flags.*(svm|vmx)' /proc/cpuinfo && echo virtualization extensions available
Ensure you have all the latest versions of packages etc.
Code:
sudo apt-get update
sudo apt-get upgrade
Install KVM
Code:
sudo apt-get install qemu-kvm seabios spice-client hugepages spice-client
or use this tutorial https://help.ubuntu.com/community/KVM/Installation Create a new directory for hugepages, we'll use this later (to improve VM performance)
Code:
sudo mkdir /dev/hugepages
find your PCI addresses using the following command
Code:
lspci -nn
or lspci -nnk for additional information or lspci -vnn for even more information choose the PCI devices you want to pass through and work out which IOMMU groups they belong to. I suggest you start simple and just passthrough the graphics card itself (don't passthrough the built in audio) Use this script to display the IOMMU groupings (thanks to Alex WIlliamson)
Code:
#!/bin/sh

# List the devices in each IOMMU group, from AW at
# https://bbs.archlinux.org/viewtopic.php?id=162768&p=29

BASE="/sys/kernel/iommu_groups"

for i in $(find $BASE -maxdepth 1 -mindepth 1 -type d); do
 GROUP=$(basename $i)
 echo "### Group $GROUP ###"
 for j in $(find $i/devices -type l); do
  DEV=$(basename $j)
  echo -n "    "
  lspci -s $DEV
 done
done
Find the groups containing the devices you wish to pass through. All the devices in a single group need to be attached to pci-stub together (except bridges and hubs) – this ensures that there is no cross-talk between VMs ie. a security feature which IOMMUs are designed to support. If the grouping is too inconvenient you can apply the ACS patch to your kernel (refer to the Arch discussion linked at the begging of this post). If you find that you have 2 devices in a single IOMMU group which you want to pass to different VMS, you're going to need the ACS patch and an additional grub command line parameter (I encountered this on my Asrock motherboard and so am not using 2 passthrough VMs simulataneously so I don't have to patch the kernel - it would be a maintenance irritation) you're ready to change the Grub entries in /etc/default/grub in order to enable IOMMU facilities and attach pci devices top pci-stub so they can subsequently be used by vfio. Mine looks like this (at the top) after changes
Code:
GRUB_DEFAULT="saved"
GRUB_SAVEDEFAULT=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pci-stub.ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539"
GRUB_CMDLINE_LINUX=""
Update with
Code:
sudo update-grub
NOTE, if you have installed Xen, you may find it has created another default file in /etc/default/grub.d/xen.conf which overrides the selection of the grub default, in my case (when experimenting) I changed it like this
Code:
#
# Uncomment the following variable and set to 0 or 1 to avoid warning.
#
#XEN_OVERRIDE_GRUB_DEFAULT=0
XEN_OVERRIDE_GRUB_DEFAULT=0
you probably need to blacklist the drivers for graphics card being passed through (sometimes they grab the card before it's allocated to pci-stub). Change /etc/modprobe.d/blacklist.conf and add the relevant entry. In my case (for amd graphics) I added the following to the end of the file
Code:
# To support VGA Passthrough
blacklist radeon
those using an NVidia card will need to blacklist Nouveau Whilst in the above directory you may also wish to modify /etc/modprobe.d/kvm.conf to select appropriate options, in my case (I have not used all the options, just “documented” existence)
Code:
# if vfio-pci was built as a module ( default on arch & ubuntu )
#options vfio_iommu_type1 allow_unsafe_interrupts=1 
# Some applications like Passmark Performance Test and SiSoftware Sandra crash the VM without this:
# options kvm ignore_msrs=1
If using hugepages (recommended for better performance), update sysctl whilst in /etc ie. add the following lines to /etc/sysctl.conf
Code:
# Set hugetables / hugepages for KVM single guest needing 6GB RAM
vm.nr_hugepages = 3200
Also update the ulimits in /etc/security/limits.conf (set limit to an amout sufficient for your VM)
Code:
           hard    memlock         8388608
Later on we'll refine the use of hugetables. The above figures are set for my system where, hugepages are 2MB each. The Windows VM which needs this facility the most is allocated 6GB of ram, so we need 6144 MB which => 6144 / 2 = 3072 … plus add some extra for overhead (about 2% ie. 61 additional pages, so I have overachieved :) If you haven't included pci-stub in the kernel (see the kernel config recommendations above) then you may need to add the module name to your initramfs, update /etc/initramfs-tools/modules to include the following line
Code:
pci-stub
and “update” your initramfs (use “-c” option to build a new one)
Code:
sudo update-initramfs -u
Note that I usually update initramfs as a matter of course when I update grub – to ensure the two are always synchronised now you're about ready to reboot and start creating the VM After rebooting, check that the cards to be passed through are assigned to pci-stub using
Code:
dmesg | grep pci-stub
download the virtio drivers from redhat (Windows will need these to access the vfio devices) http://alt.fedoraproject.org/pub/alt...latest/images/ eg. Obtain virtio-win-0.1-94.iso which can be used later as a cd-rom image for the Windows guest download the spice drivers for enhanced spice experience on windows from http://www.spice-space.org/download.html I installed the Ubuntu supplied OVMF so that all the necessary links etc are created but it will NOT work. You may not need to do this, but I felt it was cleaner.
Code:
sudo apt-get install ovmf
You must find the latest OVMF file … preferably download from Gerd Hoffman's site https://www.kraxel.org/repos/jenkins/edk2/ and extract OVMF-pure-efi-fd from the rpm and copy to /usr/share/ovmf and create a "reference" copy as OVMF.fd (take a copy of Ubuntu version first – just in case). install libvirt and virt-manager - this will provide the GUI VM management service
Code:
sudo apt-get install libvirt-bin virt-manager
update the libvirt qemu configuration at /etc/libvirt/qemu.conf -
  • add this if you want to use host audio (not recommended
    Code:
    nographics_allow_host_audio = 1
  • set this to maintain security
    Code:
    security_require_confined = 1
  • set these to enable qemu to access hardware. You'll need to work out which VFIO items you're going to provide access to (/dev/vfio) and you may or not want to provide access to "pulse"
    Code:
    cgroup_device_acl = [
        "/dev/null", "/dev/full", "/dev/zero",
        "/dev/random", "/dev/urandom",
        "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
        "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
        "/dev/vfio/1", "/dev/vfio/14", "/dev/vfio/15", "/dev/vfio/16", "/dev/vfio/17",
        "/dev/shm", "/root/.config/pulse", "/dev/snd",
    ]
  • add this to enable access to the hugepages directory we created earlier
    Code:
    hugetlbfs_mount = "/dev/hugepages"
    maintian the security constraints on VMs - we're running "unpriveleged" and then providing specific accesses with changes above, plus the Apparmor changes below
    Code:
    clear_emulator_capabilities = 1
Update apparmor to allow libvirt to allocate hugepages, use VFIO and sound. Add the following to the apparmor definition in /etc/apparmor.d/abstractions/libvirt-qemu
Code:
  # WARNING: this gives the guest direct access to host hardware and specific
  # portions of shared memory. This is required for sound using ALSA with kvm,
  # but may constitute a security risk. If your environment does not require
  # the use of sound in your VMs, feel free to comment out or prepend 'deny' to
  # the rules for files in /dev.
  /{dev,run}/shm r,
 # ================ START Changes ================ #
  /{dev,run}/shm/pulse-shm* rw,
  @{HOME}/.config/puls** rwk,
  @{HOME}/** r,
  # Only necessary if running as root, which we no longer are
  #/root/.config/puls** rwk,
  #/root/.asoundrc r,
  /dev/vfio/* rw,
  /dev/hugepages/libvirt** rw,
  # ================ END Changes ================ #
  /{dev,run}/shmpulse-shm* r,
  /{dev,run}/shmpulse-shm* rwk,
  /dev/snd/* rw,
  capability ipc_lock,
  # spice
Then reload the apparmor defintiions so the changes take effect
Code:
sudo invoke-rc.d apparmor reload
Then restart libvirt (just to be sure)
Code:
sudo service libvirt-bin stop
sudo service libvirt-bin start
Be sure to back these change up as updates to Apparmor may overwrite them ..... Start virt-manager (should appear in the menu as "Virtual Machine Manager") and add a connection to QEMU on local host (File/Add Connection), this will give you the ability to create and manage KVM machines Now we can start creating a new VM (right click on the "localhost (QEMU)" line in the main screen area and select "New") You'll need a copy of Windows 8 or 8.1. It's apparently possible to install Windows 7 but I found it was more trouble than it's worth. DO NOT try to install the Ultimate version - install Professional or Home Your graphics card will need to be UEFI capable. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one (see the beginning of this post) Define the new VM in virt-manager. Remember to -
  • Select a “dummy” iso to install from … we're going to replace this later
  • Select UEFI as the bios (Advanced Options on last screen)
  • use appropriate names etc
STOP the install triggered by virt-manager BEFORE it installs anything. You should easily have time to stop the install while the UEFI boot process is iniating. Stopping is easy if you're working with virt-manager (the gui) - click on the red "shutdown" icon at the top of the screen(use the virt-viewer “force off” option). You'll probably have to stop it twice – on my system it automatically restarts itself even after a forced stop. Create a new copy of the OVMF-pure-efi.fd file you downloaded from Gerd's site and rename it for your new VM eg.
Code:
sudo cp /usr/share/ovmf/OVMF-pure-efi.fd /usr/share/ovmf/OVMF-win8-pro.fd
sudo ln -s /usr/share/ovmf/OVMF-win8-pro.fd /usr/share/qemu/OVMF-win8-pro.fd
Ensure all the following parameters are set BEFORE attempting an install using
Code:
EDITOR=nano virsh edit 
ie.
  • Initial domain definition – to support direct qemu parameters, right at the top of the file
    Code:
  • Memory backing
    Code:
      
        
        
      
  • Change the loader in the section ie. The OVMF file name (custom for each VM – create in /usr/share/ovmf and create link in /usr/share/qemu as described above)
    Code:
    OVMF_win8_pro.fd
  • CPU. I chose to define my real CPU type. You can set it to "host", this seemed the best overall result for me
    Code:
      
        Haswell
        
      
  • Features (hv_relaxed etc). NVidia crivers don't seem to like these HyperV optimisations and will probably fail if they're encountered so set to "off" if using an NVidia card
    Code:
      
        
        
          
          
          
        
      
  • Clock (hv_time), more performance parameters the NVidia drivers won't like
    Code:
      
        
           # Not sure about this one
      
  • Change emulator in the section
    Code:
    /usr/bin/qemu-system-x86_64
  • Spice access – add to end, just before
. You don't absolutely need this, but I find it useful to be able to control the keyboard during UEFI intialisation before any USB drivers have been loaded, and it doesn't seem to do any harm
Code:
  
    
    
    
  
For those of you using NVidia cards you will also need to (a) if installing NVidia driver version 337.88 or later - use the kvm=off parameter which is available in Qemu >= 2.1, using the following lines after
Code:
    
    
or if using Libvirt >= 1.2.8 you can add
Code:
  
    
      
    
  
and (b) if installing NVidia driver version 344.11 or later - set the Hyper-V extensions "off" (all the hv_* options to -cpu) in addition to the above Save the changes. Note that Virsh may respond with an error in which case edit the file again like this ("Failed. Try again? [y,n,f,?]:") .... use the try again option to return to the editor and fix the problem For what it's worth do NOT set USB = USB2 in the VM setup, leave at default. USB processing seems to cause a lot of grief, so best to use with default. Also Logitech G27 wheel will not work correctly via Renesas USB 3 add-in card but works fine if passed directly from a USB 2 port on host via host USB passthrough - other USB devices may be similarly afflicted one way or the other (some only work when connected to a passed through card, some will only work when passed through fromm the host controller ymmv) The install iso should be copied to an install partition as accessible to a VM ie.
  • Create a new partition for this purpose, can be LVM or 'flat' but must reside on a GPT disk not MBR
  • Allocate the new partition to a VM – so it can be formatted. DO NOT format this using host format tools, it must be done from a VM
  • Allocate the new partition to a windows VM
  • Format as FAT32, consider adding the WIN xx bootloader as well. Not necessary but seems cleaner and make the partition bootable (set “active” or bootable flag in parition table)
  • Copy the install iso to the newly formatted partition. This can be accomplished by passing the iso to the VM used to format the new install partition as a CD-ROM (use virt-manager)
  • Check that the \efi\boot directory exists and contains the same files as \efi\microsoft\boot. If necessary copy files into a new \efi\boot directory. The directory must also contain a copy bootmgw.efi named bootx64.efi
  • Check the contents of \source\ei.cfg to ensure it nominates the correct OS product (best to use “Professional”).
  • It can be beneficial to use Win Toolkit to include the linux qxl driver (spice screen driver) in the Windows build although I'm not convinced this is necessary.
  • Exit the VM used to format the install partition
Now you can use the Virt-Manager gui to further modify the VM definition (the manual edits should not be impacted by this , you an easily check at any time with "virsh edit")
  • Add the newly created install partition to your new VM definition as created in #3 above and remove the previously added “dummy” install cd iso. The easiest way to do this is to use virt-viewer “information view” - add hardware. Be sure to add as “IDE disk” and not virtio
also add the following -
  • virtio drivers iso as cd-rom.
  • qxl drivers iso as cd-rom (can be part of the virtio if easier). Note that this probably cannot be used during Windows install since they are unsigned. You'll need to add them later
  • any other files need as cd-rom eg. drivers. You can easily create an iso from any files using the Ubuntu disk burning tools eg. k3b
  • Ensure that the “boot menu” option is enabled in the boot order section of virt-viewer
  • Ensure the main partition to be installed to is accessed via virtio (disk bus under the advanced options for the device)
  • Ensure the network adapter is defined as virtio (device model for the NIC)
  • ensure the default graphics and display are spice based. Windows doesn't seem to need additional drivers for these (which is why youshould NOT need to build the drivers into the install image).
Run the install. If necessary press the space key as the UEFI boot progresses and select the disk to boot from. Sometimes uefi doesn't find the correct boot disk. You will need to connect via Spice to enable this (spicec -h 127.0.0.1 -p 5905) You'll need to install "additional drivers" and select the virtio drivers for disk and network access. This makes a significant difference to VM performance AND the install will fail if you've set the VM definition to provide a virtio disk but Windows cannot find a driver Add the required PCI devices Only add PCI passthrough devices AFTER the main install is complete Windows tuning hints
  • Turn off all paging. The host will handle this
  • I tell Windows to optimise for performance. This makes screens lok pretty ordinary, but since I only use the VM for gaming and games take direct control of the screen, it doesn't really matter
  • Consider turning on update protection ie. Snapshots you can fall back to if an update fails. Then take the first snapshot directly after the install so you have a restore point
Shut the vm down using the Windows shut-down procedure ie. Normal termination Add the PCI passthrough cards. In my case I pass
  • Graphics card – primary address (01:00.0)
  • Graphics card - sound address (01:00.1)
  • USB add-in card (Renesas based) to which the following are attached via downstream hubs (on display panels) (02:00.0)
  • Host USB device (Logitech wheel)
Add any USB devices to be passed from the host. In my case there seems to be a problem with USB 3 drivers on the guest (and possibly on the host) so I had to detach the wheel from the add-in card and attach it to a USB 2 port on the host, then pass it through via host usb passthrough – which works well. Reboot and verify all is working When the graphics card is working. Shut down the VM. Remove the following from the VM definition
  • Spice display
  • qxl graphics
  • console definition
  • serial port definition
  • channel definition
Reboot the VM to verify everything continues to work.

In my case I now set up eyefinity and gaming software. The AMD control centre seems a bit flaky and sometimes caused a lock-up while trying to establish an eyefinity group. 1 or 2 reboots later (forced shutdown-poweroff from the virt-manager interface) it's all working.

No more need to customise the kernel or worry about loss of function on the host graphics (due to the VGA-Arbiter patch) !!!
No real performance difference (for me) between UEFI and BIOS …. more stable, easier to manage using libvirt / virt-manager (everything exposed to libvirt & managed there).
You can connect to the VM using “spicec -h 127.0.0.1 -p 5905” and use the host keyboard during bootup should the need arise – before the guest VM loads any drivers ie. before the guest keyboard and mouse are active.

here's what my lspci looks like
Code:
lspci -nn
00:00.0 Host bridge [0600]: Intel Corporation 4th Gen Core Processor DRAM Controller [8086:0c00] (rev 06)
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
00:01.2 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x4 Controller [8086:0c09] (rev 06)
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06)
00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 05)
00:16.0 Communication controller [0780]: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 [8086:8c3a] (rev 04)
00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-V [8086:153b] (rev 05)
00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 05)
00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 05)
00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 [8086:8c10] (rev d5)
00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 [8086:8c14] (rev d5)
00:1c.3 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 [8086:8c16] (rev d5)
00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 [8086:8c18] (rev d5)
00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 05)
00:1f.0 ISA bridge [0601]: Intel Corporation Z87 Express LPC Controller [8086:8c44] (rev 05)
00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 05)
00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22] (rev 05)
01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950] [1002:6719]
01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series] [1002:aa80]
02:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03)
04:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
05:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
I chose to pass the AMD card and its audio controller through, along with the Renesas USB controller ie. 01:00.0 to 02:00.0

here's my final libvirt definition
Code:
cat /etc/libvirt/qemu/ssd-win8-uefi.xml

<!-- NOTE: every XML tag was stripped when this post was copied, leaving only
     the element values. The markup below is reconstructed from those values
     and standard libvirt domain syntax - treat it as a sketch, not a verbatim
     copy of the original file -->
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>ssd-win8-uefi</name>
  <uuid>redacted</uuid>
  <memory>6291456</memory>
  <currentMemory>6291456</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc'>hvm</type>
    <loader>OVMF_ssd_win8_pro.fd</loader>
    <boot dev='hd'/>
    <bootmenu enable='yes'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Haswell</model>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <!-- the disk, network, PCI passthrough (hostdev) and spice sections were
         also lost in the copy; define them as described in the steps above -->
  </devices>
</domain>
Some people may need to force pci-stub and vfio modules to load at boot time. Update /etc/modules
Code:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

lp
rtc
pci_stub
vfio
vfio_iommu_type1
vfio_pci
kvm
kvm_amd
and then update your initramfs at /etc/initramfs-tools/modules
Code:
# List of modules that you want to include in your initramfs.
# They will be loaded at boot time in the order below.
#
# Syntax:  module_name [args ...]
#
# You must run update-initramfs(8) to effect this change.
#
# Examples:
#
# raid1
# sd_mod
pci_stub ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539
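As the header of that file notes, the change only takes effect once the initramfs is rebuilt:
Code:
sudo update-initramfs -u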
If you also want to be able to start the VM without using libvirt (ie. without using virsh or virt-manager), then you will need the following (6 steps) ....

1) Create the following script at /usr/bin/vfio-bind, this is from NBHS at https://bbs.archlinux.org/viewtopic.php?id=162768&p=1, and make it executable (sudo chmod ug+x /usr/bin/vfio-bind)
Code:
#!/bin/bash

modprobe vfio-pci

for dev in "$@"; do
        vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
        device=$(cat /sys/bus/pci/devices/$dev/device)
        if [ -e /sys/bus/pci/devices/$dev/driver ]; then
                echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
        fi
        echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
done
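The script takes full PCI addresses (including the 0000: domain prefix) as arguments, so you can also invoke it by hand to test, eg.
Code:
sudo vfio-bind 0000:01:00.0 0000:01:00.1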
2) Create the script which actually binds PCI cards to VFIO, this is from NBHS at https://bbs.archlinux.org/viewtopic.php?id=162768&p=1, and save it as /etc/init.d/vfio-bind-init.sh and make it executable (sudo chmod ug+x /etc/init.d/vfio-bind-init.sh)
Code:
#!/bin/sh

### BEGIN INIT INFO
# Provides:          vfio-bind
# Required-Start:    
# Required-Stop:
# Default-Start:     S
# Default-Stop:
# Short-Description: vfio-bindings
# Description:       bind selected PCI devices to VFIO for use by KVM
### END INIT INFO

# Script to perform VFIO-BIND function as described at https://bbs.archlinux.org/viewtopic.php?id=162768
#
#
# /usr/bin/vfio-bind /etc/vfio-pci.cfg
/usr/bin/vfio-bind 0000:01:00.0 0000:01:00.1 0000:02:00.0
exit 0
3) To make this run automatically on startup in Ubuntu (there are no dependencies)
Code:
sudo update-rc.d vfio-bind-init.sh defaults
4) Create the config file for vfio-bind at /etc/vfio-pci.cfg (the file referenced in the script comment above), again from NBHS on the Arch forums. This is my example – I list multiple entries, though I only actually use 4 at most (currently 2)
Code:
# List of all devices to be held by VFIO = taken from pci_stub ..... 
# IMPORTANT – no blank lines (DEVICES line is the last line in the file)
DEVICES="0000:01:00.0 0000:01:00.1 0000:02:00.0 0000:04:00.0 0000:05:00.0 0000:06:00.0"
5) Also increase the “ulimit” max by adding the following to /etc/security/limits.conf (the first field, blank here, should be your user-id) so your user-id is permitted to allocate memory to the VM
Code:
           hard    memlock         8388608  # value based on required memory
6) Set vfio-bind-init.sh to start automatically at boot
Code:
sudo update-rc.d vfio-bind-init.sh defaults
I created these notes as I installed the VM so they may not be complete or may contain inaccuracies (though they should be close). For reference see the Arch thread and VFIO links at the start of this post.
I'm happy to update the post to improve accuracy if anyone has constructive comments









  • #2 · redger (A Carafe of Ubuntu) · Join Date: May 2008 · Beans: 85

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    For the last year or so I've been running Windows under Linux using KVM. I started off with true VGA passthrough using instructions from here



    Then a UEFI mechanism became available - which meant no need to deal with legacy VGA any more and no need for custom kernels or arcane Qemu commands passed to Libvirt. I use a standard version of Ubuntu Trusty ie. the long term stable release - as you would expect for a server

    So here's a relatively easy way to create a Windows VM with real passthrough .... using the GUI to create, manage and start your VM. It's been very, very stable for me and very easy to manage.

    There are a few tricks along the way, nothing too arcane.

    NOTE that you do NOT need the host to be booted using a UEFI bios, so you need not change your motherboard bios for this. The only bios change is to ensure VT-d or AMD-Vi is turned on

    First off you must have the right hardware. You will need
    1. A CPU which supports an IOMMU ie. VT-d for Intel or AMD-Vi for AMD (this generally excludes the "K" versions of Intel CPUs)
    2. A motherboard with BIOS / UEFI which supports IOMMU as above. Note that this can be the most problematic to ensure. Broadly speaking recent Asrock boards are good, Gigabyte are probably good and others are hit and miss. Many people are very frustrated with Asus (including me)
    3. A Graphics card to be passed through. Note that you cannot pass an IGP through at present so if your cpu has integrated graphics use it for the host.
    4. A plan for host interaction. You can use ssh or vnc or better (for most people) use your IGP for the host
    5. Sufficient RAM and disk


    In my case
    • CPU Intel 4670
    • RAM 16 GB
    • Motherboard Asrock Z87 Extreme 6
    • GPU AMD HD6950
    • Disk Sandisk Extreme II 480 GB (boot drive and windows C drive host)
    • WD Black 2 Tb


    This spreadsheet lists hardware success stories https://docs.google.com/spreadsheet/...rive_web#gid=0

    For these instructions, you'll also need a UEFI-capable graphics card. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one using the instructions here
    http://www.insanelymac.com/forum/top...any-ati-cards/
    http://www.overclock.net/t/1474306/r...fi-bios-thread
    I used the tool from Insanely Mac (Windows version - installed in a temporary non-UEFI, simple VM I created for the purpose), link here http://www.overclock.net/t/1474306/r...#post_23400460

    I also bought a cheap PCIe USB card (based on the Renesas-NEC chipset) to be passed to the VM. I tried to pass USB devices directly with mixed success, so the add-in card made life much easier at a cost of < AUD$20

    Next you need to enable the IOMMU in BIOS. Usually there's a bios setting on Intel boards for VT-d - it will need to be set on. The following command can be used to verify a working iommu
    Code:
    dmesg|grep -e DMAR -e IOMMU
    you should see something like
    Code:
    [    0.000000] ACPI: DMAR 0x00000000BDCB1CB0 0000B8 (v01 INTEL  BDW      00000001 INTL 00000001)
    [    0.000000] Intel-IOMMU: enabled
    [    0.028879] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a
    [    0.028883] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da
    [    0.028950] IOAPIC id 8 under DRHD base  0xfed91000 IOMMU 1
    [    0.536212] DMAR: No ATSR found
    [    0.536229] IOMMU 0 0xfed90000: using Queued invalidation
    [    0.536230] IOMMU 1 0xfed91000: using Queued invalidation
    [    0.536231] IOMMU: Setting RMRR:
    [    0.536241] IOMMU: Setting identity map for device 0000:00:02.0 [0xbf000000 - 0xcf1fffff]
    [    0.537490] IOMMU: Setting identity map for device 0000:00:14.0 [0xbdea8000 - 0xbdeb6fff]
    [    0.537512] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbdea8000 - 0xbdeb6fff]
    [    0.537530] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbdea8000 - 0xbdeb6fff]
    [    0.537543] IOMMU: Prepare 0-16MiB unity mapping for LPC
    [    0.537549] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
    [    2.182790] [drm] DMAR active, disabling use of stolen memory
    And check that the more standard VT-x and AMD-V are available
    Code:
    egrep -q '^flags.*(svm|vmx)' /proc/cpuinfo && echo virtualization extensions available
    Ensure you have all the latest versions of packages etc.
    Code:
    sudo apt-get update
    sudo apt-get upgrade
    Install KVM
    Code:
    sudo apt-get install qemu-kvm seabios hugepages spice-client
    or use this tutorial https://help.ubuntu.com/community/KVM/Installation
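    A quick way to verify the install afterwards (the kvm-ok tool lives in the cpu-checker package):
    Code:
    sudo apt-get install cpu-checker
    sudo kvm-ok    # should report that KVM acceleration can be used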

    If you're going to pass an NVidia card through you will need to "hide" KVM and the Windows hypervisor optimisations. And that's going to need a newer version of Qemu (2.1+) than provided in the standard Trusty repos - try ppa:jacob/virtualisation. I can't vouch for NVidia cards since I don't use one, and the NVidia driver developers are getting into an arms race with the Qemu developers, so NVidia card drivers are a moving target.

    Create a new directory for hugepages, we'll use this later (to improve VM performance)
    Code:
    sudo mkdir /dev/hugepages
    find your PCI addresses using the following command
    Code:
    lspci -nn
    or lspci -nnk for additional information or lspci -vnn for even more information

    choose the PCI devices you want to pass through and work out which IOMMU groups they belong to. I suggest you start simple and just pass through the graphics card itself (don't pass through the built-in audio)
    Use this script to display the IOMMU groupings (thanks to Alex Williamson)
    Code:
    #!/bin/sh
    
    # List the devices in each IOMMU group, from AW at
    # https://bbs.archlinux.org/viewtopic.php?id=162768&p=29
    
    BASE="/sys/kernel/iommu_groups"
    
    for i in $(find $BASE -maxdepth 1 -mindepth 1 -type d); do
     GROUP=$(basename $i)
     echo "### Group $GROUP ###"
     for j in $(find $i/devices -type l); do
      DEV=$(basename $j)
      echo -n "    "
      lspci -s $DEV
     done
    done
    Find the groups containing the devices you wish to pass through. All the devices in a single group need to be attached to pci-stub together (except bridges and hubs) – this ensures that there is no cross-talk between VMs ie. a security feature which IOMMUs are designed to support.
    If the grouping is too inconvenient you can apply the ACS patch to your kernel (refer to the Arch discussion linked at the beginning of this post).
    If you find that you have 2 devices in a single IOMMU group which you want to pass to different VMs, you're going to need the ACS patch and an additional grub command line parameter (I encountered this on my Asrock motherboard, and so am not running 2 passthrough VMs simultaneously so I don't have to patch the kernel - it would be a maintenance irritation)

    you're ready to change the Grub entries in /etc/default/grub
    in order to enable IOMMU facilities and attach pci devices to pci-stub so they can subsequently be used by vfio. Mine looks like this (at the top) after changes
    Code:
    GRUB_DEFAULT="saved"
    GRUB_SAVEDEFAULT=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on pci-stub.ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539"
    GRUB_CMDLINE_LINUX=""
    Update with
    Code:
    sudo update-grub
    NOTE, if you have installed Xen, you may find it has created another default file in /etc/default/grub.d/xen.conf which overrides the selection of the grub default, in my case (when experimenting) I changed it like this
    Code:
    #
    # Uncomment the following variable and set to 0 or 1 to avoid warning.
    #
    #XEN_OVERRIDE_GRUB_DEFAULT=0
    XEN_OVERRIDE_GRUB_DEFAULT=0
    you probably need to blacklist the drivers for graphics card being passed through (sometimes they grab the card before it's allocated to pci-stub). Change /etc/modprobe.d/blacklist.conf and add the relevant entry. In my case (for amd graphics) I added the following to the end of the file
    Code:
    # To support VGA Passthrough
    blacklist radeon
    those using an NVidia card will need to blacklist Nouveau

    Whilst in the above directory you may also wish to modify /etc/modprobe.d/kvm.conf to select appropriate options, in my case (I have not used any of the options, just “documented” existence)
    Code:
    # if vfio-pci was built as a module ( default on arch & ubuntu )
    #options vfio_iommu_type1 allow_unsafe_interrupts=1 
    # Some applications like Passmark Performance Test and SiSoftware Sandra crash the VM without this:
    # options kvm ignore_msrs=1
    If using hugepages (recommended for better performance), update sysctl whilst in /etc ie. add the following lines to /etc/sysctl.conf
    Code:
    # Set hugetables / hugepages for KVM single guest needing 6GB RAM
    vm.nr_hugepages = 3200
    Later on we'll refine the use of hugepages. The above figure is set for my system, where hugepages are 2MB each.
    The Windows VM which needs this facility the most is allocated 6GB of ram, so we need 6144 MB => 6144 / 2 = 3072 pages … plus some extra for overhead (about 2% ie. 61 additional pages), so I have overachieved
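    A quick sanity check of the arithmetic and of the allocation itself (both figures are visible in /proc/meminfo):
    Code:
    grep Hugepagesize /proc/meminfo       # normally 2048 kB on x86_64
    echo $(( 6144 / 2 ))                  # 3072 pages for a 6GB guest
    echo $(( 3072 + 3072 * 2 / 100 ))     # 3133 pages with ~2% overhead
    grep HugePages_Total /proc/meminfo    # should show the figure set above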

    If you haven't included pci-stub in the kernel (see the kernel config recommendations above) then you may need to add the module name to your initramfs, update /etc/initramfs-tools/modules to include the following line
    Code:
    pci-stub
    and “update” your initramfs (use “-c” option to build a new one)
    Code:
    sudo update-initramfs -u
    Note that I usually update initramfs as a matter of course when I update grub – to ensure the two are always synchronised

    now you're about ready to reboot and start creating the VM

    After rebooting, check that the cards to be passed through are assigned to pci-stub using
    Code:
    dmesg | grep pci-stub
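    lspci can confirm the same thing per device (using the graphics card address from the listing further down):
    Code:
    lspci -nnk -s 01:00.0    # the "Kernel driver in use" line should read pci-stub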
    download the virtio drivers from redhat (Windows will need these to access the vfio devices)
    http://alt.fedoraproject.org/pub/alt...latest/images/ eg. Obtain virtio-win-0.1-94.iso which can be used later as a cd-rom image for the Windows guest
    download the spice drivers for enhanced spice experience on windows from
    http://www.spice-space.org/download.html

    I installed the Ubuntu supplied OVMF so that all the necessary links etc are created but it will NOT work. You may not need to do this, but I felt it was cleaner.
    Code:
    sudo apt-get install ovmf
    You must find the latest OVMF file … preferably download from Gerd Hoffmann's site https://www.kraxel.org/repos/jenkins/edk2/, extract OVMF-pure-efi.fd from the rpm, copy it to /usr/share/ovmf and create a "reference" copy as OVMF.fd (take a copy of the Ubuntu version first – just in case).
    Note that it's even better to use the split OVMF (code in one file and variables in another) - but I think this requires a later version of Qemu than stock Trusty provides
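    One way to unpack the rpm on Ubuntu without a Red Hat toolchain (a sketch - the exact rpm name changes with every build, and the internal path reflects kraxel's packaging at the time of writing):
    Code:
    sudo apt-get install rpm2cpio
    rpm2cpio edk2.git-ovmf-x64-*.noarch.rpm | cpio -idmv
    sudo cp usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /usr/share/ovmf/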

    install libvirt and virt-manager - this will provide the GUI VM management service
    Code:
    sudo apt-get install libvirt-bin virt-manager
    update the libvirt qemu configuration at /etc/libvirt/qemu.conf -
    • add this if you want to use host audio (not recommended)
      Code:
      nographics_allow_host_audio = 1
    • set this to maintain security
      Code:
      security_require_confined = 1
    • set these to enable qemu to access hardware. You'll need to work out which VFIO items you're going to provide access to (/dev/vfio) and you may or may not want to provide access to "pulse"
      Code:
      cgroup_device_acl = [
          "/dev/null", "/dev/full", "/dev/zero",
          "/dev/random", "/dev/urandom",
          "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
          "/dev/rtc","/dev/hpet", "/dev/vfio/vfio",
          "/dev/vfio/1", "/dev/vfio/14", "/dev/vfio/15", "/dev/vfio/16", "/dev/vfio/17",
          "/dev/shm", "/root/.config/pulse", "/dev/snd",
      ]
    • add this to enable access to the hugepages directory we created earlier
      Code:
      hugetlbfs_mount = "/dev/hugepages"
      maintain the security constraints on VMs - we're running "unprivileged" and then providing specific accesses with the changes above, plus the Apparmor changes below
      Code:
      clear_emulator_capabilities = 1



    Update apparmor to allow libvirt to allocate hugepages, use VFIO and sound. Add the following to the apparmor definition in /etc/apparmor.d/abstractions/libvirt-qemu
    Code:
      # WARNING: this gives the guest direct access to host hardware and specific
      # portions of shared memory. This is required for sound using ALSA with kvm,
      # but may constitute a security risk. If your environment does not require
      # the use of sound in your VMs, feel free to comment out or prepend 'deny' to
      # the rules for files in /dev.
      /{dev,run}/shm r,
     # ================ START Changes ================ #
      /{dev,run}/shm/pulse-shm* rw,
      @{HOME}/.config/puls** rwk,
      @{HOME}/** r,
      # Only necessary if running as root, which we no longer are
      #/root/.config/puls** rwk,
      #/root/.asoundrc r,
      /dev/vfio/* rw,
      /dev/hugepages/libvirt** rw,
      # ================ END Changes ================ #
      /{dev,run}/shm/pulse-shm* r,
      /{dev,run}/shm/pulse-shm* rwk,
      /dev/snd/* rw,
      capability ipc_lock,
      # spice
    Then reload the apparmor definitions so the changes take effect
    Code:
    sudo invoke-rc.d apparmor reload
    Then restart libvirt (just to be sure)
    Code:
    sudo service libvirt-bin stop
    sudo service libvirt-bin start
    Be sure to back these changes up as updates to Apparmor may overwrite them .....

    Start virt-manager (should appear in the menu as "Virtual Machine Manager") and add a connection to QEMU on local host (File/Add Connection), this will give you the ability to create and manage KVM machines

    Now we can start creating a new VM (right click on the "localhost (QEMU)" line in the main screen area and select "New")

    You'll need a copy of Windows 8 or 8.1. It's apparently possible to install Windows 7 but I found it was more trouble than it's worth. DO NOT try to install the Ultimate version - install Professional or Home

    Your graphics card will need to be UEFI capable. Mine is an older AMD card for which there is no official UEFI bios .... but I was able to construct one (see the beginning of this post)

    Create a disk image for the VM - I recommend an LVM partition. In my case I allocate LVM partitions on the mechanical drive until I'm happy everything's running ok and then copy them across to the SSD using dd.
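    For example (a sketch only - the volume group name and sizes here are placeholders):
    Code:
    sudo lvcreate -L 120G -n lv_win8 vg0        # main guest disk
    sudo lvcreate -L 4G -n lv_wininstall vg0    # small FAT32 install partition (format it from a VM!)
    # later, to copy a finished guest across to the SSD:
    sudo dd if=/dev/vg0/lv_win8 of=/dev/ssdvg/lv_win8 bs=4M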
    You'll also need a small (4GB) Fat32 partition on a "GPT" drive which we'll copy the installation files to. The install iso should be copied to an install partition as accessible to a VM ie.
    • Create a new partition for this purpose, can be LVM or 'flat' but must reside on a GPT disk not MBR
    • Allocate the new partition to a VM – so it can be formatted. DO NOT format this using host format tools, it must be done from a VM
    • Allocate the new partition to a windows VM
    • Format as FAT32, consider adding the WIN xx bootloader as well. Not necessary but seems cleaner, and make the partition bootable (set “active” or bootable flag in the partition table)
    • Copy the install iso to the newly formatted partition. This can be accomplished by passing the iso to the VM used to format the new install partition as a CD-ROM (use virt-manager)
    • Check that the \efi\boot directory exists and contains the same files as \efi\microsoft\boot. If necessary copy files into a new \efi\boot directory. It must also contain a copy of bootmgfw.efi named bootx64.efi in \efi\boot
    • Check the contents of \sources\ei.cfg to ensure it nominates the correct OS product (best to use “Professional”).
    • It can be beneficial to use Win Toolkit to include the qxl driver (the spice display driver) in the Windows build although I'm not convinced this is necessary.
    • Exit the VM used to format the install partition


    Define the new VM in virt-manager. Remember to -
    • Select a “dummy” iso to install from … we're going to replace this later
    • Select UEFI as the bios (Advanced Options on last screen)
    • use appropriate names etc


    STOP the install triggered by virt-manager BEFORE it installs anything. You should easily have time to stop the install while the UEFI boot process is initiating.
    Stopping is easy if you're working with virt-manager (the gui) - click on the red "shutdown" icon at the top of the screen (use the virt-viewer “force off” option). You'll probably have to stop it twice – on my system it automatically restarts itself even after a forced stop.

    Create a new copy of the OVMF-pure-efi.fd file you downloaded from Gerd's site and rename it for your new VM eg.
    Code:
    sudo cp /usr/share/ovmf/OVMF-pure-efi.fd /usr/share/ovmf/OVMF-win8-pro.fd
    sudo ln -s /usr/share/ovmf/OVMF-win8-pro.fd /usr/share/qemu/OVMF-win8-pro.fd
    Ensure all the following parameters are set BEFORE attempting an install, using
    Code:
    EDITOR=nano virsh edit <your-vm-name>
    ie.

    • Initial domain definition – to support direct qemu parameters, right at the top of the file
      Code:
      <!-- reconstructed: the tags were stripped from the forum copy -->
      <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    • Memory backing
      Code:
      <memoryBacking>
        <hugepages/>
      </memoryBacking>
    • Change the loader in the <os> section ie. the OVMF file name (custom for each VM – create in /usr/share/ovmf and create a link in /usr/share/qemu as described above)
      Code:
      <loader>OVMF_win8_pro.fd</loader>
    • CPU. I chose to define my real CPU type. You can set it to "host"; this seemed the best overall result for me
      Code:
      <cpu mode='custom' match='exact'>
        <model fallback='allow'>Haswell</model>
      </cpu>
    • Features (hv_relaxed etc). NVidia drivers don't seem to like these HyperV optimisations and will probably fail if they're encountered, so set to 'off' for NVidia cards
      Code:
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
        </hyperv>
      </features>
    • Clock (hv_time), more performance parameters the NVidia drivers won't like, so set to 'off' for NVidia cards
      Code:
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>  <!-- Not sure about this one -->
      </clock>
    • Change the emulator in the <devices> section
      Code:
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
    • Spice access – add to the end, just before </devices>. You don't absolutely need this, but I find it useful to be able to control the keyboard during UEFI initialisation before any USB drivers have been loaded, and it doesn't seem to do any harm
      Code:
      <graphics type='spice' port='5905' autoport='no'>
        <listen type='address' address='127.0.0.1'/>
      </graphics>









  • For those of you using NVidia cards you will also need to
    (a) use the kvm=off parameter which became available with Qemu v2.1+ (try ppa:jacob/virtualisation). Add this code to the qemu:commandline section (at the end) -
    Code:
    <!-- reconstructed: the tags were stripped from the forum copy -->
    <qemu:commandline>
      <qemu:arg value='-cpu'/>
      <qemu:arg value='host,kvm=off'/>
    </qemu:commandline>
    If you're using an updated version of Libvirt you could use
    Code:
    <features>
      ...
      <kvm>
        <hidden state='on'/>
      </kvm>
    </features>
    (b) choose the version of NVidia Windows driver carefully because NVidia seems to be intent on not supporting passthrough. See the VFIO blog for more information and options


  • Save the changes. Note that Virsh may respond with an error ("Failed. Try again? [y,n,f,?]:"), in which case use the try-again option to return to the editor and fix the problem

    For those using NVidia cards there were changes to drivers -
    • Version 338.77 was changed so you need kvm=off to hide the hypervisor for that version and all those after it.
    • From version 344.11 you'll need to remove all the "hv_foo" enablers.

    Alex advises that the graphics performance gain from newer drivers trumps the benefit of the hyper-v extensions. If graphics performance isn't your #1 priority then maybe there are cases for using older drivers.


    For what it's worth, do NOT set USB = USB2 in the VM setup, leave it at the default. USB processing seems to cause a lot of grief, so best to use the default. Also the Logitech G27 wheel will not work correctly via the Renesas USB 3 add-in card but works fine if passed directly from a USB 2 port on the host via host USB passthrough - other USB devices may be similarly afflicted one way or the other (some only work when connected to a passed-through card, some will only work when passed through from the host controller, ymmv)


    Now you can use the Virt-Manager gui to further modify the VM definition (the manual edits should not be impacted by this; you can easily check at any time with "virsh edit")
    • Add the newly created install partition to your new VM definition as created in #3 above and remove the previously added “dummy” install cd iso. The easiest way to do this is to use virt-viewer “information view” - add hardware. Be sure to add as “IDE disk” and not virtio

    also add the following -
    • virtio drivers iso as cd-rom.
    • qxl drivers iso as cd-rom (can be part of the virtio iso if easier). Note that these probably cannot be used during the Windows install since they are unsigned. You'll need to add them later
    • any other files needed, as cd-rom eg. drivers. You can easily create an iso from any files using the Ubuntu disc burning tools eg. k3b
    • Ensure that the “boot menu” option is enabled in the boot order section of virt-viewer
    • Ensure the main partition to be installed to is accessed via virtio (disk bus under the advanced options for the device)
    • Ensure the network adapter is defined as virtio (device model for the NIC)
    • ensure the default graphics and display are spice based. Windows doesn't seem to need additional drivers for these (which is why you should NOT need to build the drivers into the install image).


    Run the install. If necessary press the space key as the UEFI boot progresses and select the disk to boot from; sometimes UEFI doesn't find the correct boot disk. You will need to connect via Spice to enable this (spicec -h 127.0.0.1 -p 5905)

    You'll need to install "additional drivers" and select the virtio drivers for disk and network access. This makes a significant difference to VM performance AND the install will fail if you've set the VM definition to provide a virtio disk but Windows cannot find a driver

    Add the required PCI devices. Only add PCI passthrough devices AFTER the main install is complete
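    virt-manager's "Add Hardware / PCI Host Device" screen handles this; alternatively you can detach a device from the host by hand with virsh first (a sketch using the graphics card address below):
    Code:
    virsh nodedev-dettach pci_0000_01_00_0    # older libvirt spells it "dettach"; newer versions also accept nodedev-detach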

    Windows tuning hints
    • Turn off all paging. The host will handle this
    • I tell Windows to optimise for performance. This makes screens look pretty ordinary, but since I only use the VM for gaming and games take direct control of the screen, it doesn't really matter
    • Consider turning on update protection ie. Snapshots you can fall back to if an update fails. Then take the first snapshot directly after the install so you have a restore point


    Shut the vm down using the Windows shut-down procedure ie. Normal termination

    Add the PCI passthrough cards. In my case I pass
    • Graphics card – primary address (01:00.0)
    • Graphics card - sound address (01:00.1)
    • USB add-in card (Renesas based) to which the following are attached via downstream hubs (on display panels) (02:00.0)
    • Host USB device (Logitech wheel)



    Add any USB devices to be passed from the host. In my case there seems to be a problem with USB 3 drivers on the guest (and possibly on the host) so I had to detach the wheel from the add-in card and attach it to a USB 2 port on the host, then pass it through via host usb passthrough – which works well.

    Reboot and verify all is working

    When the graphics card is working, shut down the VM. Remove the following from the VM definition
    • Spice display
    • qxl graphics
    • console definition
    • serial port definition
    • channel definition


    Reboot the VM to verify everything continues to work

    In my case I now set up eyefinity and gaming software. The AMD control centre seems a bit flaky and sometimes caused a lock up while trying to establish an eyefinity group. 1 or 2 reboots later (forced shutdown-poweroff from the virt-manager interface) it's all working

    No more need to customise the kernel or worry about loss of function on the host graphics (due to VGA-Arbiter patch) !!!
    No real performance difference (for me) between UEFI and BIOS …. more stable, easier to manage using libvirt / virt-manager (everything exposed to libvirt & managed there).
    You can connect to the VM using “spicec -h 127.0.0.1 -p 5905” and use the host keyboard during bootup should the need arise – before the guest VM loads any drivers ie. before the guest keyboard and mouse are active

    here's what my lspci looks like
    Code:
    lspci -nn
    00:00.0 Host bridge [0600]: Intel Corporation 4th Gen Core Processor DRAM Controller [8086:0c00] (rev 06)
    00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller [8086:0c01] (rev 06)
    00:01.2 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x4 Controller [8086:0c09] (rev 06)
    00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06)
    00:14.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI [8086:8c31] (rev 05)
    00:16.0 Communication controller [0780]: Intel Corporation 8 Series/C220 Series Chipset Family MEI Controller #1 [8086:8c3a] (rev 04)
    00:19.0 Ethernet controller [0200]: Intel Corporation Ethernet Connection I217-V [8086:153b] (rev 05)
    00:1a.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 [8086:8c2d] (rev 05)
    00:1b.0 Audio device [0403]: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller [8086:8c20] (rev 05)
    00:1c.0 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #1 [8086:8c10] (rev d5)
    00:1c.2 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #3 [8086:8c14] (rev d5)
    00:1c.3 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #4 [8086:8c16] (rev d5)
    00:1c.4 PCI bridge [0604]: Intel Corporation 8 Series/C220 Series Chipset Family PCI Express Root Port #5 [8086:8c18] (rev d5)
    00:1d.0 USB controller [0c03]: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 [8086:8c26] (rev 05)
    00:1f.0 ISA bridge [0601]: Intel Corporation Z87 Express LPC Controller [8086:8c44] (rev 05)
    00:1f.2 SATA controller [0106]: Intel Corporation 8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode] [8086:8c02] (rev 05)
    00:1f.3 SMBus [0c05]: Intel Corporation 8 Series/C220 Series Chipset Family SMBus Controller [8086:8c22] (rev 05)
    01:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950] [1002:6719]
    01:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cayman/Antilles HDMI Audio [Radeon HD 6900 Series] [1002:aa80]
    02:00.0 USB controller [0c03]: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller [1912:0014] (rev 03)
    04:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
    05:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
    06:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 01)
    I chose to pass the AMD card and its audio controller through along with the Renesas USB controller ie. 01:00.0 to 02:00.0

    here's my final libvirt definition
    Code:
    cat /etc/libvirt/qemu/ssd-win8-uefi.xml

    <!-- NOTE: every XML tag was stripped when this post was copied, leaving only
         the element values. The markup below is reconstructed from those values
         and standard libvirt domain syntax - treat it as a sketch, not a verbatim
         copy of the original file -->
    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <name>ssd-win8-uefi</name>
      <uuid>redacted</uuid>
      <memory>6291456</memory>
      <currentMemory>6291456</currentMemory>
      <memoryBacking>
        <hugepages/>
      </memoryBacking>
      <vcpu>2</vcpu>
      <os>
        <type arch='x86_64' machine='pc'>hvm</type>
        <loader>OVMF_ssd_win8_pro.fd</loader>
        <boot dev='hd'/>
        <bootmenu enable='yes'/>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
        </hyperv>
      </features>
      <cpu mode='custom' match='exact'>
        <model fallback='allow'>Haswell</model>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <!-- the disk, network, PCI passthrough (hostdev) and spice sections were
             also lost in the copy; define them as described in the steps above -->
      </devices>
    </domain>
    Some people may need to force pci-stub and vfio modules to load at boot time. Update /etc/modules
    Code:
    # /etc/modules: kernel modules to load at boot time.
    #
    # This file contains the names of kernel modules that should be loaded
    # at boot time, one per line. Lines beginning with "#" are ignored.
    # Parameters can be specified after the module name.
    
    lp
    rtc
    pci_stub
    vfio
    vfio_iommu_type1
    vfio_pci
    kvm
    kvm_amd
    and then update your initramfs at /etc/initramfs-tools/modules
    Code:
    # List of modules that you want to include in your initramfs.
    # They will be loaded at boot time in the order below.
    #
    # Syntax:  module_name [args ...]
    #
    # You must run update-initramfs(8) to effect this change.
    #
    # Examples:
    #
    # raid1
    # sd_mod
    pci_stub ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539




    If you also want to be able to start the VM without using libvirt (ie. without using virsh or virt-manager), then you will need the following (8 steps) ....
    1) Create the following script at /usr/bin/vfio-bind, this is from NBHS at https://bbs.archlinux.org/viewtopic.php?id=162768&p=1, and make it executable (sudo chmod ug+x /usr/bin/vfio-bind)
    Code:
    #!/bin/bash
    
    modprobe vfio-pci
    
    for dev in "$@"; do
            vendor=$(cat /sys/bus/pci/devices/$dev/vendor)
            device=$(cat /sys/bus/pci/devices/$dev/device)
            if [ -e /sys/bus/pci/devices/$dev/driver ]; then
                    echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
            fi
            echo $vendor $device > /sys/bus/pci/drivers/vfio-pci/new_id
    done
    2) Create the script which actually binds PCI cards to VFIO, this is from NBHS at https://bbs.archlinux.org/viewtopic.php?id=162768&p=1, and save it as /etc/init.d/vfio-bind-init.sh and make it executable (sudo chmod ug+x /etc/init.d/vfio-bind-init.sh)
    Code:
    #!/bin/sh
    
    ### BEGIN INIT INFO
    # Provides:          vfio-bind
    # Required-Start:    
    # Required-Stop:
    # Default-Start:     S
    # Default-Stop:
    # Short-Description: vfio-bindings
    # Description:       bind selected PCI devices to VFIO for use by KVM
    ### END INIT INFO
    
    # Script to perform VFIO-BIND function as described at https://bbs.archlinux.org/viewtopic.php?id=162768
    #
    #
    # /usr/bin/vfio-bind /etc/vfio-pci.cfg
    /usr/bin/vfio-bind 0000:01:00.0 0000:01:00.1 0000:02:00.0
    exit 0
    3) To make this run automatically on startup in Ubuntu (there are no dependencies)
    Code:
    sudo update-rc.d vfio-bind-init.sh defaults

    4) Create the config file for vfio-bind at /etc/vfio-pci.cfg (the file referenced in the script comment above), again from NBHS on the Arch forums. This is my example – I list multiple entries, though I only actually use 4 at most (currently 2)
    Code:
    # List of all devices to be held by VFIO = taken from pci_stub ..... 
    # IMPORTANT – no blank lines (DEVICES line is the last line in the file)
    DEVICES="0000:01:00.0 0000:01:00.1 0000:02:00.0 0000:04:00.0 0000:05:00.0 0000:06:00.0"
    5) Also increase the “ulimit” max by adding the following to /etc/security/limits.conf (the first field, blank here, should be your user-id) so your user-id is permitted to allocate memory to the VM
    Code:
               hard    memlock         8388608  # value based on required memory
    6) Set vfio-bind-init.sh to start automatically at boot
    Code:
    sudo update-rc.d vfio-bind-init.sh defaults
    7) Create a new security group eg. libvirt_mine, containing the libvirt users and your own user (see the sketch below)
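    For example (libvirt-qemu being the user Ubuntu's libvirt runs qemu as):
    Code:
    sudo groupadd libvirt_mine
    sudo usermod -a -G libvirt_mine $USER         # your own user
    sudo usermod -a -G libvirt_mine libvirt-qemu  # the qemu process user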

    8) create and execute the following script to give yourself access to all the necessary files

    Code:
    #!/bin/sh
    
    echo "Setting ownership and access for vfio, hugepages and primary disk"
    
    sudo chown :libvirt_mine /dev/vfio/vfio
    sudo chown :libvirt_mine /dev/vfio/1
    sudo chown :libvirt_mine /dev/vfio/15
    
    sudo chmod g+rw /dev/vfio/vfio
    sudo chmod g+rw /dev/vfio/1
    sudo chmod g+rw /dev/vfio/15
    
    sudo chown -R :libvirt_mine /dev/hugepages
    
    sudo chmod -R g+rw /dev/hugepages
    
    if [ ! -e /dev/hugepages/libvirt ]; then
      sudo mkdir /dev/hugepages/libvirt
    fi
    sudo chown -R libvirt-qemu:libvirt_mine /dev/hugepages/libvirt
    sudo chmod -R g+rw /dev/hugepages/libvirt
    
    
    # in my case the VM image at /dev/ssd-virt/lvwin7_kvm is “linked” to /dev/dm-5; the lines below resolve the link wherever it points
    REF_FILE=$(ls -l /dev/ssd-virt | grep lvwin7_kvm | awk '{print $11}')
    STRIP=".."
    ACCESSFILE="/dev"${REF_FILE#$STRIP}
    #chown -R :libvirt_mine /dev/dm-5
    sudo chown -R :libvirt_mine $ACCESSFILE
    #chmod -R g+rw /dev/dm-5
    sudo chmod -R g+rw $ACCESSFILE
    
    # Set global user ulimit -l ie. locked memory to size of vm + working area = 6GB + 2GB = 8388608 kB; accomplished in /etc/default/libvirt-bin by adding the line "ulimit -l 8388608"
    # Depends on the upper limit set in /etc/security/limits.conf
    
    exit 0

    I created these notes as I installed the VM so they may not be complete or may contain inaccuracies (though they should be close). For reference see the Arch thread and VFIO links at the start of this post
    I'm happy to update the post to improve accuracy if anyone has constructive comments









  • #3 · TheFu · Join Date: Mar 2010 · Location: Squidbilly-land · Beans: 9,736 · Distro: Lubuntu 14.04 Trusty Tahr

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    The ('svm|vmx)' /proc/cpuinfo check is for VT-x or AMD-V. I don't think it does anything for VT-d. I think for that, we have to check the dmesg output.
    http://www.linux-kvm.org/page/How_to...M#VT-d_support has details. Seems strange to me that you didn't reference that website at all. Is there a reason?

    Didn't read the rest. Sorry, I don't game but it would be nice to move video editing into a VM - but my video card isn't supported and I don't have any UEFI here.
  • #4 · redger (A Carafe of Ubuntu) · Join Date: May 2008 · Beans: 85

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    Thanks for pointing that out - Link is now added

    I had so many references it was difficult to know which ones I used all the time. Some of the RedHat and Suse references are really good, but I didn't use them much. Mostly I followed along with the Arch discussion ... starting in Jan 2014.

    UEFI was completed in January this year


    You don't have to use UEFI, but without it you're up for compiling kernels etc. ... as I did for most of 2014. It works, performs the same and can also be managed via libvirt / virt-manager .... but isn't quite as stable (partly because I started off using the Q35 chipset instead of the default, which had some adverse consequences).
    I have the patch set for Ubuntu Trusty to support true VGA passthrough if you need it; it includes all the functional patches, plus some performance patches. Additionally I have notes on setting up true VGA passthrough, but I've not published them anywhere and there's even more work in that than this. The tricky bits that don't seem documented elsewhere relate to running the VMs as a "normal" user, not "root", either via libvirt or from the command line - it was a pain to get that going the first time. The instructions in the first post should be sufficient to make that work for UEFI'd guests
  • #5 · TheFu · Join Date: Mar 2010 · Location: Squidbilly-land · Beans: 9,736 · Distro: Lubuntu 14.04 Trusty Tahr

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    For me, a $40 KVM switch solves the hassle of rebuilding kernels for stuff like this just to have Windows with local graphics performance.

    I did my time on the bleeding edge in the mid-90s - rebuilding kernels all the time trying to get disk/network controllers to work at all.
    It is amazing how far we've come and that we are still improving things all the time. Thanks for pushing forward.
  • #6 · jclaeyssens (First Cup of Ubuntu) · Join Date: Oct 2014 · Beans: 11

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    Awesome write-up Redger, big thumbs up to you!!

    I'm trying to build a Windows VM with passthrough hardware at the moment and now I'm a bit stuck at recompiling the kernel for the vga arbiter patch (like you mentioned). Your tutorial almost makes me want to buy a new (uefi-compatible) graphics card and try this out myself.
  • #7 · redger (A Carafe of Ubuntu) · Join Date: May 2008 · Beans: 85

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    which part of the VGA arbiter patch is causing problems? Is it the download & compile or getting the patch to apply cleanly?

    fwiw here are my notes for compiling the kernel with patches (some are performance related)

    Code:
    To compile a new kernel
    https://wiki.ubuntu.com/Kernel/BuildYourOwnKernel
    https://help.ubuntu.com/community/Kernel/Compile
    https://wiki.ubuntu.com/KernelTeam/KernelMaintenance
    https://www.debian.org/releases/stable/i386/ch08s06.html.en
    
    architecture is "amd64" or "x86_64"
    
    1) download the source into the current directory using "apt-get source linux-image-xxxxx"   where xxxx is the name of the version eg. apt-get source linux-image-3.13.0-32-generic
    OR   sudo apt-get source linux-image-`uname -r`
    
    This should download the tarball(s) and extract the source code into a directory (which should be renamed immediately because all versions use the same directory name !!!)
    
    2) Open the new directory, clean up and prepare
      chmod a+x debian/scripts/*
      chmod a+x debian/scripts/misc/*
      fakeroot debian/rules clean
    
      and generate the required config by either -
      a) editing using "fakeroot debian/rules editconfigs"  (for all targets, one after another)      ... (choose ...generic, ) USE THIS OR YOU'LL GET A "make mrproper" ERROR ...
      b) "make menuconfig" and work through the various options. Remember to save before exit
      c) copy the current config from the boot directory as ".config" in the root directory for the new kernel and then use  "make oldconfig" ... which will ask a question for the value of each new config option
      
      Required config options are
        HZ_1000=y                     in Processor Type & Features (last page, Timer Frequency)
        PREEMPT=voluntary                 (Preemption Model on first page)
        CONFIG_PCI_STUB=y             in Bus Options, second page down (PCI Stub Driver)
        CONFIG_VFIO_IOMMU_TYPE1=y     in Device Drivers. 2 pages back from end
        CONFIG_VFIO=y                     (VFIO Non-Priveleged userspace driver framework)
        CONFIG_VFIO_PCI=y
        CONFIG_VFIO_PCI_VGA=y
    
    3) apply any patches /// remember to verify that they worked ok (look for fail)
       "sh re_apply_patches.sh > re_output_patch.txt"
    
    4) "fakeroot debian/rules clean"   to remove any old build information / files
    
    5) Ignore modules not created with new parameters ... copy re_modules.ignore to ...debian.master/abi//modules.ignore
    
       and ignore other ABI errors by copying re_prevrelease_arch_ignore (rename to  "ignore") to debian.master/abi//    eg. to debian.master/abi/3.13.0-32.56/amd64/
       
       Also, update the ABI version tags in /mnt/programming_data/kernel/linux-3.13.0-36-re/debian/changelog .... by copying the last set of changes and change name etc.
       This will generate a new name for the resulting Deb packages once compiled
    
    6) "DEB_BUILD_OPTIONS=parallel=3 skipmodule=true fakeroot debian/rules binary-headers binary-generic  > re_output_compile.txt"   to generate the deb files which can be installed (second thoughts don't pipe the poutput to a file - it will prevent selection of the CPU type)
      If you receive an error indicating you need to "make mrproper" then you need to start again and use the "fakeroot debian/rules editconfigs" command to create the Config file (step 2)
    
    7) Install all the Debs (from the dir one level higher than the compile dir) with the command "sudo dpkg -i linux**.deb"
    eg.   sudo dpkg -i linux*3.13.0-32_3.13.0-32.57*.deb
          sudo dpkg -i linux*3.13.0-32-generic_3.13.0-32.57*.deb
    
    8) go into Synaptic and lock all the newly installed elements (linux-image*, linux-header*, linux-tool*, linux-cloud-tool*) - to prevent the new kernel and components being overwritten in the next upgrade
    and here's the script I use to apply the patches
    Code:
    #!/bin/bash
    # patch --dry-run --verbose -p 1 -i re_xxxxxxxxxxxxx
    
    echo
    echo ... PATCHING ... VGA Arbiter
    patch -b -p 1 -i re_patch_01_i915_313rc4.patch
    echo
    echo ... PATCHING ... acs override
    patch -b -p 1 -i re_patch_02_override_for_missing_acs_capabilities.patch
    echo
    echo ... PATCHING ... memleak
    patch -b -p 1 -i re_patch_03_fix_memleak.patch
    echo
    echo ... PATCHING ... read DR6
    patch -b -p 1 -i re_patch_04_fix_reading_of_DR6.patch
    echo
    echo ... PATCHING ... debug registers - has problem, needs to follow DR6 patch
    patch -b -p 1 -i re_patch_05_debug_registers.patch
    # patch -b -p 1 -i re_debug_registers_RE.patch   # Corrected to add additional lines before DR6 patch runs
    echo
    echo ... PATCHING ... kernel__gcc
    patch -b -p 1 -i re_patch_06_kernel-38-gcc48-2.patch
    I have all those patches for the Ubuntu kernel if you can't find them elsewhere

    You might also want to look for a new bios for your graphics card, there are methods for creating UEFI versions for AMD cards as linked above.
    What sort of graphics card, cpu and motherboard do you have? You know that you only need to compile a new kernel if you have an Intel cpu with integrated graphics - and even then you can actually run everything without modifying the kernel, BUT the guest VM will corrupt your host graphics (mostly just weird colours) because of Intel's poorly implemented graphics driver - so it's a good way to test viability without the need to compile a new kernel
    AMD cpus should not require any kernel modifications

    Enable the ACS Override Patch with the following kernel parameter on the GRUB command line
    Code:
    pcie_acs_override=downstream
    Enable the VGA Arbiter Patch with the following kernel parameter on the GRUB command line
    Code:
    i915.enable_hd_vgaarb=1
  • #8 · jclaeyssens (First Cup of Ubuntu) · Join Date: Oct 2014 · Beans: 11

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    Wow, thanks redger! I'll have a closer look at what you posted tomorrow, because I'm a bit limited in time today, but thanks already for sharing your patching process!

    I have an Asrock C226WS motherboard with an Intel i7 4790S, so with integrated HD4600 graphics, and a dedicated ATI Radeon HD 6870. Using Ubuntu 14.04 LTS. When I pass the boot parameter intel_iommu=on in grub, Ubuntu crashes right after grub; it just gives me a nice screen of blue and green stripes. Removing the 'quiet splash' didn't reveal anything, it crashes directly after grub... So I searched the internet a bit and I stumbled upon the vga arbiter patch by Alex Williamson.

    I'm quite new to the kernel patching stuff, so for now I just downloaded a kernel with
    Code:
    sudo apt-get build-dep linux-image-`uname -r`
    I unpacked it all, and then ran the patch command
    Code:
    sudo patch -p1 < vga_arbiter_patch.diff
    which gave me

    sudo patch -p1 < vga_arbiter_patch.diff
    [sudo] password for jclaeyssens:
    patching file drivers/gpu/drm/i915/i915_dma.c
    Hunk #1 succeeded at 1297 (offset -9 lines).
    Hunk #2 succeeded at 1370 (offset -9 lines).
    patching file drivers/gpu/drm/i915/i915_drv.h
    Hunk #1 FAILED at 2080.
    1 out of 1 hunk FAILED -- saving rejects to file drivers/gpu/drm/i915/i915_drv.h.rej
    can't find file to patch at input line 58
    Perhaps you used the wrong -p or --strip option?
    The text leading up to this was:
    --------------------------
    |diff --git a/drivers/gpu/drm/i915/i915_params.c b/drivers/gpu/drm/i915/i915_params.c
    |index d1d7980..64d96c6 100644
    |--- a/drivers/gpu/drm/i915/i915_params.c
    |+++ b/drivers/gpu/drm/i915/i915_params.c
    --------------------------
    File to patch:
    Skip this patch? [y] y
    Skipping patch.
    2 out of 2 hunks ignored
    patching file drivers/gpu/drm/i915/intel_display.c
    Hunk #1 succeeded at 10628 with fuzz 1 (offset -656 lines).
    Hunk #2 succeeded at 10942 (offset -679 lines).
    Hunk #3 succeeded at 11151 (offset -725 lines).
    patching file drivers/gpu/drm/i915/intel_drv.h
    Hunk #1 succeeded at 868 (offset -66 lines).
    patching file include/linux/vgaarb.h
    So if I understand the output, it didn't find the file i915_params.c. I looked it up, and it isn't there at all... So as you can see, I didn't get far 
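    My guess is the patch was written against a newer kernel, where the i915 module parameters were split out into i915_params.c, so on my tree those hunks have nothing to apply to. Next time I'll dry-run the patch first; a minimal sketch:
    Code:
    # check whether the patch would apply cleanly, without modifying anything
    patch --dry-run -p1 < vga_arbiter_patch.diff
    # after a partial apply, list any rejects left behind
    find . -name '*.rej'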

    But I'll try out your notes tomorrow morning! Thanks again!
  • #9 - redger

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    This should contain the full set of patches ... ready to go.
    kernel_patches_trusty.tar.gz

    Drop them into the directory containing the kernel files (the lowest level), which will also contain the CREDITS, COPYING and README files.
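    For example, roughly (the paths are placeholders, and this assumes the archive holds the patch files at its top level):
    Code:
    cd ~/kernel/trusty    # top of the unpacked kernel source tree
    tar xzf ~/Downloads/kernel_patches_trusty.tar.gz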

    the last Trusty kernel I compiled was linux-3.13.0-44, and it was successful with these patches

    here are the "ignore" files used to overcome the kernel compile errors (very small files to be dropped in as per the notes above)
    ignore.tar.gz

    the boot failure sounds more like a parameter error ... I don't suppose you've specified nomodeset, have you ... it gave me trouble. Here's my current /etc/default/grub (first lines)
    Code:
    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    #   info -f grub -n 'Simple configuration'
    
    #GRUB_DEFAULT=0
    GRUB_DEFAULT="saved"
    GRUB_SAVEDEFAULT=true
    #GRUB_HIDDEN_TIMEOUT=0
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=10
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on enable_hd_vgaarb=1 pci-stub.ids=1002:6719,1002:aa80,8086:1539,1912:0014,1412:1724,1849:1539"
    GRUB_CMDLINE_LINUX=""
    and to confirm - you don't have to use the vga-arbiter patch ..... without it you will experience graphics corruption on the host display when the guest starts writing to its display ... and it may not be all that bad, so it's worth trying without any patches first, just to see if you can get it working

    also .. there are UEFI BIOSes for your graphics card on this page http://www.overclock.net/t/1474306/r...fi-bios-thread. You can specify the graphics BIOS as a file to qemu - without having to flash the card (I haven't done it, but some of the folk on the Arch discussion seem to have; search for "romfile" here https://bbs.archlinux.org/viewtopic.php?id=162768). The xml might look something like this
    Code:
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/path/to/vbios.rom'/>
    </hostdev>
    Note that I haven't tried this; I took the syntax from https://libvirt.org/formatdomain.html, search for "romfile" (the pci address and rom file path above are only examples and must match your own card).
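    On the plain qemu command line the same idea would look roughly like this - an untested sketch, where the device address and rom path are placeholders:
    Code:
    -device vfio-pci,host=01:00.0,x-vga=on,romfile=/path/to/vbios.rom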
  • #10 - KillerKelvUK

    Re: Windows Gaming VM - KVM / UEFI Version - HowTo

    Hey, for the Spice connection to work, does the guest need to have a virtual display adapter as well as the passed-through VGA adapter... Spice then uses the virtual one to capture the output for the client to render?

    My current progress is just a passed-through VGA adapter; my spice client connects but doesn't render a screen, just a black/blank box... USB redirection still works though.
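    If it does work that way, the guest would need a qxl device plus the spice server alongside the passed-through card; a rough sketch of the qemu side (memory size, port and image name are just placeholders):
    Code:
    qemu-system-x86_64 -enable-kvm -m 4096 \
      -vga qxl \
      -spice port=5900,addr=127.0.0.1,disable-ticketing \
      -drive file=windows.raw,media=disk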

    Also, regarding all the upfront effort with shell scripts for unbinding the VGA device and binding it to the vfio-pci driver before starting the guest... virt-manager & libvirt automate all of this. I have confirmed this, as my setup only has /dev/vfio/vfio until I start a guest, at which point the /dev/vfio/1 device is created to mirror the IOMMU group my GTX is in. I guess what I'm saying is that the shell scripts should only be required for users who only ever intend to use qemu via the cli; virt-manager users can omit all of those steps.
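    A quick way to check this on your own box (the group number will differ per board):
    Code:
    # per-group device nodes appear only once a guest has claimed the card
    ls -l /dev/vfio/
    # map PCI devices to their IOMMU groups
    find /sys/kernel/iommu_groups/ -type l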

    EDIT:

    Sorry redger, I should have started by saying thank you - this is a great tutorial and has helped me out no end 





