RESET Forums (homeservershow.com)

PCIe passthrough in Proxmox on an HPE ProLiant ML350p Gen8

Recommended Posts

HJB
Posted (edited)

This is my effort for the community to document my quest of getting FreeNAS to run as a VM under Proxmox, with an LSI HBA adapter passed through to FreeNAS so we get the best of both worlds.

My reasoning was that I did not want to give up the comfort and stability of either Proxmox or FreeNAS, but I refuse to run two separate servers, as I have a very well equipped HPE ProLiant ML350p Gen8.

Proxmox documentation showed me that the IOMMU has to be activated on the kernel command line. The command-line parameters are:
for Intel CPUs: 

intel_iommu=on


for AMD CPUs: 

amd_iommu=on

The kernel command line needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.

 

nano /etc/default/grub

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
# 22.5.2020 (HJB) added to enable pcie passtrough.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
# Disable os-prober, it might add menu entries for each guest
GRUB_DISABLE_OS_PROBER=true

After changing the kernel command line you need to run update-grub, and after changing anything module-related you need to refresh your initramfs. On Proxmox VE this can be done by executing:

update-grub
update-initramfs -u -k all

Finish Configuration
Finally, reboot to bring the changes into effect and check that the IOMMU is indeed enabled:

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

[    0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP     ProLiant 00000001 \xd2?   0000162E)
[    1.245296] DMAR: IOMMU enabled

[    2.592107] DMAR: Host address width 46
[    2.592173] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0
[    2.592247] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0462 ecap f020fe
[    2.592330] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1
[    2.592399] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
[    2.592481] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff
[    2.592550] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff
[    2.592618] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff
[    2.592686] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff
[    2.592755] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff
[    2.592823] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff
[    2.592892] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
[    2.592961] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
[    2.593030] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff
[    2.593108] DMAR: ATSR flags: 0x0
[    2.593185] DMAR-IR: IOAPIC id 10 under DRHD base  0xfbefe000 IOMMU 0
[    2.593254] DMAR-IR: IOAPIC id 8 under DRHD base  0xf4ffe000 IOMMU 1
[    2.593324] DMAR-IR: IOAPIC id 0 under DRHD base  0xf4ffe000 IOMMU 1
[    2.593396] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000
[    2.593467] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    2.593468] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    2.594425] DMAR-IR: Enabled IRQ remapping in xapic mode
[    4.286848] DMAR: dmar0: Using Queued invalidation
[    4.286932] DMAR: dmar1: Using Queued invalidation
[    4.355658] DMAR: Intel(R) Virtualization Technology for Directed I/O
[  111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
[  151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
root@pve:~#

The bottom two lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI SAS 9207-8i controller, one of which we would like to pass through. The message "Device is ineligible for IOMMU domain attach" actually comes from the BIOS of the HPE server. So it seems like Proxmox is ready, but the server is still claiming the devices for IPMI/iLO.
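
To see how the kernel has grouped the devices, you can also list the IOMMU groups. This is only a quick check, not part of the procedure; a small shell sketch:

# print every PCI device together with its IOMMU group number
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done

A device you want to pass through should sit in its own group, or at least share it only with other functions of the same card.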

 

A hint found online suggested adding a parameter that allows unsafe interrupts, so let's try it out.

Changing the modules file with nano:

nano /etc/modules

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# 22.5.2020 (HJB) Added ".allow_unsafe_interrupts=1" to enable PCIe passthrough
vfio
vfio_iommu_type1.allow_unsafe_interrupts=1
vfio_pci
vfio_virqfd
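
As a side note, module options are normally set in /etc/modprobe.d rather than appended to a module name in /etc/modules. If you want to try the same option that way, a minimal sketch (the file name is just an example):

echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
update-initramfs -u -k all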

After a reboot the "Device is ineligible" lines have disappeared when checking with

dmesg | grep -e DMAR -e IOMMU -e AMD-Vi


Yet when we tried to pass through the LSI controller, we were still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the allow_unsafe_interrupts=1 entry again and rebooted once more.
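
To see the full error message rather than just the exit code, the VM can also be started from the Proxmox shell; a sketch, where 100 is a hypothetical VM ID:

qm start 100    # any start error is printed directly to the console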

A search online for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE document ID c04781229, which describes problems occurring when passing through PCIe devices (in the document it is GPUs).

At first I understood just about nothing from the document, but together with the blog post by Jim Denton it started to make sense. (Thank you, Jim, for taking the time to post it.)

After studying the blog post and the HPE document it became clear to me that PCIe passthrough is not possible for the internal P420i controller, but it is very well possible for devices in the physical PCIe slots. As I also have an LSI HBA controller, this seemed promising.

From here on it went relatively smoothly. Just follow the next steps.

We need the HPE scripting utilities, so we add the HPE repository to either the enterprise or the community sources list.

nano /etc/apt/sources.list.d/pve-enterprise.list

or

nano /etc/apt/sources.list

Add a line with

deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free
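
If you prefer to add it straight from the shell instead of opening nano, something like this should do the same for the community sources list:

echo 'deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free' >> /etc/apt/sources.list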

Now we add the HPE public keys by executing

curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -

apt update

And we can install the scripting utilities with

apt install hp-scripting-tools

Now we download the input file for HPE's conrep utility. Just to keep track of where everything is, I switched to the /home directory.

cd /home
wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml

 

We're nearly there now, hang on. Of course we need to know in which PCIe slot our controller sits. We use lspci to list all PCIe devices to a file and nano to scroll through it and find the controller.

lspci -vvv &> pcie.list
nano pcie.list

In our case we found this: our LSI controller, with PCIe device ID 0000:0d:00.0 as Proxmox knows it, is in slot 4.

0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
        Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
        Physical Slot: 4
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
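
If you already know roughly how the card identifies itself, a grep can save the scrolling; for example, assuming the controller reports itself as "LSI":

lspci | grep -i lsi                               # find the bus ID, e.g. 0d:00.0
lspci -vvv -s 0d:00.0 | grep -i 'physical slot'   # show only the slot line for that device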

So we can create an exclude file for that slot:

cd /home
nano exclude.dat

Add the following line to it and save it.

<Conrep> <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section> </Conrep>
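
The same file can also be written directly from the shell instead of using nano; a minimal sketch:

cat > /home/exclude.dat <<'EOF'
<Conrep> <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section> </Conrep>
EOF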

Now we're ready to rock and roll, sorry about that. I mean, ready to run the conrep utility from HPE, which excludes our PCIe slot from the RMRR regions reserved for iLO/IPMI:

conrep -l -x conrep_rmrds.xml -f exclude.dat

And we verify the result with

conrep -s -x conrep_rmrds.xml -f verify.dat
nano verify.dat

Now we should see something like this. Note that Slot4 now says Endpoints_Excluded.

<?xml version="1.0" encoding="UTF-8"?>
<!--generated by conrep version 5.5.0.0-->
<Conrep version="5.5.0.0" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
  <Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot2" helptext=".">Endpoints_included</Section>
  <Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
  <Section name="RMRDS_Slot5" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot6" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot7" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot8" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot9" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot10" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot11" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot12" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot13" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot14" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot15" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot16" helptext=".">Endpoints_Included</Section>
</Conrep>
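
Instead of scrolling through the whole file, a quick grep on the slot name shows just the relevant line:

grep -i 'rmrds_slot4' verify.dat    # should show Endpoints_Excluded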

 

Time to reboot the Proxmox server for the last time before we can celebrate.

Adding a PCIe device to the FreeNAS VM

(screenshot: adding a PCI device to the VM in the Proxmox web UI)

and select our PCIe device ID, which is the LSI controller.

(screenshot: selecting the PCI device)
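
The same can be done from the command line with qm; a sketch, where 100 is a hypothetical VM ID and 0000:0d:00.0 is the LSI controller found earlier:

qm set 100 -hostpci0 0000:0d:00.0    # append ,pcie=1 if the VM uses the q35 machine type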

 

That was all; the VM now starts happily with the passed-through controller.

Since then I have also used this procedure to pass through an Intel Optane 900p PCIe SSD.

 

Best regards

Henk

Edited by HJB
Typo

E3000
Posted (edited)
On 5/23/2020 at 12:20 PM, HJB said:

This is my effort for the community to document my quest of getting FreeNAS to run as a VM under Proxmox, with an LSI HBA adapter passed through to FreeNAS so we get the best of both worlds. […]


This does not seem to work with MicroServer Gen8 unfortunately...

Has anyone got HBA PCIe Passthrough working on MicroServer Gen8?

Edited by E3000


