This is my effort to document, for the community, my quest to get FreeNAS running as a VM under Proxmox with an LSI HBA adapter passed through to FreeNAS, so we get the best of both worlds.
My reasoning was that I did not want to give up the comfort and stability of either Proxmox or FreeNAS, but I refuse to run two separate servers, as I have a very well equipped HPE ProLiant ML350p Gen8.
The Proxmox documentation showed me that the IOMMU has to be activated on the kernel command line. The command line parameters are:
for Intel CPUs: intel_iommu=on
for AMD CPUs: amd_iommu=on
The kernel command line needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.
nano /etc/default/grub

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
# 22.5.2020 (HJB) added to enable pcie passtrough.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
# Disable os-prober, it might add menu entries for each guest
GRUB_DISABLE_OS_PROBER=true

After changing anything modules related, you need to refresh your initramfs. On Proxmox VE this can be done by executing:
update-initramfs -u -k all

Finish Configuration
Finally reboot to bring the changes into effect and check that it is indeed enabled.
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
[    0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP ProLiant 00000001 \xd2? 0000162E)
[    1.245296] DMAR: IOMMU enabled
[    2.592107] DMAR: Host address width 46
[    2.592173] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0
[    2.592247] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0462 ecap f020fe
[    2.592330] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1
[    2.592399] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
[    2.592481] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff
[    2.592550] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff
[    2.592618] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff
[    2.592686] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff
[    2.592755] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff
[    2.592823] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff
[    2.592892] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
[    2.592961] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
[    2.593030] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff
[    2.593108] DMAR: ATSR flags: 0x0
[    2.593185] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0
[    2.593254] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1
[    2.593324] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1
[    2.593396] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000
[    2.593467] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    2.593468] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    2.594425] DMAR-IR: Enabled IRQ remapping in xapic mode
[    4.286848] DMAR: dmar0: Using Queued invalidation
[    4.286932] DMAR: dmar1: Using Queued invalidation
[    4.355658] DMAR: Intel(R) Virtualization Technology for Directed I/O
[  111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
[  151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.

The bottom two lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI Logic SAS 9207-8i controller; one of these is the one we would like to pass through. The "Device is ineligible for IOMMU" message actually comes from the BIOS of the HPE server. So it seems Proxmox is getting ready, but the server firmware (IPMI/iLO) is still claiming the device.
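Before going further, it can also help to see how the kernel has grouped the devices: for clean passthrough the HBA should sit in its own IOMMU group (or share it only with its PCI bridge). This little loop is a common community snippet, not part of my original notes; it assumes the standard sysfs layout:

```shell
# List every IOMMU group and the devices it contains. The HBA should
# be alone in its group (or share it only with its PCI bridge).
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    # ${d##*/} strips the path, leaving the PCI address for lspci -s
    echo "  $(lspci -nns "${d##*/}")"
  done
done
```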
A hint found online mentioned adding a parameter that allows unsafe interrupts, so let's try it out.
Changing the modules file with nano:

nano /etc/modules

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# 22.5.2020 (HJB) Added ".allow_unsafe_interrupts=1" to enable PCIe passthrough
vfio
vfio_iommu_type1.allow_unsafe_interrupts=1
vfio_pci
vfio_virqfd

After a reboot the "device is ineligible" line has disappeared when checking with
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Yet when we tried to pass through the LSI controller, we were still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the .allow_unsafe_interrupts=1 entry and rebooted once more.
A search online for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE Document ID c04781229, which describes problems that occur when passing through PCIe devices (in the document it's GPUs).
At first I understood just about nothing of the document, but together with the blog post of Jim Denton it started to make sense. (Thank you Jim for taking the time to post it.)
After studying the blog post and the HPE document, it became clear to me that PCIe passthrough is not possible for the internal P420i controller, but is very well possible for devices in the physical PCIe slots. As I have an LSI HBA controller as well, this seemed promising.
From here on it went relatively smoothly. Just follow the next steps.
We need the HPE scripting utilities, so we add the HPE repository to either the Enterprise or Community sources list.
nano /etc/apt/sources.list.d/pve-enterprise.list
or
nano /etc/apt/sources.list

Add a line with
deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free

Now we add the HPE public keys by executing

curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
apt update

and can install the scripting utilities with
apt install hp-scripting-tools

Now we download the input file for HPE's conrep script. Just to make sure where everything is, I switched to the /home dir.

cd /home
wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml
We're nearly there now, hang on. We need to know, of course, in which PCIe slot our controller sits. We use lspci to list all PCIe devices to a file and nano to scroll through it.

lspci -vvv &> pcie.list
nano pcie.list

In our case we found this: our LSI controller, with PCIe device ID 0000:0d:00.0 as Proxmox knows it, is in slot 4.
0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
        Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
        Physical Slot: 4
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

So we can create an exclude for that one:
cd /home
nano exclude.dat

Add the following lines and save the file.

<Conrep>
 <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
</Conrep>

Now we're ready to rock and roll, sorry about that. I mean: to run the conrep utility from HP, which excludes our PCIe slot from the IOMMU/RMRR handling of the iLO/IPMI.
conrep -l -x conrep_rmrds.xml -f exclude.dat

And we verify the result with

conrep -s -x conrep_rmrds.xml -f verify.dat
nano verify.dat

Now we should see something like this. Mind that Slot4 says Excluded.
<?xml version="1.0" encoding="UTF-8"?>
<!--generated by conrep version 126.96.36.199-->
<Conrep version="188.8.131.52" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
  <Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot2" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
  <Section name="RMRDS_Slot5" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot6" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot7" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot8" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot9" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot10" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot11" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot12" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot13" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot14" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot15" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot16" helptext=".">Endpoints_Included</Section>
</Conrep>
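Rather than scrolling through the whole file in nano, a quick grep also confirms the slot state (same verify.dat filename as above; this shortcut is my addition, not part of conrep):

```shell
# Show only the line for slot 4; it should read Endpoints_Excluded.
grep 'RMRDS_Slot4' verify.dat
```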
Time to reboot the Proxmox server one last time before we can celebrate.
Adding a PCIe Device to the FreeNAS VM
and select our PCIe device ID, that is the LSI controller.
That was all; the VM now happily starts with the forwarded controller.
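For reference, the passthrough ends up as a single hostpci line in the VM's configuration file. The VM ID 100 below is a placeholder; the device address is the one we found with lspci:

# /etc/pve/qemu-server/100.conf (excerpt)
hostpci0: 0000:0d:00.0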
Since then I've also used this procedure to forward an Intel Optane 900p PCIe SSD.
I've been banging my head for a few days now trying to figure this out and I've run out of ideas. Hoping the very intelligent crew here can help me out.
I have a Drobo 5N and a Synology RS816 on my network, both of which have been working without issue for quite some time now. I've always connected to both via Windows Explorer by simply going to the network address i.e., \\N5 and \\SYN (sample names).
I recently got a new desktop which is where the issues are coming up. When I try to go to \\N5, it results in a message saying it cannot find that location. However, \\SYN works just fine. What's strange is that I can see and manage the Drobo through the Drobo Dashboard software. What could be preventing Windows from seeing the Drobo on the network?
I've already enabled the SMB 1.x protocol, ensured the workgroup names are the same, rebooted both the machine and the Drobo, made sure network sharing is enabled, and even did a fresh install to ensure that some program I installed didn't cause the issue. Every other machine I have can access the Drobo without issue. It's just this new desktop, and everything is running Windows 10.
Another strange phenomenon that I discovered is that if I go to "\\DROBO" (verbatim, not a sample name) it leads me to the Synology. Where is Windows getting the mapping from that it is directing that address to the Synology?
This is driving me nuts so any advice would be greatly appreciated.
Synology C2 Backup Now Available to US Customers
This is a data backup service that Synology released to EU customers last year. I remember trying to get it to install, and it wouldn't take my CC# because of location. Fair enough; will it take it now, and how much will it cost?
Seriously, how much is it? Ask Google: as of March 25th, 2018 it's $14.12. That's still not a bad price for 100 GB of backup with versioning. It's also not bad to consider this as a secondary cloud backup. (See below for pricing.)
One thing to keep in mind: this is a service that falls under Hyper Backup, not Cloud Sync. The two are very different. This will be a true backup with versioning, not a simple synchronization of a folder with a cloud service like Dropbox. Many a mistake has been made with sync pairs and cloud services! That won't happen with backup and versions.
Enable it and Back Up
In your Synology DSM go to Hyper Backup. Install it if you are not already using it. You would also use this app to back up to USB, Amazon S3, Microsoft Azure, etc.
Select Synology C2 Cloud Backup.
That should launch a login portal. If you have used Synology services before and have a login you can use those credentials or create credentials right here.
Start your free trial for 30 days, but prices are still in POUNDS! That's a pound symbol, right? If it was released to Europe, why isn't it in Euros? I'm so confused. (It is Euro, Dave! I had a brain fart on the Euro symbol. See response post below. Embarrassingly laughing at myself on this!)
It's here where it wouldn't let me proceed last time I tried, due to the trial being limited to EU customers only. You have to put your CC# in to get the 30-day free trial. It says I'm in the Europe Frankfurt market, but the purchase went through. So, does this mean if the US sinks into the ocean and all my data goes with it, my photos will be safe and sound in Frankfurt? Sweet!
This is the last web screen I see as it has now taken me back to Hyper Backup.
I'm going to choose a small amount to get started with.
I'm also going to limit the bandwidth of the backup and set client-side encryption. The encryption password box will let you use the Admin account password so you don't forget what you put here, unless your Admin password is less than 8 characters.
Once you are finished it will ask you if you want to back up now. I said no, just to keep the network clean while I'm working, but it will back up tonight. Look at the screenshot below. See the little arrow by Synology C2? That is the link to the web portal. It would be nice to see Synology integrate this into DSM so there isn't a secondary screen needed. It would also be nice to have some choice as to where your data is being sent and to be charged a proper amount. I'm afraid my bank might also charge me a fee for the currency conversion. I'll update the post when I find out.
Here are the web portal screens below.
That's it. It works!
Synology® Inc. announced the official launch of a new product lineup featuring:
· DS3018xs: Synology's first 6-bay tower NAS with optional 10GbE and NVMe/SATA SSD support
· Plus-series DS918+, DS718+, and DS218+: Designed to meet your intensive daily workloads
· Value-series DS418: Featuring optimized 4K online transcoding capability
To allow for ultra-high performance using SSD cache without occupying internal drive bays, DS3018xs features a PCIe slot, which can be fitted with a dual M.2 SATA SSD adapter card (M2D17). DS918+ comes with dedicated dual M.2 NVMe slots at the bottom where you can directly install M.2 NVMe SSDs. DS418 features 10-bit H.265 4K video transcoding and will support the next-generation Btrfs file system in the official DSM 6.2 release, expected in early Q1 next year. Btrfs provides reliable data protection through its cutting-edge self-healing and point-in-time snapshot features.
DS3018xs, Synology's first 6-bay tower NAS, is compact yet powerful, featuring Intel's advanced Pentium D1508 dual-core 2.2GHz processor (Turbo Boost up to 2.6GHz) with AES-NI encryption engine, RAM scalable up to 32 GB, and storage capacity expandable up to 30 drives with two Synology DX1215 units. In addition to four Gigabit LAN ports, DS3018xs can boost maximum throughput with an optional 10GbE network interface card, delivering stunning performance at over 2,230 MB/s sequential reading and 265,000 sequential read IOPS.
DS918+ and DS718+ are powered by Intel's Celeron® J3455 quad-core processor; DS218+ is powered by Intel's Celeron® J3355 dual-core processor. The models are equipped with an AES-NI hardware encryption engine and support up to two channels of H.265/H.264 4K video transcoding. DS918+'s RAM is scalable up to 8 GB, while DS718+ and DS218+ are scalable up to 6 GB, allowing you to operate more intensive tasks at once. DS918+ and DS718+ are equipped with two LAN ports, and their storage capacity can be scaled up to 9 and 7 drives, respectively, with Synology's DX517 expansion unit.
"Responding to the demands from our customers, DS3018xs is built as a comprehensive business-ready desktop NAS. Running mission-critical applications or planning virtualization deployment with DS3018xs has never been easier." said Katarina Shao, Product Manager at Synology Inc. "The new DS918+, DS718+, and DS218+ are optimized to be your digital video libraries, and will bring you an excellent viewing experience with high definition live video transcoding, regardless of device limitations."
DS418 is equipped with a 1.4GHz quad-core processor with hardware encryption engine, 2 GB RAM, and two LAN ports. Powered by the hardware transcoding engine, DS418 supports H.265 4K transcoding, allowing it to serve as your media library. Combined with Btrfs and snapshot support, DS418 delivers more efficient data storage and more reliable data protection.
For more information on DS3018xs, please visit https://www.synology.com/products/DS3018xs
For more information on DS918+, please visit https://www.synology.com/products/DS918+
For more information on DS718+, please visit https://www.synology.com/products/DS718+
For more information on DS218+, please visit https://www.synology.com/products/DS218+
For more information on DS418, please visit https://www.synology.com/products/DS418
Synology at a glance
Synology creates network-attached storage, IP surveillance solutions, and network equipment that transform the way users manage data, conduct surveillance, and manage networks in the cloud era. By taking full advantage of the latest technologies, Synology aims to help users centralize data storage and backup, share files on-the-go, implement professional surveillance solutions, and manage networks in reliable and affordable ways. Synology is committed to delivering products with forward-thinking features and best-in-class customer services.
I've been fighting with FreeNAS over the past few weeks trying to get a Sonarr->SABnzbd->Plex system setup (as I have done many times both on Windows and Mac), however, I have had nothing but issues with FreeNAS.
So I'm hoping to install Windows Server 2016 and set up these 3 programs the same way I have in Windows installations before. I realise that the Windows installation would have more overhead and could impact Plex transcoding performance, but I'm planning on installing a better processor to combat that.
Can I install Windows to run off a USB stick like I currently have FreeNAS?
Anything else I should be aware of?