This is my effort to document, for the community, my quest to get FreeNAS running as a VM under Proxmox with an LSI HBA adapter passed through to FreeNAS, so we get the best of both worlds.
My reasoning: I did not want to give up the comfort and stability of either Proxmox or FreeNAS, but I refuse to run two separate servers, as I have a very well equipped HPE ProLiant ML350p Gen8.
The Proxmox documentation showed me that the IOMMU has to be activated on the kernel command line. The command line parameters are:

for Intel CPUs: intel_iommu=on
for AMD CPUs: amd_iommu=on

The parameter needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.
nano /etc/default/grub

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
# 22.5.2020 (HJB) added to enable PCIe passthrough.
GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
# Disable os-prober, it might add menu entries for each guest
GRUB_DISABLE_OS_PROBER=true

After changing anything module-related, you need to refresh your initramfs. On Proxmox VE this can be done by executing:
update-initramfs -u -k all

Finish Configuration
Finally reboot to bring the changes into effect and check that it is indeed enabled.
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

[    0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP ProLiant 00000001 \xd2? 0000162E)
[    1.245296] DMAR: IOMMU enabled
[    2.592107] DMAR: Host address width 46
[    2.592173] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0
[    2.592247] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0462 ecap f020fe
[    2.592330] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1
[    2.592399] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
[    2.592481] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff
[    2.592550] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff
[    2.592618] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff
[    2.592686] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff
[    2.592755] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff
[    2.592823] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff
[    2.592892] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
[    2.592961] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
[    2.593030] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff
[    2.593108] DMAR: ATSR flags: 0x0
[    2.593185] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0
[    2.593254] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1
[    2.593324] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1
[    2.593396] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000
[    2.593467] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
[    2.593468] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
[    2.594425] DMAR-IR: Enabled IRQ remapping in xapic mode
[    4.286848] DMAR: dmar0: Using Queued invalidation
[    4.286932] DMAR: dmar1: Using Queued invalidation
[    4.355658] DMAR: Intel(R) Virtualization Technology for Directed I/O
[  111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
[  151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.

The bottom two lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI Logic SAS 9207-8i controller, one of which we would like to pass through. The "Device is ineligible for IOMMU" message actually comes from the BIOS of the HPE server. So it seems Proxmox is getting ready, but the server is still holding onto the device for IPMI/iLO.
A hint found online mentioned adding a parameter allowing unsafe interrupts, so let's try it out.
Editing the modules file once more with nano:

nano /etc/modules

# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# 22.5.2020 (HJB) Added ".allow_unsafe_interrupts=1" to enable PCIe passthrough
vfio
vfio_iommu_type1.allow_unsafe_interrupts=1
vfio_pci
vfio_virqfd

After a reboot the "device is ineligible" line has disappeared when checking with
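For reference, the same module option can also be set via a modprobe.d snippet instead of an option suffix in /etc/modules. This is a sketch only; the filename is my own choice, not from the original setup:

```
# /etc/modprobe.d/vfio-iommu.conf  (hypothetical filename)
# Equivalent to the vfio_iommu_type1.allow_unsafe_interrupts=1 line above
options vfio_iommu_type1 allow_unsafe_interrupts=1
```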
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
Yet when we tried to pass through the LSI controller, we were still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the allow_unsafe_interrupts=1 entry and rebooted once more.
An online search for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE Document ID c04781229, which describes problems that occur when passing through PCIe devices (in the document, GPUs).
At first I understood just about nothing of the document, but together with the blog post of Jim Denton it started to make sense. (Thank you, Jim, for taking the time to post it.)
After studying the blog post and the HPE document, it became clear to me that PCIe passthrough is not possible for the internal P420i controller, but is very well possible for physical PCIe slots. As I have an LSI HBA controller as well, this seemed promising.
From here on it went relatively smoothly. Just follow the next steps.
We need the HPE scripting utilities, so we add the HPE repository to either the Enterprise or the Community sources list.
nano /etc/apt/sources.list.d/pve-enterprise.list

or

nano /etc/apt/sources.list

Add a line with:

deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free

Now we add the HPE public keys by executing:

curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
apt update

And we can install the scripting utilities with:

apt install hp-scripting-tools

Now we download the input file for HPE's conrep script. Just to make sure where everything is, I switched to the /home directory.

cd /home
wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml
We're nearly there now, hang on. Of course we need to know which PCIe slot our controller is in. We use lspci to list all PCIe devices to a file and nano to scroll through it and find the controller.
lspci -vvv &> pcie.list
nano pcie.list

In our case we found this: our LSI controller, with PCIe device ID 0000:0d:00.0 as Proxmox knows it, sits in slot 4.

0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
        Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
        Physical Slot: 4
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

So we can create an exclude for that one:
cd /home
nano exclude.dat

Add the following content and save it:

<Conrep>
  <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
</Conrep>

Now we're ready to rock and roll. Sorry about that; I mean, to run the conrep utility from HP, which excludes our PCIe slot from the RMRR handling of iLO/IPMI:
conrep -l -x conrep_rmrds.xml -f exclude.dat

And we verify the result with:

conrep -s -x conrep_rmrds.xml -f verify.dat
nano verify.dat

Now we should see something like this. Note that slot 4 says Excluded.
<?xml version="1.0" encoding="UTF-8"?>
<!--generated by conrep version 126.96.36.199-->
<Conrep version="188.8.131.52" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
  <Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot2" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
  <Section name="RMRDS_Slot5" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot6" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot7" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot8" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot9" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot10" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot11" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot12" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot13" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot14" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot15" helptext=".">Endpoints_Included</Section>
  <Section name="RMRDS_Slot16" helptext=".">Endpoints_Included</Section>
</Conrep>
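Rather than scrolling through verify.dat in nano, a quick grep shows only the excluded slots. A sketch, with a two-line sample of the file inlined so it can be tried anywhere:

```shell
# Inline a small sample of conrep's verify.dat output (illustration only)
cat > verify.dat <<'EOF'
<Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
<Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
EOF

# Print only the slots whose endpoints are excluded from RMRR
grep 'Endpoints_Excluded' verify.dat
```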
Time to reboot the Proxmox Server for the last time before we can celebrate.
Adding a PCIe device to the FreeNAS VM

In the VM's hardware settings we add a PCI device and select our PCIe device ID, i.e. the LSI controller.

That was all; the VM now starts happily with the passed-through controller.
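For reference, what the GUI step stores is a single passthrough line in the VM's configuration file. A sketch, assuming VM ID 100 (a placeholder) and our device address 0000:0d:00.0:

```
# /etc/pve/qemu-server/100.conf  (VM ID 100 is a placeholder)
hostpci0: 0000:0d:00.0
```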
Since then I've also used this procedure to pass through an Intel Optane 900p PCIe SSD.
Here's a guide I would like to share on Windows Storage Spaces and creating a 4-drive parity pool.
In a nutshell, I have Windows Server 2019 and a Storage Spaces parity pool running very nicely on my Gen8. Here's the configuration I used and how to copy my setup.
(I still believe ZFS or unRAID are far better choices as a filesystem on these limited servers, but if you need Windows like I do, then Storage Spaces can be an excellent alternative.)
This is my "best effort" guide and by no means perfect. It does however yield excellent results for both read and write speeds.
CPU Stock for now (1270 V3 on it's way)
Disks 4x 3TB WD NAS drives in front bays
SSD - Samsung Evo 850 265
First lesson: DON'T use the Windows GUI to create the pool or virtual disk, as the GUI applies terrible defaults that you can't edit and that will ruin performance. Also make sure you're on the latest version of Windows Server, as a LOT has changed and been improved recently.
You must use PowerShell.
PhysicalDiskRedundancy - Parity
Columns - 4 (the data segments striped to disks; should match your 4 disks)
Interleave - 256K (the amount of data written to each "column", i.e. disk; in this case a 256KB interleave gives us a 64K write to each disk)
LogicalSectorSize - 4096
PhysicalSectorSize - 4096
REFS/NTFS cluster - 64K
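As a quick sanity check on what single parity costs in capacity: with 4 columns, each stripe carries data on 3 columns and parity on 1, so usable space is 3/4 of raw. A trivial arithmetic sketch (not a Storage Spaces query):

```shell
# 4 columns with single parity: data columns = 4 - 1 = 3
disks=4
disk_tb=3
raw_tb=$((disks * disk_tb))
usable_tb=$((raw_tb * (disks - 1) / disks))
echo "${usable_tb} TB usable of ${raw_tb} TB raw"   # → 9 TB usable of 12 TB raw
```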
4 drive file system, one bootable SSD in RAID mode.
BIOS setup initial
F9 into the BIOS and set the B120i controller into RAID mode
F5 into the RAID manager and create 1 individual RAID0 logical drive for the SSD
Set the SSD as the preferred boot drive (Yes in the same screen)
Set the cluster size to 63
Install Windows 2019 Server Standard GUI edition from ISO
Offer up the B120i RAID drivers via a USB stick so the wizard can see the SSD RAID0 drive. Filename p033111.exe (Have them extracted)
Windows update and patch and reboot
BIOS setup post windows
Once windows is up and running go back into the F5 RAID manager and finish the setup of the 4 front drives into 4x RAID0
Check the SSD is still set as the preferred boot drive (Yes in the same screen)
Set the cluster size to 63
Windows config of storage spaces
At this point you should see 4 individual drives ready to be used as a Storage pool
Try to set each disk to have a cache (Not all drives support this)
Win + X to open the side menu
Expand Disk Drives
Right Click the "HP Logical Volume" for each drive
Check - "Enable write caching on the device"
(If it doesn't work don't stress, it's optional but nice to have)
Powershell - Run as Admin
Determine the physical disks available for the pool we're about to create
Get-PhysicalDisk | ft friendlyname, uniqueid, mediatype, size -auto
Your output will look something like this, so identify the 4 drives that are the same and take note of their uniqueID
Mine are the bottom four drives all 3TB in size
friendlyname uniqueid size
------------ -------- ----
HP LOGICAL VOLUME 600508B1001C5C7A1716CCDD5A706248 250023444480
HP LOGICAL VOLUME 600508B1001CAC8AFB32EE6C88C5530D 3000559427584
HP LOGICAL VOLUME 600508B1001C51F9E0FF399C742F83A6 3000559427584
HP LOGICAL VOLUME 600508B1001C2FA8F3E8856A2BF094A0 3000559427584
HP LOGICAL VOLUME 600508B1001CDBCE168F371E1E5AAA23 3000559427584
Rename the friendly name based on the UniqueID from above and set to "HDD type"
Set-Physicaldisk -uniqueid "Your UniqueID" -newFriendlyname Disk1 -mediatype HDD
You will need to run that four times, once with each uniqueid, creating a new friendly name for each drive. I called mine Disk1, Disk2, etc.
Set-Physicaldisk -uniqueid "600508B1001C2FA8F3E8856A2BF094A0" -newFriendlyname Disk1 -mediatype HDD
Set-Physicaldisk -uniqueid "600508B1001CDBCE168F371E1E5AAA23" -newFriendlyname Disk2 -mediatype HDD
Set-Physicaldisk -uniqueid "600508B1001CAC8AFB32EE6C88C5530D" -newFriendlyname Disk3 -mediatype HDD
Set-Physicaldisk -uniqueid "600508B1001C51F9E0FF399C742F83A6" -newFriendlyname Disk4 -mediatype HDD
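The four calls above can also be written as a loop using the same cmdlet. A sketch; the uniqueids here are placeholders you'd replace with your own values from Get-PhysicalDisk:

```
# Placeholder uniqueids - substitute your own from Get-PhysicalDisk
$ids = @("uniqueid-1", "uniqueid-2", "uniqueid-3", "uniqueid-4")
for ($i = 0; $i -lt $ids.Count; $i++) {
    Set-PhysicalDisk -UniqueId $ids[$i] -NewFriendlyName ("Disk" + ($i + 1)) -MediaType HDD
}
```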
Verify the disks have been set correctly
The following command shows which physical disks are available in the primordial pool and CAN be used in the new pool. You're just checking here that the friendly-name renaming worked and that they are all set to the HDD media type. Primordial just means on your local server and available.
Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

You should see your four drives with the nice names that you set, like "Disk1".
Now find out your storage subsystem name, as we need it for the next command; just take note of it. Example: "Windows Storage on <servername>".
Mine is "Windows Storage on Radaxian".
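The lookup itself isn't shown above; assuming the standard Storage module, this cmdlet should print the subsystem name to use:

```
Get-StorageSubSystem | Select-Object FriendlyName
```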
The following example creates a new storage pool named "Pool1" that uses all available disks and sets the default logical sector size.
New-StoragePool -FriendlyName Pool1 -StorageSubsystemFriendlyName "Windows Storage on Radaxian" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) -LogicalSectorSizeDefault 64KB
Now create the virtual disk on the new pool, with 4 columns and parity set correctly. (This is critical to do via PowerShell.)

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 4 -ProvisioningType Fixed -Interleave 256KB -UseMaximumSize

Those two commands should complete without error; if they don't, go back and check your syntax.
Go back into the Windows GUI and open this
Server Manager\File and Storage Services\Servers
You should see the Storage pool listed and the Virtual disk we created in the previous steps.
Storage pool - Pool1
Virtual Disk - VDisk1
Select Disks in the GUI
Identify your new VDisk1 and right click it.
Set it to Online; this will also set it to use a GPT partition table
On the same screen in the below pane Volumes
Click TASKS and select "New Volume"
Select REFS and Sector size of 64K
Enter a volume name like "Volume1" or whatever you want to call it
Select a drive letter such as Z
(You can use NTFS here for slightly better performance, but I'm sticking to REFS as it has some benefits)
You'll now have a Storage pool, Virtual disk on top and a volume created with optimal settings
Go back into Power Shell
Enable power protected status if applicable (Just try it, no harm)
(Ideally here you should have your server connected to a basic UPS to protect it from power outages)
Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True
Check that the sector sizes of the virtual disk and all other relevant settings are correct
Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, PhysicalDiskRedundancy, LogicalSectorSize, PhysicalSectorSize

Example output:
FriendlyName ResiliencySettingName NumberOfColumns Interleave PhysicalDiskRedundancy LogicalSectorSize PhysicalSectorSize
VDisk1 Parity 4 262144 1 4096 4096
You're done.... enjoy the new Volume.
At this point you can share out your new Volume "Z" and allow client computers to connect.
Some other commands in Power Shell that I found useful
Get more verbose disk details around sectors.
Get-VirtualDisk -friendlyname Vdisk1 | fl
Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft
Check if TRIM is enabled. This output should be 0.

fsutil behavior query DisableDeleteNotify

If TRIM is not enabled, you can turn it on with these commands:

fsutil behavior set disabledeletenotify ReFS 0
fsutil behavior set disabledeletenotify NTFS 0
Check the Power Protected status and cache
Get-StorageAdvancedProperty -PhysicalDisk (Get-PhysicalDisk)
Once your data has been migrated back to your new pool from backup, make sure you run this command to spread the data out properly.
This command rebalances the Spaces allocation for all of the Spaces in the pool named Pool1.
Optimize-StoragePool -FriendlyName "Pool1"
My Xeon is still in the mail, but once it's installed I think disk performance will go up even higher, as the stock CPU is junk.
First, my specs (briefly):
AMD Opteron X3216
Win Server 2016
Task Manager shows only one CPU core instead of two.
(ref. image 1)
Checked msconfig: no CPU limit set.
Checked whether other users see the same number of CPU cores in Task Manager (ref. image-link (internal) 2).
Anyone out there with an idea to solve this?
Over the many years of using the Gen8 there always seems to be some problem!! ;-)
I have Windows 10 Pro installed to an HDD in the ODD bay, with the four 3TB drives in the other bays configured via Storage Spaces. The BIOS is set to AHCI, and I have made use of the USB boot key to map the boot to the ODD bay. All has been reasonably well for well over a year; however, I was never able to get Windows to update. Now I'm getting warnings that the software will be unsupported, and I have tried several times to do an upgrade but frequently get errors such as the installer being unable to detect how much space is available. I think that's because of the USB key.
I don't want to go back to square one and reinstall if I can help it, but has anyone else experienced this? I seem to recall having problems with using the B120i but cannot for the life of me remember why I stopped using it... but I know that changing to legacy SATA should remove the need for the USB boot mapping.
Figure 1 – HPE ProLiant ML30 Gen9 on static mat ready for Windows 10 pro x64 install via iLO4
Figure 2 – This shows my Samsung 840 Pro (This will be my OS drive) set up in a single drive RAID0 in the B140i using SSA
After manually installing Windows Server 2016 easily on HPE's ProLiant ML30 Gen9, I was anxious to see if an install of Windows 10 Pro would be just as trouble free. It was!
Besides having Hyper-V capabilities, Windows 10 Pro is being looked on by many as the basis of a low-cost home server, as illustrated in "Building a Windows 10 Home Server – Anniversary Update Edition". Check out HPE's Operating System Support Matrices for insights on the many OS's that the ML30 Gen9 supports. But what will work goes beyond what's officially supported by HPE in the "Matrices". Windows 10 Pro is not listed in the Matrices, but Windows Server 2016 is, and Server 2016 shares much of its code with Windows 10 Pro, just as Server 2012 R2 does with Windows 8.1 Pro and Server 2012 with Windows 8. To manually load Windows 10 Pro I downloaded drivers for Server 2016. The simplest procedure, for me, is to use the SPP to update all the ML30 Gen9 firmware first, then use the B140i drivers to load Windows 10 Pro, and then, after Windows 10 Pro is loaded and updated, use HPSUM to load all the relevant drivers and software into Windows 10 Pro.
Like Server 2016, Windows 10 Pro has its own generic drivers that will work with the ML30 Gen9's NIC and video, so the B140i drivers are all that's needed to get Windows 10 Pro onto the ML30 Gen9! HPSUM, run with Administrator privileges, will load all of the missing HPE drivers I need in one step, including the NIC, video, and SSA, just to name a few.
Step-By-Step: Windows 10 Pro on HPE ProLiant ML30 Gen9
I used iLO4 to remote into the ML30 Gen9 and began to install Windows 10 Pro x64 manually (i.e. without using Intelligent Provisioning (IP)) in the following general steps.
Since I had done steps 1-7 just recently, I skipped to step 5, then did steps 7 through 16 below.
1. Download the Service Pack for ProLiant (SPP) from the Hewlett Packard Enterprise Support Center – Drivers & Software – the current version is 2016.10.0 (24 Oct 2016) – check also the threads about SPP at the HSS Forum MS Gen8.
2. Load the SPP ISO in "virtual drives" in the remote desktop of iLO4.
3. Boot the ML30 Gen9 – with no drives in the ML30 Gen9 in my case – and let SPP run automatically and update all firmware – see Figure 3 below.
4. Shut down the ML30 Gen9.
5. Next: I removed the Samsung 840 Pro 256GB that I had loaded Server 2016 on (giving me the flexibility to switch OS's by switching SSDs in the ML30 Gen9) and loaded another Samsung 840 Pro 256GB into drive 1 of the Icy Dock ToughArmor MB994SP-4SB-1.
6. Go to the Hewlett Packard Enterprise Support Center – Drivers & Software – and download the file cp028631.exe, the Dynamic Smart Array B140i Controller Driver for 64-bit Microsoft Windows Server 2012/2016 Editions. (Since Windows 10 has the same core as Server 2016, I plan to use it for the manual installation of Windows 10 Pro x64 in the ML30 Gen9 – the HPE Drivers & Software site does not have drivers and software for non-server OS's.) The current version is 184.108.40.206 (24 Oct 2016).
7. Extract the files in cp028631.exe and load them into a file folder that is then attached/loaded in "virtual drives" of the remote desktop of iLO4. (During the Windows install this will be the folder I browse to so that Windows 10 can pull in the driver and see the Samsung 840 Pro.)
8. Load the Windows 10 Pro x64 ISO in "virtual drives" of the remote desktop of iLO4.
9. Boot the ML30 Gen9.
10. During boot go into IP (press F10) and select SSA (Smart Storage Administrator).
11. In SSA set up the Samsung 840 Pro as a single-drive RAID0 to be used as my OS drive – see Figure 2 earlier.
12. Exit SSA & IP and restart the ML30 Gen9.
13. Proceed with the normal Windows 10 Pro x64 install. During the install Windows 10 will ask for the location of drivers so it can see the drive(s) – browse to the file folder of B140i driver(s) in the "virtual drives". If your OS drive had previously been formatted as MBR you will have to delete that so it can be formatted as GPT. See Video 1 below.
14. After Windows 10 is installed and updated, reattach the SPP ISO in the remote desktop of iLO4.
15. In the Windows desktop, go to the SPP ISO in File Explorer and execute the batch file for HPSUM (i.e. run launch_hpsum.bat as Administrator) – I chose "Localhost Guided Update" – Automatic Mode.
16. After running HPSUM (and rebooting), the HPE software shown in Figure 4 below was installed. Enjoy!
Figure 3 – After running SPP’s ISO the firmware of the ML30 Gen9 is up to date.
Video 1 – Browsing to select the file folder with B14i S2016 drivers during install of Windows 10 Pro on HPE ProLiant ML30 Gen9
Figure 4 – Software installed by HPSUM in Windows Server 2016
Figure 5 – Temperatures in the ML30 Gen9 via iLO4. BIOS is set on optimal cooling and my single System Fan is running at 6% and the two 40mm fans on the MB994SP-4SB-1 are turned on.
Figure 6 – System information showing Windows 10 Pro as the OS
Figure 7 – Basic information showing Windows Server loaded onto my HPE ProLiant ML30 Gen9 running from a single SSD RAID0 in bay 1 of the Icy Dock ToughArmor MB994SP-4SB-1
All in all, Windows 10 Pro was easy to load onto the HPE ProLiant ML30 Gen9, providing a relatively cheap platform (compared to Windows Server 2016) for a home lab, for setting up and testing applications in Hyper-V for instance.
In the As-Built that follows I list how this ML30 Gen9 is loaded. Be sure to check out more on this at the ML10 and ML10v2 Forum and the Windows 10 Pro on HPE ProLiant ML30 Gen9 Forum Thread.
As-Built (I named my Computer: Serenity)
HPE ProLiant ML30 Gen9 (Product No. 830893)
Xeon E3-1240v5 (Skylake LGA 1151)
8GB ECC RAM (expandable to 64GB)
OS: Windows 10 Pro
B140i Dynamic Smart Array, Ports 1-4: 4× 3.5" drive tray caddies for Main Drive-Cage Assembly Bays 1-4
B140i Dynamic Smart Array, Ports 5-6: Icy Dock ToughArmor MB994SP-4SB-1 in top 5.25" half-height bay; with 2× 18" SATA III (6 Gb/s) cables attached to Bays 1 & 2 (Bays 3 & 4 available for future use); Molex-to-Molex & fan Y-connector cable; Samsung 840 Pro 256GB in Bay 1
Please join us in the HomeServerShow Forums to discuss this and tell us what you are building at home.
Check HSS Forum Post: Other HSS ML30 Blog Postings: http://homeservershow.com/tag/ML30
HSS HP ProLiant ML30 Forum postings (In HSS Forum ML10 & ML10v2): http://homeservershow.com/forums/index.php?/forum/98-ml10-and-ml10v2/
HP MicroServer Gen8 – Service Pack for ProLiant – 24th Oct 2016 http://homeservershow.com/forums/index.php?/topic/12034-hp-microserver-gen8-service-pack-for-proliant-24th-oct-2016/
iLO Advanced License Keys http://homeservershow.com/forums/index.php?/topic/9511-ilo-advanced-license-keys-1850-2400/
Icy Dock “ToughArmor” MB994SP-4SB-1 http://www.icydock.com/goods.php?id=142
Scsi4me.com 3.5” Drive Tray Caddy 4 HP ProLiant ML350e ML310e SL250s Gen8 Gen9 G9 651314-001 http://www.ebay.com/itm/231001449171
HPE ProLiant ML30 Gen9 Server QuickSpecs http://h20195.www2.hp.com/v2/GetDocument.aspx?docname=c04834998&doctype=quickspecs&doclang=EN_US&searchquery=&cc=us&lc=en
HPE ProLiant ML30 Gen9 Server “Maintenance and Service Guide”; Part Number: 825545-002; November 2016; Edition: 2 => http://h20565.www2.hpe.com/hpsc/doc/public/display?sp4ts.oid=1008556812&docLocale=en_US&docId=emr_na-c04905980 Or go to => http://h20565.www2.hpe.com/portal/site/hpsc/public/psi/home/?sp4ts.oid=1008556812&ac.admitted=1489520211680.125225703.1851288163#manuals
Check out my HPE ML30 Gen9 Play-List: