RESET Forums (homeservershow.com)

Recommended Posts

scorpi2ok2
2 minutes ago, E3000 said:


This is interesting. Do you have all VMs and the host on the same disk?

All VMs are on an SSD RAID 1 volume.

E3000
6 minutes ago, scorpi2ok2 said:

All VMs are on an SSD RAID 1 volume.


...and KVM running off USB?

scorpi2ok2
6 minutes ago, E3000 said:


...and KVM running off USB?

No. I have RedHat 8 installed on it, running from the same SSD.

I'm using the OS for my apps and KVM is just for lab servers.


Qba
Posted (edited)

My setup is:

CPU Xeon 1265L v2

16 GB RAM

4x 3TB 3.5" HDDs as ZFS RAID 10 for important data, connected to the built-in SATA controller

2x 1TB Samsung 860 EVO as ZFS RAID 1, dedicated for root and VM storage

1x 5TB Seagate 2.5" for less important data, multimedia, temporary download target, etc.

1x 120GB Samsung 840 PRO SSD as ZFS L2ARC

 

All 2,5" HDDs are connected to LSI 9212-4i4e

 

Running Debian Buster on ZFS + KVM/libvirt + a custom-compiled kernel with UKSM and ZRAM enabled. This setup lets me store roughly 3x more data in RAM than the total RAM capacity, at the expense of some CPU usage.
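For the ZRAM part, a minimal sketch of what enabling a compressed swap device can look like (size and algorithm are illustrative, and UKSM additionally needs a patched kernel):

# Create a compressed RAM block device and use it as high-priority swap.
modprobe zram
zramctl --find --size 8G --algorithm lz4   # prints the allocated device, e.g. /dev/zram0
mkswap /dev/zram0
swapon --priority 100 /dev/zram0
# Show zram devices and their compression stats.
zramctl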

Edited by Qba

dynamikspeed
On 5/13/2020 at 6:52 PM, randman said:

 

Did you have to do anything special to install Hyper-V Server 2019 on your MicroServer Gen8? Or did it just install cleanly?

 

No additional fan noise on my system.

I had issues booting the Hyper-V ISO at first, but then managed to do it with the iLO in the usual way. Everything installed smoothly and I had no issues. It's quite similar to 2016; not many changes.



  • Similar Content

    • Taztaztic
      By Taztaztic
      Hi all,
       
      I am new to the server world (please be gentle) and I desperately need some help!
       
      I recently acquired an HP MicroServer Gen8. Bays 1 & 2 have a 1TB SSD each; I believe they are set up in a RAID 1 config (not sure how to check) and have Windows Server 2012 installed.
       
      Bays 3 & 4 are empty. I recently purchased a 4TB WD Blue hard drive and managed to install Windows 10 onto it via a separate computer.
       
      I've now inserted the 4TB hard drive into bay 3 and want Windows 10 to load, but it's not doing that... it keeps booting the Windows Server 2012 that's on the bay 1/2 drives.
       
      How can I choose which bay to boot Windows from? I'd like it to boot from bay 3 (the one with Windows 10) most of the time.
       
      On normal computers it was simple to organise the boot order via the BIOS, but with the Gen8 it seems very complex with HP's own BIOS, etc., and it's starting to give me a headache!
       
      Any help from anyone will be really appreciated!
       
      Simple, easy-to-follow instructions will suffice.
       
    • HJB
      By HJB
      This is my effort to document for the community my quest to get FreeNAS running as a VM under Proxmox, with an LSI HBA adapter passed through to FreeNAS, so we get the best of both worlds.
      My reasoning was that I did not want to give up the comfort and stability of either Proxmox or FreeNAS, but I refuse to run two separate servers, as I have a very well equipped HPE ProLiant ML350p Gen8.
      The Proxmox documentation showed me that the IOMMU has to be activated on the kernel command line. The command line parameters are:
      for Intel CPUs:
      intel_iommu=on
      for AMD CPUs:
      amd_iommu=on
      The kernel parameter needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.
       
      nano /etc/default/grub

      # If you change this file, run 'update-grub' afterwards to update
      # /boot/grub/grub.cfg.
      # For full documentation of the options in this file, see:
      #   info -f grub -n 'Simple configuration'
      GRUB_DEFAULT=0
      GRUB_TIMEOUT=5
      GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
      GRUB_CMDLINE_LINUX_DEFAULT="quiet"
      GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
      # 22.5.2020 (HJB) added to enable pcie passtrough.
      GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
      # Disable os-prober, it might add menu entries for each guest
      GRUB_DISABLE_OS_PROBER=true

      After editing the file, run update-grub. After changing anything modules-related you also need to refresh your initramfs; on Proxmox VE this can be done by executing:
      update-initramfs -u -k all

      Finish Configuration
      Finally, reboot to bring the changes into effect and check that the IOMMU is indeed enabled:
      dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
      [    0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP     ProLiant 00000001 \xd2?   0000162E)
      [    1.245296] DMAR: IOMMU enabled
      [    2.592107] DMAR: Host address width 46
      [    2.592173] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0
      [    2.592247] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0462 ecap f020fe
      [    2.592330] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1
      [    2.592399] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
      [    2.592481] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff
      [    2.592550] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff
      [    2.592618] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff
      [    2.592686] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff
      [    2.592755] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff
      [    2.592823] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff
      [    2.592892] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
      [    2.592961] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
      [    2.593030] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff
      [    2.593108] DMAR: ATSR flags: 0x0
      [    2.593185] DMAR-IR: IOAPIC id 10 under DRHD base  0xfbefe000 IOMMU 0
      [    2.593254] DMAR-IR: IOAPIC id 8 under DRHD base  0xf4ffe000 IOMMU 1
      [    2.593324] DMAR-IR: IOAPIC id 0 under DRHD base  0xf4ffe000 IOMMU 1
      [    2.593396] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000
      [    2.593467] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
      [    2.593468] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
      [    2.594425] DMAR-IR: Enabled IRQ remapping in xapic mode
      [    4.286848] DMAR: dmar0: Using Queued invalidation
      [    4.286932] DMAR: dmar1: Using Queued invalidation
      [    4.355658] DMAR: Intel(R) Virtualization Technology for Directed I/O
      [  111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
      [  151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
      root@pve:~#

      The bottom two lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI Logic SAS 9207-8i controller; one of these is what we would like to pass through. The message "Device is ineligible for IOMMU" actually comes from the BIOS of the HPE server. So it seems Proxmox is ready, but the server is still claiming the device for IPMI/iLO.
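      Optionally, you can also list the IOMMU groups at this point to see how the devices are grouped; a quick check along these lines:
      # Each symlink is a device, listed under its IOMMU group number.
      find /sys/kernel/iommu_groups/ -type l | sort -V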
       
      A hint found online mentioned adding a parameter to allow unsafe interrupts, so let's try it out.
      Editing /etc/modules with nano:
      nano /etc/modules

      # /etc/modules: kernel modules to load at boot time.
      #
      # This file contains the names of kernel modules that should be loaded
      # at boot time, one per line. Lines beginning with "#" are ignored.
      # 22.5.2020 (HJB) Added ".allow_unsafe_interrupts=1" to enable PCIE Passtrough
      vfio
      vfio_iommu_type1.allow_unsafe_interrupts=1
      vfio_pci
      vfio_virqfd

      After the reboot, the "device is ineligible" line had disappeared when checking with
      dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
      Yet when we tried to pass through the LSI controller, we were still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the .allow_unsafe_interrupts=1 entry and rebooted once more.
      A search online for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE Document ID c04781229, which describes problems that occur when passing PCIe devices through (in the document it is GPUs).
      At first I understood just about nothing from the document, but together with Jim Denton's blog post it started to make sense. (Thank you, Jim, for taking the time to post it.)
      After studying the blog post and the HPE document it became clear to me that PCIe passthrough is not possible for the internal P420i controller, but it is very much possible for physical PCIe slots. As I have an LSI HBA controller as well, this seemed promising.
      From here on it went relatively smoothly. Just follow the next steps.
      We need to add the HPE scripting utilities, so we add the HPE repository to either the enterprise or the community source list.
      nano /etc/apt/sources.list.d/pve-enterprise.list

      or

      nano /etc/apt/sources.list

      Add a line with:

      deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free

      Now we add the HPE public keys by executing:

      curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
      curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
      curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
      curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
      apt update

      and can install the scripting utilities with:

      apt install hp-scripting-tools

      Now we download the input file for HPE's conrep script. Just to make sure where everything is, I switched to the /home directory.

      cd /home
      wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml
      We're nearly there now, hang on. We need to know, of course, which PCIe slot our controller is in. We use lspci to list all our PCIe devices to a file and nano to scroll through it to find the controller.
      lspci -vvv &> pcie.list
      nano pcie.list

      In our case we found this. Our LSI controller, with PCIe device ID 0000:0d:00.0 as Proxmox knows it, is in slot 4:

      0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
              Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
              Physical Slot: 4
              Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
              Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

      So we can create an exclude for that slot:
      cd /home
      nano exclude.dat

      Add the following line to that file and save it:

      <Conrep> <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section> </Conrep>

      Now we're ready to rock and roll; sorry about that. I mean, ready to run the conrep utility from HP, which excludes our PCIe slot from the IOMMU / RMRR handling of the iLO/IPMI:
      conrep -l -x conrep_rmrds.xml -f exclude.dat

      And we verify the results with:

      conrep -s -x conrep_rmrds.xml -f verify.dat
      nano verify.dat

      Now we should see something like this. Note that Slot4 says Excluded.

      <?xml version="1.0" encoding="UTF-8"?>
      <!--generated by conrep version 5.5.0.0-->
      <Conrep version="5.5.0.0" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
        <Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot2" helptext=".">Endpoints_included</Section>
        <Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
        <Section name="RMRDS_Slot5" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot6" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot7" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot8" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot9" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot10" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot11" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot12" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot13" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot14" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot15" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot16" helptext=".">Endpoints_Included</Section>
      </Conrep>

      Time to reboot the Proxmox server for the last time before we can celebrate.
      Adding a PCIe device to the FreeNAS VM: in the VM's hardware settings, add a PCI device and select our PCIe device ID, which is the LSI controller.

      That was all; the VM now starts happily with the passed-through controller.
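      If you prefer the shell over the web UI, the same passthrough can be set up with the qm tool; a minimal sketch, assuming a hypothetical VM ID of 100:
      # Attach the LSI HBA (0000:0d:00.0) to VM 100 as a passed-through PCI device.
      qm set 100 --hostpci0 0000:0d:00.0
      # Confirm the entry in the VM configuration.
      qm config 100 | grep hostpci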
      Since then I've also used this procedure to pass through an Intel Optane 900p PCIe SSD.
       
      Best regards
      Henk
    • dxzdxz1
      By dxzdxz1
      This is my first post here, but I've been lurking on this forum for quite some time now. From what I understand, the only reason we couldn't get 32GB of RAM on the MicroServer Gen8 was that no 16GB unbuffered ECC RAM was available before.
       
      I was searching for a memory module meeting these criteria and I found this one. My question is: leaving aside the ridiculous price, is there any compatibility issue that would stop this RAM from working on the MicroServer Gen8?
       
      Most of the threads here about memory compatibility are quite old and I didn't want to revive an old thread just to ask this, so I'm creating this one.
       
      Thanks in advance.

