RESET Forums (homeservershow.com)

Recommended Posts

E3000

Hello all,

 

A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...

 

I am looking to try ESXi or Proxmox and have been reading a lot of the threads on here.
Hopefully you guys can help with some harder-to-find answers I have been seeking.

 

1) Which would be the better way to set up Proxmox:

     a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.

     b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.

     c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
     d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
 

2) Would a 128GB SSD be a 'waste' for installing a hypervisor on? How much space is typically needed?
 

3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?

 

4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high availability is not really necessary and a good backup plan is in place? What is wrong with separate disks (or single-disk RAID 0s)?

 

5) Does using a Type-1 hypervisor have any effect on the internal fan's speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed at ~8%, as it was when I was using 2 nested (Type-2) VMs?

 

Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆

 

Thanks in advance to all those that help!

Thiago

Hey, I can comment a bit on some of your questions, as I just migrated my homelab from ESXi 6 to Proxmox VE 6.1-11.

 

Quick intro on my setup:

MicroServer Gen8 with an Intel Xeon E3-1230 V2 @ 3.60GHz

16GB RAM

1x 2TB WD Red NAS hard drive in bay 1

1x 240GB Crucial SSD in bay 2

 

I was tired of ESXi: the licence limitations, and the need to have vCenter installed on top of ESXi to manage the VMs, which ate up much of the available RAM and was also licence-bound, which was a nightmare whenever you needed to upgrade.

 

Long story short, I decided to move on, get rid of ESXi, and install Proxmox.

 

I was also considering https://xcp-ng.org/, but with the uncertainty around Citrix trying to kill the open-source project (at least that is my feeling from reading here and there), I finally decided to go with Proxmox, which, by the way, is Debian-based, which I am comfortable with. It has its pros and cons, but overall it suits my use case better for now.

 

Well, I am not an expert; I just want to keep it simple and have a working lab to play with, to test new tech like deploying a Kubernetes cluster, playing with it, and integrating it with my Git CI/CD, etc.

 

Back to your questions.

 

For questions 1 and 2, I think any combination would do the job.

 

In my case, as I only have 2 disks, I installed everything (hypervisor and VMs) on the 2TB disk in bay 1 and left the SSD for whenever I need to spin up a VM that has to be snappier.

 

I also decided to deploy ZFS as RAID 0 on my 2TB disk, as my intent here is to get used to this filesystem; I am more familiar with good old LVM and having to create the PVs, VGs and LVs and ultimately your ext4 partition, etc.

 

I must confess that I am enjoying the ease of use of ZFS: how you can create datasets with quotas to present as mount points to your applications, and also the sharenfs property that is built into ZFS. Really cool stuff.
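A minimal sketch of what that looks like from the shell (the pool name, disk and dataset names here are placeholders, not an actual layout):

zpool create tank /dev/sdb                # single-disk pool, no redundancy ("RAID 0" on one disk)
zfs create -o quota=500G tank/media       # dataset with a 500G quota, mounted at /tank/media
zfs set sharenfs=on tank/media            # export it over NFS straight from ZFS (an NFS server must be installed on the host)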

 

Keep in mind, though, that the caveat is that ZFS is RAM hungry, and it will certainly take a toll on the total number of VMs you can spin up at a time, which addresses your question number 3.

 

I have 3 VMs running a K8s cluster, and with ZFS it is already taking 79% of the total available RAM, so the way it's going I only have room for a few more VMs.

But as I am testing a K8s cluster here, I can spin up as many pods as I need, which is my main goal for this setup for now.
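If ZFS memory usage becomes a problem, a common way to rein it in is to cap the ARC size via the zfs module options; a minimal sketch, assuming a 4 GiB cap:

# /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 4 GiB (value in bytes)
options zfs zfs_arc_max=4294967296

# refresh the boot image and reboot for the cap to take effect
update-initramfs -u -k all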

 

Question 5: my fan is as quiet as ever and has not increased in speed so far.

 

Well, I hope I addressed some of your concerns.

 

You may want to have a look at XCP-ng, as it seems to be a solid free hypervisor as well.

 

 

 

 

 

 


E3000
Hi Thiago, thanks for the response and explanation.

 

I had not heard of XCP-ng and it definitely looks interesting. I too would be worried about Citrix chasing them down, as they are based on Xen. The reason I was leaning towards Proxmox was the fact that it is open source, as I had also considered just using Hyper-V before deciding on ESXi (and only recently heard about Proxmox). I have always used Windows on my MicroServers, but the experience I do have with Linux is all Debian.

 

Memory-intensive filesystems are a bit of a worry for me. I do not plan to do any kind of striping or mirroring at this point, and may look into that more later when I become a pro at virtualisation lol, but for now it's not a requirement. With that being said, would there be any benefit to using ZFS? Isn't the way it handles RAID (or quasi-RAID) the whole reason people love it?

 

I am really happy to hear your fan is still quiet! Fantastic! I take it you are still using the B120i? Any reason you did not think about using the internal MicroSD/USB ports? Is your boot HDD partitioned?

randman

I used to use the internal MicroSD as a boot drive for ESXi. However, one day it went bad, and it was a hassle having to disconnect all the cables and open up the server. So I decided to use an external USB stick (one of the SanDisk Ultra Fit 16GB USBs that don't stick out much). Maintaining my server was easier with an external USB stick, and it also made it easier to test booting different or new OSs. My server is in the basement where no one goes, so it's not a risk having an external USB.

I have a P222 controller, so I use RAID 1 with two SSDs and RAID 1 with two hard disks. I use the internal SSD for the ESXi datastore. It's been a number of years since I set up my server, and I haven't actually had much need for the hard disks (it turns out that the VMs I put on my server didn't need much capacity for data). If I had known I wasn't going to use the hard disks, I might not have bothered with the P222/RAID 1. A good backup might have sufficed (assuming you have the luxury of downtime for a restore).

A couple of other ESXi servers I built later just use NVMe (and not using RAID).

 


E3000

Sounds good. That's the idea: to set everything up and never have to do it again.

Did you notice any fan speed/noise changes when you installed the P222?

dynamikspeed

I just set up Hyper-V Server 2019 on mine; it works well. I previously used ESXi on the MicroSD.

I had not heard of Proxmox; I will have to check it out. What's it like compared to ESXi/Hyper-V?

randman
40 minutes ago, dynamikspeed said:

I just set up Hyper-V Server 2019 on mine; it works well. I previously used ESXi on the MicroSD.

I had not heard of Proxmox; I will have to check it out. What's it like compared to ESXi/Hyper-V?

 

Did you have to do anything special to install Hyper-V Server 2019 on your MicroServer Gen8? Or did it just install cleanly?

1 hour ago, E3000 said:

Sounds good. That's the idea: to set everything up and never have to do it again.

Did you notice any fan speed/noise changes when you installed the P222?

 

No additional fan noise on my system.

scorpi2ok2

My setup:

MicroServer Gen8 with a Xeon E3-1260L @ 2.40GHz

16GB RAM

Smart Array P420

2x 240GB SSDs in RAID 1 - OS + apps

4x 3TB disks in RAID 6 - data

1x 3TB external disk - torrents.

 

I'm running plain libvirt (KVM) with the Kimchi interface (https://github.com/kimchi-project/kimchi),

running 5 VMs.
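For reference, with plain libvirt the VMs can also be managed from the shell with virsh alongside Kimchi; a minimal sketch (the domain name vm01 is just an example):

virsh list --all        # list all defined VMs and their state
virsh start vm01        # boot a VM
virsh dominfo vm01      # show its CPU/RAM allocation and state
virsh shutdown vm01     # request a clean guest shutdown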

 

Fan speed: 11%


E3000
This is interesting, scorpi2ok2. Do you have all the VMs and the host on the same disk?



  • Similar Content

    • Taztaztic
      By Taztaztic
      Hi all,
       
      I am new to the server world (please be gentle) and I desperately need some help!
       
      I recently acquired an HP MicroServer Gen8. Bays 1 & 2 have a 1TB SSD each; I believe they are set up in a RAID 1 config (not sure how to check) and have Windows Server 2012 installed.
       
      Bays 3 & 4 are empty. I recently purchased a 4TB WD Blue hard drive and managed to install Windows 10 onto it via a separate computer.
       
      I've now inserted the 4TB hard drive into bay 3 and want Windows 10 to load up, but it's not doing that... it keeps loading Windows Server 2012 from the drives in bays 1/2.
       
      How can I choose which bay to boot Windows from? I would like it to boot from bay 3, which has Windows 10, most of the time.
       
      On normal computers it was simple to organise the boot order via the BIOS, but with the Gen8 it seems very complex with HP's own BIOS, etc., and it's starting to give me a headache!
       
      Any help from anyone will be really appreciated!
       
      Simple, easy-to-follow instructions will suffice.
       
    • HJB
      By HJB
      This is an effort on my part to document for the community my quest of getting FreeNAS to run as a VM under Proxmox, with an LSI HBA adapter passed through to FreeNAS, so we get the best of both worlds.
      My reasoning for this was that I did not want to miss out on the comfort and stability of either Proxmox or FreeNAS, but I refuse to run 2 separate servers, as I have a very well-equipped HPE ProLiant ML350p Gen8.
      Proxmox documentation showed me that the IOMMU has to be activated on the kernel commandline. The command line parameters are:
      for Intel CPUs: 
      intel_iommu=on
      for AMD CPUs: 
      amd_iommu=on

      The kernel command line needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.
       
      nano /etc/default/grub

      # If you change this file, run 'update-grub' afterwards to update
      # /boot/grub/grub.cfg.
      # For full documentation of the options in this file, see:
      #   info -f grub -n 'Simple configuration'
      GRUB_DEFAULT=0
      GRUB_TIMEOUT=5
      GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
      GRUB_CMDLINE_LINUX_DEFAULT="quiet"
      GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
      # 22.5.2020 (HJB) added to enable PCIe passthrough.
      GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
      # Disable os-prober, it might add menu entries for each guest
      GRUB_DISABLE_OS_PROBER=true

      After changing anything modules-related, you need to refresh your initramfs. On Proxmox VE this can be done by executing:
      update-initramfs -u -k all

      Finish Configuration
      Finally reboot to bring the changes into effect and check that it is indeed enabled.
      dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

      [    0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP     ProLiant 00000001 \xd2?   0000162E)
      [    1.245296] DMAR: IOMMU enabled
      [    2.592107] DMAR: Host address width 46
      [    2.592173] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0
      [    2.592247] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0462 ecap f020fe
      [    2.592330] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1
      [    2.592399] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
      [    2.592481] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff
      [    2.592550] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff
      [    2.592618] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff
      [    2.592686] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff
      [    2.592755] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff
      [    2.592823] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff
      [    2.592892] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
      [    2.592961] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
      [    2.593030] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff
      [    2.593108] DMAR: ATSR flags: 0x0
      [    2.593185] DMAR-IR: IOAPIC id 10 under DRHD base  0xfbefe000 IOMMU 0
      [    2.593254] DMAR-IR: IOAPIC id 8 under DRHD base  0xf4ffe000 IOMMU 1
      [    2.593324] DMAR-IR: IOAPIC id 0 under DRHD base  0xf4ffe000 IOMMU 1
      [    2.593396] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000
      [    2.593467] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
      [    2.593468] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
      [    2.594425] DMAR-IR: Enabled IRQ remapping in xapic mode
      [    4.286848] DMAR: dmar0: Using Queued invalidation
      [    4.286932] DMAR: dmar1: Using Queued invalidation
      [    4.355658] DMAR: Intel(R) Virtualization Technology for Directed I/O
      [  111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
      [  151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement.  Contact your platform vendor.
      root@pve:~#

      The bottom 2 lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI Logic SAS 9207-8i controller; one of them is what we would like to pass through. The message "Device is ineligible for IOMMU" actually comes from the BIOS of the HPE server. So it seems like Proxmox is ready, but the server is still reserving the device for IPMI / iLO.
       
      A hint found online mentioned adding a parameter allowing unsafe interrupts for the vfio modules, so let's try it out.
      Editing /etc/modules with nano:
      nano /etc/modules

      # /etc/modules: kernel modules to load at boot time.
      #
      # This file contains the names of kernel modules that should be loaded
      # at boot time, one per line. Lines beginning with "#" are ignored.
      # 22.5.2020 (HJB) Added ".allow_unsafe_interrupts=1" to enable PCIe passthrough
      vfio
      vfio_iommu_type1.allow_unsafe_interrupts=1
      vfio_pci
      vfio_virqfd

      After a reboot, the "device is ineligible" line had disappeared when checking with
      dmesg | grep -e DMAR -e IOMMU -e AMD-Vi
      Yet when we tried to pass through the LSI controller, we were still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the .allow_unsafe_interrupts=1 entry and rebooted once more.
      A search online for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE Document ID c04781229, describing problems that occur when passing through PCIe devices (in the document it's GPUs).
      At first I understood just about nothing from the document. But together with the Blog post of Jim Denton it started to make sense. (Thank you Jim for taking the time to post it).
      After studying the blog post and the HPE document, it became clear to me that PCIe passthrough is not possible for the internal P420i controller, but is very much possible for physical PCIe slots. As I have an LSI HBA controller as well, this seemed promising.
      From here on it went relatively smoothly. Just follow the next steps.
      We need to add the HPE scripting utilities, so we add the HPE repository to either the enterprise or the community source list.
      nano /etc/apt/sources.list.d/pve-enterprise.list

      or

      nano /etc/apt/sources.list

      Add a line with

      deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free

      Now we add the HPE public keys by executing

      curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
      curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
      curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
      curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
      apt update

      and can install the scripting utilities with

      apt install hp-scripting-tools

      Now we download the input file for HPE's conrep script. Just to make sure where everything is, I switched to the /home dir.
      cd /home
      wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml
      We're nearly there now, hang on. We need to know, of course, which PCIe slot our controller is in. We use lspci to list all our PCIe devices to a file and nano to scroll through it and find ours.
      lspci -vvv &> pcie.list
      nano pcie.list

      In our case we found this. Our LSI controller, with PCIe device ID 0000:0d:00.0 as Proxmox knows it, is in slot 4:
      0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
              Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
              Physical Slot: 4
              Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
              Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

      So we can create an exclude for that one:
      cd /home
      nano exclude.dat

      Add the following line in that file and save it.
      <Conrep>
        <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
      </Conrep>

      Now we're ready to rock and roll. Sorry about that; I mean, to run the conrep utility from HP, which excludes our PCIe slot from the IOMMU / RMRR handling of the iLO/IPMI:
      conrep -l -x conrep_rmrds.xml -f exclude.dat

      And we verify the results with
      conrep -s -x conrep_rmrds.xml -f verify.dat
      nano verify.dat

      Now we should see something like this. Mind that at Slot4 it says Excluded.
      <?xml version="1.0" encoding="UTF-8"?>
      <!--generated by conrep version 5.5.0.0-->
      <Conrep version="5.5.0.0" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
        <Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot2" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
        <Section name="RMRDS_Slot5" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot6" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot7" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot8" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot9" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot10" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot11" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot12" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot13" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot14" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot15" helptext=".">Endpoints_Included</Section>
        <Section name="RMRDS_Slot16" helptext=".">Endpoints_Included</Section>
      </Conrep>
      Time to reboot the Proxmox server for the last time before we can celebrate.
      Adding a PCI Device to the FreeNAS VM in the Proxmox GUI, we select our PCIe device ID, i.e. the LSI controller.

      That was all; the VM is now happily starting with the passed-through controller.
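      The same passthrough can also be set from the Proxmox shell instead of the GUI; a minimal sketch, assuming the FreeNAS VM has ID 100:

      qm set 100 -hostpci0 0000:0d:00.0   # attach the LSI HBA (device 0d:00.0) to VM 100
      qm config 100 | grep hostpci        # confirm the passthrough entry was added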
      Since then I've also used this procedure to pass through an Intel Optane 900p PCIe SSD.
       
      Best regards
      Henk
    • dxzdxz1
      By dxzdxz1
      This is my first post here, but I've been lurking on this forum for quite some time now, and from what I understand, the only reason we couldn't reach 32GB of RAM on the MicroServer Gen8 was that no 16GB unbuffered ECC RAM was available before.
       
      I was searching for a memory module meeting these criteria and I found this one. My question is, leaving aside the ridiculous price, is there any compatibility issue that would stop this RAM from working in the MicroServer Gen8?
       
      Most of the threads here about memory compatibility are quite old and I didn't want to revive an old thread just to ask this, so I'm creating this one.
       
      Thanks in advance.

