RESET Forums (homeservershow.com)


E3000

Hello all,

 

A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...

 

I am looking to try ESXi or Proxmox and have been reading a lot of the threads on here.
Hopefully you guys can help with some harder-to-find answers I have been seeking.

 

1) Which would be the better way to set up Proxmox:

     a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.

     b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.

     c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
     d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
 

2) Would a 128GB SSD be a ‘waste‘ for installing a Hypervisor on? How much space is typically needed?
 

3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?

 

4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high availability is not that necessary and a good backup plan is in place? What is wrong with separate disks (or single-disk RAID0s)?

 

5) Does using a Type-1 hypervisor have any effect on the internal fan's speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed at ~8%, as it was when I was running 2 nested (Type-2) VMs?

 

Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆

 

Thanks in advance to all those that help!

Thiago

Hey, I can comment a bit on some of your questions, as I just migrated my homelab from ESXi 6 to Proxmox VE 6.1-11.

 

Quick intro on my setup:

MicroServer Gen8 with an Intel Xeon E3-1230 V2 @ 3.60GHz

16GB RAM

1x 2TB WD Red NAS hard drive in bay 1

1x 240GB Crucial SSD in bay 2

 

I was tired of ESXi's licence limitations and of needing vCenter installed on top of ESXi to manage the VMs, which took much of the available RAM. Everything was also licence-bound, which was a nightmare whenever you needed to upgrade.

 

Long story short, I decided to move on, get rid of ESXi, and install Proxmox.

 

I was also considering https://xcp-ng.org/, but with the uncertainty around Citrix trying to kill the open-source project (at least that is my feeling from reading here and there), I finally decided to go with Proxmox. It is, by the way, Debian-based, which I am comfortable with, and it has its pros and cons but overall suits my use case better for now.

 

Well, I am not an expert; I just want to keep it simple and have a working lab to play with, to test new tech like deploying a Kubernetes cluster and integrating it with my Git CI/CD, etc.

 

Back to your questions.

 

Questions 1 and 2: I think any of those combinations would do the job.

 

In my case, as I only have 2 disks, I installed everything (hypervisor included) on the 2TB disk in bay 1 and left the SSD for whenever I need to spin up a VM that has to be snappier.

 

I also decided to deploy ZFS (as a single-disk RAID0) on my 2TB disk, as my intent here is to get used to this filesystem. I am more familiar with good old LVM, with having to create the PVs, VGs, and LVs and ultimately your ext4 partition, etc.
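
To show what I mean, the classic LVM chain versus the ZFS equivalent looks roughly like this (the device, VG, and pool names are just placeholders, not from my actual setup):

# LVM: PV -> VG -> LV -> filesystem, one step at a time
pvcreate /dev/sdb
vgcreate vg0 /dev/sdb
lvcreate -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data

# ZFS: one pool, then datasets as needed
zpool create tank /dev/sdb
zfs create tank/data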

 

I must confess that I am enjoying the ease of use of ZFS: how you can create datasets with quotas to present as mount points to your applications, and also the sharenfs property built into ZFS. Really cool stuff.
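
For example, carving out a quota-limited dataset and exporting it over NFS is only a couple of commands (the pool/dataset names here are placeholders, not my real ones, and sharenfs still needs the NFS server packages installed):

zfs create -o quota=50G -o mountpoint=/srv/appdata tank/appdata   # capped dataset for one app
zfs set sharenfs=on tank/appdata                                  # NFS export handled by ZFS itself
zfs get quota,sharenfs tank/appdata                               # confirm both properties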

 

Keep in mind, though, the caveat that ZFS is RAM-hungry, and it will certainly take a toll on the total number of VMs you can spin up at a time, which addresses your question number 3.
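
If it helps: by default, ZFS on Linux will use up to half the RAM for its ARC cache, but you can cap it. A rough sketch of what I mean (the 4GiB value is just an example, not necessarily what you want):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296   # cap the ARC at 4 GiB (value is in bytes)

# apply to the boot image, then reboot
update-initramfs -u -k all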

 

I have 3 VMs running a K8s cluster, and with ZFS that is already taking 79% of the total RAM available, so the way it's going I only have room for a few more VMs.

But as I am testing a K8s cluster here, I can spin up as many pods as I need, which is my main goal for this setup for now.

 

Question 5: my fan is as quiet as ever; it has not increased its speed so far.

 

Well, I hope I addressed some of your concerns.

 

You may want to have a look at XCP-ng as well; it seems to be a solid free hypervisor.

E3000
15 hours ago, Thiago said:

Hey, I can comment a bit on some of your questions, as I just migrated my homelab from ESXi 6 to Proxmox VE 6.1-11 […]

Hi Thiago, thanks for the response and explanation.

 

I had not heard of XCP-ng, and it definitely looks interesting. I too would be worried about Citrix chasing them down, as they are based on Xen. The reason I was leaning towards Proxmox was that it is open source; I had also considered just using Hyper-V before deciding on ESXi (and only recently heard about Proxmox). I have always used Windows on my MicroServers, but the experience I do have with Linux is all Debian.

 

Memory-intensive filesystems are a bit of a worry for me. I do not plan to do any kind of striping or mirroring at this point, and may look into that more later when I become a pro at virtualisation lol, but for now it's not a requirement. With that being said, would there be any benefit to using ZFS? Isn't the way it handles RAID (or quasi-RAID) the whole reason people love it?

 

I am really happy to hear your fan is still quiet! Fantastic! I take it you are still using the B120i? Any reason you did not think about using the internal MicroSD/USB ports? Is your boot HDD partitioned?

randman

I used to use the internal MicroSD as a boot drive for ESXi. However, one day it went bad, and it was a hassle having to disconnect all the cables and open up the server. So I decided to use an external USB stick (one of the SanDisk Ultra Fit 16GB USBs that don't stick out much). Maintaining my server was easier with an external USB stick, and it also made it easier to test booting different or new OSes. My server is in the basement where no one goes, so an external USB is not a risk.

I have a P222 controller, so I use RAID 1 with two SSDs and RAID 1 with two hard disks, and I use the internal SSDs for the ESXi datastore. It's been a number of years since I set up my server, and I haven't actually had much need for the hard disks (it turns out the VMs I put on my server didn't need much data capacity). If I had known I wasn't going to use the hard disks, I might not have bothered with the P222/RAID 1; a good backup may have sufficed (assuming you have the luxury of downtime for a restore).

A couple of other ESXi servers I built later just use NVMe (without RAID).

 

E3000

Sounds good. That's the idea: to set everything up and never have to do it again.

Did you notice any fan speed/noise changes when you installed the P222?

dynamikspeed

I just set up Hyper-V Server 2019 on mine and it works well; I previously used ESXi on the MicroSD.

I had not heard of Proxmox; I will have to check it out. What's it like compared to ESXi/Hyper-V?

randman
40 minutes ago, dynamikspeed said:

I just set up Hyper-V Server 2019 on mine and it works well; I previously used ESXi on the MicroSD.

I had not heard of Proxmox; I will have to check it out. What's it like compared to ESXi/Hyper-V?

 

Did you have to do anything special to install Hyper-V Server 2019 on your MicroServer Gen8? Or did it just install cleanly?

1 hour ago, E3000 said:

Sounds good. That's the idea: to set everything up and never have to do it again.

Did you notice any fan speed/noise changes when you installed the P222?

 

No additional fan noise on my system.

scorpi2ok2

My setup:

MicroServer Gen8 with a Xeon E3-1260L @ 2.40GHz

16GB RAM

Smart Array P420

2x 240GB SSDs in RAID1 - OS + apps

4x 3TB disks in RAID6 - data

1x 3TB external disk - torrents.

 

I'm running plain libvirt (KVM) with the Kimchi interface (https://github.com/kimchi-project/kimchi),

running 5 VMs.

 

Fan speed: 11
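
If anyone wants to poke at plain libvirt without the web UI, day-to-day management is just a few virsh commands (the VM name below is a placeholder):

virsh list --all     # show running and stopped VMs
virsh start vm1      # boot a VM by name
virsh dominfo vm1    # CPU/memory details for one VM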

E3000
6 hours ago, scorpi2ok2 said:

My setup: MicroServer Gen8 with a Xeon E3-1260L @ 2.40GHz, 16GB RAM, Smart Array P420 […] running 5 VMs.

This is interesting. Do you have all the VMs and the host on the same disks?

