
Hardware for running lots of VMs (up to 200)


tosstoss

 

I already have a server I built at home with a Xeon E3-1246 CPU and 32 GB of RAM, which can run up to roughly 40 VMware virtual machines at the same time. RAM is the bottleneck there (the CPU runs at 40%-60% load all the time according to Task Manager; is that an accurate representation of how much headroom the CPU has, or are there better ways to measure it?). The VMs run the lowest-spec Windows XP config possible: 512 MB or 1 GB of RAM and 1 virtual CPU. A few browser windows (Firefox, Chrome) are open on every VM and new pages are loaded in them a few times per hour, so everything is very light on CPU usage.
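As a cross-check on Task Manager, one thing worth trying is sampling the load per core, since the single overall figure can hide a few saturated cores. A minimal sketch, assuming Python with the psutil package on the host:

```python
# Sample overall and per-core CPU load over a 5-second window.
# Per-core numbers show whether some cores are pinned at 100%
# while the overall average still looks moderate.
import psutil

total = psutil.cpu_percent(interval=5)
per_core = psutil.cpu_percent(interval=5, percpu=True)

print(f"overall: {total}%")
for i, load in enumerate(per_core):
    print(f"core {i}: {load}%")
```

On ESXi, the equivalent check would be esxtop's %RDY (CPU ready time), which shows whether VMs are actually waiting for a physical core.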

 

Now my question is: I want to build a second, more powerful server from scratch. What would be the best CPU that supports 128 GB of RAM and can run up to 150-200 such VMs?
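For the RAM side, my back-of-the-envelope arithmetic looks like this (the per-VM hypervisor overhead is an assumed ballpark, not a measured number):

```python
# Rough RAM sizing for 200 low-spec VMs on a 128 GB host.
vm_count = 200
ram_per_vm_gb = 0.5        # 512 MB guests; use 1.0 for 1 GB guests
overhead_per_vm_gb = 0.1   # assumed per-VM hypervisor overhead
host_reserve_gb = 4        # assumed reserve for the host OS/hypervisor

needed = vm_count * (ram_per_vm_gb + overhead_per_vm_gb) + host_reserve_gb
print(f"estimated: {needed:.0f} GB needed vs 128 GB installed")
# -> about 124 GB for 200 x 512 MB guests, so 128 GB only fits at the
#    512 MB size; 1 GB guests would rely on overcommit/page sharing.
```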

 

I created this config:

 

mobo: MSI X99A Raider

CPU: 4-core Xeon E5-1620 v3 or 6-core Xeon E5-2620 v3 (initially I thought about the i7-5820K; unfortunately I found out it supports only 64 GB of RAM, so I am now looking at the E5s, which support up to 768 GB)

RAM: up to 128 GB, consisting of Crucial 16 GB DDR4-2133 registered ECC sticks

graphics card: Gigabyte GT 730 Ultra Durable 2 Silent

storage: Samsung 850 EVO SSD

 

Do you think the E5-1620 v3 will suffice for running that many VMs (given that they are very light on CPU usage)? The next step up is the 6-core Xeon E5-1650 v3, but it costs twice as much (€300 vs €600), so I would like to stay with the E5-1620 v3 if possible. Also, what about the registered ECC RAM: does it help when running a lot of VMs 24/7 (occasional restarts a few times a week are fine)? ECC support was the second reason, apart from the large RAM capacity, that I am looking at Xeon processors (the mobo supports ECC too). How does the E5-1620 v3 compare to the E3-1246 in my old server?
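To put the CPU question in numbers, the vCPU-to-core oversubscription ratio for the two candidates (core and thread counts as per Intel's spec pages):

```python
# Oversubscription ratio for 200 single-vCPU VMs on each candidate CPU.
vms = 200
cpus = {
    "E5-1620 v3": {"cores": 4, "threads": 8},
    "E5-1650 v3": {"cores": 6, "threads": 12},
}
for name, c in cpus.items():
    print(f"{name}: {vms / c['cores']:.0f} vCPUs per core, "
          f"{vms / c['threads']:.0f} per thread")
# -> 50:1 per core on the E5-1620 v3 and 33:1 on the E5-1650 v3; both are
#    far above the usual 4:1-10:1 guidance, which only works because the
#    guests sit nearly idle. CPU ready time is the number to watch.
```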

 

Looking forward to any help, comments and suggestions on my whole setup and questions. Thanks!


  • Similar Content

    • E3000
      By E3000
      Hello all,
       
      A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...
       
      I am looking to try ESXi or ProxMox and have been reading a lot of the threads on here.
      Hopefully you guys can help with some harder to find answers I have been seeking.
       
      1) Which would be the better way to setup ProxMox:
           a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
           d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
       
      2) Would a 128 GB SSD be a 'waste' for installing a hypervisor on? How much space is typically needed?
       
      3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?
       
      4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high-availability is not that necessary and a good backup plan is in place? What is wrong with separate disks (or singular Raid0s)?
       
      5) Does using a Type-1 hypervisor have any effect on the internal fan speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed at ~8%, as it was when I was using 2 nested (Type-2) VMs?
       
      Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆
       
      Thanks in advance to all those that help!
    • Lundgrens
      By Lundgrens
      Hey!
       
      What is the state of PCI passthrough on the Gen8? Hardware-wise it supports IOMMU/VT-d, but I've heard people have still had problems with PCI passthrough.
       
      Is it possible to passthrough the RAID controller to a virtual machine in ESXi or KVM?
       
      Is it possible to passthrough a GPU to a virtual machine in ESXi or KVM?
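      A device can only be passed through together with everything else in its IOMMU group, so a useful first check under KVM is how the Gen8's devices group up. A small sketch, assuming a Linux host booted with intel_iommu=on:

```python
# List IOMMU groups and the PCI devices in each on a Linux KVM host.
import os

base = "/sys/kernel/iommu_groups"
if not os.path.isdir(base) or not os.listdir(base):
    # Usually means VT-d is off in the BIOS or the kernel command
    # line is missing intel_iommu=on.
    print("no IOMMU groups found")
else:
    for group in sorted(os.listdir(base), key=int):
        devices = os.listdir(os.path.join(base, group, "devices"))
        print(f"group {group}: {', '.join(devices)}")
```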
    • ThePoulsen
      By ThePoulsen
      Hi All
       
      I'm currently planning a new business venture (I'm going to build a strategy execution platform for medium-sized businesses), and for this I am trying to decide whether to host the tool myself or move it to the cloud.

      The features I plan on running (possibly on individual virtual clients):
      - NAS for backup (mirrored towards an existing HP MicroServer Gen8 for extra backup)
      - one or more database servers (PostgreSQL)
      - mail server for one or more domains
      - DNS server
      - web server for one or more websites
      - FTP server
      - one or more development clients for testing purposes

      The web app will be an Apache/Python Flask/PostgreSQL stack. I am not anticipating thousands of concurrent users, but plan on selling locally (Denmark), where success would be 100-200 daily users, so the load will be relatively small (rough estimate sketched below).
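      To put a number on "relatively small", a rough request-rate estimate (the requests-per-user figure is purely an assumption for illustration):

```python
# Rough average request rate for the planned web app.
daily_users = 200        # the optimistic case above
requests_per_user = 50   # assumed page/API hits per user per day
seconds_per_day = 24 * 60 * 60

avg_rps = daily_users * requests_per_user / seconds_per_day
print(f"average: {avg_rps:.2f} requests/second")
# -> about 0.12 req/s on average; even a 20x peak stays in single
#    digits, comfortably within one small VM for the Flask app.
```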
       
      I am looking at several Dell servers, and I am having a hard time sizing my requirements.

      Would a Xeon E3-1245 v5 be sufficient for 5-15 virtual machines given 64 GB of memory?
      Which RAID level should I be looking into? Speed should not be an issue, so I am thinking that a simple mirror between two drives should be sufficient, combined with an external clone.
       
      Any thoughts?
    • Stampede
      By Stampede
      New to the forum, and really enjoying all the discussions. 
       
      I would love to get everyone's opinion on this. I'm looking to build my first home lab for working on virtualization and a number of work-related and personal projects. I definitely see a lot of VMs in the future; nothing super high performance, but maybe 10-20. There are a lot of things I'm looking to experiment with and learn, so finding the most flexible base platform is my goal. This is something I'm going to be working with daily for as long as I can possibly keep it alive.
       
      I'm going to run a full vCenter setup, preferably off an SSD, and then have a 1 TB RAID 1 for my VM datastores. The big question is: is there a significant enough difference between the ML10 and ML10v2 to justify spending the extra $200? My budget is tight (just graduated school), and I see that $200 as RAM and SSD money I could be spending. To be specific, I am comparing the Xeon E3-1220v2 to the Xeon E3-1220v3.
       
       
      I'm trying to keep the total cap on spending under $500; I'm amazed that this is even possible at this price point, to be honest. I'm glad I found this forum and the ML10s, because before this I was looking at getting an ancient Intel server with dual Xeons that would do a great job of turning my electricity into heat and noise.
       
      Let me know what you guys think, Can't wait to share my progress with everyone. 
      Peter
    • muppetman
      By muppetman
      So, long time listener (many years!) to the podcast, but only signed up to the forums today...
       
      Since listening to the podcast, I've been using Sophos UTM for firewall and VPN duties for the last 6 months, and I love it. However, I couldn't justify yet another box being powered on 24/7, so I've decided to virtualise all the various boxes into one "Monster" ESXi host.
       
      Here's where I've got to so far: I've created a VMware ESXi 5.5 host (free licence) and am in the process of migrating services over to it. I started with my UTM server, which is now totally virtual, and performance is great (even using IPS, web filtering and AV scanning at maximum levels). I've bought or re-purposed old kit for my build, listed below. The main considerations were low power, support for PCI passthrough (VT-d) and cost. I thought about buying one of the Lenovo or HP pre-built servers that always seem to be on special offer, but as I had a lot of the kit left over from previous builds, I decided it would be cheaper to roll my own.
       
      Antec P180 case (very old, from a previous build, but absolutely solid)
      ASUS H97-Plus ATX motherboard (supports VT-d)
      Intel i7 (can't remember which one off-hand, but it is a standard-powered, non-K CPU to support VT-d)
      16 GB RAM
      240 GB Samsung SSD (for the primary datastore to host VMs)
      Intel 4-port server GbE NIC (cheap, off eBay) - wanted 2 ports for Sophos UTM as I didn't want to use VLANs and wanted to physically segregate WAN and LAN traffic.
      Blackgold dual PCI TV tuner (DVB-T2 for terrestrial HD broadcasts)
      Compro dual PCI TV tuner (DVB-T - SD terrestrial tuner)
      3 x 4 TB WD Red drives (not purchased yet - for NAS duties)
       
       
      So, the VMs I have built or are planning are:
       
      1. Sophos UTM - This VM is currently built and working. It is running 2 x vCPUs with 4 GB of vRAM. It took me a while to figure out the networking portion of the set-up, but I got there in the end. Sophos is currently performing all DNS, routing, firewall, VPN and web protection.
       
      2. Windows 8.1 - This VM is currently built and working. Nothing special, this is a VM simply for me to access over RDP to perform any admin tasks (I use a Mac so this is extremely useful).
       
      3. Windows 7 (32-bit) - As above - sometimes it's useful to have a 32-bit Windows VM to use when it is needed. 
       
      4. Xpenology NAS - I am part-way through configuring this VM. This is an interesting one: I am planning on setting the 3 x WD Reds up as physical RDMs and passing them through to the VM. I'll do some testing on this, but I THINK this means that if my ESXi host dies at any point, I SHOULD be able to throw the drives into my real Synology box with no data loss.
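      For reference, my understanding is that the physical-compatibility RDM pointer gets created with vmkfstools on the ESXi host; a minimal sketch (the disk identifier and datastore path are placeholders, and the Python wrapper is just to keep these notes consistent):

```python
# Create a physical-compatibility RDM pointer file on an ESXi host so a
# raw disk can be attached to a VM. Both paths below are placeholders.
import subprocess

device = "/vmfs/devices/disks/naa.EXAMPLE"   # real ID via: ls /vmfs/devices/disks
pointer = "/vmfs/volumes/datastore1/xpenology/wd-red-rdm.vmdk"

# -z = physical compatibility (the guest talks to the raw disk, which is
# what should let the drives move back to a real Synology box);
# -r would create a virtual-compatibility RDM instead.
subprocess.run(["vmkfstools", "-z", device, pointer], check=True)
```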
       
      5. MediaPortal TV server - This one is going to be interesting as well... The plan is to use VT-d passthrough on both the TV cards to a Win8.1 VM and run MP TV Server there. It's going to be a bit of an experiment, as I have no idea if PCI passthrough latency will perform well enough for glitch-free TV. Given there are a few people gaming with passed-through GPUs, I'm hoping it will be up to scratch.
       
      6. Z-Wave automation server - a Linux VM (not sure which distro yet) to run a few Z-Wave devices around the house.
       
       
      If anyone has embarked on a similar build, or has any constructive (or even non-constructive!) criticism, then I'm all ears! I'm building this in my very-limited spare time, so updates may be sporadic. Will add pics of the build soon if anyone is interested...