RESET Forums (homeservershow.com)

My "Ultimate" Home server build (WIP)....


muppetman

Recommended Posts

So, long-time listener (many years!) to the podcast, but I only signed up to the forums today...

 

Since listening to the podcast, I've been using Sophos UTM for firewall and VPN duties for the last 6 months, and I love it. However, I couldn't justify yet another box being powered on 24/7, so I've decided to virtualise all the various boxes into one "Monster" ESXi host.

 

Here's where I've got to so far... I've created a VMware ESXi 5.5 host (free licence) and am in the process of migrating services over to it. I started with my UTM server, which is now totally virtual, and performance is great (even with IPS, web filtering and AV scanning at maximum levels). I've bought or re-purposed old kit for the build, and it is currently as below. The main considerations for the kit list were low power, support for PCI passthrough (VT-d) and cost. I thought about buying one of the Lenovo or HP pre-built servers that always seem to be on special offer, but as I had a lot of kit left over from previous builds, I decided it would be cheaper to roll my own.

 

Antec P180 case (very old, from a previous build, but absolutely solid)

ASUS H97-Plus ATX motherboard (supports VT-d)

Intel i7 (can't remember which one off-hand, but it is a standard-power, non-K CPU to support VT-d)

16GB RAM

240GB Samsung SSD (primary datastore to host the VMs)

Intel 4-port server GbE NIC (cheap, off eBay) - I wanted two ports for Sophos UTM as I didn't want to use VLANs and wanted to physically segregate WAN and LAN traffic.

BlackGold dual PCI TV tuner (DVB-T2, for terrestrial HD broadcasts)

Compro dual PCI TV tuner (DVB-T, SD terrestrial tuner)

3 x 4TB WD Red drives (not purchased yet - for NAS duties)

 

 

So, the VMs I have built or am planning are:

 

1. Sophos UTM - This VM is currently built and working. It is running 2 vCPUs with 4GB of vRAM. It took me a while to figure out the networking portion of the setup, but I got there in the end. Sophos is currently performing all DNS, routing, firewall, VPN and web protection duties.
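(In case it helps anyone doing the same WAN/LAN split without VLANs, the equivalent setup from the ESXi shell looks roughly like this - the vSwitch/port-group names and vmnic numbers are placeholders rather than my exact config, and the same thing can be done in the vSphere Client:)

    # second vSwitch dedicated to the WAN side, with its own physical uplink
    esxcli network vswitch standard add --vswitch-name=vSwitch-WAN
    esxcli network vswitch standard uplink add --vswitch-name=vSwitch-WAN --uplink-name=vmnic1

    # port group that the UTM VM's WAN vNIC attaches to
    esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-WAN --portgroup-name=WAN

    # the UTM's LAN vNIC stays on vSwitch0 / "VM Network" with a different vmnic as uplink

The UTM VM then gets two vNICs, one on each port group, so WAN and LAN traffic never share a physical port.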

 

2. Windows 8.1 - This VM is currently built and working. Nothing special - it's simply a VM for me to access over RDP to perform any admin tasks (I use a Mac, so this is extremely useful).

 

3. Windows 7 (32-bit) - As above; sometimes it's useful to have a 32-bit Windows VM on hand when it's needed.

 

4. Xpenology NAS - I am part-way through configuring this VM. This is an interesting one - I am planning on setting the 3 x WD Reds up as physical RDMs and passing them through to the VM. I'll do some testing on this, but I THINK this means that if my ESXi host dies at any point, I SHOULD be able to throw the drives into my real Synology box with no data loss.
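(For reference, and in case it saves anyone some googling: a physical-mode RDM is created with vmkfstools from the ESXi shell, roughly as below. The device name and datastore path are placeholders - the real identifiers live under /vmfs/devices/disks/.)

    # find the WD Reds' device identifiers
    ls /vmfs/devices/disks/

    # create a physical-compatibility (pass-through) RDM pointer file on the SSD datastore
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD40EFRX_PLACEHOLDER /vmfs/volumes/datastore1/xpenology/wd-red-1-rdm.vmdk

    # the pointer .vmdk is then attached to the Xpenology VM as an existing disk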

 

5. MediaPortal TV Server - This one is going to be interesting as well... The plan is to use VT-d passthrough on both TV cards to a Win8.1 VM and run MP TV Server there. It's going to be a bit of an experiment, as I have no idea whether PCI passthrough latency will perform well enough for glitch-free TV. Given there are a few people gaming with passed-through GPUs, I'm hoping it will be up to scratch.
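(I haven't built this one yet, so treat the below as my understanding rather than something I've tested: the tuner cards are located from the ESXi shell, and the actual passthrough toggle lives in the vSphere Client.)

    # list PCI devices to find the tuners' addresses and vendor/device IDs
    esxcli hardware pci list | less

    # the cards are then marked for passthrough under Configuration > Advanced Settings
    # (DirectPath I/O), the host is rebooted, and the devices are added to the Win8.1 VM -
    # which also requires the VM's memory to be fully reserved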

 

6. Z-Wave automation server - A Linux VM (not sure which distro yet) to run a few Z-Wave devices around the house.

 

 

If anyone has embarked on a similar build, or has any constructive (or even non-constructive!) criticism, then I'm all ears! I'm building this in my very-limited spare time, so updates may be sporadic. Will add pics of the build soon if anyone is interested...


OK, I probably should have started this thread in the Virtualization forum, so if a mod could move this thread, I'd be grateful....

 

So, a quick update - I had a bit of spare time last night, so I managed to make a good start on my Xpenology VM...

 

I created the VM, loaded the Xpenology OS and then added the storage disks. Lots of googling led me to decide on setting the three WD Reds up as physical RDMs made available to the VM. The downside of this is that RDM doesn't pass through SMART drive data, so I won't get notification of pending drive failures, but that's something I'll live with for the minute. I could get round this by buying an HBA and running it in IT mode, but that's an extra expense I could do without and, of course, it uses extra power.
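(If anyone is in the same boat, basic SMART attributes can still be queried on the ESXi side rather than from inside the VM - a quick sketch, with the device name below being a placeholder:)

    # list local devices to find the WD Reds' identifiers
    esxcli storage core device list

    # query SMART attributes for one drive directly from the host
    esxcli storage core device smart get -d t10.ATA_____WDC_WD40EFRX_PLACEHOLDER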

 

Performance-wise, I'm really happy. I've read a lot about poor performance from RDM-mapped disks in ESXi, but my setup (with 3 x WD Reds) achieves 100+ MB/s for both reads and writes, so it is being limited by the GbE network, not drive performance. This was achieved while the disks were being scrubbed, so the RDM mapping seems to be working as expected. Once all the data has been copied, I'll stress-test the VM to check for stability.
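(For anyone wanting to reproduce the numbers, a crude sequential test from inside the Xpenology VM is enough to show whether the RDMs or the network are the bottleneck. This assumes GNU dd is available in the DSM shell and that /volume1 sits on the RDM-backed volume:)

    # rough sequential write test (4GB), bypassing the page cache where supported
    dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=4096 oflag=direct

    # rough sequential read test of the same file, then clean up
    dd if=/volume1/ddtest.bin of=/dev/null bs=1M iflag=direct
    rm /volume1/ddtest.bin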

 

 


  • Similar Content

    • Giuseppolino87
      By Giuseppolino87
      Hi everyone, today I wanted to install ESXi 6.7 to try out the various functions. My question is the following: I have a server where I have installed ESXi on an SSD, and I have two other 4TB disks (8TB in total). I would like to install the various virtual machines on the SSD and use the two disks for data. How can I do this? Thank you very much to anyone who can help.
    • E3000
      By E3000
      Hello all,
       
      A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...
       
      I am looking to try ESXi or ProxMox and have been reading a lot of the threads on here.
      Hopefully you guys can help with some harder to find answers I have been seeking.
       
      1) Which would be the better way to set up Proxmox:
           a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
           d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
       
      2) Would a 128GB SSD be a 'waste' for installing a Hypervisor on? How much space is typically needed?
       
      3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?
       
      4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high-availability is not that necessary and a good backup plan is in place? What is wrong with separate disks (or singular Raid0s)?
       
      5) Does using Type-1 Hypervisors have any effect on the internal fans speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed @~8% as it was when I was using 2 nested (Type-2) VMs?
       
      Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆
       
      Thanks in advance to all those that help!
    • acidzero
      By acidzero
      Hello,
       
      So, after several days of testing various different configurations, creating custom ESXi install ISOs and numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my MicroServer Gen8 with working HP Smart Array P410 health status. For those who are struggling to do the same, here's how. I used the original VMware ESXi 6.5U1 ISO (build 5969303), then made the following modifications:
       
      Remove driver "ntg3" - If I left this in, I had a weird network issue where Port 1 or 2 would repeatedly connect/drop every few seconds. This forces ESXi to use the working net-tg3 driver Remove driver "nhpsa" - this Smart Storage Array driver is what causes array health monitoring to not work. Remove to force ESXi to use working "hpsa" driver Add the Nov 2017 HPE vib bundles Remove hpe-smx-provider v650.01.11.00.17 - This version seems to cause the B120i or P410 to crash when querying health status Add hpe-smx-provider v600.03.11.00.9 (downloaded from HPE vibsdepot)
      Add scsi-hpvsa v5.5.0-88 bundle (downloaded from HPE drivers page)
      Add scsi-hpdsa v5.5.0.54 bundle (downloaded from HPE drivers page)
       
      I did the above by getting a basic/working ESXi/VCSA installation, creating a custom ISO in VCSA AutoDeploy and exporting it. But the same can be achieved by installing VMware's original ISO and making the modifications via the esxcli command.
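      For anyone going the esxcli route, the commands are roughly as follows (the offline-bundle paths are placeholders for wherever you upload them; put the host in maintenance mode first and reboot afterwards):

        # remove the problem drivers / providers
        esxcli software vib remove --vibname=ntg3
        esxcli software vib remove --vibname=nhpsa
        esxcli software vib remove --vibname=hpe-smx-provider

        # install the known-good versions from local offline bundles
        esxcli software vib install -d /vmfs/volumes/datastore1/hpe-smx-provider-600.03.11.00.9-offline-bundle.zip
        esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpvsa-5.5.0-88-offline-bundle.zip
        esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpdsa-5.5.0.54-offline-bundle.zip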
       
      I have an SSD connected to the SATA port, onto which I am installing ESXi. The 4 front drive bays are connected to the P410.
       
      - Configure the MicroServer Gen8 B120i to use AHCI mode - the B120i is a fake RAID card, so it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on Disk Bay 5.
      - Install my modified ESXi ISO to the SSD.
      With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:
       

      I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows it is using ahci in the above image.
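      (For reference, the usual way to do that is from the ESXi shell, followed by a reboot:)

        # disable the native AHCI driver so the controller falls back to the legacy "ahci" driver
        esxcli system module set --enabled=false --module=vmw_ahci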

       
      If I pull out a disk to test a RAID failure, when the health status next updates I can see a RAID health alert in the ESXi WebGUI:
       

       
      However, I'm now stuck at the next stage - getting this storage health to pass through to VCSA. VCSA can successfully see all the other health monitors (under System Sensors in ESXi), just not the storage, which is the most important one.
       

       
      Does anyone know how I can get the storage health working in VCSA?
       
      Thanks.
    • lordcroci
      By lordcroci
      Hi there!
      I'm new around here - I looked for the introductions thread but haven't found one!
      Anyway, I hope to be able to contribute (as far as my newbie knowledge will be useful)....

      As for what I'm trying to do: I have this amazing MicroServer Gen8, on which I have 2 x 3TB WD Reds as storage and an OCZ 125GB SSD on the 5th SATA port. A couple of days ago I installed Proxmox (I'm a complete newbie to it too) and configured Xpenology 6.0, which runs amazingly!
      Now I'm just wondering what the best option is for setting up a VPN (possibly OpenVPN). In my inexperience I've found a couple of options:
      - try a container with turnkey debian 8 OpenVPN
      - install ubuntu on a VM and setup openvpn
      - try the vpn server on xpenology
       
      or, the least pleasurable:
      - install OpenVPN on my Windows 10 PC and leave it turned on so that I can access the MicroServer through the VPN...
      What do you think is the best thing to do, considering that I am a real noob and will need a guide or tutorial to follow? (I've already googled a bit and found a lot of material on the OpenVPN site, but honestly I can't find much about Proxmox and VPNs.)

      PS: sorry for my English, but I'm Italian and I'm still learning!
       
      thanks a lot!
      Lordcroci
    • Lundgrens
      By Lundgrens
      Hey!
       
      What is the state of PCI passthrough on the Gen8? Hardware-wise it supports IOMMU/VT-d, but I've heard people have still had problems with PCI passthrough.
       
      Is it possible to passthrough the RAID controller to a virtual machine in ESXi or KVM?
       
      Is it possible to passthrough a GPU to a virtual machine in ESXi or KVM?