RESET Forums (homeservershow.com)

vSphere-based HTPC/Media Centre


Recommended Posts

First post here, looking for help/advice

Have been using HP Microservers for a few years; lovely little boxes. Until recently I was running one of the older boxes, an N40L, with a Radeon 6450 and OpenELEC as a media server. Or more precisely, a 'front end': it pulls media (video, music) from my Synology NAS and plays it over the TV/amp. Kinda like a Popcorn Hour A300 on steroids. Works a treat. That box is being re-purposed as a firewall, because I bought a Gen8 and upgraded it with a Xeon E3-1265L v2 and the 6450.

What I would like to do is run the HTPC as a VM, because the Xeon system is overkill for just a media centre. My question is: is this possible?


HP Gen8 with vSphere 6.5, one or more VMs

VM 1 has 2 GB RAM, 1 dedicated [physical] CPU core, and PCIe pass-through of the Radeon 6450

Presumably this is possible:

  • VM1 will run OpenELEC and have access to the 6450 for video acceleration (MP4/MKV playback and HDMI audio), with the display output going straight out of the card to the amp/TV
  • 1 USB port passed to the VM, so I can use a wireless remote control.
  • Possibly a second USB port passed through, so I can use a wireless keyboard with trackball

Intend to add other VMs later (perhaps a Linux-based mail server, or MS SBS of some flavour), but that's for the future.

I'm new to vSphere/ESXi, so I don't know much at all, let alone "what I don't know"...


My experiments with vSphere so far do get PCI pass-through to the VM working, but the output from the video card is just the vSphere boot-up screen, 'frozen' at the point it loaded a shim, and all interaction with the VM is via a browser... not what I was hoping for. How do I redirect the VM's video output to the passed-through card?
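In case it helps anyone diagnose: the pass-through I configured shows up in the VM's .vmx file as entries like the sketch below. The PCI address and device/vendor IDs here are placeholders; the real values come from `esxcli hardware pci list` on the host.

```ini
pciPassthru0.present = "TRUE"
pciPassthru0.id = "00:01:00.0"
pciPassthru0.deviceId = "0x6779"
pciPassthru0.vendorId = "0x1002"
```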




  • Similar Content

    • Giuseppolino87
      By Giuseppolino87
      Hi everyone, today I wanted to install ESXi 6.7 to try out its various functions. My question is the following: I have a server where ESXi is installed on an SSD, and I have two other 4 TB disks, so 8 TB in total. When I create virtual machines I would like to install them on the SSD and use the two disks for data. How can I do this? Thank you very much to anyone who can help.
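      A sketch of the usual approach: keep the SSD datastore for the VMs' system disks, create a VMFS datastore on each 4 TB disk (Host Client: Storage > New datastore), then give each VM an extra virtual disk that lives on the data datastores. From the ESXi shell the data disk can also be created with vmkfstools; the datastore name, VM folder, and size below are examples only:

```shell
# List the disks ESXi can see (to identify the 4 TB drives)
esxcli storage core device list

# Create a 1 TB thin-provisioned data disk on an example datastore
# "Data1", inside an example VM folder "fileserver"
vmkfstools -c 1024G -d thin /vmfs/volumes/Data1/fileserver/data.vmdk
```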
    • E3000
      By E3000
      Hello all,
      A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...
      I am looking to try ESXi or ProxMox and have been reading a lot of the threads on here.
      Hopefully you guys can help with some harder to find answers I have been seeking.
      1) Which would be the better way to setup ProxMox:
           a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
           c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
           d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
      2) Would a 128GB SSD be a ‘waste‘ for installing a Hypervisor on? How much space is typically needed?
      3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?
      4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high availability is not really necessary and a good backup plan is in place? What is wrong with separate disks (or single-disk RAID0s)?
      5) Does using Type-1 Hypervisors have any effect on the internal fans speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed @~8% as it was when I was using 2 nested (Type-2) VMs?
      Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆
      Thanks in advance to all those that help!
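      Not an answer to all five, but for (1): whichever option you pick, the VM/data split usually ends up expressed in /etc/pve/storage.cfg. A sketch for the SSD-plus-data-disks layouts, assuming (these names are made up) the SSD carries an LVM-thin pool in a volume group "ssd" and a data HDD is mounted at /mnt/data1:

```
lvmthin: ssd-vms
        vgname ssd
        thinpool data
        content images,rootdir

dir: data1
        path /mnt/data1
        content backup,iso,vztmpl
```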
    • Josef
      By Josef
      I have a "new" HP MicroServer Gen8 with the HP Smart Array B120i. CPU G1610T, factory date circa 2016(?).
      I can't use FreeNAS with software RAID (as on my old BlackCube) because my Gen8 collects my HDDs into a RAID set under every AHCI BIOS/Setup configuration.
      When I try to use F5 to disable RAID, the HP Smart Storage Administrator app shows me the message:
      "No licenses found on the selected controller."
      (see pictures)
      HP Smart Storage Administrator shows me "0 physical drives" (3x WD 4 TB + 1x WD 3 TB, all 3.5" NASware 3.0 HDDs),
      and in FreeNAS I can see one "disk" of 7.7 TB... :-(

      I used a directly connected LCD and keyboard to eliminate any iLO trouble.
      The box is out of HP warranty.
      q1: Do you have any idea how to restore the RAID licence?
      q2: How can I eliminate the Smart RAID function?
      q3: Does any HP or community firmware exist to put the B120i into AHCI-only mode, without RAID functions?
      Thanks for any useful advice, Josef

    • acidzero
      By acidzero
      So, after several days of testing various different configurations, creating custom ESXi install ISOs and numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my MicroServer Gen8 with working HP Smart Array P410 health status. For those that are struggling to do the same, here's how. I used the original VMware ESXi 6.5U1 ISO, build 5969303, then made the following modifications:
      • Remove driver "ntg3" - if I left this in, I had a weird network issue where Port 1 or 2 would repeatedly connect/drop every few seconds. Removing it forces ESXi to use the working net-tg3 driver.
      • Remove driver "nhpsa" - this Smart Storage Array driver is what causes array health monitoring to not work. Remove it to force ESXi to use the working "hpsa" driver.
      • Add the Nov 2017 HPE vib bundles.
      • Remove hpe-smx-provider v650 - this version seems to cause the B120i or P410 to crash when querying health status.
      • Add hpe-smx-provider v600 (downloaded from HPE vibsdepot).
      • Add the scsi-hpvsa v5.5.0-88 bundle (downloaded from the HPE drivers page).
      • Add the scsi-hpdsa v5.5.0.54 bundle (downloaded from the HPE drivers page).
      I did the above by getting a basic/working ESXi/VCSA installation and then creating a custom ISO in VCSA AutoDeploy and exporting it. But the same can be achieved by installing VMWare's original ISO and modifying via the esxcli command.
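      For anyone preferring the esxcli route over a custom ISO, the steps above map onto commands roughly like these. The bundle filenames and datastore paths are placeholders for wherever you copied the HPE files; run from SSH with the host in maintenance mode, and reboot afterwards:

```shell
esxcli software vib remove -n ntg3
esxcli software vib remove -n nhpsa
esxcli software vib remove -n hpe-smx-provider
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-nov2017-bundle.zip
esxcli software vib install -v /vmfs/volumes/datastore1/hpe-smx-provider-600.vib
```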
      I have a SSD connected to the SATA port, onto which I am installing ESXi. The 4 front drive bays are connected to the P410.
      • Configure the MicroServer Gen8 B120i to use AHCI mode - the B120i is a fake-RAID card, so this way it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on Disk Bay 5.
      • Install my modified ESXi ISO to the SSD.
      With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:

      I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows it is using ahci in the above image.

      If I pull out a disk to test a raid failure, when the health status next updates I can see a raid health alert in the ESXi WebGUI:

      However I'm now stuck at the next stage - getting this storage health to pass through to VCSA. VCSA can successfully see all other health monitors (under System Sensors in ESXi) just not the storage, which is the most important.

      Does anyone know how I can get the storage health working in VCSA?
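      One thing that may be worth checking (a guess on my part, not a confirmed fix): VCSA reads those sensors through the host's CIM/WBEM service, so it may help to confirm the service is enabled and restart it after swapping providers. From memory of 6.5 the commands are roughly as below; check `esxcli system wbem set --help` for the exact flag names:

```shell
esxcli system wbem get
esxcli system wbem set --enable true
/etc/init.d/sfcbd-watchdog restart
```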
    • H00GiE
      By H00GiE
      I'd like to start using vDGA on ESXi with my ML10v2.
      The host is running ESXi 6.0 u3 HP Customized.

      If successful/possible I'd like to be running a Windows 10 VM with a Quadro K2000
      (in HCL list of vmware for vDGA)
      The VM will have up to 12 GB RAM and 4 vCores, with the Quadro card in passthrough mode. This VM is meant to broadcast live video to Twitch/YouTube/FB Live using XSplit.
      Live audio will be muxed in via Virtual Audio Cable and a local Icecast server's stream.
      It does not matter if the remote desktop is stuttering or choppy, as long as the broadcast material is acceptable.
      There will be a lot of video clips and overlays running, and I'll be running real-time 3D visualisations. The CPU and GPU would normally handle this workload easily.

      An HP 332T and an NC112T will be replacing 3 of the 4 ports of the NC364T, as that card won't have enough bandwidth on a 1x PCIe slot for 4x gigabit connections.
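      That bandwidth point checks out back-of-envelope, assuming PCIe 2.0 x1 at 5 GT/s with 8b/10b encoding and ignoring protocol overhead:

```python
# PCIe 2.0 x1: 5 GT/s with 8b/10b encoding -> ~4 Gbit/s usable per direction
pcie2_x1_gbits = 5.0 * 8 / 10

# Four gigabit ports running flat out in one direction
four_gbe_gbits = 4 * 1.0

# The x1 link is already the ceiling before protocol overhead is counted
print(pcie2_x1_gbits, four_gbe_gbits)
```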
      Considering the config of my ML10v2 (below in signature), and the fact I have an ML310e v2 front 80mm fan installed in the server, I'm just wondering:
      1: Can the ML10 v2 handle all its PCIe lanes being saturated (PCIe 3.0: 8x for the Quadro K2000, 8x for the LSI/Cisco RAID 9271CV-8i; PCIe 2.0: 1x for the HP 332T and 1x for the NC112T)?
      2: Will any of this config cause a bottleneck for any other hardware?
      3: Is this a feasible configuration? (The resources are there to be used; it will not push other VMs' resources.)