Hi everyone, today I wanted to install ESXi 6.7 to try out its various functions. My question is the following: I have a server where I have installed ESXi on an SSD, and I also have two other 4TB disks, so 8TB total. I would like to install the virtual machines themselves on the SSD and use the two disks for data. How can I do this? Thank you very much to anyone who can help me.
A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...
I am looking to try ESXi or Proxmox and have been reading a lot of the threads on here.
Hopefully you guys can help with some harder to find answers I have been seeking.
1) Which would be the better way to set up Proxmox:
a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
2) Would a 128GB SSD be a 'waste' for installing a hypervisor on? How much space is typically needed?
3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?
4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high availability is not that necessary and a good backup plan is in place? What is wrong with separate disks (or single-disk RAID 0s)?
5) Does using a Type-1 hypervisor have any effect on the internal fan speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed at ~8%, as it was when I was running 2 nested (Type-2) VMs?
Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆
Thanks in advance to all those that help!
I have a "new" HP MicroServer Gen8 with the HP Smart Array B120i controller, a G1610T CPU, factory date circa 2016(?).
I can't use FreeNAS with software RAID (as on my old BlackCube) because my Gen8 combines my HDDs into a RAID volume under every AHCI BIOS/Setup configuration.
When I try to use F5 to disable RAID, the HP Smart Storage Administrator app shows me the message:
"No licenses found on the selected controller."
HP Smart Storage Administrator shows me "0 physical drives" (3x WD 4TB + 1x WD 3TB, all 3.5" NASware 3.0 NAS HDDs),
and in FreeNAS I can see one 7.7TB "disk"... :-(
I used a directly connected LCD and keyboard to rule out any iLO trouble.
The box is out of HP warranty.
Q1: Do you have any idea how to restore the RAID licence?
Q2: How can I eliminate the Smart RAID function?
Q3: Does any HP or community firmware exist that converts the B120i to AHCI-only mode, without RAID functions?
Thanks for any useful advice. Josef
So, after several days of testing various configurations, creating custom ESXi install ISOs and doing numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my MicroServer Gen8 with working HP Smart Array P410 health status. For those struggling to do the same, here's how. I used the original VMware ESXi 6.5U1 ISO, build 5969303, then made the following modifications:
- Remove the "ntg3" driver - with it left in, I had a weird network issue where Port 1 or 2 would repeatedly connect/drop every few seconds. Removing it forces ESXi to use the working net-tg3 driver.
- Remove the "nhpsa" driver - this Smart Storage Array driver is what causes array health monitoring to not work. Remove it to force ESXi to use the working "hpsa" driver.
- Add the Nov 2017 HPE vib bundles.
- Remove hpe-smx-provider v650.01.11.00.17 - this version seems to cause the B120i or P410 to crash when querying health status.
- Add hpe-smx-provider v600.03.11.00.9 (downloaded from the HPE vibsdepot).
- Add the scsi-hpvsa v5.5.0-88 bundle (downloaded from the HPE drivers page).
- Add the scsi-hpdsa v126.96.36.199 bundle (downloaded from the HPE drivers page).
I did the above by getting a basic/working ESXi/VCSA installation and then creating a custom ISO in VCSA Auto Deploy and exporting it. The same can also be achieved by installing VMware's original ISO and then modifying it via esxcli commands.
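For anyone taking the esxcli route instead of Auto Deploy, the command sequence on the ESXi host might look roughly like the following. This is a sketch, not a tested script: the datastore path and the .zip filenames are placeholders for wherever you upload the HPE bundles listed above, and the host needs SSH/ESXi Shell enabled.

```shell
# Hypothetical example - adjust datastore paths and bundle filenames to match
# the HPE downloads named above before running anything.
esxcli system maintenanceMode set --enable true

# Remove the problematic inbox drivers and CIM provider
esxcli software vib remove -n ntg3
esxcli software vib remove -n nhpsa
esxcli software vib remove -n hpe-smx-provider

# Add the working HPE components from uploaded offline bundles
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-nov2017-bundle.zip
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-smx-provider-600.03.11.00.9.zip
esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpvsa-5.5.0-88.zip

esxcli system maintenanceMode set --enable false
reboot   # driver changes only take effect after a reboot
```

Note that `esxcli software vib install -d` expects an absolute path to an offline bundle zip, while `remove -n` works on the vib name, which is why the two halves look different.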
I have an SSD connected to the SATA port, onto which I am installing ESXi. The 4 front drive bays are connected to the P410.
- Configure the MicroServer Gen8 B120i to use AHCI mode - the B120i is a fake-RAID card, so in AHCI mode it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on Disk Bay 5.
- Install my modified ESXi ISO to the SSD.
With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:
I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows it is using ahci in the above image.
If I pull out a disk to test a RAID failure, I can see a RAID health alert in the ESXi web GUI the next time the health status updates:
However, I'm now stuck at the next stage - getting this storage health to pass through to VCSA. VCSA can successfully see all the other health monitors (under System Sensors in ESXi), just not storage, which is the most important one.
Does anyone know how I can get the storage health working in VCSA?
I'd like to start using vDGA on ESXi with my ML10v2.
The host is running ESXi 6.0 u3 HP Customized.
If successful/possible, I'd like to run a Windows 10 VM with a Quadro K2000
(which is on VMware's HCL for vDGA).
The VM will have up to 12GB RAM and 4 vCores, with the Quadro card in passthrough mode. This VM is meant to broadcast live video to Twitch/YouTube/FB Live using XSplit.
Live audio will be muxed in via Virtual Audio Cable and a local Icecast server's stream.
It does not matter if the remote desktop is stuttering or choppy, as long as the broadcast material is acceptable.
There will be a lot of video clips and overlays running, and I'll be running realtime 3D visualizations. The CPU and GPU would normally handle this workload easily.
An HP 332T and an NC112T will be replacing 3 of the 4 ports of the NC364T, as that card won't have enough bandwidth on a 1x PCIe link for 4x gigabit connections.
Considering the config of my ML10v2 (below in my signature) and the fact that I have an ML310e v2 front 80mm fan installed in the server, I'm just wondering:
1: Can the ML10 v2 handle all its PCIe lanes being saturated (PCIe 3.0: x8 for the Quadro K2000, x8 for the LSI/Cisco 9271CV-8i RAID card; PCIe 2.0: x1 for the HP 332T and x1 for the NC112T)?
2: Will any part of this config bottleneck any other hardware?
3: Is this a feasible configuration? (The resources are there to be used; it will not take away other VMs' resources.)