I am hoping that you can help with an issue I am having with my new HP Microserver Gen 10. Apologies if I am posting in the wrong place.
The microserver's CPU is constantly running at 100%. It occasionally dips but is normally maxed out. I built the server from new, initially using Samsung SSD drives in a RAID 1 configuration. I have also tried the following:
1. Single SSD with windows 10 Pro
2. Single SSD with Server 2016
3. Raid 1 with Server 2016.
4. Single SATA drive with Windows 10 Pro.
I have updated the firmware. I have downloaded all the latest drivers from HP's portal.
I have tried to get support from HP, but they are providing one excuse after another to explain why they can't support the server. The server is currently about a month old.
HP Support's responses:
1. Windows 10 isn't supported (despite HP having Windows 10 drivers for this server on their portal).
2. SSD drives are not supported.
3. SATA drives are not supported unless they have HP branding (the Gen 10 is being sold on Amazon with Western Digital drives included).
We have tried killing off several services in Windows 10 as per articles we found online. We have set the virtual memory to refresh on reboot.
None of the above have achieved anything.
As a result of the CPU constantly running at 100%, the system is unbearably slow and can't be used to run even the most basic of tasks.
It looks like we have been left holding an expensive tin box.
I have attached a screen dump showing the CPU utilization.
Any help or advice at this point would be greatly appreciated.
Wonder if anyone can point me in the right direction here. I've just replaced the CPU on my Gen8 Microserver, following the video above. Upon booting the server up, there is a clicking sound coming from the mainboard, regular as clockwork, and the server isn't POSTing. Any ideas? I've swapped back to the G1610T CPU, but the issue is still the same. And just to make it interesting, I don't have access to a monitor or keyboard.
So, after several days of testing various configurations, creating custom ESXi install ISOs and numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my Microserver Gen8 with working HP Smart Array P410 health status. For those struggling to do the same, here's how. I used the original VMware ESXi 6.5U1 ISO, build 5969303, then made the following modifications:
1. Remove driver "ntg3" - if I left this in, I had a weird network issue where Port 1 or 2 would repeatedly connect/drop every few seconds. Removing it forces ESXi to use the working net-tg3 driver.
2. Remove driver "nhpsa" - this Smart Storage Array driver is what causes array health monitoring to not work. Remove it to force ESXi to use the working "hpsa" driver.
3. Add the Nov 2017 HPE vib bundles.
4. Remove hpe-smx-provider v650.01.11.00.17 - this version seems to cause the B120i or P410 to crash when querying health status.
5. Add hpe-smx-provider v600.03.11.00.9 (downloaded from HPE vibsdepot).
6. Add scsi-hpvsa v5.5.0-88 bundle (downloaded from HPE drivers page).
7. Add scsi-hpdsa v126.96.36.199 bundle (downloaded from HPE drivers page).
I did the above by getting a basic, working ESXi/VCSA installation, then creating a custom ISO in VCSA AutoDeploy and exporting it. But the same can be achieved by installing VMware's original ISO and modifying it via esxcli commands.
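For reference, here's roughly what that esxcli route looks like. This is only a sketch, assuming the host is in maintenance mode and you've already copied the HPE bundle zips to a datastore; the /vmfs/volumes/datastore1/ paths and zip filenames are hypothetical examples:

```shell
# Sketch only: run in the ESXi shell; depot paths/filenames are placeholders
esxcli system maintenanceMode set --enable true

# Remove the problem drivers so ESXi falls back to the working net-tg3 / hpsa drivers
esxcli software vib remove -n ntg3
esxcli software vib remove -n nhpsa

# Swap hpe-smx-provider for the older version that doesn't crash health queries
esxcli software vib remove -n hpe-smx-provider
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-smx-provider-600.03.11.00.9.zip

# Add the storage driver bundles from the HPE drivers page
esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpvsa-5.5.0-88.zip
esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpdsa.zip

# Reboot for the driver changes to take effect
reboot
```

Removals and installs can also be batched into a single `esxcli software vib` transaction, but doing them one at a time makes it easier to see which change breaks what.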
I have an SSD connected to the SATA port, onto which I am installing ESXi. The four front drive bays are connected to the P410.
1. Configure the Microserver Gen8 B120i to use AHCI mode - the B120i is a fake RAID card, so it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on Disk Bay 5.
2. Install my modified ESXi ISO to the SSD.
With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:
I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows as using ahci in the image above.
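In case anyone wants to reproduce the vmw_ahci test, disabling the module can be done from the ESXi shell. A sketch only; a reboot is needed before the fallback ahci driver takes over:

```shell
# Disable the native vmw_ahci module; after a reboot ESXi falls back to the legacy ahci driver
esxcli system module set --enabled=false --module=vmw_ahci

# Check the module state (vmw_ahci should show as disabled)
esxcli system module list | grep ahci
```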
If I pull out a disk to simulate a RAID failure, the next time the health status updates I can see a RAID health alert in the ESXi WebGUI:
However, I'm now stuck at the next stage: getting this storage health to pass through to VCSA. VCSA can successfully see all the other health monitors (under System Sensors in ESXi), just not storage, which is the most important one.
Does anyone know how I can get the storage health working in VCSA?
I've pinged Schoon about some questions I had regarding the Gen8 upgrade but I guess a community discussion will be more fun.
The ultimate config
The final config I'm planning to have is the following:
- CPU: E3-1265L, found on eBay at around $90 (no cheaper ones!)
- RAM: 16GB, which I'm still looking for (and want a cheap one, c'mon!)
- Main drive: two SSDs in RAID 1:
  - Samsung MZ-75E500B/EU 500GB SSD
  - Crucial CT500MX500SSD1(Z) 500GB SSD
- Storage drive: four HDDs in RAID 10 (I'll probably get some large Seagates)
Basically, what I would like the Gen8 to be is a container host as well as a data storage unit. The OS and the VMs will mainly live on the SSDs; if larger storage is needed, it would be nice to link it to the 4-disk array. I would like to RAID the thing using mdadm and virtualise using KVM, so I would most likely use FreeNAS, Debian or Ubuntu as the OS. Any thoughts about this? Suggestions?
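If it helps, the mdadm side of that plan would look something like this. Just a sketch, assuming a Debian/Ubuntu host; the /dev/sdX names are placeholders for whatever the SSDs and HDDs actually enumerate as:

```shell
# RAID 1 across the two 500GB SSDs (device names are placeholders)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# RAID 10 across the four storage HDDs
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Persist the arrays so they assemble at boot (Debian/Ubuntu config path)
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

# Watch the initial sync progress
cat /proc/mdstat
```

Note that FreeNAS goes its own way (ZFS rather than mdadm, and bhyve rather than KVM), so the above really only applies to the Debian/Ubuntu option.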
Getting down to the ports
After looking into it for quite a long time now, I see the Gen8 motherboard tops out at 5 disks using all the SAS/SATA ports. This won't fit my needs since I want a 6th drive. Normally that would mean adding a RAID controller, but since I'm going to RAID in software I won't need hardware RAID and can definitely go for just a SATA PCIe card with ya boy Marvell 88SE9215 instead.
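One quick sanity check once the card is in; a sketch, assuming a Debian/Ubuntu host:

```shell
# Confirm the OS sees the Marvell controller and the disks on its ports
lspci | grep -i marvell
lsblk -o NAME,SIZE,TYPE,TRAN   # the extra disks should show up with TRAN=sata
```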
What do you think about this? Am I doing good at computers so far?