I have a problem rebuilding a disk for the C:\ drive of a Windows Server 2016 Essentials machine. I replaced the disk with one of the same capacity (160 GB), running in RAID 1.
I restarted the server and pressed F2 to start the rebuild, but the server won't rebuild the disk, and I keep getting the same message:
776 Logical Drive 1 is queued for rebuilding.
Where can I find the log files for troubleshooting, and what else can I do about this problem?
Thanks in advance 😉
I have an HP MicroServer Gen8 upgraded to a Xeon E3-1265 v2, 16 GB RAM, a 256 GB SSD on the ODD port, and 2 x 3 TB WD Red disks.
On the SSD I have ESXi 6.5 U2 (the HP custom image) installed, with the hpvsa driver downgraded from 5.5.0-102 to 5.5.0-88, and the WD disks are in RAID 1 as the datastore for VMware-related files.
I just bought an HP P410 RAID controller with 1 GB FBWC and battery.
I have read a lot about this subject, and I know the P410 can't be monitored by iLO 4, but I would appreciate it if someone could confirm my thoughts on how to do this migration:
1) Plug the P410 into the motherboard
2) Connect the cable currently used for the WD disks to the lower port of the P410
3) Connect the ODD cable used for the SSD to the upper port of the P410
4) Power on the MicroServer, enter SSA and check whether the arrays created on the B120i are visible to the P410
5) If everything is OK, restart
- Do I have to do anything so that the server boots from the SSD with the ESXi installation?
- Will the hpvsa 5.5.0-88 driver see the datastores on the P410 without any action on my side?
- Must I disable the B120i in the BIOS, or can it coexist with the P410?
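Besides checking in SSA at step 4, one way to sanity-check from the ESXi shell afterwards (assuming SSH is enabled; this is a diagnostic sketch, not part of any official procedure) whether the controller and the existing VMFS volumes came across:

```shell
# List the HBAs ESXi detected; after the swap the P410 should
# appear here (typically served by the hpsa driver rather than
# the B120i's hpvsa - an assumption to verify on your host).
esxcli storage core adapter list

# List VMFS extents; the RAID 1 datastore should still be
# mounted if the P410 picked up the logical drive.
esxcli storage vmfs extent list

# Confirm which HP storage driver modules are actually loaded.
vmkload_mod -l | grep -i hp
```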
Any help would be much appreciated.
Now I'll share my extreme mod for your reference.
I have successfully upgraded from the G1610T to an E3-1265L v2, but the memory speed is still stuck at 1333 MHz instead of 1600 MHz.
I have 2 x 4 GB DIMMs, one being the original HPE DIMM and the other a Crucial 4 GB 240-pin DDR3-1600 MT/s PC3-12800 CL11 unbuffered ECC UDIMM memory module (https://www.amazon.co.uk/dp/B00IW4M9PK/ref=pe_385721_37986871_TE_item).
The iLO memory summary always shows:
and this is while I am on the latest firmware...
The BIOS options available are: 1066, 1333, Auto.
I have also tried setting the power profile to "Maximum Performance", but the memory speed still remains at 1333 MHz.
Any ideas on how to set the memory operating frequency to 1600 MHz?
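For reference, the per-DIMM rated speed versus the speed the BIOS actually trained can be read from SMBIOS, e.g. from a Linux live system with root access (a diagnostic sketch; on ESXi, `smbiosDump` exposes similar data):

```shell
# "Speed" is the DIMM's rated speed; "Configured Clock Speed"
# (or "Configured Memory Speed" in newer dmidecode versions)
# is what the BIOS actually runs it at.
dmidecode -t memory | grep -Ei 'locator|speed'
```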
So, after several days of testing various configurations, creating custom ESXi install ISOs and numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my MicroServer Gen8 with a working HP Smart Array P410 health status. For those struggling to do the same, here's how. I used the original VMware ESXi 6.5U1 ISO, build 5969303, then made the following modifications:
- Remove the "ntg3" driver - if I left this in, I had a weird network issue where port 1 or 2 would repeatedly connect/drop every few seconds. Removing it forces ESXi to use the working net-tg3 driver.
- Remove the "nhpsa" driver - this Smart Storage Array driver is what causes array health monitoring to not work. Removing it forces ESXi to use the working "hpsa" driver.
- Add the Nov 2017 HPE VIB bundles.
- Remove hpe-smx-provider v650.01.11.00.17 - this version seems to cause the B120i or P410 to crash when querying health status.
- Add hpe-smx-provider v600.03.11.00.9 (downloaded from the HPE vibsdepot).
- Add the scsi-hpvsa v5.5.0-88 bundle (downloaded from the HPE drivers page).
- Add the scsi-hpdsa v22.214.171.124 bundle (downloaded from the HPE drivers page).
I did the above by getting a basic working ESXi/VCSA installation, then creating a custom ISO in VCSA Auto Deploy and exporting it. The same can be achieved by installing VMware's original ISO and modifying it via esxcli commands.
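A rough sketch of the esxcli route, run on the freshly installed host over SSH (the depot .zip paths below are placeholders for whatever HPE bundles you downloaded; check exact VIB names with `esxcli software vib list` first):

```shell
# Remove the drivers/provider that break health monitoring.
esxcli software vib remove -n ntg3
esxcli software vib remove -n nhpsa
esxcli software vib remove -n hpe-smx-provider

# Install the replacement bundles (filenames are placeholders).
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-vib-bundle.zip
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-smx-provider-600.03.11.00.9.zip
esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpvsa-5.5.0-88.zip

# Reboot for the driver changes to take effect.
reboot
```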
I have an SSD connected to the SATA port, onto which I am installing ESXi. The four front drive bays are connected to the P410.
Configure the MicroServer Gen8 B120i to use AHCI mode - the B120i is a fake RAID card, so it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on disk bay 5. Then install my modified ESXi ISO to the SSD.
With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:
I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows as using ahci in the image above.
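In case anyone wants to reproduce that test, disabling vmw_ahci is a single module toggle followed by a reboot:

```shell
# Disable the native AHCI driver so ESXi falls back to the
# legacy ahci driver; revert with --enabled=true.
esxcli system module set --enabled=false --module=vmw_ahci
reboot
```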
If I pull out a disk to simulate a RAID failure, the next time the health status updates I can see a RAID health alert in the ESXi web GUI:
However, I'm now stuck at the next stage: getting this storage health to pass through to VCSA. VCSA can successfully see all the other health monitors (under System Sensors in ESXi), just not storage, which is the most important one.
Does anyone know how I can get the storage health working in VCSA?
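For anyone digging into the same issue: vCenter reads these sensors from the host's CIM/WBEM service, so a first check is whether that service is enabled and the sfcbd broker is running (a diagnostic sketch, not a fix):

```shell
# Check whether the CIM/WBEM service is enabled on the host.
esxcli system wbem get

# Enable it if needed.
esxcli system wbem set --enabled true

# Check/restart the CIM broker that serves the health providers.
/etc/init.d/sfcbd-watchdog status
/etc/init.d/sfcbd-watchdog restart
```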