Looking for a quick answer here from someone in the know...
I upgraded some of the internals of my Gen8 server and, at the same time, did a BIOS reset because I had forgotten the password I set years ago. On booting back up, the upgraded hardware was detected fine, but I received Error 1784 - Drive Array - Logical Drive Failure. I believe this is due to a settings change from the BIOS reset and not anything hardware related. I have gone into IP/SSA and can see the array needs to be rebuilt. I had each drive configured as its own single-disk RAID0. SSA warns that all data will be lost if I reconfigure, but I remember reading somewhere on here that the data will be fine if the drives are re-created as the same RAID0 layout.
I do have a backup, of course, but it's not a high-availability mirror and restoring would take some time to get things back the way I had them, so I just want to know whether it's safe to reconfigure the RAID0 without erasing the drives. I have tried pulling the drives out and connecting them to another PC, and they are fine. Can anyone tell me which options to choose to get these back to normal and clear the flashing red light?
A few questions for those who use Type-1 Hypervisors on their Gen8 MicroServers...
I am looking to try ESXi or Proxmox and have been reading a lot of the threads on here.
Hopefully you guys can help with some harder to find answers I have been seeking.
1) Which would be the better way to set up Proxmox:
a) Hypervisor on Internal MicroSD, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
b) Hypervisor on Internal USB, VMs installed on SSD in ODD Port, Data on 4x HDDs in bays.
c) Hypervisor and VMs both installed on same SSD (partitioned?) in ODD Port, Data on 4x HDDs in bays.
d) Hypervisor on SSD using a USB-to-SATA cable on Internal USB, VMs installed on separate SSD in ODD Port, Data on 4x HDDs in bays.
2) Would a 128GB SSD be a ‘waste’ for installing a Hypervisor on? How much space is typically needed?
3) How many VMs have you guys run on a Gen8 comfortably without it being sluggish?
4) Everyone seems to be going RAID crazy these days. Is there any reason to use it if high availability isn't really necessary and a good backup plan is in place? What is wrong with separate disks (or single-disk RAID0s)?
5) Does using a Type-1 Hypervisor have any effect on the internal fan speed/noise? Is it possible to have 3-5 VMs running and still have the fan speed at ~8%, as it was when I was running 2 nested (Type-2) VMs?
Sorry in advance if some of these questions are silly, common knowledge, or “depends on what you are doing in the VMs!” 😆
Thanks in advance to all those that help!
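On question 2 above, one hedged data point: you can measure how much the hypervisor OS itself actually uses on any existing install, and a bare Proxmox root is typically only a few GB, so a 128GB SSD is far more than the OS alone needs (though the spare space is handy for ISOs and local backups). A minimal sketch in plain shell; the `/var/lib/vz` path is Proxmox's default local storage directory and simply won't exist on other systems:

```shell
#!/bin/sh
# Sketch: measure how much space the hypervisor root install actually occupies,
# to judge whether a 128GB SSD would be a 'waste' for the OS alone.
df -h /                      # used/available space on the root filesystem
du -sh /var/lib/vz 2>/dev/null \
    || echo "/var/lib/vz not present (not a Proxmox host)"   # Proxmox default local storage
```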
I recently bought a used E3-1240 v2 but can't get it to work properly in my MicroServer Gen8.
It POSTs fine, but 9/10 times fails to boot FreeNAS (returning "panic: unrecoverable machine check exception" followed by some seemingly CPU-related error messages), and the one time it managed to boot it crashed from the slightest load.
When I run Memtest with the 1240 v2 in it, it constantly reboots at test #6 "Block move".
With a 1265L v2 installed, it runs flawlessly - FreeNAS boots without an issue and there are no errors in Memtest.
I contacted the seller believing the CPU was faulty; he told me that he had the same issue getting it to work in his MicroServer Gen8, and that the solution was to use another PSU.
I haven't gotten around to trying another PSU yet, but I find it a bit weird that the PSU would be the issue, because surely 13A @ 12V (156W) should be enough to power a 69W TDP processor.
I even tried disconnecting all the drives to leave as much power to the CPU as possible, to no avail...
And on top of that I've seen loads of posts from people having upgraded to 1230s, 1240s and even 1270s with no mentions of upgrading the PSU.
I'm running the latest BIOS (2018.05.21) along with upgraded cooling (Akasa K25), so those shouldn't be the reason either.
Has anyone ever heard of this before?
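One hedged way to gather more evidence before buying a different PSU: capture the machine-check (MCA) records the OS logs around the panic, and compare the failing bank/address between the two CPUs. If the same fault is reported every time, that points at the CPU itself; if it varies, power delivery becomes more plausible. On FreeNAS (FreeBSD), the records land in the kernel log:

```shell
#!/bin/sh
# Sketch: pull any machine-check (MCA) records out of the kernel log so the
# failing bank/address can be compared between the 1240 v2 and the 1265L v2.
dmesg 2>/dev/null | grep -iE 'mca|machine check' \
    || echo "no MCA records found (or dmesg not readable)"
```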
I have an HP MicroServer Gen8 upgraded to a Xeon E3-1265 v2, with 16 GB RAM, a 256 GB SSD on the ODD port, and 2x 3 TB WD Red disks.
On the SSD I have ESXi 6.5 U2 (HP custom image) installed (driver downgraded to 5.5.088 from 5.5.102), and the WD disks are in RAID 1 as the datastore for VMware-related files.
I just bought an HP P410 RAID controller with 1 GB FBWC and battery.
I have read a lot about this subject, and I know it can't be monitored by iLO 4, but I would like someone to confirm my thinking about how to do this migration:
1) Plug the P410 into the mobo (PCIe slot)
2) Connect the cable used for the WD disks to the lower port of the P410
3) Connect the ODD cable used for the SSD to the upper port of the P410
4) Power on the MicroServer, enter SSA, and check whether the arrays created on the B120i are visible to the P410
5) If everything looks OK, restart
- Do I have to do anything so that the server boots from the SSD with the ESXi installation?
- Will the hpvsa 5.5.088 driver see the datastores on the P410 without any action on my side?
- Must I disable the B120i in the BIOS, or can it co-exist with the P410?
Any help would be very appreciated.
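Once the cables are moved, a quick check from the ESXi shell (SSH or DCUI) can confirm steps 4-5 without guessing: list the adapters, rescan, and see whether the existing VMFS datastores reappear. One caveat worth checking, as far as I know: the P410 is a hardware Smart Array driven by the hpsa/cciss driver rather than hpvsa (which covers the B120i/B320i dynamic arrays), so the first thing to verify is that the controller shows up as a vmhba at all. These are standard esxcli commands; the guard simply skips on a non-ESXi machine:

```shell
#!/bin/sh
# Hedged sketch: after moving the cables to the P410, verify from the ESXi shell
# that the controller and the old datastores are visible again.
if ! command -v esxcli >/dev/null 2>&1; then
    echo "esxcli not found - run this on the ESXi host"
    exit 0
fi
esxcli storage core adapter list           # the P410 should show up as a new vmhba
esxcli storage core adapter rescan --all   # rescan so VMFS volumes are re-detected
esxcli storage vmfs extent list            # existing datastores should list their devices here
```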
I have a "new" HP MicroServer Gen8 with the HP Smart Array B120i RAID controller, CPU G1610T, factory date circa 2016(?).
I can't use FreeNAS with software RAID (as on my old BlackCube) because my Gen8 collects my HDDs into a RAID volume in every BIOS/setup configuration, including AHCI.
When I try to use F5 to disable RAID, the HP Smart Storage Administrator shows me the message:
"No licenses found on the selected controller."
HP Smart Storage Administrator shows me "0 physical drives" (3x WD 4TB + 1x WD 3TB, all 3.5" HDDs with NASware 3.0),
and in FreeNAS I can see only one "disk" of 7.7 TB... :-(
I used a directly connected monitor and keyboard to eliminate any iLO trouble.
The box is out of HP warranty.
q1: Does anyone have an idea how to restore the RAID licence?
q2: How can the Smart RAID function be eliminated?
q3: Does any HP or community firmware exist to put the B120i into AHCI-only mode without RAID functions?
Thanks for every useful piece of advice. Josef
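On q2/q3: what usually matters here is the SATA controller mode selected in RBSU (F9), not firmware. A hedged way to verify from the FreeNAS (FreeBSD) side whether the controller is handing the disks over individually, rather than as one B120i logical volume, is to list the CAM devices. `camcontrol` and `geom` are standard FreeBSD tools; the guard just skips on non-FreeBSD systems:

```shell
#!/bin/sh
# Sketch: check whether FreeNAS sees the four drives individually (plain AHCI mode)
# or as a single logical B120i volume.
if ! command -v camcontrol >/dev/null 2>&1; then
    echo "camcontrol not found - run this on the FreeNAS (FreeBSD) box"
    exit 0
fi
camcontrol devlist                          # each WD drive should appear as its own ada device
geom disk list | grep -E 'Name|Mediasize'   # per-disk sizes should match 3x 4TB + 1x 3TB
```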