I've just gotten my old ProLiant server out of mothballs recently to use as a backup controller for my photographic archive. Its specs are a standard G840 2.8GHz dual-core with 16 GB of RAM installed and a standard 350-watt PSU. I'm running WSE 2019 with 4x1TB HDDs installed in the 4-bay drive cage. Three drives are formatted as RAID 0 in WSE (not the embedded HP RAID) and one drive acts as the system/boot drive.
The server is surprisingly responsive given its age, with LAN transfer speeds between 84-94 MB/s, so it has pretty much taken center stage in my backup workflow between my external drives and a 4x1TB RAID 0 NAS.
However, I have been wanting to attach a 256GB SSD boot drive for my OS so that I can utilize the full capacity of the 4-bay cage. It seems that the BIOS will not let you boot any HDD/SSD off the spare SATA connection on my motherboard. I'm hoping to eventually attach a PCIe USB 3 card for that purpose, but in the meantime I would like to boot from a separate drive other than the four already installed in the cage.
I gather that you can install a PCIe SSD adapter card in this machine, but it's not clear to me what kind of card and drive would be suitable for this server. I realize that my motherboard is PCIe gen 2, which will incur diminished speed compared to the latest gen 3 boards. But should I be using an NVMe or a SATA M.2 drive? Or does it even matter? Also, will the BIOS allow me to boot from this drive?
Thanks for taking the time.
New to this forum.
I have a Gen8 MicroServer running at home, where I have installed an SSD in the ODD SATA slot. On the SSD I have installed Ubuntu Server 18.04.
I then created a logical volume with the RAID controller and can boot to the SSD. So far so good.
My problem emerges when I reboot the system. The logical volume disappears and it gives the error: "boot logical drive is configured but is missing or offline".
I then have to go into the configuration utility and recreate the logical volume. Kind of annoying when trying to run a headless setup.
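One way to narrow this down is to check what the controller itself reports from inside the OS after a reboot. A sketch, assuming the HPE Smart Storage CLI is installed (on older software stacks the tool is called `hpssacli` rather than `ssacli`, and the slot number below is an assumption to replace with what your box reports):

```shell
# Show the controller's overall view of its configured arrays and logical drives.
ssacli ctrl all show config

# Show the status of every logical drive on the controller in slot 0
# (substitute the slot number reported by "ctrl all show").
ssacli ctrl slot=0 ld all show status

# List the physical drives, to confirm the SSD itself is still being detected.
ssacli ctrl slot=0 pd all show
```

Comparing this output before and after a reboot should show whether the logical drive definition is actually being lost or the drive is merely coming up offline, which are quite different failure modes.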
I hope this is the right place for this topic.
So a few days ago, I moved my MicroServer Gen8 to a friend's place and reset it.
When we set it up with new disks, the RAID creation in OpenMediaVault (Debian 9.6) failed.
`smartctl` self-tests passed just fine, though, and the vendor tools also didn't report any drive errors.
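For reference, drive health checks of this kind are typically run along these lines (the device name is an example, not necessarily the one used here):

```shell
# Kick off a long (full-surface) SMART self-test on the disk.
smartctl -t long /dev/sda

# Once the test has had time to complete, check the overall health verdict...
smartctl -H /dev/sda

# ...and dump the full attribute and error logs, looking for things like
# reallocated sectors or UDMA CRC errors (CRC errors often point at cabling).
smartctl -a /dev/sda
```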
- I start the RAID creation via `mdadm`, and after a while (at random blocks and times) it fails and the second disk drops out (e.g. sda and sdb are building the array, then sdb gets "kicked out" and is re-detected as e.g. sdd, sometimes even reporting only a few GB of capacity instead of 3TB).
I tried attaching the second disk on ata2 and ata3, but it keeps failing. The one on ata1 works just fine though.
I tried different disks and vendors, but get the same results.
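For context, the create/monitor/tear-down cycle described above is essentially the standard `mdadm` workflow; a sketch, assuming a two-disk mirror on /dev/sda and /dev/sdb (the RAID level and device names are examples, not necessarily the exact ones used):

```shell
# Create the array across the two whole disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Watch the initial sync progress; this is where the second disk drops out.
watch cat /proc/mdstat

# After a failure, inspect the array and per-member state.
mdadm --detail /dev/md0

# Tear the array down and wipe member metadata before re-creating it.
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda /dev/sdb
```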
There are some errors in `dmesg`, but I don't know what they mean. I pasted them here: https://pastebin.com/raw/XASQJ6cy
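To pull just the relevant lines out of `dmesg` for the suspect ports, filters like these can help (the port and device names are the ones mentioned in this report):

```shell
# Show only kernel messages for the suspect SATA ports, with readable timestamps.
dmesg -T | grep -iE 'ata[23]'

# Widen the net to include the block-device side and common failure keywords.
dmesg -T | grep -iE 'ata[23]|sd[a-d]|error|exception|reset'
```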
What I did here: I stopped the RAID in order to re-create it, but then didn't touch the system for a while, and the error still came up.
Can someone help me with this? (As a wild guess, I'd say something's wrong with the SATA connections, at least on ata2 and ata3.)
Thanks in advance
Now I'll share my extreme mod for your reference.