
SDM-BETA in Gen10 MicroServer

Joe_Miner

I had already been running an SDM-BETA in the Gen10 MicroServer with a Samsung SSD mounted on it. I swapped out the Samsung for a WL500GLSA8100 10K RPM 500GB 2.5" drive, and it has been running without issue for some time now. Works well IMO.

[Attached images: DSC_0067CROP.jpg, Capture04.JPG, Capture03.JPG, Capture02.JPG, Capture05.JPG, DSC_0121CROP.jpg]

hfournier

How long do the SATA cables need to be to reach the OS SSD in the ODD bay? And is it better to use straight or angled connectors at either end?

Joe_Miner

I used standard 18" SATA III cables with straight connectors on each end.  


  • Similar Content

    • acidzero
      By acidzero
      Hello,
       
So, after several days of testing various configurations, creating custom ESXi install ISOs, and numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my MicroServer Gen8 with working HP Smart Array P410 health status. For those struggling to do the same, here's how. I started from the original VMware ESXi 6.5U1 ISO, build 5969303, then made the following modifications:
       
      • Remove driver "ntg3": if I left this in, I had a weird network issue where Port 1 or 2 would repeatedly connect/drop every few seconds. Removing it forces ESXi to use the working net-tg3 driver.
      • Remove driver "nhpsa": this Smart Storage Array driver is what causes array health monitoring to not work. Removing it forces ESXi to use the working "hpsa" driver.
      • Add the Nov 2017 HPE vib bundles.
      • Remove hpe-smx-provider v650.01.11.00.17: this version seems to cause the B120i or P410 to crash when querying health status.
      • Add hpe-smx-provider v600.03.11.00.9 (downloaded from HPE vibsdepot).
      • Add scsi-hpvsa v5.5.0-88 bundle (downloaded from HPE drivers page).
      • Add scsi-hpdsa v5.5.0.54 bundle (downloaded from HPE drivers page).
       
      I did the above by getting a basic working ESXi/VCSA installation, creating a custom ISO in VCSA AutoDeploy, and exporting it. The same can be achieved by installing VMware's original ISO and modifying it via the esxcli command.
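For the esxcli route, the steps above translate into something like the following, run from the ESXi shell. This is a sketch, not a tested transcript: the datastore path and bundle filenames are placeholders, so substitute the actual names of the zips you downloaded from HPE.

```shell
# Allow partner-signed VIBs to be installed
esxcli software acceptance set --level=PartnerSupported

# Remove the problematic inbox drivers and SMX provider
esxcli software vib remove -n ntg3
esxcli software vib remove -n nhpsa
esxcli software vib remove -n hpe-smx-provider

# Install the HPE bundles (paths and filenames are placeholders)
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-nov2017-bundle.zip
esxcli software vib install -d /vmfs/volumes/datastore1/hpe-smx-provider-600.03.11.00.9.zip
esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpvsa-5.5.0-88.zip
esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpdsa-5.5.0.54.zip

# Reboot so the driver changes take effect
reboot
```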
       
      I have a SSD connected to the SATA port, onto which I am installing ESXi. The 4 front drive bays are connected to the P410.
       
      • Configure the MicroServer Gen8 B120i to use AHCI mode. The B120i is a fake-RAID card, so it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on disk bay 5.
      • Install my modified ESXi ISO to the SSD.
      With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:
       

      I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows it is using ahci in the above image.

       
      If I pull out a disk to test a raid failure, when the health status next updates I can see a raid health alert in the ESXi WebGUI:
       

       
      However I'm now stuck at the next stage - getting this storage health to pass through to VCSA. VCSA can successfully see all other health monitors (under System Sensors in ESXi) just not the storage, which is the most important.
       

       
      Does anyone know how I can get the storage health working in VCSA?
       
      Thanks.
    • Anatoli
      By Anatoli
      I've pinged Schoon about some questions I had regarding the Gen8 upgrade but I guess a community discussion will be more fun.
       
      The ultimate config
      The final config I'm planning to have is the following:
      • E3-1265L CPU, found on eBay at around $90 (no cheaper ones)
      • 16GB RAM, which I'm still looking for (and want a cheap one, c'mon!)
      • Main drive made of two SSDs in RAID 1:
        • Samsung MZ-75E500B/EU 500GB SSD
        • Crucial CT500MX500SSD1(Z) 500GB SSD
      • Storage drive made of four HDDs in RAID 10 (I'll probably get some large Seagates)
      Basically, what I would like from the Gen8 is to be a container host as well as a data storage unit. The OS and the VMs will live mainly on the SSDs; if larger storage is needed, it could be linked to the 4-disk array. I would like to RAID the thing using mdadm and virtualize using KVM, so I would more likely use FreeNAS, Debian, or Ubuntu as the OS. Any thoughts about this? Suggestions?
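If it helps the discussion, the mdadm side of that plan is only a few commands. A minimal sketch, assuming a Debian/Ubuntu-style system; the device names (/dev/sda through /dev/sdf) are assumptions and must be adjusted to how the kernel actually enumerates the drives:

```shell
# RAID 1 mirror across the two SSDs (assumed /dev/sda and /dev/sdb)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# RAID 10 across the four front-bay HDDs (assumed /dev/sdc../dev/sdf)
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Persist the array layout so it assembles at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

Note that FreeNAS would take a different path entirely (ZFS rather than mdadm), so this only applies to the Debian/Ubuntu option.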
       
      Getting down to the ports
      After looking into it for quite a long time now, I see the Gen8 motherboard maxes out at 5 disks using all the SAS/SATA ports. This won't fit my needs since I need a 6th drive. I could get a RAID controller, but since I'm going to RAID in software I don't need that, and can go for just a SATA PCI-e card with ya boy Marvell 88SE9215 instead.
       
      What do you think about this? Am I doing good at computers so far?
    • npapanik
      By npapanik
      As in the screenshot, the network cards of the Gen10 are shown in the removable-devices list. As you can see, I have the latest drivers and firmware for the two cards. This is a very annoying issue since you can accidentally lose network connectivity while ejecting an external USB device.
       
      FYI, the four disks installed in the front bays (Marvell controller) had the same issue, but I fixed it by following Microsoft's article:
      https://support.microsoft.com/en-us/help/3083627/internal-sata-drives-show-up-as-removeable-media  
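As I remember it, the gist of that Microsoft fix is marking the AHCI ports as internal in the registry so their disks stop showing as removable. This is a sketch from memory, run from an elevated command prompt; the value name and port list are assumptions, so verify them against the linked article before applying:

```shell
:: Mark AHCI ports 0-3 as internal under the storahci service
:: (value name and port numbers assumed; check the linked KB article)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\storahci\Parameters\Device" /v TreatAsInternalPort /t REG_MULTI_SZ /d "0\01\02\03" /f
:: Reboot for the change to take effect
```

Whether the same trick can be applied to the Gen10's NIC entries is exactly the open question here, since they hang off a different driver.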

    • npapanik
      By npapanik
      As in subject, Service Pack for ProLiant 2018.03.0 (SPP) does not support Microserver Gen 10?
       
      Any explanation?
       
      TIA
