
Upgrading Existing Setup - OS on SSD in ODD Bay


CinatTuhYeliah

Hi all,

 

I'm sure this question has been asked a hundred times before me. I've been searching and installing for days and days to no avail, and I think it's time to ask.

 

I've got myself an HP G1610T which has all SATA bays populated - 4x 4TB drives.

When the system was originally set up, I had FreeNAS installed to USB/SD card and the drives were configured with ZFS. Everything works (well, worked) a treat.

My needs changed, so I set about switching to Ubuntu Server on the USB/SD card. Ubuntu has ZFS support available through apt, which picked up my NAS drives and datasets no problem. This worked fine until the swap partition on the USB/SD failed for whatever reason, and all attempts to recover the SD card have failed so far (I've given up hope and no longer care for it).

 

Now I'm trying to boot Ubuntu Server from an SSD plugged into the ODD SATA port. I've read conflicting statements - it can't be done / it can be done. I've read you need to chainload the operating system with GRUB on a USB stick to boot Ubuntu. The list goes on. I can successfully install Ubuntu Server to USB with no problem, boot up, and do what I want, but the installer doesn't seem to set up the Logical Volumes and Volume Groups properly when installing to the SSD, and GRUB fails to boot when chainloaded.
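 
For reference, my understanding of the GRUB-on-USB approach is that it doesn't hand off to the SSD's bootloader at all - GRUB on the USB stick loads the kernel straight from the SSD. A minimal sketch of such a grub.cfg on the USB stick (hd1, gpt2 and /dev/sda2 are my guesses at how the SSD and its root partition enumerate - check with ls at the GRUB prompt and lsblk in Linux, and use the /dev/mapper/... path instead if root is on LVM):

    set timeout=1
    menuentry "Ubuntu Server on SSD" {
        # (hd1,gpt2) = second BIOS disk, second GPT partition - adjust to your layout
        set root=(hd1,gpt2)
        linux /vmlinuz root=/dev/sda2 ro
        initrd /initrd.img
    }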

 

Does anyone have a relatively simple explanation of how I can go about booting my OS from the SSD? I'd prefer to keep all my data intact where possible.

 

Assistance would be greatly appreciated! 

 

Cheers,

/J




  • Similar Content

    • ICYDOCK_Chris
      By ICYDOCK_Chris
      ICY DOCK is the leading expert in data storage enclosures and accessories.
       

       
      Introducing ToughArmor
      ToughArmor is ICY DOCK’s rugged enterprise-grade line of 2.5” SSD and HDD enclosures, utilizing the standard external 5.25” bay, external 3.5” bay (floppy bay), and the slim optical bay (ODD bay). All ToughArmor models feature ruggedized full-metal enclosures and trays to keep your sensitive data protected, as well as meeting many flammability requirements. The line offers many high-density storage options, supporting as many as 16x 2.5” drives, or as few as one. Models are available to support SATA, SAS, and now U.2 NVMe drives, giving you flexibility in choosing drives that work best for you. The strength and build quality of all of our products is backed by a full 3-year warranty against all defects. ToughArmor is used and approved by Tier 1 companies such as Hewlett Packard (HP), General Electric, and NASA, as well as the US Armed Forces. For more information on our ToughArmor line, read our ToughArmor documentation here. Links to all of the products discussed here can be found in the documentation. All ToughArmor products can be viewed here.
       
      ToughArmor for SATA 3.5” / 5.25” Bays
      For SATA and SAS drives installing into 3.5” and 5.25” bays, we have a large number of options available. In the 3.5” bay, there are models that support one to three drives, some with features such as key-lock trays and hardware RAID capabilities. The larger 5.25” bay supports between four and eight drives in a single bay, and up to 16 when using two bays. These denser options offer cooling fans, to keep the large number of drives cool under heavy load. These models have a wide array of uses. The MB991U3-1SB is our portable ToughArmor unit you can take anywhere, and works over USB. The MB992SKR-B is a 2-bay model with a hardware RAID chip, with modes for RAID 0, RAID 1, BIG, and JBOD.
       
      There are also our more traditional drive carriers that install into a single 5.25” bay. While these models don’t have USB support or a RAID chip, they still have direct SATA connections, the full-metal enclosure, and the 3-year warranty.  The 4-bay (MB994SP-4S), 6-bay (MB996SP-6SB), and 8-bay (MB998SP-B) models are perfect for any general applications that require hot-swappable SATA hard drives and SSDs in a dense storage enclosure.
       

       
      ToughArmor for NVMe
      Recently, ICY DOCK has released the first-ever hot-swap cages for U.2 NVMe drives. U.2 drives use the standard 2.5” size familiar from SATA SSDs, but utilize the NVMe specification, allowing for transfer rates of up to 32 Gb/s. ICY DOCK U.2 NVMe cages come in one- and four-bay models, and utilize a single Mini-SAS HD connection for each drive. The single-bay model (MB601VK-B) fits in a single 3.5” bay, great for space-limited tasks that require only a single drive. Small-form-factor systems, DVR systems, and photo/video editing systems can benefit from high-performance storage in a small space. If you need more drives, the 4-bay model (MB699VP-B) is the one for you, and it even works great in RAID setups. These are used in datacenters around the world that need dense NVMe-based storage. Both of these models use a Mini-SAS HD port/cable for each drive, so make sure to prepare your system with enough Mini-SAS HD ports.
       

       
      ToughArmor for Optical Drive Bays
      ICY DOCK also has several drive cages that fit into slim (12.7mm) and ultra-slim (9.5mm) optical drive bays. These can serve to replace existing drive readers in laptops and desktop systems, and can also be paired with several of our 5.25” bay brackets. Perfect for space-critical applications that require drives to be installed in the smallest possible space. Common uses are in Small-Form-Factor PCs, media PCs, Home Theater PCs (HTPC), and security footage systems. In industrial uses, these are often found in 1U and 2U rack-mounted systems with limited space availability, and portable workstations/laptops.
       

       
      If you have any questions about the models mentioned here, or anything else, send us an email at tech@icydock.com. We offer first-class customer support for all our products, covering pre-purchase info, product selection help, installation walkthroughs, and issue troubleshooting. In addition to email, we offer phone and live web-chat customer support, which can be found here. Our knowledgeable support technicians are available Monday-Friday from 10:00am-5:00pm PST.
       
       
    • acidzero
      By acidzero
      Hello,
       
      So, after several days of testing various configurations, creating custom ESXi install ISOs, and numerous reinstalls, I've managed to get ESXi 6.5U1 installed on my Microserver Gen8 with working HP Smart Array P410 health status reporting. For those struggling to do the same, here's how. I used the original VMware ESXi 6.5U1 ISO, build 5969303, then made the following modifications:
       
      Remove driver "ntg3" - if left in, I had a weird network issue where Port 1 or 2 would repeatedly connect/drop every few seconds; removing it forces ESXi to use the working net-tg3 driver
      Remove driver "nhpsa" - this Smart Storage Array driver is what causes array health monitoring to not work; remove it to force ESXi to use the working "hpsa" driver
      Add the Nov 2017 HPE vib bundles
      Remove hpe-smx-provider v650.01.11.00.17 - this version seems to cause the B120i or P410 to crash when querying health status
      Add hpe-smx-provider v600.03.11.00.9 (downloaded from HPE vibsdepot)
      Add scsi-hpvsa v5.5.0-88 bundle (downloaded from HPE drivers page)
      Add scsi-hpdsa v5.5.0.54 bundle (downloaded from HPE drivers page)
       
      I did the above by getting a basic/working ESXi/VCSA installation and then creating a custom ISO in VCSA AutoDeploy and exporting it. But the same can be achieved by installing VMware's original ISO and modifying it via esxcli.
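       
      Roughly, the esxcli route looks like this (a sketch - the vib names and depot paths below are examples/assumptions, so check yours with "esxcli software vib list" and point at wherever you uploaded the bundle zips):
       
          # remove the problematic inbox drivers so ESXi falls back to the working ones
          esxcli software vib remove -n ntg3
          esxcli software vib remove -n nhpsa
          esxcli software vib remove -n smx-provider
          # install the replacement bundles from a datastore (paths are examples)
          esxcli software vib install -d /vmfs/volumes/datastore1/hpe-smx-provider-600.03.11.00.9.zip
          esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpvsa-5.5.0-88.zip
          esxcli software vib install -d /vmfs/volumes/datastore1/scsi-hpdsa-5.5.0.54.zip
          # reboot for the changes to take effect
          reboot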
       
      I have a SSD connected to the SATA port, onto which I am installing ESXi. The 4 front drive bays are connected to the P410.
       
      Configure the Microserver Gen8 B120i to use AHCI mode - the B120i is a fake RAID card, so it only reports physical disks to ESXi. Leaving it in RAID mode works, but I got a false health alert on Disk Bay 5
      Install my modified ESXi ISO to the SSD
      With these modifications I have a working ESXi 6.5U1 on my Gen8 with fully functioning HPE tools and array health monitoring:
       

      I also tested disabling the vmw_ahci driver, which is why the AHCI controller shows it is using ahci in the above image.
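       
      (For anyone wanting to do the same, disabling that inbox driver module is a one-liner, followed by a reboot - vmw_ahci being the stock AHCI module name in ESXi 6.5:)
       
          esxcli system module set --enabled=false --module=vmw_ahci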

       
      If I pull out a disk to simulate a RAID failure, when the health status next updates I can see a RAID health alert in the ESXi WebGUI:
       

       
      However I'm now stuck at the next stage - getting this storage health to pass through to VCSA. VCSA can successfully see all other health monitors (under System Sensors in ESXi) just not the storage, which is the most important.
       

       
      Does anyone know how I can get the storage health working in VCSA?
       
      Thanks.
    • Anatoli
      By Anatoli
      I've pinged Schoon about some questions I had regarding the Gen8 upgrade but I guess a community discussion will be more fun.
       
      The ultimate config
      The final config I'm planning to have is the following:
      E3-1265L CPU, found on eBay at around $90 (no cheaper ones)
      16GB RAM, which I'm still looking for (and want a cheap one, c'mon!)
      Main drive made of two SSDs in RAID 1: Samsung MZ-75E500B/EU 500GB SSD + Crucial CT500MX500SSD1(Z) 500GB SSD
      Storage drive made of four HDDs in RAID 10 (I'll probably get some large Seagates)
      Basically, what I would like from the Gen8 is to be a container host as well as a data storage unit. The OS and the VMs will live mainly on the SSDs; if larger storage is needed, it would be nice to link in the 4-disk array. I would like to RAID the thing using mdadm and virtualize using KVM, so I would more likely use FreeNAS, Debian or Ubuntu as the OS. Any thoughts about this? Suggestions?
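       
      For the arrays themselves, my rough plan on the mdadm side looks like this (a sketch - the /dev/sdX names are assumptions until I see how the drives actually enumerate, so lsblk first):
       
          # mirror the two SSDs for the OS/VM drive
          mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sde /dev/sdf
          # RAID 10 across the four HDDs in the front bays
          mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
          # record the arrays so they assemble at boot (Debian/Ubuntu)
          mdadm --detail --scan >> /etc/mdadm/mdadm.conf
          update-initramfs -u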
       
      Getting down to the ports
      After looking into it for quite a long time now, I see the Gen8 motherboard has a max 5-disk capacity using all its SAS/SATA ports. That won't fit my needs, since I need a 6th drive. I could get a RAID controller, but since I'm going to RAID via software I won't need one and can definitely go for just a SATA PCIe card with ya boy Marvell 88SE9215 instead.
       
      What do you think about this? Am I doing good at computers so far?
    • npapanik
      By npapanik
      As in subject, Service Pack for ProLiant 2018.03.0 (SPP) does not support Microserver Gen 10?
       
      Any explanation?
       
      TIA
    • Gulftown
      By Gulftown
      I have bought Seagate Enterprise ST8000NM055 drives.
       
      There are a few things you should know about this model - it is expensive compared to many other 8TB models, it has an extended warranty, and it has a closed drive case, so the mounting holes don't go all the way through.
       
      The problem is I couldn't drive the screws in all the way, so the drive isn't held in place, and I also had a problem when putting the drive rail into the cage, as the protruding screw heads make it too wide.
       

       
      Has anyone run into this problem?
       
      I've bought some spare screws on eBay to shorten them so they fit (I don't want to mess around with the genuine screws), but the parcel was lost.

