RESET Forums (homeservershow.com)

Can't Expand Array on HP P222 on Gen8


ChronoSphere

Recommended Posts

Funny you say higher-end RAID controllers - doesn't HP position the P222 as its entry level?

I'm talking about low-end controllers based on chipsets like the LSI SAS2008. They're popular as HBAs because they're easily reflashed with the initiator-target (IT) firmware, but performance with the RAID firmware is pretty dire. The only cache they have is a tiny RAM chip on the back of the controller, which is needed to enable the RAID firmware. They don't support migrating between differing RAID levels.

I wonder if it would have just been faster to wipe the disks, build the array fresh as a RAID 5, then copy the data back onto the array from backup. Oh well :)

 

No this is much more fun. Let us know how this turns out. Very interesting topic.


So expanding the array to include the new disks completed successfully after more than 24 hours. These were brand new disks - I would have thought twice about it if they were older disks - I'm not sure how intensive this process is or whether borderline bad disks would have survived.

 

In any case, the array (after expanding the volume in Windows Disk Management) is showing 5.5TB capacity (which makes sense for a 1+0 setup) and I've now started the RAID level migration to RAID 5. We'll see how long this one takes too :)
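For reference, that 5.5TB figure lines up with a 4 x 3TB drive set (the drive count and sizes aren't stated in this post, so treat them as an assumption): RAID 1+0 halves the raw capacity, and Windows reports sizes in binary units. A quick back-of-envelope in Python:

```python
# Rough check of the reported RAID 1+0 capacity.
# Assumption: 4 drives of 3 TB each (drive count/size are not stated in the post).
drive_count = 4
drive_size_bytes = 3 * 10**12            # "3 TB" as marketed (decimal terabytes)

raw_bytes = drive_count * drive_size_bytes
usable_bytes = raw_bytes // 2            # RAID 1+0 mirrors everything, so half is usable

# Windows Disk Management reports binary units (TiB, labelled "TB").
print(f"{usable_bytes / 10**12:.1f} TB decimal = {usable_bytes / 2**40:.2f} TiB")
# -> 6.0 TB decimal = 5.46 TiB, which shows up as roughly 5.5TB
```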


  • 4 weeks later...

Going from 1 to 10, or 5 to 50, or 5 to 6 -- that is all completely feasible and makes sense. Adding disks to an array to extend it -- also makes sense. Converting from 10 to 5? I'm not so sure about that, and that's why I am keen to see what the OP discovers.

Definitely can be done - I transformed my P222 array from RAID5 to RAID10 about a year ago, and now due to space reasons I'm transforming it back to RAID5 until a good deal for (any number of) 4TB or 5TB drives comes up on the market to slowly upgrade the drives.

The process of transforming from 10 to 5 is slow. Much slower than 5 to 10!


We've had array sets fail in the past when disks are put under heavy I/O during RAID rebuilds/reconstruction or when adding disks - e.g. RAID 10 in a P2000 SAN: we added another two disks, and then two disks failed four hours into the rebuild. The SAN was only 6 months old.

 

In fact, we've just completed an addition to a test rig here in the lab, adding another disk and expanding the array. All was well before the disk was added, and now we've got a predicted failure alert on one of the original disks!

 

So it looks like another support call to HP to get that disk replaced.


Definitely can be done - I transformed my P222 array from RAID5 to RAID10 about a year ago, and now due to space reasons I'm transforming it back to RAID5 until a good deal for (any number of) 4TB or 5TB drives comes up on the market to slowly upgrade the drives.

The process of transforming from 10 to 5 is slow. Much slower than 5 to 10!

 

Good to know. I would only reserve something like this for truly desperate times, given 1) the risk of drive failure during the operation, and 2) the fragmentation, which has to be absolutely horrific once this is complete. Would be interesting to see if HP have published any information about how the data migration is actually performed.


Good to know. I would only reserve something like this for truly desperate times, given 1) the risk of drive failure during the operation, and 2) the fragmentation, which has to be absolutely horrific once this is complete. Would be interesting to see if HP have published any information about how the data migration is actually performed.

29 hours in and it's at 35%.

I don't understand how the logical and physical layers work with RAID controllers, but I'm assuming that the P222 has some sort of garbage collection behind the scenes to tidy up the fragmentation?

 

In any case I've now committed to it... I need to upgrade the drives at some point so I figured this would buy me time until I come across the right drives.


Good to know. I would only reserve something like this for truly desperate times, given 1) the risk of drive failure during the operation, and 2) the fragmentation, which has to be absolutely horrific once this is complete. Would be interesting to see if HP have published any information about how the data migration is actually performed.

 

Fragmentation should be fine; the migration should be done at the sector level. As said, every controller worth its salt (Dell/Avago-LSI, HP/PMC-Sierra/Adaptec, 3ware - wait, never mind, they're now Avago too) has had good setups to do this for a long time.

 

Remember that quality hardware controllers are regularly monitoring your drives for failure; they have the intelligence to ensure over time that if a drive sector is found bad, it is marked and reallocated from the spare map. At any time, HP's Smart Storage Administrator can also be used to generate a complete and extremely detailed hardware report of all drives attached to the controller - something you can do prior to the RAID migration to ensure you see nothing wrong.
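If you'd rather script that pre-migration check than click through SSA, a rough sketch like the one below should do it with HP's command-line tool. The tool name (ssacli, or hpssacli on older installs), the controller slot number, and the exact output format being parsed are all assumptions here - adjust them for your own setup.

```python
# Pre-migration drive health check via HP's Smart Storage Administrator CLI.
# Assumptions: the CLI is installed as "ssacli" and the P222 is in slot 0;
# output lines are assumed to look like:
#   physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 3 TB): OK
import subprocess

def drive_statuses(slot: int = 0, tool: str = "ssacli"):
    """Return (drive, status) pairs from 'ctrl slot=N pd all show status'."""
    out = subprocess.run(
        [tool, "ctrl", f"slot={slot}", "pd", "all", "show", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    statuses = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("physicaldrive"):
            drive, _, status = line.rpartition(":")
            statuses.append((drive.strip(), status.strip()))
    return statuses

if __name__ == "__main__":
    for drive, status in drive_statuses():
        flag = "" if status == "OK" else "  <-- check before migrating"
        print(f"{drive}: {status}{flag}")
```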

 

The one thing I wish I could do in addition to migrating RAID level would be to migrate capacity not by adding additional drives, but by taking an array, taking a drive offline, adding a larger one and rebuilding - wash, rinse, repeat until the array has all new larger drives - and then extending the RAID capacity. That isn't an option, sadly.


 

 

The one thing I wish I could do in addition to migrating RAID level would be to migrate capacity not by adding additional drives, but by taking an array, taking a drive offline, adding a larger one and rebuilding - wash, rinse, repeat until the array has all new larger drives - and then extending the RAID capacity. That isn't an option, sadly.

It would be an option - except that a single drive rebuild takes 3-4 days on a 3TB array, times four drives, plus the parity recalculation when extending the array, so you'd have heavy disk usage for two weeks.

My RAID10 to RAID5 transformation of my 4x3TB array has taken four days and now I'm extending it from 6 to 9TB.

At least the parity initialisation upon extending the array is only going to take less than 12 hours.
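As a rough back-of-envelope (treating the durations quoted in this thread as assumptions rather than measurements), the two approaches compare like this:

```python
# Back-of-envelope comparison of the two upgrade paths, using the rough
# durations quoted in this thread as assumptions (not measurements).
drives = 4
rebuild_days_per_drive = 3.5      # "3-4 days" per single-drive rebuild on a 3TB array
extend_parity_days = 0.5          # "less than 12 hours" of parity init after extending

swap_one_at_a_time = drives * rebuild_days_per_drive + extend_parity_days
migrate_then_extend = 4 + extend_parity_days   # RAID10 -> RAID5 took about 4 days

print(f"Swap drives one at a time: ~{swap_one_at_a_time:.0f} days of heavy disk I/O")
print(f"RAID10 -> RAID5, then extend: ~{migrate_then_extend:.1f} days")
# -> roughly two weeks vs. four and a half days
```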


And after we expanded our array recently, the new disks we added failed!

 

So, new disks arrive today from HP, and we've got to start all over again!

