
HP Smart Array P212 vs Software RAID in Linux


abpostelnicu

Hello,

 

My setup is as follows: 

 

  • Intel Xeon 1340V2
  • 4GB of RAM, standard HP
  • 4× 1TB 7200rpm disks with 64MB cache, configured as software RAID5

I can get a P212 very cheaply, at around 70 USD shipped to Romania, so my question is whether it would be a viable solution to install that card in my system, and how I would connect the 4 disks to the RAID card. I know that the Linux distro I'm using now, Debian Jessie, has drivers for HP RAID cards.
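
For context, a typical 4-disk software RAID5 on Debian looks roughly like the sketch below; it assumes Linux md/mdadm is what's behind the array, and the device names /dev/sd[b-e] and /dev/md0 are only placeholders:

    # Check the state of the existing array (assuming it is /dev/md0)
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # For comparison, how such a 4-disk RAID5 is created from scratch
    # (placeholder device names; this wipes whatever is on those disks)
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde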

 

The strong points of having this card would be:

  • hardware RAID with cache memory and battery backup
  • offloading the parity work from the CPU to the RAID controller

Downsides:

  • if the card fails, the array may be lost (not totally sure about this!)

 

Thank you!


Hi,

 

The P212 has an internal miniSAS connector, and the MicroServer drive bays are already connected to the motherboard with a miniSAS cable, so you should just be able to unplug it from the board and plug it into the P212.

 

You might want to check the specs for the maximum drive sizes the P212 supports; I've only used them with external tape drives.


You won't get any read performance benefit. You will only get a write performance benefit if the memory on the RAID controller is battery-backed and the battery is healthy; otherwise you risk losing data in the event of a crash or unclean shutdown.

 

Personally, I would use software RAID (or better, ZFS) every time, because it doesn't get in the way of smartctl and hdparm access, and the performance benefit of hardware RAID has been non-existent for decades (the main CPU can handle XOR-ing the data blocks to generate parity blocks far faster than the little ARM chip on the RAID controller).
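
To make the transparency point concrete: with md/ZFS the member disks stay visible as ordinary SATA devices, while behind a Smart Array you have to go through the controller's passthrough. Device names here are placeholders:

    # Direct drive access when the disks hang off a plain SATA/HBA port
    smartctl -a /dev/sdb        # full SMART health report
    hdparm -I /dev/sdb          # drive identification and feature info

    # Behind an HP Smart Array (hpsa/cciss driver) SMART has to be fetched
    # through the controller, e.g. for the first physical disk:
    smartctl -d cciss,0 -a /dev/sg0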


Regarding ZFS, I know that it has some special RAM requirements; if I'm not mistaken, the pool size is directly dependent on the amount of RAM installed. I may be misremembering, but I recall reading a while back that for every 1TB of data you need 1GB of RAM. This wouldn't be a major problem since I can upgrade the RAM. But let's say I want to grow the pool: can I replace a disk with a bigger one?


It's not "directly" dependent; for caching, it is roughly recommended to have 1GB of RAM per TB of disk.

 

That said.. 

As the onboard controller is limited to 2x SATA III + 3x SATA II ports, I am also going for a RAID card (a Dell PERC H200), but only for its SAS/SATA controller features, with ZFS running across the 4 disks. I boot from USB/microSD.
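
If you do the same (controller acting purely as an HBA, ZFS providing the redundancy), creating the pool over the four disks is a one-liner. The pool name "tank" and the by-id names below are placeholders:

    # Single-parity raidz over four whole disks; /dev/disk/by-id names survive reordering
    zpool create tank raidz \
        /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
        /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
    zpool status tank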

 

 

You can find a good intro to how ZFS/RAID/mdadm utilize disks here: http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html

 

After that you can decide for yourself what to do.


Regarding ZFS, I know that it has some special RAM requirements; if I'm not mistaken, the pool size is directly dependent on the amount of RAM installed. I may be misremembering, but I recall reading a while back that for every 1TB of data you need 1GB of RAM. This wouldn't be a major problem since I can upgrade the RAM. But let's say I want to grow the pool: can I replace a disk with a bigger one?

Not so. You only need about 1GB of RAM per 1TB of storage (approximately) if you are using deduplication, which you really shouldn't be using unless you know exactly what you are doing, or performance will likely drop to practically zero.
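
If in doubt, it is easy to confirm dedup is off (it is off by default); "tank" is a placeholder pool name:

    zfs get dedup tank      # should report "off" unless you turned it on
    zpool status -D tank    # prints dedup table (DDT) statistics, if any exist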

 

Until recently I was happily running a 4x4TB RAIDZ2 (8TB usable) pool on a 32-bit single-core ARM board with 1GB of RAM.

 

You don't need tons of RAM for ZFS at all, no more than for any other FS. Of course, it goes without saying that the less RAM you have, the less will be cached, which will affect performance.
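
On Linux you can also cap how much RAM the ARC cache is allowed to use if you want it predictable on a small box; the 2GiB value below is just an example:

    # /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 2 GiB (value in bytes)
    options zfs zfs_arc_max=2147483648

    # after updating the initramfs and rebooting, verify with:
    cat /sys/module/zfs/parameters/zfs_arc_max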

 

My G8 is running 4x1TB drives in RAIDZ2 at the moment, with pure ZFS (including root and /boot; GRUB2 does come with sufficient ZFS support to make this possible).
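
And on the earlier question about growing the pool with bigger disks: yes, you replace the drives one at a time, and the extra space becomes available once the last member has been swapped and resilvered. A rough sketch (pool and device names are placeholders):

    zpool set autoexpand=on tank                  # let the pool grow into the new space automatically
    zpool replace tank ata-OLD_DISK ata-NEW_DISK  # repeat for each disk in turn
    zpool status tank                             # wait for the resilver to finish before the next swap
    zpool list tank                               # capacity increases after the final disk is replaced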


You won't get any read performance benefit. You will only get a write performance benefit if the memory on the RAID controller is battery-backed and the battery is healthy; otherwise you risk losing data in the event of a crash or unclean shutdown.

 

Personally, I would use software RAID (or better, ZFS) every time, because it doesn't get in the way of smartctl and hdparm access, and the performance benefit of hardware RAID has been non-existent for decades (the main CPU can handle XOR-ing the data blocks to generate parity blocks far faster than the little ARM chip on the RAID controller).

The HP Smart Array controllers, like most caching RAID controllers, do improve read performance. See page 3: http://h10032.www1.hp.com/ctg/Manual/c00687518.pdf

The P212 is limited to 3Gb/s with SATA drives. Using any RAID controller limits some of your migration choices: you can move a RAID set from an HP controller to the same or a newer HP controller. In the past I recommended RAID cards often; now I find they are a good fit with ESXi. In your case I think I would look hard at ZFS.


The HP Smart Array controllers, like most caching RAID controllers, do improve read performance. See page 3: http://h10032.www1.hp.com/ctg/Manual/c00687518.pdf

It must be true, I read it in the marketing brochure. They may appear to be faster in some cases for light workloads, but for heavy workloads where you are churning through the cache all the time, there is nothing the controller can do that the OS I/O scheduler cannot do better, and the OS will have a LOT more RAM for caching than the RAID controller will.

 

The one situation where the battery-backed ("capacitor") cache helps is with bursty write workloads: if you have a BBU on the caching controller, it can absorb a few hundred MB of writes and then commit them during the following lull in the I/O. But again, that only applies to bursty workloads. If you are sustaining high I/O, the 200-500MB of write cache the controller might provide will simply not make a difference when you are pushing the kind of load that saturates the I/O of your underlying disks most of the time.

 

In my experience, the better transparency with software RAID is more of an advantage than slightly better performance in a few corner cases. With very light and bursty loads, a caching controller may well make things feel faster, which may be handy on a desktop machine.


It must be true, I read it in the marketing brochure. They may appear to be faster in some cases for light workloads, but for heavy workloads where you are churning through the cache all the time, there is nothing the controller can do that the OS I/O scheduler cannot do better, and the OS will have a LOT more RAM for caching than the RAID controller will.

 

The one situation where the battery-backed ("capacitor") cache helps is with bursty write workloads: if you have a BBU on the caching controller, it can absorb a few hundred MB of writes and then commit them during the following lull in the I/O. But again, that only applies to bursty workloads. If you are sustaining high I/O, the 200-500MB of write cache the controller might provide will simply not make a difference when you are pushing the kind of load that saturates the I/O of your underlying disks most of the time.

 

In my experience, the better transparency with software RAID is more of an advantage than slightly better performance in a few corner cases. With very light and bursty loads, a caching controller may well make things feel faster, which may be handy on a desktop machine.

No, it is not from reading marketing material. It is from years of working with RAID controllers in enterprise applications.

RAID controllers use read-ahead caching algorithms to enhance read performance. ZFS enhances read performance using read-ahead caching algorithms. Yes, they both use the same basic technique! Obviously, these algorithms are tweaked and tuned by the companies that created them. I did not say a RAID controller was better at read performance than OS-based I/O like ZFS; I am just pointing out that caching RAID controllers do improve read performance.

If you take the time to read my other post in this thread, you will see that I am not recommending that the OP use the P212.

