RESET Forums (homeservershow.com)

RAID5 benchmarks: the fast, the slow, the ugly?


dvn

Check out the benchmarks for my 4-drive RAID5 array in my WHS 2011 server:

 

[Image: RAID5 4-drive 1st.PNG — benchmark results for the 4-drive RAID5 array]

 

 

Now look at these marks for a single drive connected to the mobo's SATA0 connector in the same machine. Compare the results below for 512K, 4K, and 4K QD 32 with the results above:

 

[Image: C drive - a single Samsung HD322GJ.PNG — benchmark results for the single OS drive]

 

I've run the benches a few times on each, and the results are generally the same. Question is, why does the RAID5 have relatively poor scores compared with the single drive (Seq R/W and 4K QD32 R being the exceptions)?

 

RAID5 drives = (4) 5400 RPM Samsung HD204UI disks.

Single drive = 7200 RPM Samsung HD322GJ.

 

Hard to believe the rotational speeds affect the results this drastically. I was certainly expecting the RAID5 to do better in every test, even if only by a small amount. (Sequential test aside, that is.)


I presume you're talking specifically about the write speeds for the 512K, 4K, and 4K QD32 tests, because the read speeds generally look on par or better with the RAID5. Even the sequential write is better on the RAID5; a lot better.


Remember that any RAID array that uses parity will suffer with lots of smaller files. That is why parity arrays are great for video streaming, since reading gives you all the advantage of multiple drives, and very good for video storage, since that is large contiguous files, but not as good at small file transfers. SSDs are great for an OS, for example, because their access speed is so high that they shine at small files.

Obviously a 7200 RPM drive is going to be better at small file access, especially writes, than 5400 RPM drives, even in RAID, and especially with parity writes included. The write-through and caching ability of the HighPoint card helps mitigate that, as you see with the 4K files, but you're going to suffer a penalty on writes.

Remember that this is still a synthetic benchmark. The advantage of the array in the real world is single-drive failure recovery and multiple-file access speed. Try hitting that one 7200 RPM drive with multiple reads and writes simultaneously, and compare that to the array under the same conditions. There will be NO comparison.
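To put rough numbers on that small-write penalty: a sub-stripe write on RAID5 becomes a read-modify-write (read old data, read old parity, write new data, write new parity), i.e. four disk I/Os. Here's a minimal back-of-the-envelope sketch in Python; the per-disk IOPS figures are illustrative assumptions, not measurements from this thread:

```python
# Rough RAID5 small-write penalty model. The per-disk IOPS numbers
# below are assumed, ballpark figures for 5400/7200 RPM disks.

def raid5_small_write_iops(n_disks: int, disk_iops: float) -> float:
    """Each sub-stripe write costs ~4 disk I/Os (read-modify-write):
    read old data, read old parity, write new data, write new parity."""
    return n_disks * disk_iops / 4

def raid5_random_read_iops(n_disks: int, disk_iops: float) -> float:
    """Random reads spread across all spindles with no parity cost."""
    return n_disks * disk_iops

IOPS_5400 = 60  # assumed random IOPS for a 5400 RPM drive
IOPS_7200 = 80  # assumed random IOPS for a 7200 RPM drive

print(f"single 7200 RPM drive, writes: ~{IOPS_7200} IOPS")
print(f"4x 5400 RPM RAID5, writes:     ~{raid5_small_write_iops(4, IOPS_5400):.0f} IOPS")
print(f"4x 5400 RPM RAID5, reads:      ~{raid5_random_read_iops(4, IOPS_5400):.0f} IOPS")
```

With these assumed figures, the four-spindle array's random writes land at or below the single 7200 RPM drive while its random reads pull ahead, which matches the pattern in the benchmarks above.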


Depending on the benchmark variations, this is pretty good. RAID 5 is a bit slower at writing because you are writing a parity block each time you write data, so you will not get the same performance as you would on, say, a RAID 0 stripe. You are also testing four SATA II green drives and comparing them to a 7200 RPM drive, which will also affect your curve. If you want to raise the performance curve, reconfigure to RAID 10 and possibly use 7200 RPM drives across the board. Personally, I think your numbers are pretty good for the use case.


OK, thanks guys, I'll go with it. Now I understand what you're saying about writes and the parity that accompanies each small file. And I suppose the weak 4K read results can be explained by the RPMs.

 

But what about the 512K reads? Aren't files of that size striped across multiple drives in the array? So why aren't the read results something like 3x the single drive's?

 

@Joe_Miner - the 2TB Samsung drives are SATA III. The single OS drive is SATA II.


RAID 5 is written in two different ways. If the writes to the array are short (less than half the stripe length), each write is converted into two disk reads and two disk writes. Performance will be 20-30% of what you would get writing to a stand-alone drive. If the writes to the array are large (longer than the stripe length), the write speed can be faster than a stand-alone drive.

If write-back cache is enabled, the short writes (two disk reads and two disk writes) are combined in cache to nearly equal the "large" write speed. A good UPS, or a controller card with battery backup, would be wise protection for the data sitting in cache waiting to be written.

Any card with onboard XOR calculation (even if it uses the system CPU) will be faster than motherboard RAID 5.

RAID 5 is a stripe, so four- or five-disk arrays will be faster than a three-disk array. This is limited by the capability of the controller and would likely reach a point of diminishing returns at a five-disk RAID 5 array on a motherboard controller.
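The parity arithmetic behind that read-modify-write is plain XOR. Here's a toy sketch in Python (the 4-byte chunks stand in for 64K stripe units) showing that the incremental update, new parity = old parity XOR old data XOR new data, matches a full recomputation, and that parity can rebuild a lost chunk:

```python
from functools import reduce

def parity(chunks):
    """Full-stripe parity: byte-wise XOR of all data chunks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))

# One stripe on a 4-drive RAID5: 3 data chunks + 1 parity chunk.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
p = parity([d0, d1, d2])

# Short write (read-modify-write): update d1 only. This costs
# 2 reads (old d1, old p) + 2 writes (new d1, new p).
d1_new = b"XXXX"
p_new = bytes(a ^ b ^ c for a, b, c in zip(p, d1, d1_new))

# The incremental update equals recomputing parity from scratch...
assert p_new == parity([d0, d1_new, d2])

# ...and any one lost chunk can be rebuilt from the survivors.
assert bytes(a ^ b ^ c for a, b, c in zip(p_new, d1_new, d2)) == d0
print("parity update and rebuild check out")
```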


OK. Maybe this would be a good time to ask for suggestions about initial RAID 5 configuration. I'm at a point where I could delete the array and rebuild it if you guys think I should.

 

When I created the array, I went with the defaults: 64K block, 512B sector. If I understand correctly, 64K is the stripe size, while 512B is the smallest addressable unit, the sector. So far, so good?
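As a sanity check on that mental model, here is a hedged sketch of how 512B logical sectors map onto 64K stripe units across a 4-drive array. It assumes the common left-symmetric RAID5 layout; actual controllers (HighPoint's included) may rotate parity differently:

```python
STRIPE = 64 * 1024   # 64K block (stripe unit) size, the default used here
SECTOR = 512         # smallest addressable unit
N = 4                # drives in the array: 3 data + 1 parity per stripe row

def locate(lba: int):
    """Map a logical 512B sector to (data disk, stripe row, parity disk),
    assuming a left-symmetric layout where parity rotates across disks."""
    chunk = lba * SECTOR // STRIPE       # which 64K chunk of logical space
    row = chunk // (N - 1)               # stripe row across the array
    parity_disk = (N - 1) - (row % N)    # parity walks backwards each row
    data_index = chunk % (N - 1)
    disk = (parity_disk + 1 + data_index) % N  # data skips the parity disk
    return disk, row, parity_disk

# One 64K chunk = 128 sectors, so these four reads land on four disks.
for lba in (0, 128, 256, 384):
    print(lba, locate(lba))
```

One takeaway: a 512K sequential read spans eight 64K chunks, so it touches every data spindle in the array, while a single 4K read lives entirely on one disk.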

 

I'm also curious to know if going from a 3-drive to 4-drive array could have diminished some aspect of performance. I'm guessing no, but I want to hear it from one of you.


No. More drives will always mean more performance (subject to the law of diminishing returns, of course, and up to the limit of the bus you are using).

 

For example, I have 5x2TB Samsung 2040's in a RAID5. Mine is the 2720, so it runs at SATA 6Gb/s speeds (somewhat irrelevant), but it is a PCIe 2.0 x8 card, so the maximum throughput it can accept is higher. This is also measured through a virtual OS in Hyper-V pass-through, which could affect the speeds:

 

[Image: RAID5.jpg — benchmark results for the 5-drive RAID5 array]

 

So my sequentials are higher, which makes sense because I have more drives, while the smaller-file results, which aren't helped by multiple drives, are actually lower than yours. It could be overhead from the virtual pass-through, it could be because the array was actually in use while I benchmarked it, or it could be just because. :)
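For a rough sense of why the PCIe 2.0 x8 link is "somewhat irrelevant" here, compare the bus ceiling with what five spinning drives can actually deliver. A back-of-the-envelope sketch; the per-drive media rate is an assumption, not a measured number:

```python
# Assumed, ballpark figures for a sequential-read ceiling estimate.
PCIE2_LANE_MBPS = 500              # per PCIe 2.0 lane, after 8b/10b encoding
bus_limit = 8 * PCIE2_LANE_MBPS    # x8 card: ~4000 MB/s

drives = 5
media_rate = 120                   # MB/s per drive, assumed sustained rate

# Large sequential RAID5 reads stripe across the spindles; parity
# chunks carry no user data, so count roughly N-1 data-bearing drives.
array_ceiling = (drives - 1) * media_rate
print(f"bus limit:     ~{bus_limit} MB/s")
print(f"array ceiling: ~{array_ceiling} MB/s (media-bound, not bus-bound)")
```

In other words, with these assumptions the spindles saturate long before a PCIe 2.0 x8 slot does, so the extra card bandwidth buys headroom rather than speed.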

Edited by timekills

I have 5x2TB Samsung 2040's in a RAID5. Mine is the 2720, so it runs at SATA 6Gb/s speeds (somewhat irrelevant), but it is a PCIe 2.0 x8 card, so the maximum throughput it can accept is higher.

We have the same setup, though you have one more drive in your array.

 

So let me ask if your array is configured the same as mine: 64K blocks, 512B sectors.

