RESET Forums (homeservershow.com)
e_merlin

RAID setup for home use - what do you recommend?

Recommended Posts

schoondoggy

Interesting that we are talking about RAID and failures. Just lost yet another 6 TB Red today (having doubts about these drives, worse than Seagates). At this point (not that you could before), you cannot convince me RAID is not necessary. The fact that I will have a new drive in within five minutes, have no downtime, and have the redundancy rebuilt by tomorrow: priceless...

How old is the drive?

Most of my production systems are running enterprise drives from Seagate, WD and HGST. I know the industry data does not support enterprise drives being more reliable, but I see very few failures. I agree with all of your statements on RAID. I also keep a spare on hand. One other thing I need to add into the equation is that I travel for work: if a drive fails, it may be a week before I am able to change it out. Because of this I tend to lean toward RAID 10 and RAID 6, which can survive up to two drive failures. In the future I plan to look at having hot spares running in the system.
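To make the space-versus-redundancy trade-off concrete, here is a rough sketch of the usable capacity you give up at each level (my own back-of-the-envelope in Python; it ignores hot spares, filesystem overhead and mixed-size drives):

```python
# Toy comparison of usable capacity vs. guaranteed fault tolerance
# for n identical drives of size_tb each.

def raid_summary(n, size_tb):
    levels = {
        # level: (usable TB, drive failures always survived)
        "RAID 5":  ((n - 1) * size_tb, 1),
        "RAID 6":  ((n - 2) * size_tb, 2),
        "RAID 10": ((n // 2) * size_tb, 1),  # a 2nd loss is survivable only in a different mirror pair
    }
    for level, (usable, tolerated) in levels.items():
        print(f"{level}: {usable} TB usable of {n * size_tb} TB raw, "
              f"always survives {tolerated} failure(s)")

raid_summary(4, 6)  # e.g. four 6 TB drives in a 4-bay NAS
```

With four 6 TB drives that works out to 18 TB usable for RAID 5 versus 12 TB for RAID 6 or RAID 10, which is the price of the extra failure you can ride out while travelling.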

pcdoc

How old is the drive?

Most of my production systems are running enterprise drives from Seagate, WD and HGST. I know the industry data does not support enterprise drives being more reliable, but I see very few failures. I agree with all of your statements on RAID. I also keep a spare on hand. One other thing I need to add into the equation is that I travel for work: if a drive fails, it may be a week before I am able to change it out. Because of this I tend to lean toward RAID 10 and RAID 6, which can survive up to two drive failures. In the future I plan to look at having hot spares running in the system.

 

Sure, every situation is different and you have to use what works for your needs. I have used hot spares when I was running a server with extra drive bays, but that is much more limiting on a 4-bay NAS. The drives were bought in February 2015 and manufactured in November 2014, so they are reasonably new. I lost the first one about two months ago and the second one yesterday (already replaced and rebuilt). As all the drives were from the same manufacturing date, the problem might be isolated to that batch; not sure. The first replacement drive is from a different batch, and obviously the one I get this weekend (advance RMA) will also be from a new batch. It will be interesting to see whether the third of the three goes out as well, or whether this was random coincidence.

ShadowPeo

pcdoc: Whilst I agree with you about finding a RAID level you are comfortable with, and also about keeping a spare on hand, I disagree with your statement below.

 

If we look at the mathematical probability, it gets really small after RAID 5

 

According to the math, and my understanding of the problem (admittedly I am a network engineer/sysadmin, not a storage admin), the following is correct (ref: http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/):

 

Here's the math: (1 - 1/(2.4 x 10^10))^(2.3 x 10^10) = 0.3835

That 0.3835 is the chance of reading every remaining sector without an error, so you have a ~62% chance of data loss due to an unrecoverable read error on a 7-drive RAID 5 with one failed disk, assuming an error rate of one bad read per 10^14 bits and ~23 billion sectors in 12 TB. Feeling lucky?

 

 

Oversimplified? Yes, and it makes several assumptions, specifically the error rate and the number of sectors mentioned above, but it gives you a rough idea; hence my use (and suggested use) of two-disk parity where available. I fully realise that this is not generally feasible for home users, and some protection is better than none, but that is the math I operate on.
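If you want to plug in your own capacities and error rates, here is the same calculation as a quick Python sketch (the defaults are just the article's assumptions: 512-byte sectors and one unrecoverable read per 10^14 bits, not measured figures):

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading the surviving disks during a RAID 5 rebuild.
# Toy model: errors are independent and uniformly distributed.

def rebuild_failure_probability(data_tb, bits_per_ure=1e14, sector_bytes=512):
    sectors = data_tb * 1e12 / sector_bytes            # sectors that must be read
    sectors_per_ure = bits_per_ure / 8 / sector_bytes  # ~2.4e10 for 10^14 bits
    p_clean = (1 - 1 / sectors_per_ure) ** sectors     # every read succeeds
    return 1 - p_clean

# ~12 TB of surviving data, e.g. a 7 x 2 TB RAID 5 with one dead disk:
print(f"{rebuild_failure_probability(12):.0%}")  # prints roughly 62%
```

Swap in a 10^15 error rate (the figure many enterprise drives are rated for) and the same rebuild drops to roughly a 9% chance of failure, which is part of why I am less nervous about enterprise disks.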

 

I could put in an example of a reasonable redundancy setup at one of my larger clients, but that falls well outside the scope of this discussion, and the budget of most people and small enterprises.

rotor

Interesting that we are talking about RAID and failures. Just lost yet another 6 TB Red today (having doubts about these drives, worse than Seagates). At this point (not that you could before), you cannot convince me RAID is not necessary. The fact that I will have a new drive in within five minutes, have no downtime, and have the redundancy rebuilt by tomorrow: priceless...

This is what is called "anecdotal evidence". =)

 

In all seriousness, you may have something bigger going wrong that is causing the drive failures. "Yet another" is not something I have ever experienced personally over 20+ years of computer ownership (I've owned probably 50+ drives over the years).

Edited by rotor

itGeeks

Late to this discussion. I think you have to find a balance with what you are comfortable with. My suggestion is RAID 5, have a physical spare on hand for quick replacement (that is key), and have an offsite copy. A month or so ago I had a 6 TB WD Red go out on me in my QNAP, which sports 3 x 6 TB Red drives in a RAID 5. I was able to replace it and it rebuilt the array in less than a day, as I had a spare: no strain, no pain. Sure, if I lost a second drive during that period I would be in trouble, hence the need to replace it quickly and to have the offsite copy. Some of you have read my earlier posts where, over the past years, I lost numerous Seagate drives in my 18 TB RAID 5 and recovered in hours.

All the discussions about probability are true, but at what point do you stop? If you have a third copy of "critical" data offsite, as we should, and are using RAID 5, there is no reason to be really concerned unless you plan on taking three months to replace the defective drive. If you are really concerned and your needs dictate it, use real-time replication to another server. The thing we must consider is balance. For maximum use of space while still getting redundancy, use RAID 5. If you have the space to spare in your server/NAS, then choose RAID 6. If we look at the mathematical probability, it gets really small after RAID 5, which is why RAID 5 makes the most sense in a home environment where drive slots are limited. Remember that instead of chasing the absolute lowest probability of failure, work on building an automatic offsite process, which in the end will give you the best level of protection. Just my two cents.

Agreed and well said. The only thing I will add on top of the offsite backup: have a local backup for fast recovery. I have three offsite copies in different locations and two copies onsite. The way my backup plan works is that I back up my Synology NAS to two different family members' Synology NASes using Synology's built-in backup, my family members back up their Synology NASes to me every night, and I then push a copy to CrashPlan. My Synology NASes are in an HA cluster, so that's my fourth copy, and then I back up to another box at my house on the third floor, in case of robbery or flooding in the basement. I think it's safe to say I will get my data back if disaster strikes. I lost data in the past, before I really knew what I was doing, and now I refuse to be defeated by lost data.
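As a toy illustration of why independent copies stack up so fast, here is a little Python sketch (my own numbers; the 5% per-copy loss chance is made up, and real copies are never fully independent, so treat it as a best case):

```python
# Toy model: chance that EVERY copy of the data is lost, assuming each
# copy is destroyed independently with the same probability. Correlated
# disasters (flood, robbery, ransomware) make the real odds worse.

def chance_all_copies_lost(n_copies, p_single_loss=0.05):
    return p_single_loss ** n_copies

for n in (1, 2, 3, 5):
    print(f"{n} copies: {chance_all_copies_lost(n):.6%} chance of total loss")
```

Even with a pessimistic 5% chance of losing any one copy, five copies take the odds of total loss down to about one in three million.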

Edited by itGeeks

itGeeks

Interesting that we are talking about RAID and failures. Just lost yet another 6 TB Red today (having doubts about these drives, worse than Seagates). At this point (not that you could before), you cannot convince me RAID is not necessary. The fact that I will have a new drive in within five minutes, have no downtime, and have the redundancy rebuilt by tomorrow: priceless...

I have to admit I just had a WD Red 6 TB go on me a few months ago. I replaced it in my Synology and the RAID rebuilt just fine. I RMAed it back to WD and within two weeks had a replacement. I did not do the rush ship with WD because I have spares, but if I had, the new drive would have been here in a few days. I would still take WD over Seagate; I had way too many drives fail when using Seagate.

itGeeks

This is what is called "anecdotal evidence". =)

 

In all seriousness, you may have something bigger going wrong that is causing the drive failures. "Yet another" is not something I have ever experienced personally over 20+ years of computer ownership (I've owned probably 50+ drives over the years).

I have also had my share of drives go bad over the years and I can tell you it was nothing I was doing wrong. What kind of drives are you using and in what configuration?

rotor

I have also had my share of drives go bad over the years and I can tell you it was nothing I was doing wrong. What kind of drives are you using and in what configuration?

 

Here is my current inventory. Only around half are in use -- the Seagates are all sitting in a drawer, no longer being used (one was replaced under warranty, probably the only drive I've ever had replaced). The small 3.5" drives were all bundled with something (like a MicroServer), and have never been used.

 

I never use RAID, as I explained in an earlier post, because in my opinion it introduces more risk than it removes. Summary: greater complexity and additional dependencies do not necessarily a more robust solution make.

 

Brand Model Type Form GB
OCZ Vertex 2E SSD 2.5" 60
Samsung 830 SSD 2.5" 64
Seagate Barracuda 7200.10 HDD 3.5" 160
Seagate Barracuda 7200.11 ST31500341AS HDD 3.5" 1500
Seagate Barracuda 7200.12 HDD 3.5" 250
Seagate Barracuda 7200.12 HDD 3.5" 250
Seagate Barracuda ES.2 ST31000340NS HDD 3.5" 1000
Seagate Barracuda ES.2 ST31000340NS HDD 3.5" 1000
Seagate Barracuda ES.2 ST31000340NS HDD 3.5" 1000
Seagate Momentus 7200.4 HDD 2.5" 250
Toshiba MG03ACA200 HDD 3.5" 2000
WD Scorpio Black WD3200BJKT HDD 2.5" 320
WD Scorpio Black WD5000BEKT HDD 2.5" 500
WD WD2500AAKX HDD 3.5" 250
Samsung 850 Evo SSD 2.5" 500
Seagate HDD 2.5" 160
Kingston SSDNow mS200 SSD mSATA 60
Crucial C300 SSD 2.5" 64
Intel X-25M G2 SSD 2.5" 160
Crucial C300 SSD 2.5" 128
Samsung 830 SSD 2.5" 256
WD Red WD30EFRX HDD 3.5" 3000
WD Red WD30EFRX HDD 3.5" 3000
WD Red WD60EFRX HDD 3.5" 6000
Crucial M500 SSD 2.5" 480
LiteOn SSD mSATA 256
Samsung 840 Evo SSD 2.5" 500
Samsung 850 Evo SSD mSATA 250
Edited by rotor

jmwills

How can introducing RAID involve more risk?  That doesn't make sense, at least not to me.

rotor

How can introducing RAID involve more risk?  That doesn't make sense, at least not to me.

The controller becomes a single point of failure.

 

In business you have support contracts, so the manufacturer replaces a faulty component for the lifetime of the contract. Most home users do not have support contracts, so if the RAID controller goes, bye-bye all your data. That's a pretty big risk!

 

You may then have to resort to buying a second-hand controller on eBay, assuming one is still available, and in any case you would be without your data for days or weeks.

