RESET Forums (homeservershow.com)

Raid5 vs. RaidZ (ZFS)


Don W


I have a Linux NAS that I set up with a RAID5 configuration, and I am wondering about switching to FreeNAS and using ZFS's version of RAID, called RAIDZ. I have read lately that if a hard drive in a RAID5 configuration goes bad, there is a good chance that another one will go bad while you rebuild the array, and that ZFS does not have that problem when it loses a drive. I really love my OpenMediaVault NAS machine for many reasons, but I am willing to switch to FreeNAS. Can anyone let me know if this is a real problem, and is ZFS really that much better?

 

Thanks

 

Don


I wouldn't say there's a 'good' chance of a 2nd drive failing during a rebuild. Sure, the rate is not zero, but it is pretty low. For example, I've never had it happen to me, and I don't know anyone who has.

 

I think the more important question relates to backup. I don't feel anyone should ever rely on a RAID array as a form of backup. If my RAID5 array died on a rebuild, I would be able to recover my data from at least 2, and most likely 3, backup sources once I had the array back up and functional.


That was my initial feeling too. I do back up everything on the server, so I think I am going to stay with my RAID5.

 

Thanks


If you were running an enterprise, particularly a large one, I would argue that RAID6 makes sense in many cases, simply because keeping the databases up and available is so important.


I agree with ikon, I have never had a RAID5 fail during a rebuild; it's the reason you set alerts, so the array gets rebuilt before another drive fails. I do run several of my servers as RAID6, though, since they are remote locations without local techs, so I have the luxury of having 2 drives fail and still being able to rebuild. (All the data is backed up via DFSR to corporate, and the corporate DFS server is backed up to disk and then to offsite tape.)
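For anyone curious what "setting alerts" can look like on a Linux software RAID box, here is a rough sketch using mdadm; the device name and email address are placeholders, and most distros ship an mdmonitor service that does the same job:

    # Have mdadm watch all arrays and send mail when one degrades or a member drive fails.
    mdadm --monitor --scan --daemonise --mail=admin@example.com

    # Quick manual health check (md0 is only an example device name).
    cat /proc/mdstat
    mdadm --detail /dev/md0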

 

Here in the hermit cave I mainly run RAID1, since the drives are large enough without the complications of RAID (Seagate has already announced 8TB, with 10TB expected by next year). The data files are then duped onto the N40L iSCSI SAN for local backup and to a USB3 GoFlex drive that rotates offsite. The SAN is then mirrored to my Openfiler SAN at work. On top of all that, I enable shadow copies on the 2 home servers for quicker data restores in case of accidental deletions.


Although I have never had it happen, Seagate and WD are talking about it a lot. I believe they are seeing a trend of drives failing during rebuilds. Big drives take more time to rebuild, and a rebuild puts the drives through a heavy load. Another thought: controller-based RAID5 will rebuild faster than software RAID5. This could also be related to the drives being used; I would think a RAID5 of WD Green drives would be more likely to have a failure during rebuild than WD Reds. Both of my production RAID5 arrays use enterprise drives, WD RE and Seagate Constellations. Like yodafett, I am running and recommending RAID1 and RAID10 as drives get bigger.
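As a rough illustration of why drive size matters, here is a back-of-the-envelope calculation, assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read; the array size and URE rate are assumptions for the example only:

    # Rebuilding a 4-drive RAID5 of 4TB disks means reading the 3 surviving drives in full.
    # Estimate the chance of hitting at least one unreadable sector during that rebuild.
    awk 'BEGIN {
        bits_read = 3 * 4e12 * 8;                 # ~9.6e13 bits read off the surviving disks
        ure_rate  = 1e-14;                        # assumed errors per bit read
        p_ure = 1 - exp(-bits_read * ure_rate);   # Poisson approximation of "at least one error"
        printf "P(at least one URE during rebuild) ~ %.0f%%\n", p_ure * 100;
    }'

With those assumptions it works out to roughly a 60% chance of at least one bad read during the rebuild, which is the usual argument for why bigger drives make single-parity arrays scarier; drives rated at 1 URE per 10^15 bits bring it down to under 10%.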


I would agree about the WD Greens and RAID5. My production RAID5 is using RE4 drives. With 8 and 10 TB HDDs hitting the market in the next year or so, it's becoming more and more viable to run RAID1 or 10 even at home.


Also some clarifications.

 

RAIDZ equates to RAID5, which means 1 drive can fail and the pool will still be in a degraded but functional state. If another drive fails during the rebuild, your pool is gone.

 

RAIDZ2, as the name implies, allows up to 2 drives to fail with the pool still functioning. RAIDZ2 equates to RAID6.
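In FreeNAS/ZFS terms the difference is just which vdev type you build the pool from. A minimal sketch from the command line; the pool name and disk names here are made up for the example:

    # Single parity (RAIDZ, roughly RAID5): survives 1 failed disk.
    zpool create tank raidz ada0 ada1 ada2 ada3

    # Double parity (RAIDZ2, roughly RAID6): survives 2 failed disks.
    zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4

    # Check whether the pool is healthy or running degraded.
    zpool status tank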


  • 2 weeks later...

I wouldn't say there's a 'good' chance of a 2nd drive failing during a rebuild. Sure, the rate is not zero, but it is pretty low. For example, I've never had it happen to me, and I don't know anyone who has.

 

I think the more important question relates to backup. I don't feel anyone should ever rely on a RAID array as a form of backup. If my RAID5 array died on a rebuild, I would be able to recover my data from at least 2, and most likely 3, backup sources once I had the array back up and functional.

I have, actually. 

 

The issue is more with drives that are from the same batch and have been in use the entire time. The extra stress of a rebuild can cause the other drives to fail if they're already close to failing too. And since drives from the same batch tend to fail around the same time (3x 3TB drives within a month of each other, and on inspection, yup, same batch)... "bad things can happen".

 

But I do agree, the likelihood isn't that high. It's just that random probability seems to conspire against me...


With you, that makes 1 :)

 

It does argue for the use of different drives in an array, or at least drives from different batches. It also argues for pre-failure replacement of drives on a regular, timed schedule.
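On the pre-failure replacement idea, SMART data is a handy way to decide when a drive has been in service long enough; a rough sketch with smartmontools, where the device name is just an example:

    # Power-on hours and reallocated sectors are the usual "getting old / getting flaky" hints.
    smartctl -A /dev/sda | egrep -i 'power_on_hours|reallocated_sector'

    # Kick off a long self-test now; smartd can schedule these and email alerts (see smartd.conf).
    smartctl -t long /dev/sda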

