RESET Forums (homeservershow.com)

Seeking advice on Windows Storage Server NAS and disk layout


Steve Pitts


Well, my data is backed up every night. First the data is copied from Main Storage to Nearline Storage, then from Nearline Storage to one of two offsite backup drive sets.

 

If there are any problems reading a file I receive a notification in the log file.


I'm honestly not being deliberately argumentative - but how do you know what's being backed up is sound? How do you know the destination is sound?

 

Does the backup process/software back up files that have had the archive bit reset, those with a newer date stamp, or those whose hash differs from the copy already backed up?  All have potential problems.

 

Are the contents of that 6TB either manually spot-checked to a probable level of confidence, or hash-referenced against a known benchmark to give (close enough to) absolute confidence?
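To make the "hash referenced to a known benchmark" idea concrete, here is a minimal Python sketch. The helpers (`sha256_of`, `build_manifest`) are hypothetical illustrations, not any particular backup product: you record a manifest of hashes while the data is known-good, then recompute and compare later.

```python
import hashlib
import os
import tempfile

def sha256_of(path, chunk=1 << 20):
    """Stream the file in 1MB chunks so large media files never need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def build_manifest(root):
    """Map each relative path under root to its hash -- the 'known benchmark'."""
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha256_of(full)
    return manifest

# Tiny demo: a primary tree and an exact copy produce identical manifests,
# so 'match' only stays True while every byte of every file agrees.
primary, backup = tempfile.mkdtemp(), tempfile.mkdtemp()
for root in (primary, backup):
    with open(os.path.join(root, "photo.raw"), "wb") as f:
        f.write(b"pretend image data")
match = build_manifest(primary) == build_manifest(backup)
```

Comparing the two manifests catches both a corrupted source and a corrupted destination, which is the distinction the question above is driving at.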

 

As I say, I'm not trying to raise eyebrows, just interested in how much (blind?) confidence people put into established practices and how they test their backups. As we all know, nobody has time to test file restores, and spot checks mean little unless you perform a complete restore and verify that too. I'm guilty of this myself - I only do spot-check restores from CrashPlan and onsite backups. My personal backup goes from a main copy on a ReFS mirrored storage space to a single ReFS volume, then from there to CrashPlan, with a last-ditch copy to an external mirrored pair of NTFS volumes. I'd love a second offsite copy to a btrfs volume though.


If there's corruption of data in a sector, the stored CRC won't match the just-calculated one, and I'll get notified. I also run Beyond Compare once in a while: if the binary data doesn't match in two copies of the same file, it will flag it. The odds of having data corrupted in all the copies of my data are vanishingly low.
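The stored-CRC check described here can be sketched in a few lines of Python, with `zlib.crc32` standing in for whatever checksum the real tooling uses (the file and helper names are hypothetical):

```python
import os
import tempfile
import zlib

def crc32_of(path, chunk=1 << 20):
    """Recompute the file's CRC32 by streaming it, as a scrub pass would."""
    crc = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            crc = zlib.crc32(block, crc)
    return crc & 0xFFFFFFFF

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "archive.bin")
with open(path, "wb") as f:
    f.write(bytes(range(256)) * 64)

stored = crc32_of(path)            # CRC recorded while the file is known-good
clean = crc32_of(path) == stored   # True while the data is intact

# Simulate a single flipped bit ('bit rot') and recheck.
with open(path, "rb") as f:
    data = bytearray(f.read())
data[100] ^= 0x01
with open(path, "wb") as f:
    f.write(bytes(data))
rotted = crc32_of(path) == stored  # now False: the mismatch gets flagged
```

Even a one-bit change makes the recomputed CRC disagree with the stored one, which is what triggers the notification described above.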

 

Perhaps the best measure is that I have not encountered a file I could not read, and this has been over quite a few years now.


DrivePool goes as fast (or slow) as the drive it's reading or writing to

That doesn't gel with what I've seen when testing it over the network (which is all that really matters to me - internal speed on the server is neither here nor there). Reading is pretty much identical to the performance of a single NTFS-formatted disk (non-storage space), whilst writing is 86-88% of the speed of writing to that same single NTFS disk, except when dealing with a lot of small files, when it tails off to only ~75%. Of course, any such testing is subject to 'outside' influences, although I've tried to do all of my testing when there is little or nothing going on on the network.

I did have problems with one set of copies failing because the Windows 7 system had trouble seeing the Thecus ("The specified network name is no longer available."), which might have been caused by the data deduplication service starting up on the server (it seems to be set up to do that every hour, so maybe not - but it was the only obvious difference at the time the problems started).

In fact, the one thing that surprises me about the testing I've been doing (essentially 'wall clock' timing of ~20GB file structure copies: one flat directory with 10 x 2GB files, one flat directory with 10,240 2MB files, and a 'real world' structure with 149 files of varying sizes spread across 24 subdirectories) is just how much variance I am seeing between different runs.
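For what it's worth, that kind of wall-clock benchmark could be scripted along these lines. This is a Python sketch with tiny stand-in files (`timed_copy` is a hypothetical helper; real runs would copy the ~20GB trees described above, and the spread figure is what exposes the run-to-run variance):

```python
import os
import shutil
import statistics
import tempfile
import time

def timed_copy(src, dst_parent):
    """Wall-clock one whole-tree copy into a fresh destination directory."""
    dst = tempfile.mkdtemp(dir=dst_parent)
    start = time.perf_counter()
    shutil.copytree(src, os.path.join(dst, "copy"))
    return time.perf_counter() - start

# Tiny stand-in for the flat 10 x 2GB directory (here 10 x 2KB files).
src = tempfile.mkdtemp()
for i in range(10):
    with open(os.path.join(src, f"file{i}.bin"), "wb") as f:
        f.write(os.urandom(2048))

scratch = tempfile.mkdtemp()
runs = [timed_copy(src, scratch) for _ in range(5)]
mean = statistics.mean(runs)
spread = (max(runs) - min(runs)) / mean  # run-to-run variance as a fraction of the mean
```

Repeating each copy several times and reporting the spread, rather than a single number, makes it easier to tell a real throughput difference from background noise on the network.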

 

The only feature of Storage spaces I would say is not ready for use is the parity option

Which is kind of a shame given that that is the feature I was hoping to use (and may still).

 

It does not scale out over spindles at all well and has appalling performance

The former doesn't bother me in the short term, and the latter is relative (it is still quicker than my existing NAS).

 

What kind of performance are you after/requiring?  What kind of protection do you require?

These are questions that I've been asking myself a lot as I look at the figures from my benchmarking (and perhaps should have asked myself more before plumping for this particular piece of kit). Most of the interaction between human beings and this kit is going to be occasionally saving (relatively) small files. Most of the disk writing is going to be under the covers (client backups initiated from the server, file history initiated from the client) and therefore fairly invisible to the human beings. All of which leads me to wonder whether the performance of a ReFS parity space actually matters much (except perhaps during the initial backup of each machine, but even then...) and therefore whether the additional resilience might be worth the 'cost'.

 

the specs on the Thecus don't specifically say that it has ECC DRAM, and Intel says that the D2550 doesn't support ECC

The additional RAM stick that I've just bought and installed, selected from the list of compatible memory, was not ECC. I appreciate that 'bit rot' (or whatever you want to style it) can be introduced at various points in the process, not just once the data is resident on disk, and that therefore the Thecus is more vulnerable than a unit running with error correction on the RAM. Within my budget, however, that is not an issue that I've chosen to tackle (if I had a couple of thousand to spend then ECC RAM would be a given).

 

I would format the drives each as single ReFS volumes and then use robocopy to manually replicate the files at quiet times as necessary to get the resiliency

I would prefer a single volume and don't want to be relying on additional tools and processes to copy stuff around on the server (that is very much what I'm already doing with the local backup partitions on my two desktop PCs and is part of what I was hoping to get away from by letting Windows Server manage all of the backups). In some respects the DrivePool way of doing things (where you choose which file structures get duplicated) better fits the requirement than either mirror or parity storage spaces, which provide a level of resilience for all data whether that is necessary or not.

 

you could use robocopy on the win7 machine you're having problems with

I did try this with TeraCopy, which is designed to speed things along as I understand it, but it slowed to a crawl (although it didn't fail, unlike Explorer click and drag or Take Command copy). I'm not sure that I've got the stomach for any more tweaking and twiddling; I just want to get this box set up and doing what it is supposed to be doing now.

 

It looks like a memory allocation error on the 2012r2 server though.  I'd say that the Thecus is simply running out of memory.

That was my first thought too (especially since a lot of the old hits on the error code/message seem to relate to the non-paged memory pool), BUT I don't see that using either Process Explorer or xperf (or indeed ResMon, when the TechNet fellow requested that). If that were the issue I'd expect to see the same problem when copying from the Windows 8 box, and I've successfully copied a near-20GB VMDK file from that box without any trouble, along with at least a dozen separate runs of the full benchmark without a hitch. Add to that the fact that I've just installed an extra 2GB of RAM into the Thecus and it has made next to no difference to the issue (subjectively it seems to take slightly longer to fail when using the Explorer copy method, but even at full gigabit speeds that extra 2GB is not going to be filled inside ten extra seconds).

 

All in all I am still thoroughly stumped, but inclined not to worry about it as in the real world the server is going to be pulling data most of the time (and I cannot get it to fail that way around) and I'm unlikely to be copying even 650MB files manually.

 

As to the disks, I'm currently mulling over either a single spanned ReFS volume across all three (no storage spaces) or a DrivePool across all three, with only a couple of shared directories duplicated x2. Either way I'm going to force myself to take a decision first thing tomorrow and get on with finalising the server configuration and adding clients to the domain.


That doesn't gel with what I've seen when testing it over the network [...] just how much variance I am seeing between different runs.

 

 

I think the lack of speed you're seeing is more a function of the D2700 (D for Dorito).  Run DrivePool on anything faster and you'll see 112-115MB/s reads and writes solid across a gigabit network with big files.  Smaller files will be slower due to overhead.  Even the Sandy Bridge Celeron G1610T (dual core 2.1GHz) in a Gen8 Microserver can do full wire speed using DrivePool.

 

On the other hand, even a Xeon E3-1225v3 can't do wire speed with big files using Storage Spaces, and it's a quad core 3.6GHz Haswell, writing to drives that can do nearly 180MB/s all day every day. 


  • 2 weeks later...

Is mirrored Storage Spaces really reliable? There are enormous horror stories...

 

In addition, unlike Drive Extender (WHS v1) or DrivePool, the drives are not NTFS. This seems simply crazy, as you can't just move the drives and get at the data if you need to. I have yet to see a single success even moving the Storage Spaces drives from one machine to another and having the pool recognised.

 

I tried this on a fresh install of Server 2012: 2 drives in a mirror with a 10TB virtual (thin-provisioned) drive.

 

Pulled the drives out, put them into a Win 8.1 box, saw the drive pool (yay!) but no data.

 

Simply scares me. Drive Extender and DrivePool/Drive Bender being NTFS allows me to sleep at night. A proprietary file system, and its use in the poorly performing Storage Spaces, baffles me.

 

 



I've moved Storage Spaces pools between 2012R2 and Windows 8.1 and back again with no problems.  I'm not sure what you were doing wrong to lose your data.


Again, why use it? The drives themselves are in a proprietary format. It just seems that if there is a solution with NTFS-formatted drives, why not use it?

 

Were your drives over-provisioned? I created a 10TB virtual drive. Supposedly others have had the same issue.

 

 



ReFS isn't ready for use yet.  Try copying an iTunes library to it and see how far you get.  Hint: you won't.

 

I'd stick with NTFS, I'm very happy with DrivePool, too.  

 

I see no reason to use Storage Spaces, or ReFS.

iTunes on Windows is known for horrible coding...

So using it as an example is, well, really bad.

 

But yes, a lot of programs don't support ReFS, because it's very new.

 

That said, if you're using a parity array in Storage Spaces, you really SHOULD be using ReFS. That's because the checksumming ability of ReFS can actually repair damage when it sits on Storage Spaces.
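Conceptually, that checksum-plus-redundancy repair works like this. A Python sketch of the idea only - ReFS does this at the block level inside the file system, not per file, and `read_with_repair` is a hypothetical helper:

```python
import hashlib
import os
import tempfile

def read_with_repair(copies, stored_hash):
    """Return data from an intact copy, then rewrite any copy that fails verification."""
    good = None
    for path in copies:
        with open(path, "rb") as f:
            data = f.read()
        if hashlib.sha256(data).hexdigest() == stored_hash:
            good = data
            break
    if good is None:
        raise IOError("no intact copy -- unrecoverable")
    for path in copies:  # heal any copy whose checksum no longer matches
        with open(path, "rb") as f:
            bad = hashlib.sha256(f.read()).hexdigest() != stored_hash
        if bad:
            with open(path, "wb") as f:
                f.write(good)
    return good

tmp = tempfile.mkdtemp()
copies = [os.path.join(tmp, f"mirror{i}.bin") for i in (0, 1)]
payload = b"important block"
stored = hashlib.sha256(payload).hexdigest()
for c in copies:
    with open(c, "wb") as f:
        f.write(payload)
with open(copies[0], "wb") as f:       # silent corruption on one mirror
    f.write(b"important bl0ck")
result = read_with_repair(copies, stored)  # serves copy 1, heals copy 0
```

The key point is that the checksum tells the file system *which* copy is wrong, and the redundancy in the mirror or parity space supplies the data to fix it.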

 

I've moved Storage Spaces pools between 2012R2 and Windows 8.1 and back again with no problems.  I'm not sure what you were doing wrong to lose your data.

 

Then you're lucky. I've done so with VMs for testing and ... sometimes it works, and sometimes it's RAW data. :(

And sometimes this is just remounting to the same VM...

As many have said, it's not ready for prime time still.

 

I think the lack of speed you're seeing is more a function of the D2700 [...] Even the Sandy Bridge Celeron G1610T (dual core 2.1GHz) in a Gen8 Microserver can do full wire speed using DrivePool.

 

On the other hand, even a Xeon E3-1225v3 can't do wire speed with big files using Storage Spaces [...]

 

Yup. It's a software-based solution, which means it's all done via the system hardware (CPU/RAM/etc).

That means that you should be using a beast of a machine, and as much ECC RAM as you can throw at it.

Especially if you're using the Parity option. 

And doubly if you're using ReFS, as that has its own overhead.

 

And as for DrivePool, the driver for the pool is essentially a file system proxy.  You should always see the speed of the disks being accessed, at a minimum.

The Read Striping feature will either switch to a disk with a much faster bus speed (eg SATA vs USB or the like), or it will read ahead and cache the contents to boost the overall speed. So 300MB/s is definitely a possibility.
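The bus-speed part of Read Striping can be illustrated with a toy sketch. Everything here is hypothetical (`pick_fastest` and the two reader functions are illustrations, and the real DrivePool driver works on ongoing I/O statistics rather than one-off probes):

```python
import time

def pick_fastest(replicas):
    """Probe each replica once and return the name of the quickest reader."""
    timings = {}
    for name, read in replicas.items():
        start = time.perf_counter()
        read()
        timings[name] = time.perf_counter() - start
    return min(timings, key=timings.get)

# Hypothetical replicas: one on a fast SATA bus, one on a slower USB bus.
def sata_read():
    return b"data"

def usb_read():
    time.sleep(0.01)  # simulate the slower bus
    return b"data"

fastest = pick_fastest({"sata": sata_read, "usb": usb_read})
```

Because both copies hold identical data, the driver is free to serve every read from whichever device answers fastest, which is how striped reads can exceed a single disk's throughput.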

