RESET Forums (homeservershow.com)

ReFS Migration from NTFS when using Mac clients


Recommended Posts

Just a note on a gotcha I ran into today while migrating terabytes of data from an NTFS volume to a ReFS-formatted volume.  I hit this error:


"error 665 (0x00000299): The requested operation could not be completed due to a file system limitation"


Normal search results hinted at the general problem, but not at my resolution. It seems to be a generic file system limitation error that gives the user no real help.


In my case, this NTFS volume had been shared to Macintosh clients for a few years, and so a great many files had named (alternate data) stream data attached to them.  Most seemed to be Office documents, icon files or JPEGs.  I can only assume that OS X uses the stream data to store file thumbnail previews.  ReFS does now support file streams, but only up to 128 KB per stream.
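As an illustration, here's a minimal Python sketch (hypothetical file and stream names) of the kind of pre-migration check that would have caught this: flag any alternate data stream larger than the 128 KB limit mentioned above. On a real NTFS volume you'd enumerate streams with `dir /r` or PowerShell's `Get-Item -Stream *` first.

```python
# Flag alternate data streams that exceed the ReFS per-stream limit.
# The 128 KB figure comes from the post above; the sample stream list
# below is hypothetical, standing in for real enumeration output.

REFS_STREAM_LIMIT = 128 * 1024  # 128 KB

def oversized_streams(streams):
    """Return (name, size) pairs for streams too large for ReFS.

    `streams` maps "file:streamname" to the stream's size in bytes.
    """
    return [(name, size) for name, size in streams.items()
            if size > REFS_STREAM_LIMIT]

# Hypothetical example: thumbnail/resource data attached by Mac clients.
sample = {
    "report.docx:AFP_Resource": 300 * 1024,   # too big for ReFS
    "photo.jpg:com.apple.FinderInfo": 32,     # fine
}
print(oversized_streams(sample))  # only report.docx is flagged
```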


The problem was resolved by transferring the files via a FAT32 volume and agreeing to remove the unsupported properties. Copying directly to ReFS doesn't offer you the option to do this!  Luckily none of the files were over 2GB.


All this hassle for bit-rot-proof storage....


If you use Storage Spaces and ReFS in a mirror, it will scrub the drives for data corruption. Somewhat like a RAID controller.


Does ReFS work with "Storage Spaces"?

Yes, indeed!  In fact, ReFS and Storage Spaces were designed to complement one another when used together.  If you are using mirrored virtual disks with Storage Spaces, and you format a data volume sitting on that virtual disk with the ReFS file system, ReFS will automatically interface with Storage Spaces when it detects data corruption, replacing bad data with good data from a mirrored copy. 

In addition, ReFS will periodically scrub file system metadata and file data on a Mirrored Storage Space in an effort to combat "Bit Rot" on data that has been sitting for an extended period of time.


IIRC, different sort of "bit rot". Isn't it great that there isn't a uniform definition?


Bit rot, in the sense of physical degradation, is what StableBit Scanner and SpinRite detect.


What is meant here is random bit flips, which a lot of people (erroneously) refer to as bit rot.


RAID arrays and ReFS+Storage Spaces are designed to combat the random bit flips. So, to a degree, will any file-hashing program (like Integrity Checker for WHS).
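For the curious, this is roughly all a file-hashing integrity checker does, sketched here in Python: record a baseline hash per file, then re-hash later and compare. The file below is a throwaway temp file, and the "corruption" is a simulated single-bit flip.

```python
# Minimal sketch of a file-hashing integrity check:
# baseline hash at archive time, re-hash at verify time, compare.
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash a file in chunks so large files don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Write a file and take a baseline hash.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"important archived data")
baseline = sha256_of(path)

# Simulate a single flipped bit in the stored data.
data = bytearray(open(path, "rb").read())
data[0] ^= 0x01
with open(path, "wb") as f:
    f.write(bytes(data))

corrupted = sha256_of(path) != baseline
print("corruption detected:", corrupted)  # prints: corruption detected: True
os.remove(path)
```

Note that hashing alone only *detects* the flip; without a second good copy (a mirror, a backup) there is nothing to restore from.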


Can you tell I've had to answer this a few times... 




I've never considered random flipped bits to be bit rot. I've always figured, even if they're unintentional, that random flipped bits are due to noise, cosmic rays, etc. I've also considered them to be 'deliberate', in the sense that an outside influence acts upon the bits and changes their polarity. And I thought the way to counteract the issue is to have sufficient checksums/hashes/CRCs/etc. to ensure that any random bit flips can be detected and corrected. I don't literally mean checksums or CRCs; rather, I mean procedures that do calculations on the data that make it possible to detect bits that have been flipped erroneously and correct them.
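To make that detect-and-correct idea concrete, here's a toy Hamming(7,4) code in Python: three parity bits over four data bits let you locate, and flip back, any single corrupted bit. This is purely illustrative; it is not how ReFS, RAID, or drive firmware actually encode data.

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword (positions 1..7).

    Parity bits sit at positions 1, 2 and 4; each covers the
    codeword positions whose 1-based index has that binary digit set.
    """
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3       # syndrome = 1-based error position
    c = list(c)
    if pos:
        c[pos - 1] ^= 1              # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming74_encode(data)
word[4] ^= 1                         # a "cosmic ray" flips one bit
print(hamming74_decode(word))        # prints: [1, 0, 1, 1]
```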


OTOH, I've always considered bit rot to be magnetic degradation, the loss of magnetic strength/integrity over time. This is one area where I feel the vagaries of manufacturing can affect the quality of HDDs. Drive platters are made to tolerances; their magnetic properties are not identical; some will retain magnetic status longer than others. Even one area on a single platter can have different magnetic integrity than another area on the same platter. This is one reason I think programs like Stablebit Scanner are good: they can go over the surface of a drive's platters, rewriting (i.e. reinforcing) the data written in each sector.


I'm familiar with RAID of course, but I haven't read up on ReFS, and I'm only partly familiar with Storage Spaces. I'm guessing/supposing that the latter two have more in the way of these calculations to detect and correct flipped bits?


BTW Drashna, what is your opinion of Integrity Checker?


And Ikon, you hit upon the heart of the issue that I mentioned: there isn't a uniform definition that everyone uses.


In fact, in most cases, when I see people using bitrot, they mean the random bit flips. Almost EVERY time.

ReFS is supposed to fight that; it's what schoondoggy is referring to, and it's what ReFS on Storage Spaces (like ZFS and a few other file systems) actively prevents.


And yes, ReFS actually stores checksums in the file system metadata on the disk (for its own metadata always, and for file contents when integrity streams are enabled). It's actually really neat. However, it requires a mirrored storage space to automatically correct the flipped bits.
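The repair path described here can be sketched as a toy Python model (a hypothetical class, not the real ReFS mechanism): keep two copies of a block plus a checksum taken at write time; on read, serve a copy that verifies, and overwrite any copy that doesn't.

```python
# Toy model of checksum-verified reads from a two-way mirror.
import zlib

class MirroredBlock:
    """Hypothetical self-healing block: two copies plus a stored CRC."""

    def __init__(self, data: bytes):
        self.copies = [bytearray(data), bytearray(data)]
        self.checksum = zlib.crc32(data)   # recorded at write time

    def read(self) -> bytes:
        for i, copy in enumerate(self.copies):
            if zlib.crc32(bytes(copy)) == self.checksum:
                # Heal any sibling copy that no longer verifies.
                for j, other in enumerate(self.copies):
                    if j != i and zlib.crc32(bytes(other)) != self.checksum:
                        self.copies[j] = bytearray(copy)
                return bytes(copy)
        raise IOError("both copies corrupt; data lost")

block = MirroredBlock(b"hello refs")
block.copies[0][3] ^= 0x40            # silent bit flip on disk 0
print(block.read())                   # prints: b'hello refs' (served from disk 1)
```

Without the mirror (the second copy), the checksum could still flag the bad read, but there would be nothing good to repair from, which matches the point above about needing a mirrored space for automatic correction.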


As for Integrity Checker... it eats up too much CPU and needs to be significantly optimized. And the UI could use some updating as well.

Somebody on the Covecube forums recommended fv++, which looks to be a better solution.

