At work I have several tasks which replicate about 50 GB of data across the corporate network. Why isn't so important. Each replication destination has its own unique requirements. Last week I built a new replication to publish data to a UXFS file system destination. My source is a Windows NTFS file share. I use a scripted RoboCopy task to replicate the data on a regular schedule.
When I built this new mirrored replication task on Friday, I expected the weekend runs to simply do incremental updates of the destination. I was surprised to find that every run was a full file copy. This was unusual, and generally bad.
My RoboCopy commands looked something like this: RoboCopy \\Source\Share \\Destination\Share /MIR /Z /COPY:DT /NP /NDL
This has basically worked without issue for ages, in a dozen different usages. Oddly, on this new task the /MIR argument was not behaving as expected: a full file copy was done on every execution, with the source files always showing as Newer.
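For context, the scripted task itself is nothing exotic; it is roughly the sketch below, where the paths, log location and error handling are illustrative rather than the actual production script. An appending log like this is what makes the full-copy behaviour easy to spot, because every file shows up in it on every run.

@echo off
rem Nightly replication wrapper (a sketch; paths and log file are placeholders).
set SRC=\\Source\Share
set DST=\\Destination\Share
set LOG=D:\Logs\replication.log

rem Same switches as the command above, plus an appending log so each run leaves a record.
RoboCopy %SRC% %DST% /MIR /Z /COPY:DT /NP /NDL /LOG+:%LOG%

rem RoboCopy exit codes of 8 or higher indicate copy failures; anything below 8 is some flavour of success.
if %ERRORLEVEL% GEQ 8 exit /b 1
exit /b 0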
The short story is, after some lucky Google searches, I turned up this link.
http://forums.buffalotech.com/buffalo/board/message?board.id=0101&message.id=48
Mr. Taylor explains that the file time resolution is different in XFS than in NTFS, so copied files never compare as exactly equal on the next pass and the source always looks Newer. This makes sense. My destination is UXFS, a derivative of UFS, or so I am told, not XFS; nonetheless, the argument was sound so I tried the suggested fix: the /FFT switch, which tells RoboCopy to assume FAT file times and compare timestamps with a two-second granularity, absorbing the difference. Lo and behold, I have success. My new command line looks like the following: RoboCopy \\Source\Share \\Destination\Share /MIR /Z /FFT /COPY:DT /NP /NDL
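If you want to confirm the fix without waiting for the next scheduled run, RoboCopy's list-only switch (/L) is handy: it reports what would be copied without touching anything. With /FFT in place, a second pass immediately after a mirror should find nothing to copy. A quick sketch:

rem List-only pass: report what WOULD be copied, copy nothing.
RoboCopy \\Source\Share \\Destination\Share /MIR /Z /FFT /COPY:DT /NP /NDL /L

rem Exit code 0 means nothing needed copying; 1 means files would have been copied again.
if %ERRORLEVEL% EQU 0 echo Destination is in sync.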
I used the /COPY:DT argument for performance reasons: it copies only file data and timestamps. The destination inherits its security settings from the share anyway, so skipping that work speeds things up; one less thing to handle for each file.
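For reference, the /COPY flags are a subset of D, A, T, S, O and U (data, attributes, timestamps, security, owner and auditing info), and /COPY:DAT is the default. The comparison below is just an illustration of the difference, not a command from my script.

rem Default copy flags: data, attributes and timestamps.
RoboCopy \\Source\Share \\Destination\Share /MIR /Z /FFT /COPY:DAT /NP /NDL

rem What I use here: data and timestamps only; the destination handles security on its own.
RoboCopy \\Source\Share \\Destination\Share /MIR /Z /FFT /COPY:DT /NP /NDL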
Figuring out the cause and the resolution chewed up most of my day. At least I have a resolution. I just wish it had left me time for software development, which is what I actually do for a living. Sometimes.