Drobo

Expanding MFT and directory consolidation

All NTFS volumes contain a Master File Table (MFT), which is a record of everything stored on the drive. The more files you put on the drive, the larger it needs to be. I usually expand the MFT on new drives to prevent MFT fragmentation. The program I am using, Diskeeper, calls this process FragShield. The defragmentation program in Windows does not take care of MFT fragmentation, only file fragmentation.

DRI claims that the Drobo takes care of defragmenting. Does anyone know if it also handles MFT fragmentation? Should I run FragShield, or will it compromise my data?

There is also directory consolidation to consider. Does Drobo take care of it or should it be run separately?
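As an aside, a quick way to see how large the MFT currently is on an NTFS volume is `fsutil fsinfo ntfsinfo`. The sketch below is just a small Python wrapper around that command that prints the MFT-related lines; the drive letter is an example value, and fsutil needs to be run from an elevated prompt.

```python
import subprocess

def mft_info(drive="D:"):
    """Print the MFT-related lines from 'fsutil fsinfo ntfsinfo'.

    Needs an elevated (administrator) prompt on Windows; the drive
    letter is only an example - point it at the Drobo volume.
    """
    result = subprocess.run(
        ["fsutil", "fsinfo", "ntfsinfo", drive],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        if "mft" in line.lower():
            print(line.strip())

if __name__ == "__main__":
    mft_info("D:")
```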

Personally I wouldn't worry about MFT fragmentation or directory consolidation on a Drobo. At best I think FragShield would be ineffective: since the data is spanned over several disks, it wouldn't know how to optimise it. On top of that, Drobo uses block-level virtualisation, so the OS has no knowledge of how the data is actually laid out on the disks - it could look like it is in a solid lump (unfragmented) when actually the Drobo has spread it all over the place.

At best, it would do nothing.

At worst, it may damage something.

Why are you concerned about this? Is it for performance reasons? If it is, then I suspect you will find that the Drobo is too slow for this to make any measurable difference whatsoever!

Yes, the performance is poor so I would prefer not to lose more of it. :slight_smile:

When you format a drive from Windows, it is possible to select the "Allocation unit size". Does the Drobo affect it, or should I select it as if it were a normal HD?

Given Drobo's structure, I don't think fragmentation of the MFT or directories would make any difference at all.

In fact, having them split across different disks could help.

Do not defrag your Drobo.

I have a Drobo V2. I have partitioned it to 16 TB and formatted it with NTFS. The cluster size (= allocation unit size) is set to 64 KB; the default for NTFS is just 4 KB. I had to change the "Write caching and Safe Removal" setting from "Optimize for quick removal" to "Optimize for performance". Without this change it is not possible to format manually.
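If you want to confirm which allocation unit size a volume actually ended up with, the standard Win32 call GetDiskFreeSpaceW reports sectors per cluster and bytes per sector. A minimal ctypes sketch, with the drive letter as an assumed example:

```python
import ctypes

def cluster_size(root="D:\\"):
    """Return the allocation unit (cluster) size in bytes for a volume.

    Uses the standard Win32 GetDiskFreeSpaceW call via ctypes; 'root'
    must be the volume root including the trailing backslash.
    """
    sectors_per_cluster = ctypes.c_ulong(0)
    bytes_per_sector = ctypes.c_ulong(0)
    free_clusters = ctypes.c_ulong(0)
    total_clusters = ctypes.c_ulong(0)
    ok = ctypes.windll.kernel32.GetDiskFreeSpaceW(
        ctypes.c_wchar_p(root),
        ctypes.byref(sectors_per_cluster),
        ctypes.byref(bytes_per_sector),
        ctypes.byref(free_clusters),
        ctypes.byref(total_clusters),
    )
    if not ok:
        raise ctypes.WinError()
    return sectors_per_cluster.value * bytes_per_sector.value

if __name__ == "__main__":
    print(cluster_size("D:\\"), "bytes per cluster")
```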

After formatting I expanded the MFT to 7.8 GB. This should be enough for about 8 million files and should prevent MFT fragmentation. I still don't know if it is important, but it might help the Drobo perform better when accessing small files.
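As a rough sanity check on that figure: NTFS normally uses a 1 KB record in the MFT per file, so 8 million files works out to roughly 7.6 GiB, in the same ballpark as the 7.8 GB reservation. A trivial sketch of the arithmetic (the 1 KB record size is the usual NTFS default, not anything Drobo-specific):

```python
def mft_reserve_gib(expected_files, record_bytes=1024):
    """Rough MFT size needed for a given number of files, assuming the
    default 1 KB NTFS file record. Directories and files with many
    attributes need extra records, so treat this as a lower bound."""
    return expected_files * record_bytes / 1024**3

# ~8 million files at 1 KB each is roughly 7.6 GiB of MFT.
print(f"{mft_reserve_gib(8_000_000):.1f} GiB")
```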

I also turned off the "Indexing Service" setting that MS wants us to use. I have still not figured out whether it actually speeds up searches; I never noticed any difference. It does force the HD to do some extra work when indexing.

The Drobo is stable (so far) and I will report here if I notice anything strange.

Why have you gone with a larger cluster size?

Whenever I selected "Optimize for performance" it used to crash.

Since DRI says fragmentation is not something to worry about on a Drobo, I wouldn't be concerned about MFT fragmentation. Until Drobos can sustain 100+ MB/s, I would not expect it to be the limiting factor anyway.

If the folder/drive you are searching is indexed, searches should be basically instantaneous, since Windows just looks in the index rather than on the drive itself. Especially if you have a huge number of small files on the Drobo, the index would be MUCH faster: it is stored on your system drive (by default) rather than on the Drobo, so you either search through a relatively small database on your system drive or trawl through thousands of files on your slow Drobo.

Cluster size:
I chose a larger cluster size because performance is supposed to increase. I also needed 8 KB or more just to make it possible for Diskeeper to analyze the drive. I agree that the performance gain is minimal; I consider it more of an experiment.

Optimize:
I didn't know anyone had problems with the "Optimize for performance" setting. Maybe I should change back to "Optimize for quick removal". Is it just your experience, or has anyone else tried it?

MFT fragmentation:
Normal defragmentation does not take care of MFT fragmentation, and I suspect the Drobo does not take care of it either. It might be one of the reasons why the Drobo has poor performance with small files.

Indexing:
You are telling me how indexing should work, not how it actually behaves. MS has indexing activated by default, so all of your drives should be indexed. Just do a search for some random letters on a large drive; you will see it search all directories.
There is plenty of third-party software that does intelligent indexing and searching precisely because MS doesn't.

Cluster size: surely performance would only increase under certain circumstances? And Diskeeper is a defragmenter, so why would you run it on a Drobo, which DRI has said should not be defragged? (It's pointless, since the software can only see the virtualised FS presented to it by the Drobo, which has no relation to how the data is actually laid out on the hardware.)

Optimize: on the old forums there was a big thread; no one could get it to work. Literally, in Device Manager, as soon as you checked the box and hit "OK", that was it: machine freeze. The only way out was a hard reboot, and when the machine recovered the setting was back how it was (i.e. it had never finished applying). Quite a few people had the same experience. Maybe one of the firmware updates to the Drobo fixed this?

Well, the Drobo is a block-level device, so I would have expected it to optimise the layout of EVERYTHING in the FS, since it doesn't really care whether the blocks are files or not.

Indexing:

Indexing is activated by default, but only for a very small selection of files (i.e. your user area: Desktop, My Documents, My Music, My Pictures, etc.), so if you searched across a whole drive, yes, it would have to look through most of those directories the old-fashioned way.

I also had some problems activating this, but I finally managed to, and it was worth it in my opinion. I found transferring many small files a major pain with the Drobo (and any other drive as well) configured for "quick removal".

I sense an inherent danger in defragmentation of Drobo… Thin provisioning gives us “imaginary” blocks.

What happens if the defrag application decides to move a file to an “imaginary” block??
I relate this to another post from a user who ran Chkdsk /R (scan for bad blocks) and got a huge number of bad blocks reported - so many that the tally rolled over and reported a negative number.

I think that may be why DRI (through Jennifer) says not to defrag your Drobo.

Yes bhiga, it would be stupid to try to defrag the Drobo, and I will not try it, especially not after Jennifer clearly said that the Drobo should not be defragmented.

However, I am trying the other settings I discussed above and measuring write speeds with small files. So far I have seen write speeds that are 50% faster than the slowest setting. Each measurement takes about a day because I format each time and then copy for many hours. I will publish a list here later.
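For anyone who wants to run a similar comparison, here is a minimal sketch of a small-file write benchmark in Python. The target path, file count, and file size are all assumed example values, not the exact method used for the tests above.

```python
import os
import time

def small_file_write_test(target_dir, file_count=10_000, file_size=16 * 1024):
    """Write many small files to target_dir and report aggregate throughput.

    target_dir should live on the volume under test (e.g. the Drobo);
    the file count and size are arbitrary example values.
    """
    os.makedirs(target_dir, exist_ok=True)
    payload = os.urandom(file_size)
    start = time.perf_counter()
    for i in range(file_count):
        path = os.path.join(target_dir, f"test_{i:06d}.bin")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # push each file to the device, not just the OS cache
    elapsed = time.perf_counter() - start
    total_mb = file_count * file_size / 1024**2
    print(f"{total_mb:.0f} MB in {elapsed:.1f} s = {total_mb / elapsed:.2f} MB/s")

if __name__ == "__main__":
    small_file_write_test("D:\\benchmark")  # example path on the Drobo volume
```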

I suspect that cluster size, like stripe size in a traditional RAID controller, will have some effect on Drobo’s speed, but I also wonder how much uncertainty is introduced due to BeyondRAID’s support for drives of different sizes.

While lengthy, I believe your testing methodology is sound, as you are wiping the Drobo between runs. I’m glad you’re doing it right and I’m interested to see your results.

I have to redo all my tests. The Drobo seems to be doing some work after the format is done. I can’t imagine what it is, but the fact remains that the drives are doing something. The new strategy will be to format, wait a couple of hours, then measure performance by copying all the files.

It may take longer than a few hours - I think DRI recommends waiting 24-48 hours to let it settle down before benchmarking. However, on an empty Drobo I guess that may go faster.