Defragging a Drobo v2

Can you use a defrag program like Drive Genius to defrag a Drobo? My copy of Drive Genius is telling me that it is over 20% fragmented.

No, a Drobo doesn’t get fragmented in that sense.

Any optimisations that need to be made, Drobo will do by itself internally.

It may LOOK fragmented to external tools, but that is because the blocks aren’t mapped sequentially; block B doesn’t always follow block A, so your defrag program would make things worse.
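A toy model can show why a host-side fragmentation report is meaningless behind a block-virtualisation layer. This is purely illustrative and is NOT Drobo’s actual algorithm (BeyondRAID’s internals are proprietary); the mapping here is a random shuffle just to stand in for “logical blocks are not physically adjacent”:

```python
# Toy block-virtualisation layer: logical blocks are deliberately NOT
# stored in physical order, so a naive host-side "fragmentation" metric
# reports heavy fragmentation even when the device's layout is fine.
# (Illustrative only -- not how BeyondRAID actually maps blocks.)
import random

random.seed(42)

N_BLOCKS = 16
# Map each logical block to an arbitrary physical block.
physical_of = list(range(N_BLOCKS))
random.shuffle(physical_of)

def apparent_fragmentation(mapping):
    """Fraction of consecutive logical block pairs (i, i+1) that are NOT
    physically adjacent -- roughly what a defrag tool would report."""
    breaks = sum(1 for i in range(len(mapping) - 1)
                 if mapping[i + 1] != mapping[i] + 1)
    return breaks / (len(mapping) - 1)

print(f"apparent fragmentation: {apparent_fragmentation(physical_of):.0%}")
# A host-side defragmenter can only reshuffle *logical* blocks; every
# write goes back through the mapping layer, so the physical layout
# never comes under its control -- it just burns I/O.
```

The point of the sketch: the number the tool computes describes the mapping, not the disks, so “fixing” it from the host achieves nothing.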

I thought that was the case but thought I would check. Thanks for the quick reply

I’m sure the guys from Executive Software & Raxco, namely the Diskeeper & PerfectDisk developers, will disagree, because disk (or, technically speaking, file) fragmentation is an NTFS or FAT/FAT32 filesystem issue on the PC side and has little to do with RAID or BeyondRAID, DAS/NAS/SAN, etc.

I wonder what DRI’s official statement is on defragmenting a Drobo/DroboPro/DroboElite.

DRI have stated repeatedly: never defrag a Drobo.

And 5 seconds on the knowledge base produced this:

http://support.datarobotics.com/app/answers/detail/a_id/11/kw/defrag/r_id/100004

(search term “defrag”)
And as for Diskeeper and PerfectDisk: it’s literally impossible for them to defrag a Drobo, since they don’t know (and have no way of ever accessing) how the data is laid out on the disks. Only Drobo’s internal firmware knows this, so how could they defrag it?

Nice to know the official stance. Thanks for the feedback & lesson!

Official word from DRI: never defrag a Drobo.

Hi Jennifer, it would have been even nicer if you could have added: “Drobo auto-defrags itself perfectly” :wink:
Unfortunately, that internal defragmentation does not seem to be so “perfect”…

Why do you think it is not perfect?

Because multiple members, myself included, have posted results showing that “old” Drobo data will read in the 30-40 MB/sec range, but “new” data reads at around 5-7 MB/sec. The “old” vs “new” performance difference isn’t related to the total amount of data on the Drobo; it happens whether it’s 50% full or 90% full.

Where this is really a problem is when using sparse bundles for any data. I use them for my iPhoto libraries, and others are using them for Time Machine backups so they don’t have to partition their Drobo.

It takes a while for Drobo to optimise its internal layout; DRI have said you should leave it 48-72 hours before trying to benchmark.

I’ve always found my Drobo/DroboPro returned to “new” performance after being left alone for a while to optimise its layout.

What exactly do you mean by “left alone”? You mean you didn’t use your Drobo for reading/writing for 3 days?

Well, minimal use; it optimises itself automatically when idle.

Writing hundreds of GB doesn’t give it a chance to optimise its internal layout.

I made measurements after WEEKS of Drobo idle time.
That did not solve anything; performance was still 3-4 times worse than on an empty Drobo.
On the other hand, you can certainly DEGRADE performance by more than 3-4 times if you measure just after a huge transfer… :frowning:

new vs old != empty vs (reasonably) full

The test would be: take a Drobo that has been “well-used” but is under 80% of full capacity, and copy its data to a “new” Drobo with the same storage configuration.

Then benchmark both.

The “well-used” Drobo would in theory be affected by any persisting fragmentation, while the “new” Drobo would not.
They both would have the same stored data, just not in the same physical layout.

The “under 80% of full capacity” condition is to allow sufficient free space for any internal defrag to actually happen.
Defrag performance on standard drives decreases as the drive gets filled, because there’s less free space to “sort” into, and I would presume the same applies to Drobo.

I almost did that: same Drobo v2, same Drobo firmware, same FW800 connection, same Mac, and 70% identical copied data.
The only caveat was the set of disks in the “new” Drobo, which was 4 x 1TB instead of 4 x 2TB. Consequently, the “new” Drobo was 82% used instead of 62% for the old one.
The results are given in this thread: according to the AJA Kona System Test, the “new” Drobo was 3 TIMES faster, although it was fuller…

You are welcome to reproduce my experiment in a cleaner way, with 100% identical data and disk sets, if you have such a hardware environment lying around idle ;).
I doubt very much that it would invalidate my conclusions, though…

If I get into a capable state, I will. :slight_smile: