Drobo

Report Your Drobo (V1, V2, S, FS, ...) Performance!

The KB article has been corrected.

[quote=“Jennifer, post:38, topic:1597”]Here are the settings for Kona that we use for performance tests.

  1. Test: Sweep File Sizes
  2. Volume: Volume Name of Drobo
  3. Video Frame Size: DVCProHD 1080i60
  4. Make sure “Disable file system cache” is checked[/quote]

OK Jennifer, but since with these settings AJA Kona System Test reports 2 curves instead of 2 numbers, which point(s) in the curves (i.e. which actual test file size) do you consider significant for comparison?
My numbers vary from 9.0 MB/s [Write / 256MB] to 26.2 MB/s [Read / 8192MB].
That does not include variations from one run to the next.
If we want to establish any kind of meaningful comparison between various test environments or various Drobo models, we need comparable reference points, which curves are not.

Then I think Data Robotics should update that link so it describes the test method intended for each Drobo model.

Now, it seems to have been deleted… (“This answer is no longer available.”)

Yes the article has been deleted.

If you are experiencing performance issues then please open a support case.

I have not yet received an answer from Data Robotics to my lower-than-expected-performance case (#100715-000000), but I am now also getting horrendous relayout performance.
Due to a flaky connection, my Drobo no longer recognized the bottom disk; when I pushed it back in, the Drobo started rebuilding it (the disk was perfectly OK before and after; I had hit the same problem earlier just by inserting a new FW800 cable).

The relayout has been running for more than 12h and Drobo Dashboard now predicts 121h to go, the successive estimates rising continuously.
More than 5 days to rebuild a 2TB disk that is only 62% full looks somewhat unreasonable.
I dare not imagine the equivalent relayout time with 3 or 4TB disks…
Sequentially upgrading the 4 disks of a Drobo would take a month (see the quick check below)…
Is Drobo really scalable?
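
For what it’s worth, here is the arithmetic behind that “a month” figure, as a quick sanity check (assuming the Dashboard’s remaining-time prediction holds):

```python
# Quick sanity check of the relayout estimates above (assumes the
# Dashboard's remaining-time prediction is roughly accurate).

elapsed_h = 12     # hours the relayout has already been running
remaining_h = 121  # hours Drobo Dashboard currently predicts

per_disk_days = (elapsed_h + remaining_h) / 24
print(f"one 2TB disk (62% full): {per_disk_days:.1f} days")  # ~5.5 days

# Upgrading all 4 disks sequentially means 4 full relayouts:
print(f"four-disk upgrade: {4 * per_disk_days:.0f} days")    # ~22 days, about a month
```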

Definitely seems like there’s some kind of fault in your Drobo chassis. Best to give support a call and find out what the status is.

Well, unfortunately, it is NOT the chassis that is the root of the performance problem (I agree the flaky connection is a chassis fault): I just borrowed a brand-new Drobo-V2, migrated my whole disk set to it, and measured again: exact same results, with the same huge instabilities.
I checked again by inserting 4 empty 1TB disks in this new Drobo: performance figures were then close to twice those with my 62%-full disk set.
So I am afraid “Beyond RAID” is the culprit, with >50% performance degradation starting well before the advertised 85%/95% threshold :frowning:

You’ll definitely need to get your logs analyzed to find out why it’s that slow. But it doesn’t sound like an artificially imposed throttle, such as the “you’re running low on physical storage, notice this!” slow-down that kicks in at 95%.

@geeji That is not normal, please open a support case. Slow down only occurs when you reach 95% full.

I did (#100715-000000). Following support requests, I sent 4 (!) sets of measurement data and the present status is “escalated to our Tier3 support team”.
However, other people have seen the same phenomenon in the past with a 60%-filled Drobo, see Timon Royer’s blog for instance.
Also, what I find disturbing is that the results from AJA Kona System Test are highly unstable, varying by 50% or more from one run to the next (I do ALL my measurements with an idle Mac, NO other Drobo access, and AFTER the Drobo has stabilized).
And when using the “Disk Read/Write” option instead of “Sweep File Sizes”, the graph shows huge instabilities from one frame to the next (as shown in the rest of this thread). Interestingly, the instabilities are highly correlated between the Read and Write curves within a single AKST run, although they differ from one AKST run to the next.
Changing the Mac or the FW cable does not make any difference (except forcing a 4+ day relayout because of a spurious bottom-disk disconnection after plugging/unplugging a FW cable :frowning: ).

--------Test Environment----------
Mac Mini (Early 2009) 2.0GHz Core 2 Duo 4GB
Mac OS X 10.6.4
Time Machine disabled
Drobo-V2
Drobo<->Mac attachment: FW800 [no chaining]
Drobo FW: 1.3.6
Drobo Dashboard: 1.7.3
Drobo disks: 1.5TB, 500GB, 640GB, 1.0TB
Drobo formatting: HFS+
Drobo usage: 55%
--------Test Utility----------
AJA KONA System Test 6.01:
Test: Sweep File Sizes
Video Frame Size: 1920x1080 10-bit
Disable file system cache
Round frame sizes to 4KB
File I/O API: Macintosh
--------Test Result----------

File Size Sweep

File Size (MB)   Read (MB/s)   Write (MB/s)
 128.0              27.4           27.4
 256.0              28.8           30.1
 512.0              23.3           19.6
1024.0              18.8           14.7
2048.0              20.4           15.9
4096.0              21.7           17.0

Disk Read/Write test [1.0GB]:

Write (MB/s)   Read (MB/s)
28.7           28.2

Six sequential test runs (added in a later edit):
18.7           21.0
21.7           20.5
29.1           25.3
32.3           25.5
31.4           20.8
14.5           13.1

Existing file read:
4GB 1080p MKV video: Read 26.4 MB/s

Hi benrodian, interesting results: although your Drobo is 55% full, it does not seem to slow down like mine (which was 62% full).
On the other hand, I wonder more and more how reliable AKST is: for instance, the 1GB line of the Sweep File Sizes should, AFAIK, be close to the 1GB Disk Read/Write result, but they are not even close: 18.8 (Read) / 14.7 (Write) MB/s vs 28.2 (Read) / 28.7 (Write) MB/s.

Did you make several runs of AKST, and were the different results close?
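
For what it’s worth, a crude cross-check that does not depend on AKST at all is to time a large sequential write and read yourself. Here is a minimal sketch (the path and size are placeholders, and unlike AKST’s “Disable file system cache” option it does not bypass the OS cache, so pick a size well above your RAM):

```python
import os
import time

PATH = "/Volumes/Drobo/throughput_check.bin"  # placeholder: any path on the Drobo volume
SIZE_MB = 8192                                # well above RAM, to defeat OS caching
CHUNK = 1024 * 1024                           # 1 MB per I/O call
buf = os.urandom(CHUNK)

# Sequential write, forced to disk before stopping the clock.
t0 = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_mbps = SIZE_MB / (time.time() - t0)

# Sequential read of the same file.
t0 = time.time()
with open(PATH, "rb") as f:
    while f.read(CHUNK):
        pass
read_mbps = SIZE_MB / (time.time() - t0)

print(f"write: {write_mbps:.1f} MB/s   read: {read_mbps:.1f} MB/s")
os.remove(PATH)
```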

I just updated my original post to show six sequential tests. I can’t make out a pattern other than that the results vary quite a bit.

Yes, you get results as unstable as mine: the highest/lowest ratio over your 6 runs is 2.2 (Write) / 1.9 (Read), a ridiculous spread for what is supposed to be a simple repeatable test (see the quick check below).
BTW, are your 6 runs Disk Read/Write, or Sweep File Sizes at a specific file size?
If you look at some of the AKST graphs I published earlier in this thread for 1GB Disk Read/Write, you can also see huge variations from one frame-write iteration to the next within a single run.
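
(Quick check of those ratios against the six pairs you posted:)

```python
# Spread over benrodian's six sequential runs (numbers copied from the post above).
writes = [18.7, 21.7, 29.1, 32.3, 31.4, 14.5]
reads = [21.0, 20.5, 25.3, 25.5, 20.8, 13.1]

print(f"write: max/min = {max(writes) / min(writes):.1f}")  # 2.2
print(f"read:  max/min = {max(reads) / min(reads):.1f}")    # 1.9
```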

I very much doubt the random variations come from Mac OS or from the FW800 connection, especially since the locations of the slow-downs are highly correlated between the Read and Write passes, as shown on all the graphs. The most likely candidate is the Drobo itself, especially considering that each AKST run allocates a new test file, likely in a different place on the disks every time.
One possible (partial) explanation could be asynchronism between the 4 Drobo disks: when the sectors of the file are more or less synchronized across all 4 disks, the access time is close to that of a single disk, but when at least one disk is completely out of sync, the latency rises by up to one full rotation (10.7 ms at 5600 RPM), adding 5.3 ms to the average disk access time (assuming the disks’ hardware buffering does not take care of it, which it should if I/O request queuing is active).
And if some disks are allocated contiguous sectors while others need a seek within a unitary transfer, it gets much worse.
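
To put rough numbers on that hypothesis, here is a back-of-the-envelope model; the 5600 RPM figure comes from the paragraph above, while the 30 MB/s in-sync rate and 1 MB transfer unit are assumptions of mine, picked only for illustration:

```python
# Rough model of the desynchronization penalty described above.
# Assumed: 5600 RPM spindles; 30 MB/s throughput when all 4 disks are
# in sync; 1 MB transfer units. Only the RPM comes from the post above.

RPM = 5600
rotation_ms = 60_000 / RPM        # one full rotation: ~10.7 ms
avg_penalty_ms = rotation_ms / 2  # average extra wait when a disk is out of sync: ~5.3 ms

raw_mbps = 30.0
unit_mb = 1.0
t_sync_ms = unit_mb / raw_mbps * 1000     # ~33 ms per unit when in sync
t_desync_ms = t_sync_ms + avg_penalty_ms  # ~39 ms when one disk lags

print(f"in sync:     {unit_mb / t_sync_ms * 1000:.1f} MB/s")    # 30.0
print(f"out of sync: {unit_mb / t_desync_ms * 1000:.1f} MB/s")  # ~26
```

On these assumptions the penalty is only around 15%, so desynchronization alone is at best a partial explanation.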

But that still does not explain why an empty Drobo has about twice the throughput of a 60%-full one…
…unless it is related to fragmentation, which the Drobo’s algorithms are supposed to prevent “as efficiently as possible” :(.
See this Support Knowledge Base entry.

The beauty of that explanation is that it would account for most of the discrepancies we observe between our different runs on comparable hardware: all results would depend strongly (×2, ÷2) on the Drobo space usage rate (0-95%), on the number and average size of files in the tested Drobo volume, and even on the Drobo access history (residual fragmentation left over from past allocations and deletions).
Note also that different file systems (HFS+, NTFS, EXT3) on the same Drobo could exhibit different sensitivity to this fragmentation phenomenon, depending on how they (de)allocate space on the disks.
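
And to see how quickly fragmentation could produce that kind of gap, here is a similarly crude sketch; the 30 MB/s streaming rate and the ~12 ms seek-plus-latency cost per fragment boundary are both assumptions I picked for illustration:

```python
# Effective sequential throughput vs. average fragment size.
# Assumed (illustrative only): 30 MB/s streaming rate, ~12 ms combined
# seek + rotational latency every time a new fragment must be reached.

stream_mbps = 30.0
seek_ms = 12.0

for frag_mb in (16, 4, 1, 0.25):
    t_ms = frag_mb / stream_mbps * 1000 + seek_ms  # time to read one fragment
    eff_mbps = frag_mb / t_ms * 1000
    print(f"{frag_mb:>5} MB fragments -> {eff_mbps:.1f} MB/s")
```

On these numbers, throughput only halves once the average fragment shrinks below roughly half a megabyte, so the hypothesis requires fairly severe fragmentation to explain a ×2 drop.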

So the key to much improved performance at high Drobo space usage rates could be a much more thorough defragmentation utility, working optimally at least for read-only access when all disks are identical and the initial intensive write phase has mostly completed.
Such a utility could run (offline?) for a few days, activated explicitly from Drobo Dashboard.

@Jennifer : any comments ?

The 6 runs were Disk Read/Write.

I tried all sorts of tests (following the instructions given to me in an open ticket) and I consistently get slow speeds:
I typically get an average of 12-15 MB/s (Write) and 10-12 MB/s (Read).
I have seen peaks at 20 MB/s (read/write) for large files (at the high end of the AJA sweep spectrum, for files over 8GB), but only rarely.
In any case, I never get anywhere near the advertised speeds on my Drobo 2nd Gen.
Changing the cable, the interface, and even the disks didn’t help.

Corentin

[quote=“cortig, post:57, topic:1597”]
I tried all sorts of tests (following the instructions given to me in an open ticket) and I consistently get slow speeds:
I typically get an average of 12-15 MB/s (Write) and 10-12 MB/s (Read).[/quote]
It will probably not comfort you much, but you are not alone in this situation :frowning:
What file system is on your Drobo volume (HFS+, for instance)?
How full is your Drobo, and how many files do you have on it?
If I am right about the fragmentation hypothesis (see above), the fuller it gets, the worse it becomes.

The drives are HFS+
The speed issues were still there after completely resetting (and reformatting) the Drobo, so I know the file count and disk usage are not the root cause in my case. No fragmentation possible in my case :-\

Corentin

I’m also experiencing problems with the performance of my Drobo v2.

Anyone know what could be the cause of these drops?

Also have a look at: http://www.drobospace.com/forums/showthread.php?tid=1580