Drobo

Benchmarking - Recommended Read/Write Throughput Test Utility or Software (Mac OS X)

Hi, I’m new to Drobo (FS specifically) and was looking for throughput benchmarking software to test and compare my setup with Drobo published specs, other users, other RAID/NAS systems, etc.

From this thread I think AJA System Test was recommended, but the link from the article back to Drobo seems broken.

I couldn’t find a definitive post on it, so I figured I’d ask here in case a consolidated post helps me and others.

Can members and/or Drobo provide ideas on recommended benchmarking software or utilities using Mac OS X? I’m specifically interested in network read/write performance in benchmarking my Drobo FS, but not sure if that makes a difference when selecting a benchmark utility.

Thx KV

Search for “AJA System Test” on Google and you’ll find the software. I’ve also used XBench, but DRI seems to prefer the AJA suite. XBench will only give you summary results, whereas Kona will give you a graph with more detail.

I’ve also used the testing suite of Drive Genius (on Mac) but get different results than I do with the other two. I’m guessing Drive Genius doesn’t turn caching off or does its testing differently.

[quote=“Buzz_Lightyear, post:2, topic:1893”]
Search for “AJA System Test” on Google and you’ll find the software. I’ve also used XBench but DRI seems to prefer the AJA suite. XBench will only give you summary results whereas Kona will give you a graph with more detail.[/quote]

Oh right, thx, I found the AJA link/software in the article above (also here). What I actually meant was that the link back from that benchmarking article to DRI, where they [DRI] apparently mentioned/recommended AJA, seemed broken.

Thx for the fast feedback, good to know you tried others and I’ll also stick with AJA for now unless DRI or someone shouts back.

Initially, DRI was recommending AJA KONA System Test 6.01 with the following parameters:
[list]
[*]Test: Disk Read/Write
[*]File Size: 1.0 GB
[*]Video Frame Size: 1920x1080 10-bit
[*]Disable file system cache
[*]Round frame sizes to 4KB
[*]File I/O API: Macintosh
[/list]

That gave only two throughput numbers, one Read + one Write, easily compared.
Then they switched their recommendation to:
[list]
[*]Test: Sweep File Sizes
[*]Video Frame Size: DVCProHD 1080i60
[/list]
This one gives multiple throughput values for increasing file sizes, takes much longer to run, and because of the multiple values, results are harder to compare from one Drobo to another (maybe that was the goal…).
Be aware that AKST results have low reproducibility: each new measurement starts with a new file allocation in a different place, giving different fragmentation properties and thus different numbers.
That’s why people run the tests 3-4 times and average them.
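The run-several-times-and-average approach can also be done outside AJA with a quick script. Here’s a minimal Python sketch of that idea: it writes a file to the volume under test, reads it back, and averages the MB/s over several runs. The function names are mine, and note it does NOT disable the file system cache (AJA does), so small file sizes will give optimistic numbers; it’s a rough cross-check, not a substitute for AKST.

```python
import os
import time
import statistics

def sequential_write_read(path, size_mb=64, block_kb=1024):
    """One run: sequential write then read of a size_mb file,
    returning (write_MBps, read_MBps). Caching is not disabled."""
    block = os.urandom(block_kb * 1024)
    n_blocks = (size_mb * 1024) // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())          # make sure data actually hit the device
    write_mbps = size_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mbps, read_mbps

def averaged(path, runs=4, **kwargs):
    """Run the test several times and average, as people do with AKST,
    since each run lands on a differently fragmented allocation."""
    results = [sequential_write_read(path, **kwargs) for _ in range(runs)]
    writes, reads = zip(*results)
    return statistics.mean(writes), statistics.mean(reads)
```

Point it at the share, e.g. `averaged("/Volumes/Drobo/bench.tmp", runs=4, size_mb=1024)` for a 1.0 GB test like the old DRI-recommended settings.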

You may be interested in reading this long thread : Report Your Drobo (V1, V2, S, FS, …) Performance ! and its somewhat disappointing conclusion.

Unfortunately, unstable performance, and performance that degrades as the Drobo fills up, seem to be intrinsic properties of Drobo’s “BeyondRAID” algorithm… :frowning:
AFAIK, people who opened support cases complaining they were not getting the advertised performance with a 60%-full Drobo never got conclusive answers from DRI (I did not). Unfortunately, performance figures for a mostly empty Drobo are kind of useless to most of us, who bought a Drobo because a single disk was too small…

It’s always kind of a crap shoot to compare performance in real-world scenarios. What I mean is that whatever numbers you get will be different from mine. Your Drobo will be loaded with a different mix of data at that 60% mark than mine. I’m using HFS+ and you might be using NTFS, for example. The value of the number is as a ballpark figure.

Where I have compared numbers is between my Drobo, my QNAP, and a directly attached FW800 drive on the same computer. All were 40 to 60% full, and the results told me that my FW800 bus is fine (the Seagate FreeAgent 1TB clocked respectable speeds of around 60MB/s versus the Drobo’s 24MB/s when each was the only device on the bus, both at 800Mb/s). The QNAP was included as a benchmark since it’s accessible via IP only, and it beat the Seagate FreeAgent handily.

I’ve never really understood why BeyondRAID is so bloody slow, and I lost interest in figuring it out. I think it’s a combination of the CPU and the algorithm itself, but that’s just a guess. For example, I swapped two drives recently, one after the other. This took 5 days: 2 for the first and 3 for the second. Drobo has 2.1TB of data and went from 2.7TB to 3.6TB (which is another oddity I’ll get to in a minute) by swapping two 1TB drives for two 2TB drives. When that finished I shut Drobo down as I was leaving on a business trip. This week he’s been online and the drives are chunking away, I suspect moving, re-mapping and optimizing blocks after the change. All week.

In contrast, when I do the same with my QNAP the re-build of parity takes about a day for each drive swapped/replaced and then the re-lay about two days, so figure five days tops for two drives. Drobo has been “at it” for 10 so far.

Regarding the drive mix, here’s some funny math. When I had 6TB in Drobo in the form of four 1.5TB drives, I had 4.1TB of usable space. Now that I have 6TB in the form of two 2TB and two 1TB drives, I have 3.6TB of usable space, with an extra 500GB going to “protection”. Drobolator confirms this, so it’s no secret, just something I find odd. And yes, I know that in RAID5 I’d have to swap all four drives to get any additional storage.

[quote=“Buzz_Lightyear, post:5, topic:1893”]Regarding the drive mix, here’s some funny math. When I had 6TB in Drobo in the form of 4 1.5TB drives I had 4.1TB of usable space. Now that I have 6TB in the form of 2 2TB and 2 1TB I have 3.6TB of usable space with an extra 500GB going for “protection”. Drobolator confirms this so it’s no secret, just something I find odd. And yes in RAID5 I know that I’d have to swap all four drives to get any additional storage.
[/quote]
As a rule of thumb, from the total raw disk space, you lose the biggest disk to ECC protection.
So you were losing 1.5TB the first time, and 2TB now…
That is also why replacing a single 2TB disk with a 3TB one would not provide any actual space increase: 6-2 = 4 = 7-3.
Replacing one of your 1TB disks with a 3TB disk would give you only 1TB more out of the 2TB added: (6-1+3)-3 = 5.
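The rule of thumb above (total raw space minus the largest disk) can be written as a one-liner, which makes the swap scenarios in this thread easy to check. This is just my sketch of the heuristic, not Drobolator’s actual math; it ignores filesystem overhead and decimal-vs-binary TB accounting, which is why real usable figures (like the 4.1TB seen with four 1.5TB drives instead of 6-1.5 = 4.5) come in a bit lower.

```python
def usable_tb(drives):
    """Single-disk-redundancy rule of thumb:
    usable space = total raw space minus the largest disk."""
    return sum(drives) - max(drives)

# Scenarios from this thread (sizes in TB):
current = usable_tb([2, 2, 1, 1])    # 6 - 2 = 4
swap_2t = usable_tb([3, 2, 1, 1])    # 7 - 3 = 4  -> replacing a 2TB with a 3TB gains nothing
swap_1t = usable_tb([3, 2, 2, 1])    # 8 - 3 = 5  -> replacing a 1TB with a 3TB gains only 1TB
```

It also shows why equal-sized drives maximize usable space: the “lost” disk is then no bigger than any other.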

Which sort of means that having drives of all the same size will maximize the space available. That’s very RAID-like. :slight_smile: