Setting the scene:
For a few weeks now I’ve been a proud user of a Drobo 5D on my equally new Retina iMac. The Drobo is plugged into one Thunderbolt port, with a DisplayPort display daisy-chained behind it; the other Thunderbolt port drives a second DisplayPort display.
The Drobo is equipped with a 512 GB mSATA SSD, which reports as in use and in good health. All five drive bays are filled with WD Red drives of varying sizes (2+2+3+4+4 TB, set up for dual redundancy, working out to 6.33 TiB of usable storage for now). The plan is to replace one drive a year with the second-biggest WD Red on the market at the time, so the Drobo’s storage can grow organically. Currently I’m at about 30% capacity, so there should be plenty of free space for the box to optimize itself with.
Besides the usual backups, I’m running a MySQL test database on the Drobo, doing rating/billing tests of all sorts nearly all day. As I’m a software developer for exactly this kind of software, the workload comes quite naturally.
What do I expect to happen:
For the various backup processes I’d expect the index files to find their way onto the SSD; for the MySQL database, I’d expect at least the tables that are rarely written but regularly read to end up SSD-cached, so MySQL queries would be answered quickly all the time. High-IOPS loads may come with low read/write throughput, while high throughput may permit only few IOPS. Performance for the same workload should be very similar with each run, perhaps improving slightly as the SSD caching optimizes itself.
What do I observe instead:
Even with no backup process running, Drobo Dashboard sometimes reports more than 600 IOPS, sometimes fewer than 300, and almost always only single-digit MB/s reads and writes during testing. Performance on exactly the same workload varies vastly: the same test completes in about 40 minutes on one run and four hours on the next, under the same system load.
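A back-of-the-envelope check suggests those two Dashboard numbers are at least consistent with each other, assuming the workload is dominated by random reads and writes of InnoDB’s default 16 KiB pages (an assumption on my part; Drobo Dashboard doesn’t report request sizes):

```python
# 600 random 16 KiB operations per second amount to under 10 MiB/s,
# so a few hundred IOPS alongside single-digit MB/s fits a seek-bound workload.
iops = 600
page_bytes = 16 * 1024  # InnoDB default page size
throughput_mib = iops * page_bytes / 2**20
print(f"{throughput_mib:.2f} MiB/s")  # → 9.38 MiB/s
```

In other words, the low MB/s figure by itself isn’t the anomaly; the anomaly is the run-to-run variance.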
What have I tried:
I tried disconnecting the DisplayPort display daisy-chained behind the Drobo, to no avail: I lose a chunk of screen real estate, and it doesn’t impact performance in the slightest.
I tried waiting for several days, repeating my usage pattern so the Drobo would eventually settle on it and start (and finish) optimizing itself. No measurable effect so far.
I tried rebooting the Mac, the Drobo, and both; starting the Mac with and without the Drobo attached; powering up the Drobo while connected and connecting it while already powered up. No luck either way.
I tried to just bypass the problem by giving MySQL a larger buffer. I can’t afford more than 20 GiB of RAM for database buffering, though, and that doesn’t suffice to fit a full test case.
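For reference, capping the buffer looks roughly like this in my.cnf (a sketch assuming a stock MySQL install with InnoDB; the exact file location and section layout depend on the installation):

```ini
[mysqld]
# Cap InnoDB's buffer pool at the 20 GiB of RAM I can spare.
# A full test case does not fit in this, so the Drobo still sees reads.
innodb_buffer_pool_size = 20G
```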
I tried copying the database to the iMac’s local Fusion Drive. Read throughput from the Drobo to local storage started out at ~50 MiB/s, which might match the Fusion Drive’s intake capacity. The biggest table, which is also the most written-to (and thus most fragmented?) file, transferred at around 7 MiB/s (yes, seven megabytes per second; any USB 2.0 thumb drive would perform faster), while the tables after it copied at around 50 MiB/s again. Started from the local copy, the database was much more responsive and much more stable in its performance.
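To pin down which files copy slowly, a minimal sketch of the measurement I did by hand: copy a file while computing its effective throughput. The helper and the paths are hypothetical, not a Drobo or MySQL tool:

```python
import os
import shutil
import time

def copy_with_rate(src, dst, chunk=8 * 1024 * 1024):
    """Copy src to dst and return the effective throughput in MiB/s."""
    t0 = time.monotonic()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        shutil.copyfileobj(fin, fout, chunk)
    elapsed = time.monotonic() - t0
    mib = os.path.getsize(src) / 2**20
    return mib / elapsed if elapsed > 0 else float("inf")

# Example use (paths are placeholders for the Drobo and local volumes):
# for name in sorted(os.listdir(src_dir)):
#     rate = copy_with_rate(os.path.join(src_dir, name),
#                           os.path.join(dst_dir, name))
#     print(f"{name}: {rate:.1f} MiB/s")
```

A file that consistently copies an order of magnitude slower than its neighbors would support the fragmentation suspicion.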
What do I want with this post:
First: to express my disappointment. A device bearing the name “Direct Attached Storage” should not behave like some NAS device on the far end of the country, but like a locally attached hard drive with some perks to it.
Second: to ask for help, user stories, and workarounds. I know caching is hard. I know data redundancy is hard. I didn’t expect it to be so hard as to disqualify the device for anything but large-file storage, though. Are there Drobo 5D users out there with similar experiences? Does anyone know what could be done to increase, or at least stabilize, performance, so I can conduct reliable before-and-after tests that actually measure performance deltas and not device randomness? (I know that filling the device to 95% will stabilize performance very much, but that’s not the kind of stability I want.)
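For those before-and-after tests, a minimal sketch of the timing harness I have in mind, assuming the workload can be wrapped in a single call; run_once below is a stand-in, not the actual rating/billing test driver:

```python
import statistics
import time

def run_once():
    # Stand-in for the real workload; replace with the actual test driver
    # (e.g. a script replaying the MySQL rating/billing queries).
    time.sleep(0.1)

# Run the identical workload several times and report mean and spread;
# a spread that is large relative to the mean is exactly the
# "device randomness" that makes before/after comparisons meaningless.
durations = []
for _ in range(5):
    t0 = time.monotonic()
    run_once()
    durations.append(time.monotonic() - t0)

mean = statistics.mean(durations)
spread = max(durations) - min(durations)
print(f"mean {mean:.2f}s, spread {spread:.2f}s ({spread / mean:.0%} of mean)")
```

Only once the spread is reasonably small would a change in the mean say anything about a tuning change.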