
Testing the Drobo FS performance

I’m a Mac programmer and, frustrated by the Drobo FS performance, today I wrote a simple but accurate test app for the Mac that mimics some of the most common operations: writing, reading, listing, and deleting. It also detects stalls. The Drobo FS was connected directly to a Mac Pro, and I used an AFP share over a 1000BASE-T full-duplex connection.

When an operation takes more than 3 times the average time (computed within the same test), we count it as “stalling” and measure how long the device stopped responding. Writes and reads are all uncached (i.e. they are real requests to the Drobo; the only cache involved is the one inside the Drobo FS).
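To make the stall criterion concrete, here is a rough sketch of the idea in Python (this is not the actual test app, which is a native Mac program; names and structure are illustrative only):

[code]
import time

def timed(operation, *args):
    """Run one operation (write, read, list or delete) and return its duration in seconds."""
    start = time.monotonic()
    operation(*args)
    return time.monotonic() - start

def stall_report(durations, factor=3.0):
    """Count as a stall every operation that took more than `factor` times the
    average duration measured in the same test, and report how much time was
    spent inside those stalled operations."""
    average = sum(durations) / len(durations)
    stalls = [d for d in durations if d > factor * average]
    return {
        "average_s": average,
        "stall_count": len(stalls),
        "stalled_time_s": sum(stalls),
    }
[/code]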

The tests were performed using a MacPro 8x2.66 with 16GB RAM, in a folder on the Drobo FS Public share. The Drobo FS was 92% empty (300GB used out of ~3.7TB available), and I had 4 drives installed: 2TB + 2TB + 1.0TB + 1.5TB.

For comparison, I also tested a remote AFP mount on an iMac, a SATA SSD drive, and a USB 2.0 2TB drive.

Here’s the PDF chart:

http://www.redmatica.com/media/DroboTest.pdf

I will be gentle and won’t comment on the results except for the following two things:

  • Check the stalls and the time needed to list and remove files.

  • By analyzing the data, you can see that the “absurd” transfer speeds you see for reading on the Drobo FS are due to the horrible performance of its inode/folder/whatever-they-call-it access system, not to the transfer speed itself.

Andrea

Wow. Interesting stats there in the attached chart.
ETA:

  • Could you have a nasty/faulty Drobo FS?

  • And I have a question about your script: in which order did it test write, read, and delete? If it was write/read/delete, then the Drobo would give you lots of trouble, I suppose. Not defending Drobo, but your testing could have knocked it on its butt. The Drobo is not just writing each file, but protecting each file.
    This might not be a perfect example: I want to make a fresh copy/backup of my music folder, as I’ve added 5GB to it. I go to the Drobo, delete the old music folder, then copy the new folder over to the Drobo. I have found that performance suffers when I start the copy immediately after the delete. I make sure to empty the trash as well; otherwise the Drobo doesn’t completely release the used space.

It would be nice to have someone run the same benchmark against something in the same league as the FS, such as the Synology DS1511+.

Any volunteers?

Unfortunately I don’t think so, but we are checking with the Drobo support people.

The order was: Write, wait, Read, wait, List, wait, Delete.
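In rough Python form (again, only a sketch of the ordering, not the actual app; the pause length and file size are just examples):

[code]
import os, time

def run_pass(test_dir, file_count=256, file_size=8 * 1024 * 1024, pause_s=10):
    """One benchmark pass: write, wait, read, wait, list, wait, delete."""
    payload = os.urandom(file_size)
    paths = [os.path.join(test_dir, f"test_{i:04d}.bin") for i in range(file_count)]

    for p in paths:                      # write
        with open(p, "wb") as f:
            f.write(payload)
    time.sleep(pause_s)                  # wait

    for p in paths:                      # read
        with open(p, "rb") as f:
            f.read()
    time.sleep(pause_s)                  # wait

    os.listdir(test_dir)                 # list
    time.sleep(pause_s)                  # wait

    for p in paths:                      # delete (a real unlink, not a move to the trash)
        os.remove(p)
[/code]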

Technically, “protecting” doesn’t exist as an operation. What you call “protecting” is just “writing to multiple drives”.

[quote]This might not be a perfect example: I want to make a fresh copy/backup of my music folder, as I’ve added 5GB to it. I go to the Drobo, delete the old music folder, then copy the new folder over to the Drobo. I have found that performance suffers when I start the copy immediately after the delete. I make sure to empty the trash as well; otherwise the Drobo doesn’t completely release the used space.
[/quote]

The trash is used only if you delete from the Finder. The app I wrote and used forces the drive to delete the files for real; it’s not simply moving them to the trash.

Andrea

[hr]

We just got a DS1511+ to use instead of the Drobo FS, as unfortunately we consider the Drobo “unusable” for what we have to do. After using it for a couple of hours, I’m afraid I have to say that the DS1511+ is much better in many areas. I will run and post the same benchmark ASAP.

Please understand that this is not meant as a destructive critique of the Drobo FS. We bought one; it simply doesn’t perform acceptably, and we would do anything to help them fix it.

Andrea

I apologize if I sounded aggressive. I fully agree with you that, as an FS owner, I’d much rather have a clear picture of what the FS can and can’t do than fool myself with marketing claims. But it seems there is something weird going on, since I have tested the storage subsystem, and although the performance is not stellar, it should be good enough to saturate a gigabit interface.

Also, rapier1 ran a test on just the network hardware of the FS, and again, although not breathtaking, it should go up to 800 Mbps.

The mystery is that somewhere in between there is a performance loss big enough to cause benchmark results like yours. It is truly a puzzle. The conclusion so far is that the FS CPU is not fast enough to drive both the storage and network subsystems at once, but that is just a hunch.

My conclusion would actually be that the filesystem used in the Drobo FS (to be more precise, the way folder data is stored) is severely crippled in how it handles folders (or you can call them inodes, metadata, etc.). There is no other explanation for why listing 256 8MB files takes 10x as long (3.7 seconds) as listing 256 4K files (0.3 seconds). In any decent filesystem, the speed of listing a folder should have no relationship to how big the files are. There is no more common operation in an OS than listing files, so this really cripples the Drobo’s speed.
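If anyone wants to reproduce just the listing anomaly, a few lines of Python are enough (the paths below are hypothetical folders on the AFP share, prepared beforehand with 256 8MB files and 256 4K files):

[code]
import os, time

def time_listing(folder):
    """Time one directory listing, fetching names plus basic metadata, as the Finder does."""
    start = time.monotonic()
    names = os.listdir(folder)
    for name in names:
        os.stat(os.path.join(folder, name))
    return time.monotonic() - start, len(names)

for folder in ("/Volumes/Public/bench_8MB", "/Volumes/Public/bench_4K"):
    elapsed, count = time_listing(folder)
    print(f"{folder}: {count} entries listed in {elapsed:.2f}s")
[/code]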

Andrea

My understanding is that BeyondRaid goes back after deleting files and cleans up its indexes. There were some old discussions here indicating, for example, that if you delete huge amounts of data (hundreds of gigabytes or more), the space is not immediately available after the delete but becomes available slowly over time. There were also discussions about furious disk activity after deleting large amounts of data.

All those discussions were in the context of actually deleting files, and not just moving them to a recycle bin or trash folder.

I’m not suggesting that the test is not valid in terms of real-world performance, but it may be impacted by the deletes. It would be easy to test this by eliminating the deletes from the script and seeing if the read/write performance improves.

[quote=“NeilR, post:7, topic:2374”] I’m not suggesting that the test is not valid in terms of real-world performance, but it may be impacted by the deletes. It would be easy to test this by eliminating the deletes from the script and seeing if the read/write performance improves.
[/quote]

I just repeated the tests on the Drobo FS without the “delete” step. In this case, each test created its files in its own separate folder.

http://www.redmatica.com/media/DroboTest2.pdf

There is no significant difference in the quality of the results (i.e. the problems I’m talking about are not in the ±20% range).

Check again how listing a folder sometimes takes more time than writing the files in that folder (!), how reading is significantly slower than writing, and how listing bigger files takes 10x longer than listing the same number of (smaller) files.

Note also that there is no caching at all of the metadata for the files you just created. That is, the test app creates an empty folder and then creates 256 files in it. Any competent filesystem would keep the metadata (not the data!) for the last few hundred files created in RAM.

This is not happening at all with the Drobo FS. List those 256 files you just created from scratch, and the Drobo will take an eternity (3.5 seconds) to tell us those files are on disk.
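The effect is easy to check with a short Python sketch (illustrative names and sizes only): create a fresh folder, fill it, then list it immediately. If the server cached the metadata it just wrote, the listing would come back in milliseconds rather than seconds.

[code]
import os, time

def create_then_list(test_dir, file_count=256, file_size=8 * 1024 * 1024):
    """Create a fresh folder full of files, then immediately list it."""
    os.makedirs(test_dir)
    payload = os.urandom(file_size)
    for i in range(file_count):
        with open(os.path.join(test_dir, f"file_{i:04d}.bin"), "wb") as f:
            f.write(payload)

    start = time.monotonic()
    names = os.listdir(test_dir)
    print(f"listed {len(names)} just-created files in {time.monotonic() - start:.2f}s")
[/code]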

The tests I’m running are not edge cases. When you back up 2TB to the Drobo with OS X Time Machine, 250,000 files of 8MB each will be created. Needless to say, the process never reaches the end, as the Drobo FS stops responding halfway through. The same goes if you back up a mail server or big mailboxes… many small files, etc.

Andrea

Thanks for re-testing, Andrea!

Andrea,

Thanks for this.

I have also noticed that deleting files is very, very slow. My “solution” has been to FTP in (I have the pureftpd Drobo app running) and delete from an FTP client. It is still slow, but at least it feels faster than via Samba. Writing many (tens of thousands of) small (<1MB) files is also much faster with FTP than with Samba, and others have stated that SMB is extremely inefficient with small files because of the overhead imposed by the protocol (especially in combination with what might be a poor smbd implementation on DRI’s part).

Any chance you could programmatically test read/write/list/delete performance over FTP (SMB and NFS would be sweet too), so we’ll get a good comparison across protocols?
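Something along these lines with Python’s ftplib would probably be enough (host, credentials, paths and sizes below are just placeholders):

[code]
import io, time
from ftplib import FTP

def ftp_pass(host, user, password, directory, file_count=256, file_size=4 * 1024 * 1024):
    """Rough write/read/list/delete pass over FTP, timing each phase."""
    payload = io.BytesIO(b"\0" * file_size)
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(directory)

        start = time.monotonic()
        for i in range(file_count):                  # write
            payload.seek(0)
            ftp.storbinary(f"STOR test_{i:04d}.bin", payload)
        print("write:", round(time.monotonic() - start, 1), "s")

        start = time.monotonic()
        for i in range(file_count):                  # read
            ftp.retrbinary(f"RETR test_{i:04d}.bin", lambda chunk: None)
        print("read:", round(time.monotonic() - start, 1), "s")

        start = time.monotonic()
        ftp.nlst()                                   # list
        print("list:", round(time.monotonic() - start, 1), "s")

        start = time.monotonic()
        for i in range(file_count):                  # delete
            ftp.delete(f"test_{i:04d}.bin")
        print("delete:", round(time.monotonic() - start, 1), "s")
[/code]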

Tusse.

Is it possible to make the test available to everyone, so we can test our Drobos as well?

Honestly, I’m not that surprised by your results. I’m pretty sure the Drobo FS is significantly limited by the processing power available to it. While it can perform adequately in terms of networking, you really do see some strange behaviour at the upper end of the tests. Likewise, file performance is strangled by I/O. When you mesh the two, you end up with a machine that is adequate for home use but doesn’t really fit the bill for much more than that. I wouldn’t view the Drobo as a NAS so much as a network-attached backup device, one that is best suited to handling single streams of data. That, for me, is adequate but not something I’d use in a production environment.

Either way, it would be great if you posted the test suite so we can validate the results.

Anecdote:
With 10GB or larger folders, I see steady performance writing to or reading from my Drobo: 14-22MB/s depending on whether the files are audio or video. But if I stop the transfer for any reason and then try to start it over, I see performance similar to that described in the OP’s testing. “Stalls” would be the best term. The Windows throughput indicator goes all over the place, from 0KB/s to 3MB/s to 25KB/s. I can see the weird performance in Task Manager when this happens, too. If I go away for a while, come back hours later, and then start my transfer, it works smoothly again.

I do not plan to release the test app… it’s not polished enough for my taste, and I just wanted to document the issues, not establish a new way to benchmark things.
However…

I found a technique that overcomes most of the Drobo’s limitations when used with Macs and brings it back into “acceptable” territory.

Let’s say you have 6TB free on the Drobo FS:

  • format the Drobo FS

  • create 3 disk images (sparseimages, not sparsebundles!) of 2TB each on the Drobo. As they are empty, they will start as small files (1GB each).

  • mount the Drobo, then mount those 3 images on a designated Mac, and share the 3 mounted disk images with the rest of the network.

  • you’ll access the Drobo from the other networked machines through that Mac.

What happens:
The Drobo FS will be happy and reasonably fast because, from its point of view, it only has to manage 3 big files that are gradually expanding (something the Drobo handles quickly).

All the metadata/folder handling and caching will be done by the Mac, so you get fantastic performance with small files, plus full HFS+ and Time Machine compatibility (as the mounted disk image is 100% HFS+). From the Drobo’s point of view, you are never really deleting anything or creating any folders; it’s just a stream of reads and writes, since everything happens inside the disk image.

It’s just a shame that the Drobo FS can’t handle files larger than 2TB; otherwise I’d use a single disk image.
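For anyone who wants to script the setup, it boils down to something like this (hdiutil driven from Python; the share path, volume names and image count are only examples):

[code]
import subprocess

DROBO_SHARE = "/Volumes/Public"   # the mounted Drobo FS share (example path)
IMAGE_SIZE = "2t"                 # stay at the Drobo's 2TB-per-file limit

for i in (1, 2, 3):
    name = f"DroboVol{i}"
    # Create a growable sparse image (not a sparsebundle) on the Drobo share.
    subprocess.run(
        ["hdiutil", "create", "-size", IMAGE_SIZE, "-type", "SPARSE",
         "-fs", "HFS+", "-volname", name, f"{DROBO_SHARE}/{name}"],
        check=True,
    )
    # Mount the image on the designated Mac; the mounted volume can then be
    # re-shared with the rest of the network via File Sharing.
    subprocess.run(
        ["hdiutil", "attach", f"{DROBO_SHARE}/{name}.sparseimage"],
        check=True,
    )
[/code]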

Andrea

I wonder if you can achieve the same result using TrueCrypt.

The advantage of using TrueCrypt is that it is a more cross-platform solution to the problem.

Unfortunately, I work in a mixed environment. Personally, I’m getting “good enough” performance, 25-40MB/s, with my personal mix of files and needs. While I wish it were better, this is close enough for me. In retrospect I should have built my own box, but there really wouldn’t have been that much of a cost saving, and it would have meant significantly higher overhead in terms of time invested.