I purchased a Drobo 800fs to replace the USB DAS I had been using between my servers via rsync (the DAS was attached to a Mac mini, which synced from my Xserves). I installed the same Seagate Constellation 2TB drives I had been using in the DAS. I then discovered that the box does not support all the protocols I'd been led to believe it did (in particular, rsync is not supported by the company). I tried to cobble together a reasonable backup solution using the workstation protocols (AFP, SMB) and found the box far too slow. I installed the apps to set up rsync and almost got there, but performance was still terrible on any large directory. With my original server-to-server rsync solution, I synced all servers every two hours; with this unit I can sync once a day at best, and I often get errors on the Drobo side that cause the sync to fail and force it to start over.
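For context, the old setup was roughly the following sketch (hostnames and paths here are placeholders, not my actual configuration):

```
# Hypothetical sketch of the old server-to-server sync: a cron job on
# the Mac mini pulled from each Xserve every two hours over SSH.

# crontab entry (top of every even hour):
#   0 */2 * * * /usr/local/bin/sync-servers.sh

# sync-servers.sh (placeholder hosts/paths):
#!/bin/sh
for host in xserve1 xserve2; do
    rsync -a --delete -e ssh "$host:/Volumes/Data/" "/Volumes/DAS/$host/"
done
```

That ran reliably for years, which is why the Drobo's behavior with the same workload surprised me.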
Am I beating a dead horse here? Does the Drobo simply lack the capability to handle this type of application? I suspect its handling of huge directories is the problem. Logged in as root via Dropbear, a simple "ls | wc -l" on a 600,000-file directory takes over 45 minutes; the same lookup takes about 10 seconds on any of my servers. With that kind of issue, rsync can't possibly work on it. It doesn't appear to be a network problem so much as a local performance issue on the Drobo itself.
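For what it's worth, part of the listing slowness may be "ls" itself: by default it sorts every entry before printing anything, which hurts on huge directories. One way to separate that from raw filesystem speed is to compare a sorted listing against an unsorted one ("ls -f"). The directory and file count below are a small stand-in for my 600,000-file case, just to show the commands:

```shell
# Build a small stand-in directory (1,000 files instead of 600,000).
dir=$(mktemp -d)
for i in $(seq 1 1000); do : > "$dir/f$i"; done

# Plain `ls` sorts every entry before printing.
sorted=$(ls "$dir" | wc -l)

# `ls -f` skips sorting and lists dotfiles, including . and .. (hence +2);
# on a slow appliance filesystem the wall-clock difference can be large.
unsorted=$(ls -f "$dir" | wc -l)

echo "sorted=$sorted unsorted=$unsorted"
rm -rf "$dir"
```

If "ls -f" (or "find dir -maxdepth 1 | wc -l") is dramatically faster on the Drobo, the bottleneck is sorting/stat overhead rather than the network, which would match what I'm seeing.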
If you have any ideas, please let me know. I don’t want to give up on the unit if there is any hope.