I have an 800fs that I purchased to do rsync synchronization of our server filesystems. I used to do this with an OS X server that had a DAS. I pulled the drives (Seagate Constellation 2TB), put them in the Drobo, installed Dropbear and rsync, and set up a push scheme in which each server updates its own file tree on the Drobo via rsync. It all seems to work, with the exception of huge directories (of which we have many). Any time the Drobo side is asked to sync one of those directories, it takes hours rather than minutes. The servers have never had this problem syncing to one another.
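For reference, each server's push is essentially a one-liner along these lines (the paths, share name, and hostname here are placeholders, not my real layout):

    rsync -a --delete -e ssh /data/projects/ root@drobo:/mnt/Shares/backup/projects/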
I’ve tested locally on the Drobo to ensure the problem has nothing to do with the network or with interaction between the servers, and it does appear to be isolated to the Drobo. For example, it takes 36 minutes to do an ls | wc -l on a huge directory (256,841 files). The same directory took 10 seconds on the server it came from. The servers all have 15,000 RPM SAS drives in Promise enclosures, so I expect them to be significantly faster, but nowhere near that much faster. I also had the same Seagates in a Promise DAS and it was never that bad, even over FireWire.
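The timing test was just this, run in the same directory on both machines (the path is a stand-in):

    cd /path/to/huge-directory
    time ls | wc -l    # ~36 minutes on the Drobo, ~10 seconds on the source server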
It seems that local filesystem access on the Drobo takes a serious hit from huge directories, far worse than OS X or FreeBSD does. Is this normal?
Extra facts
The Drobo is running a 9000-byte MTU on a new Cisco gigabit switch I bought for this purpose, and there is no traffic on that network other than the backup traffic between the servers and the Drobo. The servers all have secondary NICs connected directly to the same switch. One thought I had: the Drobo has no internet access; it sits on an isolated network, so it has neither DNS nor a time service. I have seen UNIX systems bog down badly without DNS access, but that was usually traced to sendmail. In the Drobo's case, it just seems to have very sluggish filesystem access in general.
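If DNS does turn out to be a factor, it should be easy to take it out of the equation with static host entries on the Drobo rather than adding a resolver to the isolated net; roughly like this (names and addresses invented for illustration):

    # /etc/hosts on the Drobo -- static entries so nothing has to wait on a resolver
    127.0.0.1      localhost
    192.168.10.1   drobo
    192.168.10.10  server1
    192.168.10.11  server2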