Drobo

Gigabit slowness

Hello all,
Well, I have had a problem with my new Drobo FS since it arrived. I’m running an 8-core Mac Pro with 18 GB of memory, connected to either an Apple AirPort Extreme or a gigabit TRENDnet switch. The Drobo has four brand-new Seagate 2 TB drives installed, and over the last week I painfully copied about 2 TB of data.

Both the Mac Pro and the Drobo are set to MTU 9000 (jumbo frames), and my transfer speed via Finder or the Dashboard is around 5 MB/s or even less.

If I connect the Drobo to the Apple router, set the jumbo frames to 8182 (which is the max for the router), and set the Mac Pro to automatic (letting the network tell the Mac what the speed is), I get about 20 MB/s. Which is better, but still nothing like what even MTU 8182 should get me.

On the gigabit switch, 5 MB/s if not worse. I opened a case yesterday and have not heard anything, so at this point I’m trying to copy my data back so I can pull the drives and return this. I have also bought new Ethernet cables and tried several things to speed up the connection. Mac to another Mac at MTU 9000, via either the Apple router or the gigabit switch, I’m running about 60-70 MB per second, so I know the Mac, the router/switch, and the cables are fine.

Case Number
101011-000088

The FS isn’t going to be faster than your Macs - no way - unless your Macs are really, really, really slow.

How long has it been between populating the FS with drives and the start of your data copy?

Check that your switch supports jumbo frames and has sufficient buffering and flow control to handle near line rate. I don’t know TRENDnet at all. The Apple router might be problematic with jumbo frames; I’ve read mixed results from others. I don’t use jumbo frames myself because Apple was tight-arsed about using a chipset or driver to support them in the latest iMac. In any case, jumbo frames won’t boost your throughput substantially on any box; they mostly reduce demand on the CPU. If they do appreciably affect throughput on the Drobo FS, that only indicates the FS is CPU bound.

Here’s one (possible) way to test this. Can you connect the FS directly to the Mac Pro? If you can, and your transfer speeds are still slow, then you’ve eliminated the Apple AirPort and the TRENDnet from the equation. And if you’re using good-quality cables (Cat6 preferred for GigE), then it’s down to the FS or the Mac Pro. On the other hand, if your speed goes up considerably, then it’s either your switch or the Apple AirPort (or both) that are the bottleneck.
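If you want to take the disks out of the picture entirely and measure the raw network path, a bare TCP throughput test between two machines does the job. This is only a sketch (my own code, not a Drobo tool, and the FS itself can’t run it - so use it Mac-to-Mac or Mac-to-PC over the same cable and switch):

```python
# Minimal TCP throughput test: run receive() on one machine, send() on the
# other. It measures the network path alone, with no disks involved.
import socket
import time

CHUNK = 64 * 1024  # 64 KiB per send/recv call


def receive(host="0.0.0.0", port=5001):
    """Accept one connection and count the bytes received."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    total = 0
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        total += len(data)
    conn.close()
    srv.close()
    return total


def send(host, port=5001, megabytes=100):
    """Send `megabytes` of zeros and return the throughput in MB/s."""
    payload = b"\0" * CHUNK
    sock = socket.create_connection((host, port))
    start = time.time()
    sent = 0
    target = megabytes * 1024 * 1024
    while sent < target:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
    elapsed = time.time() - start
    return sent / elapsed / (1024 * 1024)
```

If Mac-to-Mac runs near 100 MB/s over the same cable and switch while anything-to-Drobo stalls at 5 MB/s, the infrastructure is exonerated.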

I have to disagree with this statement. If the DroboFS implements file transfers even remotely efficiently (which, since it uses a Linux kernel, I assume it does), larger MTU should have a significant impact on network throughput.

The reason is simple: sending any data over the network has associated overheads (from the context switching of the file server process to the kernel, all the way down to being able to grab the physical layer without collision).

However, the bulk data transfer should not be one of those overheads. Most Linux servers minimize the number of in-memory copies, and whenever possible just perform zero-copy transfers directly from disk to the network socket.
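For illustration, this is the kind of zero-copy path a Linux file server can use via the sendfile(2) syscall: the kernel moves bytes from the page cache straight to the socket, never copying them into user space. (Whether the DroboFS firmware actually does this is an assumption; this sketch just shows the mechanism.)

```python
# Sketch of a zero-copy file send over a connected socket using sendfile(2).
import os
import socket


def send_file_zero_copy(sock: socket.socket, path: str) -> int:
    """Send an entire file over `sock` via os.sendfile; return bytes sent.

    The kernel transfers data directly from the file to the socket,
    avoiding copies through a user-space buffer.
    """
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            # os.sendfile returns how many bytes the kernel actually moved.
            sent = os.sendfile(sock.fileno(), f.fileno(), offset, size - offset)
            if sent == 0:
                break
            offset += sent
    return offset
```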

Therefore, if you configure for larger MTU, you are automatically reducing the ratio between the overhead and the payload. In other words, you’ll spend (proportionally) less time managing the transfer and more time doing the transfer.

Unless the server implementation is awfully inefficient, this overhead should never be the bottleneck, especially if there is no encryption going on (e.g., SSL or SSH tunnel). Since the OP was talking about straight LAN transfers, I assume there was no encryption involved. In my opinion, in an ideal case the bottleneck will always be at the disks and/or filesystems.

That being said, the DroboFS is quite the black box. For instance, I would say that the type and size of files being copied should have a severe impact on the performance. It is quite a well-known fact that copying a large number of small files will always be worse than copying one single big file. That is true for any system, not only the DroboFS.
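You can demonstrate that per-file overhead locally in a few lines: copying many small files pays for an open, create, close, and directory update per file, while one big file of the same total size pays those costs once. A quick (hypothetical, local-disk) timing sketch:

```python
# Demonstrate per-file overhead: time copying 1000 small files vs. one big
# file of the same total size. The small-file copy is normally slower
# because of per-file metadata operations, not raw data throughput.
import os
import shutil
import tempfile
import time


def make_files(directory: str, count: int, size: int) -> None:
    """Create `count` files of `size` bytes each in `directory`."""
    for i in range(count):
        with open(os.path.join(directory, f"f{i:04d}"), "wb") as f:
            f.write(b"\0" * size)


def copy_tree_timed(src: str, dst: str) -> float:
    """Copy a directory tree and return the elapsed wall-clock seconds."""
    start = time.time()
    shutil.copytree(src, dst)
    return time.time() - start


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as root:
        small_src = os.path.join(root, "small")
        big_src = os.path.join(root, "big")
        os.mkdir(small_src)
        os.mkdir(big_src)
        make_files(small_src, count=1000, size=64 * 1024)    # 1000 x 64 KiB
        make_files(big_src, count=1, size=1000 * 64 * 1024)  # 1 x 64000 KiB
        t_small = copy_tree_timed(small_src, os.path.join(root, "small_copy"))
        t_big = copy_tree_timed(big_src, os.path.join(root, "big_copy"))
        print(f"1000 small files: {t_small:.3f}s, one big file: {t_big:.3f}s")
```

Over a network share the gap gets much worse, since every metadata operation becomes a round trip to the server.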

The way the share is mounted remotely also has a strong impact on performance. Since the OP did not mention which protocol he/she is using, we can’t know for sure (my DroboFS supports SMB, AFP and NFS). With NFS you can specify whether or not you want file writes to be synchronous, i.e., the client waits until the server confirms that the data was indeed written to disk. That option will bring any disk performance measurement to its knees, and it is not really necessary on a Drobo device. I haven’t checked for SMB and AFP, but I am pretty sure it is something you can configure there as well, most likely on the client side.

Finally, I know that some users have Firefly installed on their DroboFS. If I remember correctly, that app indexes all the media files on the DroboFS, which means concurrent disk access, which means terrible performance drop.

So, in summary, we need to know: what kind of files and the average size, the network protocol being used, and if any other (file-indexing) apps are installed.

Hello all,
Thanks for the replies. In a comparable environment, my friend has a better switch but also has a Synology NAS, and he’s pulling around 70-125 MB/s. Not trying to compare devices - just trying to get some baselines for the speeds that any normal NAS should be pushing through a gigabit network. He gets the same speeds without his switch, so he is as confused as I am.

But to answer some of the questions: I built a Windows 7 box and tried NFS and SMB; neither seems to be any better than AFP from the Mac. Yes, the TRENDnet switch is a cheap, non-managed switch with no caching. The Apple router seems to be the only way I can get speeds up to 20-25 MB/s. Port to port from the PC or Mac to the Drobo, same speed or less, dropping down to 5 MB/s.

I tried PC to Mac and was getting a solid 120 MB/s. Using the switch I got the same speeds. Seems like the Drobo is lacking the ability to transfer at high speeds. Of course I’m running jumbo frames at 9000 in all of these tests, except for the Apple router, where I have to bump down to 8182.

The files range in size, but most were 30-40 GB each: Blu-ray rips from my DVD collection to put on the NAS for DLNA viewing. Also photo galleries and music videos that range from 1-50 MB per file. I have turned off indexing of files on Windows; on the Mac I left it on, but even after the indexing is complete the speed is still 20-25 MB/s, with bursts of 30 MB/s. And in the middle of the transfer you will see packets drop and then start back up again. This only occurs on the Drobo, not on any other device I have in the environment.

So it concerns me that maybe the Drobo has a bad NIC or is having issues. But in most of the reports I have read about this, the best speed is 40 MB/s on the Drobo. Is this true?

In normal usage I would not be pushing that many files, but trying to populate the Drobo with 4 TB of data is painful, and now having to move it back to hard drives is just as bad. Watching videos or movies is fine, even watching several at a time… but my FireWire 800 can do that.

I really had high hopes for the Drobo, but the further I research, the more I find there is something lacking in its speeds, and I do not understand why. I have no information on the CPU of the Drobo or the amount of memory it has, so I can’t tell whether that is where it’s lacking, but the PC is a quad-core with 8 GB of memory and of course the Mac is an 8-core with 18 GB of memory.

I still have not gotten any replies from tech support concerning the case I opened.

But thank you all for your help

OK, that is a serious symptom. I can see the DroboFS peaking at 40 MB/s, since that is the number we see from others in this forum. I think this is somehow related to the fact that Drobos do not use a straight-up mirroring or striping algorithm.

In fact, it is not surprising that a simple RAID device is able to get close to the theoretical maximum of a gigabit network (125 MB/s), since there are very capable hardware solutions for handling RAID.

My experience with DroboFS CPU usage is that even under heavy load it gets nowhere near 100% for file transfer. Most of the time it remains well under 20%.

If you are really losing packets, then something really weird is going on. I would probably try to get it replaced, but if you are curious you could try to get SSH access and have a look at CPU usage during transfer using “top”.
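For what it’s worth, the busy percentage `top` reports is just derived from two samples of `/proc/stat`. The sketch below (my own code, not a Drobo tool; it runs on any Linux box, and on the FS only if you have shell access) shows the calculation:

```python
# Compute CPU busy % the way `top` does: sample /proc/stat twice and
# compare the per-state tick counters.
import time


def read_cpu_times():
    """Return the aggregate CPU time counters from the first /proc/stat line.

    The line looks like: "cpu  user nice system idle iowait irq softirq ..."
    """
    with open("/proc/stat") as f:
        fields = f.readline().split()
    return [int(x) for x in fields[1:]]


def busy_percent(sample1, sample2):
    """CPU busy percentage between two samples (idle = idle + iowait)."""
    deltas = [b - a for a, b in zip(sample1, sample2)]
    total = sum(deltas)
    idle = deltas[3] + (deltas[4] if len(deltas) > 4 else 0)
    return 100.0 * (total - idle) / total if total else 0.0


if __name__ == "__main__":
    s1 = read_cpu_times()
    time.sleep(1)
    s2 = read_cpu_times()
    print(f"CPU busy: {busy_percent(s1, s2):.1f}%")
```

If that number sits well under 100% during a slow transfer, the box isn’t CPU bound and the bottleneck is elsewhere.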

Ricardo…frame overhead really only makes a big difference the smaller the frame is since the headers are a fixed length. While a 9k frame is six times larger than a standard 1500 byte frame you’re only really looking at an efficiency increase of 2% (97.5% to 99.6%). It seems like a lot but isn’t as far as data throughput goes.
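Those figures check out arithmetically, assuming the fixed per-frame wire overhead of Ethernet (roughly 38 bytes: preamble 8 + header 14 + FCS 4 + interframe gap 12; IP/TCP headers would shave off a bit more, but the comparison barely changes):

```python
# Verify the frame-efficiency figures: efficiency = MTU / (MTU + overhead)
# with a fixed ~38 bytes of Ethernet wire overhead per frame.
ETHERNET_OVERHEAD = 8 + 14 + 4 + 12  # preamble, header, FCS, interframe gap


def wire_efficiency(mtu: int) -> float:
    """Fraction of wire time spent on payload for a given MTU."""
    return mtu / (mtu + ETHERNET_OVERHEAD)


for mtu in (1500, 9000):
    print(f"MTU {mtu}: {wire_efficiency(mtu):.1%}")
# MTU 1500 comes out around 97.5% and MTU 9000 around 99.6% -- the ~2%
# gain quoted above.
```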

Flst2000…if you’ve connected the Drobo directly to the PC and are still getting 5 MB/s, then I’d say that something is definitely wrong with the FS. Or the cable. Not sure what a “real world” transfer speed for the FS would be, but 40 MB/s doesn’t seem like asking too much. I get about 24 MB/s from a FireWire 800 2nd-gen Drobo and guesstimate I’d get 36 MB/s from an S. My QNAP, running normal 1500-byte frames (dang you, Steve Jobs) through my AirPort Extreme (locally), runs at about 70 MB/s, so the infrastructure should be able to handle it.

[quote=“Buzz_Lightyear, post:7, topic:1906”]
Ricardo…frame overhead really only makes a big difference the smaller the frame is since the headers are a fixed length. While a 9k frame is six times larger than a standard 1500 byte frame you’re only really looking at an efficiency increase of 2% (97.5% to 99.6%). It seems like a lot but isn’t as far as data throughput goes.[/quote]

I was not considering only the overhead at the datagram level. If you take into account, for instance, the number of interrupts/sec that need to be handled, it could be the difference between overloading the DroboFS bus or not.

My colleagues at work have noticed that if you just use larger datagrams (without even messing with the MTU) you get a significant performance gain despite the fragmentation at the network level, since there are not as many kernel interrupts. That was on AMD Opteron HP blades, so no CPU problems there.

But I digress. This is only wild speculation about the OP’s situation, and since he seems to have severe packet loss the problem is likely to be at the hardware level.

Ricardo…I think we’re saying the same thing in two different ways. Jumbo frames do help with CPU load, which is what you allude to in your last post. If you are CPU bound then it will make a difference. If not it won’t do much for throughput.

I’ve recently gone through the same thing with mine. Very slow, but only on some files. The outcome of a couple of rounds of troubleshooting and tests from support was to “change the Ethernet cable”, but since it was only slow on some files, and not on others, I haven’t done that yet.

I have noticed that speed does improve a bit after the Drobo has some downtime (no reads or writes), and I read somewhere (in the Drobo FAQ) that they don’t function well with 24/7 access, so that may be the problem. My guess is that it re-optimizes the layout in the background; I’ve also seen people on the forum mention that they’ve heard the drives going at night when no one was accessing them.

Hey all,
Thanks for the help and support, but at this point I have returned the Drobo… Tech support suggested that one of the new drives was the cause of the issue. I removed it and ran a diagnostic from Seagate, as it was a brand-new drive. The drive came back clean, no errors. Once the lights turned green I started a copy. Again I swapped out Ethernet cables and the switch/router, with no improvement. Speeds on the same data were 20-30 MB/s. I even tried smaller and bigger files to see if anything would change… Seemed like I was locked to that speed no matter what; 30 MB/s seems to be the fastest I could go.

So I picked up a DS1010+, and using the same drives, cables, switch/router, and data, I’m getting 70-120 MB/s. So there must be something wrong with the network device in the Drobo, or at least in the one I had.

Again thank you all for your help.

Keep an eye on that ‘suspect’ drive. I recently got a brand-new drive that was flaky; I had to RMA it. Then again, it seems this recent machine build was cursed to begin with - bad RAM and a bad drive.

Anyway, the point is, the diagnostic utilities the manufacturers provide are good, but they generally don’t report correctable errors (indicating a drive going bad) and don’t fail the drive until it has uncorrectable errors (indicating a drive that has already gone bad).

[quote=“flst2000, post:5, topic:1906”] and of course the MAC is a 8 core with 18gigs of memory.
[/quote]

Of course, how plebeian to use anything else. :wink: