So after reading on here and doing the basic settings/tweaks (jumbo frames, etc.), I thought 15-18MB/sec writes and ~30MB/sec reads were as good as I was going to get.
WRONG.
After reading the recent thread on DroboFS benchmarks and changing buffer sizes, I did some searching…and found some really interesting, useful info.
I followed the “netsh” steps on the following page and made some registry changes, though I haven’t rebooted yet. The “netsh” commands take effect immediately.
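For reference, the netsh tweaks being discussed were along these lines (this is from memory, so double-check against the page itself; run them from an elevated command prompt on Vista/Win7):

netsh int tcp show global (shows the current settings)
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global congestionprovider=ctcp
netsh int tcp set global ecncapability=enabled
netsh int tcp set global rss=enabled
netsh int tcp set global chimney=enabled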
What I also found is that even though I set the MTU to 9000 in the network adapter settings in Windows, this DOES NOT set the MTU for the TCP/IP stack. The stack was still set to 1492! Likewise, the registry entry for the MTU was also 1492.
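One way to see what MTU the stack is actually using (as opposed to what the adapter properties dialog claims) is via netsh; substitute your own interface name for “Local Area Connection”:

netsh interface ipv4 show subinterfaces
netsh interface ipv4 set subinterface "Local Area Connection" mtu=9000 store=persistent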
After making the changes I’m now able to WRITE at 35-40MB/sec and READ at 45-50MB/sec. That’s more than double the write speed I was getting before. I have not changed the send or receive buffer sizes, though. I haven’t figured out how to do that yet.
Anyway, I thought this might help those of you frustrated with DroboFS performance under Windows. It’s also nice to know that it’s not necessarily the Drobo that’s the problem.
I’d be careful about some of those tweaks. I do not believe disabling autotuning has a positive impact on Drobo performance. I should check the CTCP setting; that may help with writes to the Drobo in some instances.
Wanna try disabling SACK on the Drobo and tell me how that impacts your performance?
/sbin/sysctl -w net.ipv4.tcp_sack=0
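If it makes things worse, /sbin/sysctl -w net.ipv4.tcp_sack=1 turns it right back on. Either way the change isn’t persistent across a reboot unless you also put it in /etc/sysctl.conf (assuming the Drobo’s firmware actually reads that file; I haven’t checked).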
I’m curious if the effects I’m seeing are just me or if other people see it as well.
Yeah, I was hesitant about disabling autotuning myself, but after truly setting the MTU to 9000, testing, then running the netsh commands and re-testing (with different, non-cached files), there was a very clear performance increase of about 10MB/sec. I’ve done some other general network testing since applying the settings in the URL above, and so far everything has been positive, with no issues or impact whatsoever.
Holy crap. At first I didn’t think it did much based on my write speed (I got another 1-2MB/sec over the 34-35MB/sec I was getting), but reads are steady at 60-62MB/sec. That gave me a good 10-15MB/sec boost over what I saw after making my Windows changes.
Really? That’s surprising to me. When I disable SACK I’m getting horrifically low throughput. I’ll have to double check some things.
As for disabling autotuning: on your local network that might be fine. If you have a faster connection to the internet, you might find that your download speeds really start to drop off. It’s all about the BDP, the bandwidth-delay product (BW * RTT). If the BDP is larger than your receive buffer, you won’t make maximal use of your network connection.
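To put some rough illustrative numbers on it (these aren’t measurements, just examples): a gigabit LAN with a 1ms RTT only needs about 1,000,000,000 bits/sec * 0.001 sec = 1,000,000 bits, or roughly 125KB of window, while a 50Mbit internet path with an 80ms RTT needs 50,000,000 * 0.08 = 4,000,000 bits, or about 500KB. The second case is where a small fixed receive window (i.e. autotuning off) starts costing you throughput.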
Let me ask: when you started testing, did you test each change you made, or did you test them as a group? As in, you took a baseline with jumbo frames, then tested with jumbo frames and CTCP, then disabled CTCP, disabled autotuning, retested, and so forth. I’d be interested to see what impact the changes had both incrementally and individually.
I tested my internet speeds specifically to make sure they weren’t impacted, and they were as fast as always, actually slightly faster than the previous time I tested. I’m aware of BDP and tune Linux for Oracle RAC as my day job; it’s Windows that’s foreign to me in terms of how to change specific tunables. The max window size that we use for our DBs actually reduces Drobo performance, likely due to the limited amount of memory on the Drobo itself. It’s interesting that disabling SACK hurts your performance. Hmmm…
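For anyone curious, the window-size knobs I’m referring to on the Linux side are the usual sysctls, along the lines of the following (the values here are just placeholders, not a recommendation for the Drobo):

/sbin/sysctl -w net.core.rmem_max=16777216
/sbin/sysctl -w net.core.wmem_max=16777216
/sbin/sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
/sbin/sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"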
I didn’t test the individual netsh commands. I did a baseline, then jumbo frames, then jumbo frames + netsh settings. I’ve since rebooted, and the registry changes I made don’t seem to have made any difference. When I have some time today I’ll go back and test each option/command individually.
The SACK issues are troubling to me as they should only have an impact on a seriously congested network. I actually do network performance tuning, diagnostics, and research for my day job so I’m sort of chagrined at the moment. Also because the group I work for actually developed the auto-tuning concept (though not the implementation used by Windows) as part of Web100.
I’m wondering if my switches suck. I’m going to retest with a direct connection and see what I can find out.
I did have an issue with my switches, specifically the one connecting the Drobo and my PC: I was barely able to break 100Mbit even though all the devices reported being connected at 1Gb. I’ve since replaced it with a Netgear GS108T, which is basically a low-end smart/managed switch. That’s when I got my first boost in speed; all the rest came from what I posted in this thread, along with setting the MTU to 9000 on the Drobo.
FWIW, the switch I pulled out was a D-Link 8-port gig-e non-managed switch.
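In case anyone wants to do the Drobo side too: the standard Linux way over SSH would be something like /sbin/ip link set dev eth0 mtu 9000 (or the older /sbin/ifconfig eth0 mtu 9000). I’m assuming eth0 and SSH access here, and I don’t know whether the FS firmware persists that across a reboot, so treat it as a sketch rather than gospel.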
I have Netgear GS105, GS108 and GS116 switches and highly recommend them, especially since they’re ProSafe, and ProSafe products have a lifetime warranty.
I had a GS108 go bad on me (random packet loss) a couple of years back and got a brand-new-in-box replacement. Not sure if this is the standard practice for their RMAs, but it was a good feeling.
Did a few of the tweaks on my Win 7 box. Disabled autotuning, enabled CTCP, ECN, RSS and chimney offload. Getting 50MB/s read and about 20MB/s write speeds. Using a Netgear GS605 switch, which says it supports jumbo frames. When I bumped up the MTU on the PC’s NIC, I lost connection to the Drobo until I reset the MTU to 1500.
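Next time, before raising the NIC MTU, I’ll check whether jumbo frames actually make it end to end with a don’t-fragment ping; 8972 bytes of payload plus 28 bytes of IP/ICMP headers comes out to a full 9000-byte packet (the address is just an example, use your Drobo’s IP):

ping -f -l 8972 192.168.1.100

If that fails while a plain ping works, something in the path (NIC, switch, or Drobo) isn’t passing jumbos.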
Ya know what, I never actually looked at the server distance in your speedtest report. I’m not surprised you didn’t see a difference with autotuning disabled, given that you probably had very low RTTs. Optimal buffer size = RTT * BW, so with a low RTT you only need a small buffer. My guess is that if you pick a server on the other side of the country you’ll see lower throughput. Then again, maybe not: if the BW is below a certain critical threshold, autotuning won’t make much of a practical difference (depending on the default RWIN, window scale, etc.). A lot of it will also depend on congestion and other issues that are difficult to control for on the public internet. I’ll do a test next week from work, where I have access to a 10Gb uncongested research network (though only 1Gb to my desktop).
Hi, I’m a newb to Drobo; just got my FS yesterday. Is there a sticky or a consolidated thread that lists all the performance-tuning techniques you’re all using? Thanks.