5N2 port bonding with Netgear R8500 and drive migration issue.

I just got my 5n2 and have some issues.

#1 I updated the firmware on both units and moved my 5N drives over. The 5N2 works, but it says it is in rebuild mode with 49 hours to go. The data is accessible, but I did not expect rebuild mode. I installed the drives in the same order and everything.

#2 I have enabled bonding on the 5N2 and it shows the same IP address for both ports, but my Netgear R8500 router says link aggregation is inactive. I am using the correct two ports on the router and have aggregation enabled. Does the Drobo not use the same protocol? The bonding was the reason I upgraded, so my feelings are very mixed right now.

Any ideas?

Seems the Netgear R8500 wants me to configure the device as 802.3ad. Is that the default? How do I set the teaming mode?

hi sbushman, can you remember how much data you had on the drobo before the migration?
am not sure about the rebuild (is there a possibility that a drive had not fully spun down and was ejected shortly after powering off the 5n?)

i guess more importantly, did your data and shares come back up soon after powering up the 5n2? (if so, it may just be that a ‘regular’ coincidental hard drive issue was observed by the 5n2 and it may just be doing its usual protection features for you)?

btw you may have seen this already, though there is a page here (with some more links too) that could help a bit? :

The shares are there and accessible, but after 8 hours the rebuild is still reporting 19, then 21, then 29, then 21 hours remaining. (It keeps shifting.)

The Drobo 5N was off when I removed the drives, so they were spun down. The new Drobo 5N2 was updated and rebooted before I inserted the first drive. It took a little while for the drives to be detected; the last one took the longest.

I had forgotten to bring over the hot cache SSD, so I powered down and installed it. Powered up again and still rebuilding. My wife will shoot me if her artwork is compromised! lol. Plus I am a photographer and all my work from the past 3 years is on there.

As for port bonding, is this thing using 802.3ad like my Netgear router wants? Seems the Synology, QNAP, and ReadyNAS all have selectable modes for this. The Drobo does not.

thanks for more info,
am not too sure about the port bonding, but the value for remaining rebuild time can fluctuate (though maybe not as much as windows explorer bars).

the general base value i usually take is about 1 day per 1TB of data that is on the drobo. (accessing the drobo can slow the rebuild down a bit, though newer models might be quicker to rebuild than that) - its good that you can still access things, and if you bear in mind how much data you have on your drobo, it should hopefully finish in about that much time at 1 day per TB. please let us know how things go
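The "1 day per TB" rule of thumb above can be sketched as a tiny estimator. This is purely illustrative (the function name and the 24-hours-per-TB default are my assumptions from the post, not anything official from Drobo), and actual rebuild speed varies with drive speed, model, and how busy the array is:

```python
# Rule-of-thumb rebuild time estimator based on the "about 1 day per 1TB
# of data" figure from the post above. Hypothetical helper, not a Drobo API.

def estimate_rebuild_hours(data_tb: float, hours_per_tb: float = 24.0) -> float:
    """Estimate rebuild duration in hours for a given amount of stored data."""
    return data_tb * hours_per_tb

print(estimate_rebuild_hours(2.0))  # 48.0 -> about 2 days for 2 TB of data
```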

Rebuild completed yesterday and the Drobo is humming along, but I'm still not sure about the bonding. Anybody else have it working? What kind of router/switch are you using?

I too have an issue with the bonding. I’ve opened a support ticket and will post the outcome.

I’m using an Edge Router Lite with a Dell Power Connect 5548 switch.

With the bond turned on, I get 80% packet loss and latencies above 250ms when pings finally happen. When I disable the bond, all returns to normal. I even tried LACP/LAG on the switch, but that resulted in 100% loss. I didn't expect that to work.

sbushman18, I’m curious what your specific issues are. When I configure aggregation on my switch, the Drobo is totally inaccessible. Even in a port-channel, with 1 NIC disconnected it works fine, but when both are used in “bond” mode, it’s useless.

I have a feeling this “bond” idea isn’t fully baked yet. Still waiting on support to reply.

EDIT: I have resolved my particular issue. It seems that the Drobo has no knowledge of LACP or LAGs in the typical network sense. Once I set my port-channel to STATIC instead of LACP, I was able to get both ports working. Also, it’s my understanding that there is no switch configuration needed in most environments, so I would try turning off aggregation on your R8500 and see what happens. Hope that helps.
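For anyone wondering why STATIC works while LACP doesn't: in a static LAG there is no negotiation protocol at all; both ends just hash each frame's addresses onto one of the member links. The sketch below is a toy illustration of that idea, assuming a simplified layer-2 transmit-hash policy (XOR of the last MAC octets); it is not Drobo's actual algorithm, which isn't documented:

```python
# Toy illustration of static LAG link selection: no LACP negotiation,
# just a deterministic hash of each frame's MAC pair onto a member port.
# The hash policy here is a simplified assumption, not Drobo's real one.

def select_member_port(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Deterministically map a src/dst MAC pair to one LAG member port."""
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % num_links

# A given client/NAS pair always lands on the same link (frames stay in
# order); different clients can spread across different links.
print(select_member_port("aa:bb:cc:dd:ee:01", "aa:bb:cc:dd:ee:10", 2))  # 1
print(select_member_port("aa:bb:cc:dd:ee:02", "aa:bb:cc:dd:ee:10", 2))  # 0
```

This also shows the limit of bonding for a single client: one flow always rides one link, so a lone desktop still tops out at gigabit speed; the aggregate only helps with multiple simultaneous clients.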

Thanks a lot for that, JR76. I disabled aggregation and just did two test transfers, one in each direction, and got pretty much 110+ MB/s sustained. The original 5N used to peak around there but was closer to 70-80 MB/s sustained, so I’m seeing much better (closer to the theoretical max over gigabit) speed now.

I may just get a new managed switch and move all of my devices over to it. I have 20 devices split between three 8-port unmanaged switches and the router, which is not ideal. In fact, the last 3 ports on the router share 1 Gb/s of bandwidth between them.

I am thinking I could use the aggregate ports on the router to go to a 24 port managed switch, and then hook everything else in there.

Anything to gain by going with jumbo frames? Or will I cause my streaming clients grief by doing that?

Great news! Glad you got it working!

My philosophy on jumbo frames has always been that they aren’t needed unless it’s iSCSI storage traffic (think iSCSI datastores in VMware) or 10G (or greater) Ethernet. In your network, jumbo frames are just an added level of complexity with no gain. And keep in mind, anything that would use the Drobo, assuming jumbo frames, would also need to support an MTU of 9000 or greater, which further complicates things compatibility-wise, especially if you are hanging a WAP off your switch/router with wireless clients accessing the Drobo.

Of course, there isn’t anything wrong with doing jumbo frames. It’s just an added layer of complexity that I wouldn’t want at home. If you try it and you get faster/better results, please share!
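To put a rough number on the "no gain" point: comparing TCP goodput efficiency at the standard MTU of 1500 versus a jumbo MTU of 9000 (same framing and header assumptions as usual for Ethernet and plain TCP/IPv4) shows the best case is only a few percent on gigabit:

```python
# Rough estimate of what jumbo frames could buy on a gigabit link.
PER_FRAME_OVERHEAD = 38  # preamble 8 + Ethernet header 14 + FCS 4 + interframe gap 12
IP_TCP_HEADERS = 40      # IPv4 20 + TCP 20, no options

def tcp_efficiency(mtu: int) -> float:
    """Fraction of wire bandwidth left for file data at a given MTU."""
    return (mtu - IP_TCP_HEADERS) / (mtu + PER_FRAME_OVERHEAD)

gain = tcp_efficiency(9000) / tcp_efficiency(1500) - 1
print(f"jumbo frame gain: {gain:.1%}")  # roughly a 4% improvement at best
```

A ~4% ceiling is easy to lose to any client on the path that can't handle the larger MTU, which is the compatibility concern above.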

Understood. Since I’m happy with the performance for now, I’ll leave it off. Now I have to figure out how to clone one Drobo to the other, because my file system is apparently the old one, capped at 16TB. This has not been the easy upgrade I was hoping for. lol

nice one guys and thanks jreinhart,

i also seem to remember some posts about it probably being best not to use jumbo frames (but if you search for the phrase jumbo, you can probably get the fuller picture - just dont search for mumbo jumbo, as i believe that phrase will also exist) :slight_smile:

btw sbushman just linking to your other post with an idea about the cloning:

Heard back from support regarding the “bonding” option.

"Hello Jason,

Thank you for updating.
This is expected behavior of the Drobo, as it’s designed to manage itself and does not know how to manage a managed router."

So, for those with managed switches, STATIC aggregation is the only option. Those that don’t have a managed switch (or consumer “router”) need not worry as it’ll work right out of the box. I’ve verified this with another switch that I have that doesn’t support LACP/LAG and it just works.

@Paul glad to help when I can!