Drobo

Rethinking DroboPro capacity

After receiving my DroboPro and going through several issues in converting from a 4 TB volume (created with four 1 TB drives on my original V1 Drobo) to a 16 TB volume, I ended up adding four 2 TB drives to the original four 1 TB drives.

Now, with dual disk redundancy, I have 7.21 TB available and 2.67 TB (37%) used. That’s more than I need at the moment, at least until the price of 2 TB drives comes down some more.

However, on my Archive Drobo, I have four 1 TB drives, and I am in the yellow, with 245 TB used and only 244GB of free space available.

Unfortunately, buying another 2 TB drive wouldn’t help, contrary to what the Drobo Dashboard’s Suggested Actions says – I would need to add TWO 2 TB drives to effect an increase in protected space. (Obviously, if the lone 2 TB drive failed, there would be no replacement for its extra capacity, so they need to be added in pairs.)
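To make the arithmetic concrete, here is a minimal sketch, in Python, of how I think about BeyondRAID usable space: protection is approximated as reserving the capacity of the largest drive (single redundancy) or the two largest drives (dual redundancy), with manufacturer terabytes converted to the binary terabytes Drobo Dashboard displays. The function and the overhead model are my own simplification, not Data Robotics’ published algorithm.

    def usable_tib(drives_tb, redundancy=1):
        """Approximate protected space for a BeyondRAID-style array.

        drives_tb  -- drive sizes in marketing TB (1 TB = 10**12 bytes)
        redundancy -- 1 for single, 2 for dual disk redundancy
        """
        # Model protection as reserving the `redundancy` largest drives.
        reserved = sum(sorted(drives_tb)[-redundancy:])
        raw_tb = sum(drives_tb) - reserved
        return raw_tb * 10**12 / 2**40  # marketing TB -> binary TiB

    # DroboPro: four 1 TB plus four 2 TB drives, dual redundancy.
    print(usable_tib([1, 1, 1, 1, 2, 2, 2, 2], redundancy=2))  # ~7.28 (7.21 reported)

    # Four-bay Archive Drobo, single redundancy:
    print(usable_tib([1, 1, 1, 1]))  # ~2.73 -- the current layout
    print(usable_tib([1, 1, 1, 2]))  # ~2.73 -- one 2 TB swapped in: no gain
    print(usable_tib([1, 1, 2, 2]))  # ~3.64 -- a matched pair: a real gain

The last two lines bear out the point: a lone 2 TB drive in the four-bay unit adds nothing, because its extra terabyte has no peer to hold the protection data.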

But what if I “bulb-snatched” two of the 2 TB drives from the DroboPro, and replaced them with two 1 TB drives from my Archive Drobo? I would have to do this one drive at a time, of course, but I should be able to simply swap the 2 TB and 1 TB drives between machines, wait for everything to stabilize, and then do it again.

That would give me 3.6 TB on the Archive Drobo, and 5.44 TB available on the DroboPro.

But wait! Two 2 TB drives aren’t enough for really effective dual-drive redundancy – at least three are required.

So in fact I could move all four of the 2 TB drives over to the V2 Drobo, bringing that up to 5.5 TB, while the eight 1 TB drives remaining in the DroboPro would still give me 5.44 TB of dual-redundancy protected space there as well.
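Running the same sketch, under the same assumptions, against the proposed shuffles reproduces the figures above to within formatting overhead:

    # Partial swap: two 2 TB drives moved from the DroboPro to the Archive Drobo.
    print(usable_tib([1, 1, 1, 1, 1, 1, 2, 2], redundancy=2))  # ~5.46 (5.44 reported)
    print(usable_tib([1, 1, 2, 2]))                            # ~3.64 (3.6 reported)

    # Full swap: all four 2 TB drives in the V2 Archive Drobo,
    # all eight 1 TB drives in the DroboPro.
    print(usable_tib([2, 2, 2, 2]))                            # ~5.46 (5.5 reported)
    print(usable_tib([1] * 8, redundancy=2))                   # ~5.46 (5.44 reported)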

That sounds like a reasonable plan, assuming it doesn’t impact performance.

I did it.

Drobo Dashboard reports 25 hours to rebuild the V2 Archive Drobo, and 16 hours to rebuild the DroboPro. Meanwhile, despite having dual-disk redundancy enabled, Drobo Dashboard is telling me that it can’t protect my data, and not to remove any hard drives. I won’t, of course, but it would be nice to get a more informative message.

Now the Archive Drobo attached to the Mac mini is reporting 62 hours to finish laying out everything, but the DroboPro is down to 12 hours.

In the meantime, I can’t seem to get e-mail notification from the Mac mini for some reason – it says “501 Syntactically invalid HELO argument(s)” although the parameters specified are identical to those specified on the Mac Pro, which does work properly. ???
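For what it’s worth, a 501 here usually means the server is rejecting the hostname sent in the HELO/EHLO greeting (spaces, underscores, or other illegal characters will do it). One quick way to see what a particular name provokes is a manual exchange using Python’s smtplib; the server address and hostname below are placeholders, not my actual settings.

    import smtplib

    # Substitute the real SMTP host for this placeholder.
    server = smtplib.SMTP("mail.example.com", 25)
    server.set_debuglevel(1)  # echo the raw SMTP dialogue

    # Send the greeting the Mac mini would send; an illegal
    # hostname should draw the same 501 response.
    code, response = server.helo("mac-mini.local")
    print(code, response)
    server.quit()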

With BeyondRAID rebuild times climbing as capacity and the number of drives increase – seemingly in exponential proportion – I couldn’t help but wonder whether “disk thrashing” occurs in BeyondRAID. I haven’t tried my DroboPro, but my Drobo unit was getting really hot! I haven’t tried to fry eggs on it, but I was tempted during an hour-long rebuild. I was only testing my Drobo, so I could reset the unit and reformat the entire array to 16 TB, but in a production environment I don’t think we’d have that choice. Do your Drobo and DroboPro units get sizzling hot during those rebuild hours?

Something doesn’t make sense. Fat fingers?

You say you have a 4-drive Drobo filled with four 1 TB drives that has “245 TB used and only 244GB of free space available”. This is wrong. A Drobo with four 1 TB drives has, after protection, 2.7 TB for user data. In your case you report 245 (used) + 244 (unused) = 489 GB (total). 489 GB vs. 2.7 TB? WTH? This is not even wrong; something fundamental is messed up. Let’s get this right, then consider everything that builds on top of it.

Oops! Fat fingers missed a decimal point. Sorry 'bout that. 2.45 TB used, plus 243.75 GB free space.

Obviously four 1 TB drives do not equal 245 TB, but I did say TB, not GB, so adding 245 and 244 as if they were both gigabytes doesn’t make much sense.

As to the temperature, let me guess – you are using Seagate drives, is that right?

Of all of my Drobos, the DroboPro is the coolest, approaching ambient. None of the others even get the fan speed excited, even during a rebuild. But then I’m using the WD “Green” drives, which have about half the power consumption of other drives.

It is presumably the case that as the amount of data stored increases, performing a rebuild will take longer, but I don’t know of any reason why it would take exponentially longer – just linearly longer.
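As a rough sanity check on the linear model, rebuild time should be on the order of the data stored divided by the effective rebuild throughput. The 50 MB/s figure below is an assumed ballpark, not a measured number for either unit.

    def rebuild_hours(data_tb, throughput_mb_s=50):
        # Linear model: seconds = bytes to lay out / effective rebuild rate.
        return data_tb * 10**12 / (throughput_mb_s * 10**6) / 3600

    print(rebuild_hours(2.45))  # ~13.6 hours for the Archive Drobo's 2.45 TB

That is at least in the neighborhood of the 16- and 25-hour estimates Dashboard reported, and doubling the data would simply double the hours.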