Drobo

Out of space in 5D

Wow. Looks like a Buckaroo Banzai title :smirk:.

I’m getting a yellow space warning on my 5D populated with 10TB drives. Total capacity is 36.18 TB of which 5.41 TB is free. (5.41 TB free - whadda world. Sure, it’s 85% used but that’s 5.41 TB free. Somehow, at these storage levels, you’d think that the 85% threshold would be sort of obsolete … this isn’t deduplicated storage or anything.)

Anyway, in attempting to correct the problem I've looked at replacing drives, but at this capacity it's getting pretty darn expensive.

I use Seagate Ironwolf drives. Amazon has 12 TB drives for $338.79, but to increase capacity I'd need $338.79 x 2 or $677.58 to yield an extra 2 TB of space (the first replacement just becomes the new parity drive). To get 4 TB I'd need $445.99 x 2 or $891.98. And the next storage bump would cost an additional $338.79 for 2TB or $445.99 for 4TB (if disk prices don't drop) - not a terribly sustainable storage cost increment.

So … I started looking into the 8D where (for the moment) I could dump in three 4TB drives and get 12TB (until I recouped the cost of the 8D chassis). Afterwards, a 10TB ($311.99), 12TB ($338.79), or 14TB ($458.79) drive would yield an additional 6TB of storage (while the latter two would set me up for a larger drive-expansion cycle with a larger parity drive).
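Since the per-drive prices get confusing fast, here's a quick cost-per-added-TB sketch in Python using the prices above. It assumes the usual single-redundancy rule (usable capacity = total minus the largest drive, since the largest drive effectively gets eaten by parity) - treat the numbers as ballpark, not gospel.

```python
# Ballpark cost-per-added-TB for the expansion options discussed above.
# Assumption: single redundancy, so usable capacity = sum(drives) - max(drives).

def usable_tb(drives):
    """Usable TB when the largest drive is effectively reserved for parity."""
    return sum(drives) - max(drives)

current_5d = [10, 10, 10, 10, 10]   # five 10 TB drives -> 40 TB usable

options = {
    "5D: swap two 10s for 12s ($338.79 ea)": ([12, 12, 10, 10, 10], 2 * 338.79),
    "5D: swap two 10s for 14s ($445.99 ea)": ([14, 14, 10, 10, 10], 2 * 445.99),
    "8D: keep five 10s, add three 4s": ([10, 10, 10, 10, 10, 4, 4, 4], None),
}

for label, (drives, cost) in options.items():
    gain = usable_tb(drives) - usable_tb(current_5d)
    if cost is None:
        print(f"{label}: +{gain} TB (chassis and small-drive costs not included)")
    else:
        print(f"{label}: +{gain} TB for ${cost:.2f} (${cost / gain:.0f} per added TB)")
```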

Maybe I’ll put in a 6 and two 4s and stick my Time Machine on there, though really the I/O bandwidth of Time Machine is pretty small unless you’re doing a restore. And considering how often Time Machine corrupts itself, it might not be advantageous to put it in a protected storage set.

So …

  1. Can anyone tell me how dependable these puppies are?
    (I saw a horrific video on YouTube - though the guy didn’t appear to be a terribly competent storage guy, it still gave me the chills).

  2. Anyone see any holes in my logic or plans?

  3. Can anyone tell me if it works as a 40 Gbps device behind a decent cable? I figure that with the fairly enormous storage transfers going on to this device, shaving transfer time beyond the already-ridiculous 20 Gbps might be worth a small real-time savings (rough math after this list). (And yes, I realize that transfer speed will be dwarfed by physical drive access time, but I also realize that this uses an ARM processor which doesn't have TB3 built into the CPU.)
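For question 3, here's the back-of-the-envelope math I'm working from; the per-drive sequential throughput is my own assumption for large NAS drives, not a measured 8D number:

```python
# Does 20 vs 40 Gbps matter for eight spinning drives?
# ASSUMPTION: ~250 MB/s sequential per large NAS drive (not measured on an 8D).

DRIVES = 8
PER_DRIVE_MB_S = 250

aggregate = DRIVES * PER_DRIVE_MB_S   # ~2000 MB/s, absolute best case
link_20gbps = 20_000 / 8              # 20 Gbps ~= 2500 MB/s raw
link_40gbps = 40_000 / 8              # 40 Gbps ~= 5000 MB/s raw

print(f"drives (best case): ~{aggregate} MB/s")
print(f"20 Gbps link: ~{link_20gbps:.0f} MB/s   40 Gbps link: ~{link_40gbps:.0f} MB/s")
# Best case, the drives brush up against the 20 Gbps ceiling; a 40 Gbps
# cable mostly buys headroom for anything daisy-chained downstream.
```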

But boy are these guys huge.

Thanks in advance for any advice or insights.

  • Verne

I see your problem. The yellow capacity warning level comes from the old storage capacity ratios - it's always been 80/20. If you go below 20 percent available space you start to compromise access times. It's still prevalent on many current RAID storage systems because RAID hasn't changed. Unless a manufacturer has specifically addressed this issue (and you pay a premium for it) with more sophisticated algorithms, RAM buffering, and other tricks, this is all you get. Drobo is at 18 percent, so better than the 20 percent set as the industry standard (you are at 14.95 percent).
I would also stay away from the drives you are using, as I've got a stack of them in the dead pile. WD (HGST) and Toshiba seem to hold up much better. Pricing is dropping every week, so think 14 or 16 terabyte, not 12. Don't invest in an incremental (percentage-wise) bump in drive capacity, because 6 months from now you'll be looking at the "yellow space warning" again. 2TB is nothing in the storage world now.
good luck

Actually, Drobo’s yellow warning is 15% (which is why I started getting the error).
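A quick sanity check against my Dashboard numbers bears that out:

```python
# Free-space percentage from the 5D Dashboard numbers quoted at the top.
total_tb = 36.18
free_tb = 5.41

print(f"{free_tb / total_tb * 100:.2f}% free")   # ~14.95%, just under the 15% yellow line
```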

About Ironwolfs - I’ve been sticking with them because I don’t recall ever having a failure; I had tried HGST because I’d heard good things about them but did have a failure. I do remember earlier Seagates croaking way too often though.

I did see drive reliability stats from Backblaze (who uses a crapton of drives), which may offer a more down-to-earth criterion for disk purchases (dependability vs. cost). Here's a link: https://www.backblaze.com/blog/hard-drive-stats-q2-2019/.

Let's see … yeah, it looks like the 10 TB Ironwolfs were okay at a .56% failure rate, but the 12s spiked up to 1.89%. As always, it's a balancing act between dependability and cost. Unfortunately, long-term testing results on the latest drives are scarce because they are so new, and all you really get are sporadic reports from people who've tried them and have no problems so far (which is obviously not long-term empirical data).

If you look at the 12TB HGST drives, they have a .37% or 1.19% failure rate depending on model, and the Toshiba 14TB sits around .78%. Unfortunately (as proved by the Ironwolf stats), you can’t take that failure rate and project it forward for the next higher capacity drive - each is different reflecting different engineering choices.

So … what's been working well for you? You mention 14 and 16TB drives: which model numbers do you have? Considering that when I expand into that storage range I'll need two to see any advantage - the first will get eaten up for parity - where did you get them, how expensive are they, and what's your luck been with them?

I went with the 8D, and am currently in a recovery state - I put in my five 10s, and added a cheapo 6 (non-NAS) plus a 4 and a 3 TB Red, which gets me up to 13 additional TB of storage. I had to delete the old volume to increase its capacity from 64 to 128 TB, and am slooowwwly restoring from a Synology DS 1817+ in the family room, connected to ethernet ports on a Netgear Orbi router and satellite using the wireless backhaul between the main router and satellite. I'm only getting 24-25 MB/sec throughput and thought about bringing the Synology up to my office to get better throughput by plugging into gig ethernet, but my long years in enterprise infrastructure have taught me that moving storage units (especially with old drives) is generally a bad idea.

Of course, enterprise storage units were generally EMC or IBM, but I did have experience with an occasional Quantum dedup box, and while the scale and cost were different, when you deal with rotating drives the lessons are basically the same.

Here are AnandTech's recommendations: AnandTech's HDD Recommendations - as you can see, the highest-rated 16 and 14 TB NAS drives (which really are RAID cabinet drives) are Ironwolf and Exos - both Seagate products.

I've searched for Toshiba 16 TB drives and they're not out yet; 14 TB Ultrastar OEM drives are on Amazon but are warrantied not by WD but by Platinum Micro (according to a commenter) - and does that only apply when purchased from Platinum Micro?

I tried searching the WD site and saw them bragging about being the first to 14 TB, but apparently the software stack had to be altered to accommodate drives that allow sequential writes only?

Being on the bleeding edge (capacity-wise) is not as easy as it once was.

Well, it looks like I’m the only one posting to this thread, so I’ll continue my saga in case anyone finds this topic to be of interest :blush:.

I finally caved in, and after numerous mishaps I moved the Synology upstairs into my office and hooked it up to one of the Orbi satellite's ethernet ports and … whango … throughput went up from around 27 MB/sec to 115 MB/sec. The limiting factor now seems to be the gigabit ethernet speed - so much for the Orbi's alleged 1.3 gbps backhaul. This brings my restore time down from weeks to days, and I'm hoping the increase of file system capacity from 64 TB to 128 TB doesn't increase the cluster size too much, so I won't have to expand too deeply into the extra 3 drive bays.
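For anyone wondering where "weeks to days" comes from, the arithmetic is simple - roughly the used space from the old 5D (36.18 TB total minus 5.41 TB free, ignoring TB/TiB rounding) at the two transfer rates I've seen:

```python
# Rough restore-time estimate: used space from the old 5D at the observed rates.
data_mb = (36.18 - 5.41) * 1_000_000      # ~30.77 TB expressed in MB

for label, mb_per_s in [("wireless backhaul", 25), ("wired gigabit", 115)]:
    days = data_mb / mb_per_s / 86_400
    print(f"{label}: ~{days:.1f} days")
# ~14 days over the backhaul vs ~3 days over wired gigabit.
```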

My next disk purchase is going to be a 14 TB drive (probably an Ironwolf, as they are NAS drives and have failure/recovery modes more in keeping with a RAID array), but that'll have to wait until I have some extra dinero after the 8D purchase. It'll replace the 3TB Red drive and become the new parity drive, freeing up the old 10 TB parity drive to be used for data, and should yield an extra 7 TB of capacity.

I guess there aren’t many 8D users out there - or if there are they aren’t participating here - but so far I’m pretty happy with the unit though it did arrive in pretty dreadful condition from Amazon. (The outer box looked like it had been drop kicked, and the inner Drobo box was in much better condition though it was already sliced open and one of the corners was dented. There was no packing material separating the outer and Drobo boxes, so the Drobo box within was free to move without restraint. The Drobo seems to have survived without trauma, so the UPS guy was correct - just another example of Amazon brain-dead packing/shipping.)

OTOH, can’t beat the price: $999 delivered.

It came with a 20 gbps cable which I promptly replaced with a 40 gbps cable, not so much because I expected the Drobo with eight rotational drives to need it, but simply to ensure that anything I may someday install downstream would not be crippled.

Well, continuing the saga of the 8D:

Restored the dataset to the new 128 TB HFS+ file system and added a 512 GB cache drive. The cheap 6 TB non-NAS drive I'd used has had two glitch events so far: it got powered down, powered back up, and went through a (fairly rapid) rebuild, which makes me believe a quicker reverification is being done rather than a complete relayout.

This is troubling, however, because both events happened on a Drobo power down/power up, which leads me to believe there is a timing issue between the Drobo coming up and the drive spinning up to speed. If so, every power cycle that brings the Drobo up will repeat this event.

In any case, I had planned to replace a 3 TB RED with a 14 TB Ironwolf, but after reading the Q3 2019 Backblaze stats I've decided to replace the problematic 6 TB non-NAS drive with a 14 TB Toshiba instead.

Looking on Amazon, I wasn't able to find the MG07ACA14TA (with a 0.74% failure rate), but I did find the MG07ACA14TE, a 14 TB Toshiba which I presume is a close cousin, for $409.95, which seems to be a good bargain. The only difference I could find between the two is that the 14TA uses a fixed 4K block size whereas the 14TE can use an (emulated) block size between 512 bytes and 4K.

Replacing the 6 TB with this drive should force it to become the new parity drive (since it will be the largest drive in the dataset), so I expect this will increase capacity by about 4 TB, with the old 10 TB parity drive taking over from the 6 TB as a data drive.

This, of course, sets the stage for replacing a 3 TB RED with a 14 TB drive for an increase of 11 TB in the dataset, bringing me to the point where expanding the file system from 64 TB to 128 TB finally pays off.
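For the record, here's the capacity math behind those two swaps, again assuming the "total minus largest drive" single-redundancy rule:

```python
# Capacity math for the planned swaps (single redundancy assumed:
# usable = sum(drives) - max(drives), largest drive held for protection).

def usable_tb(drives):
    return sum(drives) - max(drives)

today    = [10, 10, 10, 10, 10, 6, 4, 3]    # as restored -> 53 TB usable
swap_6tb = [10, 10, 10, 10, 10, 14, 4, 3]   # 6 TB out, 14 TB Toshiba in -> 57 TB (+4)
swap_3tb = [10, 10, 10, 10, 10, 14, 4, 14]  # 3 TB RED out, second 14 TB in -> 68 TB (+11)

for label, drives in [("today", today), ("after 6 TB swap", swap_6tb),
                      ("after 3 TB swap", swap_3tb)]:
    print(f"{label}: {usable_tb(drives)} TB usable")
# 68 TB usable is what finally pushes past the old 64 TB volume limit.
```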

As the dataset's current capacity of 47.04 TB (obviously metered in 2^40-byte terabytes rather than 10^12) is still 34% free, I'm in no hurry, outside of wanting the problematic 6 TB drive out of the mix.
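(For anyone bothered by the unit mismatch, the conversion is trivial - assuming the Dashboard figure is the binary one:)

```python
# Converting the reported 47.04 (binary, 2^40-byte units) to decimal TB.
print(f"{47.04 * 2**40 / 10**12:.1f} TB")   # ~51.7 TB in drive-label units
```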