16/32TB limit and the future

I have gone through four different Drobo units over the last five or six years. Each has performed very well and done its job admirably. However, all current Drobo models other than the B1200i have a hard 32TB limit (only 16TB for the Drobo 5N). With the highest-capacity hard drives commercially available right now, every model except the 4-bay can reach or exceed the 16/32TB limit and will not be expandable going forward.

I have not seen any official word from Drobo about addressing this limitation, and attempting to get an answer from them via email has been fruitless. Therefore, I am going to start looking for an alternative solution to migrate my storage to.

I would like to know what other solutions offer the dynamic expandability that Drobo offers without the capacity limits. Ideally, I’d like to find something that’s approximately on par with Drobo in terms of cost, ease of use, and speed.

I’m with you on this; I’m in exactly the same position (2 x Drobo + DroboShare, 2 x Drobo FS, 1 x Drobo 5N), but now I’m stuck for space.

I have had a good look around, and the Synology units with SHR storage come the nearest, but I’m still not happy. I keep popping back to this forum in the hope that Drobo will have got a grip and at least announced an intent. I have tweeted, emailed and even phoned, and have just been stonewalled each time.

It’s the chief complaint.

I have held off upgrading my drives or even buying a new unit because of the 16TB limit.

I did manage to have a conversation with someone ‘in the know’ at Drobo a few months back. They stated that the current solution for pushing past the 16TB limit required a rewrite of BeyondRAID, and that this broke compatibility with all <16TB versions.

So, no ‘in-place’ migrations.

I would gladly reload all my data to get past the 16TB limit.

My other Drobo is a Drobo S which is already discontinued. I’ve already reached the 32TB limit on that one. Since it already has “legacy” status, I’m not holding my breath for any updates to expand the capacity so I will need to replace that as well. Unfortunately, the 5D is hobbled with the same 32TB limit.

The Synology units do seem to come closest to what the Drobos do. I’ll have to do a bit more research to make sure they don’t have limits similar to the ones we face now.

It’s reassuring to hear that they have been looking into fixing this limit. I agree with AzDragonLord and would happily reload all my data.

Was any sort of ETA offered for this fix? I remember having to wait for the fix to allow usage of 2TB+ hard drives and that was an excruciatingly long wait.

Such a wonderful NAS unit… with such a slow/poor update schedule. I, myself, am also looking into Synology if Drobo doesn’t fix this issue.

Just got confirmation from Justin Winkler from Drobo that they do NOT have any plans to increase the capacity of the Drobo 5N.

I forgot to inquire about the 32TB limit of the other Drobo models.

What is the actual limit, anyway? Is it 16 TB (16,000,000,000,000 bytes) or, as I would expect, the binary 16 TiB (17,592,186,044,416 bytes)? It clearly applies to the usable capacity of the data volume and not to the total of the disk capacities installed, otherwise with 5 + 5 + 5 + 1 + 1 TB installed I would have exceeded it already.
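
For reference, the two readings are about 1.6 TB apart; a quick sketch of the arithmetic (nothing Drobo-specific assumed here):

```python
# The two possible readings of a "16 TB" limit.
TB = 10**12                 # decimal terabyte
TIB = 2**40                 # binary tebibyte

limit_decimal = 16 * TB     # 16,000,000,000,000 bytes
limit_binary = 16 * TIB     # 17,592,186,044,416 bytes

print(f"16 TB  = {limit_decimal:,} bytes")
print(f"16 TiB = {limit_binary:,} bytes")
print(f"difference = {(limit_binary - limit_decimal) / TB:.2f} TB")
```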

I guess that with the splitting off of Drobo from Connected Data the 5N will be retired and replaced with a new model. Five 6 TB drives with the dual-disk redundancy you really ought to be using on that amount of data would either make full use of the maximum capacity with little wastage, or still fall short of the maximum, depending on the answer to the question above. I currently have three 5 TB drives in mine plus two 1 TB. While 5 TB drives are only a little more expensive than 4 TB at the moment, there’s a big price jump from 5 TB to 6 TB. Beyond that it’s either very expensive helium-filled HGST drives or Seagate SMR (shingled) drives, which have their own peculiarities that haven’t been fully worked out yet and work best on their own, rather than in RAID-like configurations.
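
As a sanity check on that arithmetic, here is a sketch using the commonly cited approximation that BeyondRAID reserves the largest drive (or the two largest, under dual-disk redundancy) for protection; the function name is mine and the figures ignore filesystem and firmware overhead:

```python
# Rough BeyondRAID-style usable-capacity estimate. This is only the commonly
# cited approximation (total capacity minus the largest drive, or the two
# largest with dual-disk redundancy) and ignores filesystem/firmware overhead.
def usable_tb(drive_sizes_tb, dual_redundancy=False):
    sizes = sorted(drive_sizes_tb, reverse=True)
    reserved = sum(sizes[:2]) if dual_redundancy else sizes[0]
    return sum(sizes) - reserved

TIB_PER_TB = 10**12 / 2**40   # ~0.909: convert decimal TB to binary TiB

# Five 6 TB drives with dual-disk redundancy, as discussed above:
five_sixes = usable_tb([6, 6, 6, 6, 6], dual_redundancy=True)
print(f"{five_sixes} TB = {five_sixes * TIB_PER_TB:.2f} TiB usable")

# My current mix (three 5 TB plus two 1 TB, single redundancy):
current = usable_tb([5, 5, 5, 1, 1])
print(f"{current} TB = {current * TIB_PER_TB:.2f} TiB usable")
```

On that approximation, five 6 TB drives with dual-disk redundancy come out at 18 TB, or roughly 16.4 TiB.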

Binary. Yes, usable capacity.

Whether users should or shouldn’t use dual-drive redundancy is a decision best left up to the users, which is how Drobos are designed to work. Everyone uses their Drobos differently and has different backup strategies.

I’ve been using five of the 8TB Seagate SMR drives in my Drobo S since March. So far everything has been working great. I had been hoping to be able to do the same with the 5N. At about $270-300 each, their price per TB is currently about the same as 4TB drives.
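
For what it’s worth, the price-per-TB claim is just simple division on the figures quoted above (a throwaway sketch; prices obviously move around):

```python
# Price per TB for the 8 TB Seagate SMR drives at the prices quoted above.
for price_usd in (270, 300):
    print(f"${price_usd} / 8 TB = ${price_usd / 8:.2f} per TB")
# -> $33.75 to $37.50 per TB
```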

Hi, on the company side of things…
(I did miss a few months here or there since Xmas.) I thought that the original founders/Geoff set up Drobo, then went on to separately set up Connected Data, and then re-acquired/merged with Drobo again.

Have they since announced that they will split?

The shingled drives (at first glance) seemed to increase storage space by 25%, but it made me wonder whether they also add 75% more risk :slight_smile: I need to read up much more on those as time goes on.

As elai mentions, it’s very good that a Drobo user can choose whether to use single- or dual-drive redundancy (SDR or DDR), model permitting.
I use SDR on my Gen1 and Gen2 (as I have to), but one is a backup of the other, plus a copy of the data online.
I set my Drobo S to use DDR, though.

Overall, though, I’m thinking that SDR might not be enough protection on the 4-slot models, but it’s still cool that the new Gen3 lets users decide.
Edit: if an 8-drive DAS model such as the DroboPro were to be released for consumers, I’m thinking TDR would be the way to go (hopefully T as in triple DR, rather than tedious) :slight_smile:

Ah, I saw this link from yesterday that someone posted:
http://www.theregister.co.uk/2015/05/20/drobo_sets_out_on_its_own_again/

Hi elai72. Thanks, I thought so. It’s good that you’re happy with your SMR drives and are happy to accept their peculiarities. This paper https://www.usenix.org/system/files/conference/fast15/fast15-paper-aghayev.pdf gives a fascinating insight into how they work and how they differ from conventional drives. There are two things I read in the paper that puzzle me, if anyone is able to comment. Firstly, the on-disk persistent cache itself appears to be shown as a shingled region. I can’t see how it could possibly work like that; is it actually implemented as a conventional region, since it needs to be randomly writable, not only writable as a whole? Secondly, are the shingled regions actually laid down as spirals, rather than the concentric rings of a conventional drive? I get the impression that they are.

Hi Paul. Yes, a split has been announced: http://www.drobospace.com/forums/showthread.php?tid=143830&pid=188250#pid188250 or http://www.theregister.co.uk/2015/05/20/drobo_sets_out_on_its_own_again/

It looks like we posted within the same minute, Paul :slight_smile:

I think the reason that current SMR drives are not recommended for RAID use is that their write performance is so erratic. With steady sequential writes they can write directly to the shingled regions, known as “bands”, which have full-width guard tracks in between. With multiple random writes they write first to the persistent cache area until it fills up; they then need idle time to read-modify-write the data back to the shingled bands.

Another interesting feature is that while, in a conventional drive, head switching takes precedence over track switching (because it’s quicker), in an SMR drive the complete writing of a shingled band takes precedence over head switching. So they are very different beasts from conventional hard drives.

SMR is an emerging technology that promises a considerable increase in capacity, but not without some cost to performance. There are a lot of new parameters to be tweaked and optimised by the manufacturers, such as the persistent cache’s size and location (should it be on disk or in flash SSD?), the size of each shingled band, the aggressiveness of the cleaning mechanism that purges the cache back to the main shingled storage areas, and whether this process should be controlled by the drive’s own firmware (as current examples are, in order to emulate as transparently as possible the behaviour of a conventional drive) or whether the drive should co-operate more with the operating system, which is best positioned to decide how data should be presented to the drive.
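
To make the cache-fill / forced-cleaning cycle concrete, here is a toy model of a drive-managed SMR disk under a steady stream of random writes. All the sizes and rates are illustrative assumptions I’ve picked, not the specs of any real drive; only the idea of a forced read-modify-write pass once the persistent cache fills comes from the description above:

```python
# Toy model of drive-managed SMR write behaviour under steady random writes.
# All sizes and rates below are illustrative assumptions, not measurements
# of any real drive.

CACHE_CAPACITY_MB = 100        # on-disk persistent cache size (assumed)
BAND_SIZE_MB = 25              # shingled band size (assumed)
CLEAN_SECONDS_PER_BAND = 1.0   # per-band read-modify-write time (paper: 0.6-1.6 s)
CACHE_WRITE_MBPS = 120         # rate at which the cache can absorb writes (assumed)

def simulate(seconds, host_write_mbps=60):
    """Yield (second, MB/s actually accepted from the host) for each interval."""
    cache_used = 0.0
    cleaning_left = 0.0        # seconds of forced cleaning still to run
    for t in range(seconds):
        if cleaning_left > 0:
            # Firmware is purging the cache back to the bands: the host stalls.
            cleaning_left -= 1
            if cleaning_left <= 0:
                cache_used = 0.0
            yield t, 0.0
            continue
        accepted = min(host_write_mbps, CACHE_WRITE_MBPS,
                       CACHE_CAPACITY_MB - cache_used)
        cache_used += accepted
        if cache_used >= CACHE_CAPACITY_MB:
            # Cache full: enforce a cleaning pass, one band at a time.
            cleaning_left = (CACHE_CAPACITY_MB / BAND_SIZE_MB) * CLEAN_SECONDS_PER_BAND
        yield t, accepted

for second, mbps in simulate(12):
    print(f"t={second:2d}s  host write {mbps:6.1f} MB/s")
```

Run it and you get short bursts at the host rate followed by several seconds at 0 MB/s while the cache is emptied, which is the same shape as the iostat trace posted later in the thread.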

It’s my humble opinion that SMR technology won’t be entirely suitable for use in RAID-like devices until the drives delegate at least some of their internal operations to the control of the RAID controller. The current models are great when used for what they are intended, which is stand-alone archive drives, but in other storage forums I’ve seen a lot of negative comment from people ignorant of their true nature who were initially drawn in by the low cost per terabyte, subsequently got frustrated and then ranted quite unreasonably at Seagate. The only criticism I would level at Seagate is that they don’t really make it clear which models use SMR and which don’t. At the moment it’s not too difficult to remember that models with the letters “AS” in their model number (e.g. ST8000AS0002) use SMR technology.
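
Along those lines, a trivial check of the naming convention mentioned above (purely a heuristic based on the “AS” observation, not an official Seagate designation):

```python
# Crude check of the naming observation above: this generation of Seagate
# archive (SMR) models carries "AS" in the model number. It is only a
# heuristic based on that observation, not an official Seagate designation.
def looks_like_seagate_smr(model: str) -> bool:
    m = model.upper()
    return m.startswith("ST") and "AS" in m[2:]

for model in ("ST8000AS0002", "ST4000DM000"):
    print(model, "->", "SMR (archive)" if looks_like_seagate_smr(model) else "conventional")
```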

I had read a bit about the lowered performance of the SMR drives and it’s not a major concern for me; I just needed more capacity to archive data. That being said, I’ve been seeing reads and writes in the low 40MB/s range on my Drobo S. This is good enough for my needs.

I’m glad it works for you, elai72, and respect for being a guinea pig and pushing the envelope. I already knew that the 8 TB Seagates are not incompatible with Drobo because I watched a video of a guy on YouTube installing one, but it’s good to have it confirmed. He didn’t test it beyond confirming that it was recognised and accepted as part of the array, though.

In any case, read performance is respectable and fairly straightforward, but be warned that write performance can drop to zero for periods measured in seconds once the cache is full (that’s the persistent on-disk cache, not the volatile RAM cache) while the firmware enforces the cleaning algorithm. The researchers who wrote the paper claim that it can take between 0.6 and 1.6 seconds to read-modify-write each shingled band, during which the drive can’t do anything else. So if the drive doesn’t get idle time it enforces it, and that could conceivably be interpreted as a failing drive by a controller that’s unfamiliar with its peculiar characteristics. It’s a very interesting phase in the evolution of the magnetic hard drive, with Seagate promising >20 TB SMR drives in the short term and even more to come when HAMR* and BPM are introduced in the future. I’ll certainly be using them, singly, for archive purposes, though not in arrays for a while.

*Heat-Assisted Magnetic Recording and Bit-Patterned Media are both at the research stage but when they become mature they are likely to be used in conjunction with shingled recording, rather than replacing it.

(ah yes john - maybe great minds think alike lol) :smiley:

It’s interesting, too, that users are consuming/needing more and more drive space, and that newer technology can carry extra costs (the more advanced a drive is, the more expensive professional data recovery might be, etc.), while at the same time the larger the drives are, the fewer of them you need, and probably less power consumption (and fewer enclosures) too.

It’s good that you recognised that the YouTube video only used the drive briefly, rather than over a prolonged period, though every part of a success story is always good to hear - especially elai’s case.

Couldn’t agree more.
I have one ST5000DM000 (an SMR drive) plus one other drive in my Drobo 5D.
Everyone can see my iostat log below.
For sustained writes, the speed is normal at the beginning, but it drops off very quickly.

     disk4       cpu     load average
KB/t tps  MB/s  us sy id   1m   5m   15m

2048.00 50 99.97 2 60 38 3.86 2.94 2.53
2048.00 67 133.87 1 74 24 3.86 2.94 2.53
2048.00 69 137.72 1 70 29 3.86 2.94 2.53
2048.00 67 133.78 2 70 28 3.86 2.94 2.53
2048.00 49 97.73 2 69 29 3.63 2.91 2.52
2048.00 68 135.29 2 68 30 3.63 2.91 2.52
2048.00 59 117.68 3 74 24 3.63 2.91 2.52
2048.00 75 149.69 2 74 24 3.63 2.91 2.52
1097.67 12 12.85 2 71 27 3.63 2.91 2.52
1792.87 55 95.98 2 56 42 3.58 2.91 2.52
1668.00 64 103.98 2 61 37 3.58 2.91 2.52
1956.48 67 127.51 2 48 51 3.58 2.91 2.52
1671.94 67 108.78 2 72 25 3.58 2.91 2.52
2048.00 74 147.69 2 66 33 3.58 2.91 2.52
2048.00 64 127.94 2 63 35 3.38 2.88 2.51
2048.00 64 127.95 2 67 31 3.38 2.88 2.51
2016.74 65 127.96 2 70 28 3.38 2.88 2.51
2048.00 68 135.94 4 62 34 3.38 2.88 2.51
2048.00 63 125.90 2 61 37 3.38 2.88 2.51
2048.00 61 121.92 2 61 37 3.35 2.88 2.51
1641.90 80 128.11 1 64 34 3.35 2.88 2.51
2048.00 64 127.94 2 64 34 3.35 2.88 2.51
2048.00 64 127.80 2 59 39 3.35 2.88 2.51
2048.00 70 139.48 2 59 40 3.35 2.88 2.51
2018.38 69 135.42 1 57 42 3.24 2.86 2.51
2048.00 58 115.95 2 67 31 3.24 2.86 2.51
2048.00 69 137.40 2 68 30 3.24 2.86 2.51
2048.00 79 157.66 2 64 34 3.24 2.86 2.51
2048.00 17 33.83 1 71 28 3.24 2.86 2.51
0.00 0 0.00 2 78 20 3.30 2.88 2.52
2048.00 24 47.83 1 77 21 3.30 2.88 2.52
2048.00 26 51.83 1 76 22 3.30 2.88 2.52
2048.00 25 49.74 2 78 20 3.30 2.88 2.52
2048.00 24 47.87 2 77 21 3.30 2.88 2.52
0.00 0 0.00 2 77 22 3.27 2.88 2.52
0.00 0 0.00 1 77 22 3.27 2.88 2.52
0.00 0 0.00 1 77 22 3.27 2.88 2.52
2048.00 16 31.98 1 74 24 3.27 2.88 2.52
2048.00 36 71.93 2 68 30 3.27 2.88 2.52
0.00 0 0.00 2 71 27 3.25 2.89 2.52
0.00 0 0.00 2 74 25 3.25 2.89 2.52
2048.00 50 99.85 1 78 21 3.25 2.89 2.52
2048.00 26 51.96 1 76 22 3.25 2.89 2.52
0.00 0 0.00 1 74 24 3.25 2.89 2.52
2048.00 4 7.96 1 74 24 3.23 2.89 2.53
0.00 0 0.00 2 98 0 3.23 2.89 2.53
2019.78 72 141.89 2 64 33 3.23 2.89 2.53
0.00 0 0.00 1 76 23 3.23 2.89 2.53
2048.00 26 51.97 2 77 21 3.23 2.89 2.53
1657.69 26 42.05 1 77 22 3.29 2.91 2.54
1029.33 6 6.00 1 77 22 3.29 2.91 2.54
2048.00 43 85.94 2 76 22 3.29 2.91 2.54
2048.00 33 65.97 2 75 23 3.29 2.91 2.54
0.00 0 0.00 1 74 25 3.29 2.91 2.54
0.00 0 0.00 2 67 31 3.29 2.91 2.54
2048.00 24 47.30 1 73 26 3.35 2.92 2.54
0.00 0 0.00 1 84 14 3.35 2.92 2.54
0.00 0 0.00 1 76 23 3.35 2.92 2.54
0.00 0 0.00 2 72 26 3.35 2.92 2.54
0.00 0 0.00 1 75 23 3.35 2.92 2.54
2048.00 23 45.81 2 75 24 3.32 2.92 2.55
0.00 0 0.00 1 76 23 3.32 2.92 2.55
0.00 0 0.00 1 76 23 3.32 2.92 2.55
0.00 0 0.00 1 76 22 3.32 2.92 2.55
0.00 0 0.00 1 76 23 3.32 2.92 2.55
0.00 0 0.00 1 76 23 3.38 2.94 2.55
0.00 0 0.00 1 76 23 3.38 2.94 2.55
0.00 0 0.00 1 76 22 3.38 2.94 2.55
0.00 0 0.00 1 76 22 3.38 2.94 2.55
0.00 0 0.00 1 76 22 3.38 2.94 2.55
0.00 0 0.00 1 75 23 3.34 2.94 2.56
0.00 0 0.00 1 76 23 3.34 2.94 2.56
0.00 0 0.00 1 76 23 3.34 2.94 2.56
0.00 0 0.00 2 76 22 3.34 2.94 2.56
2048.00 1 2.00 3 96 0 3.34 2.94 2.56
0.00 0 0.00 3 97 0 3.72 3.03 2.59
2048.00 71 141.95 2 65 33 3.72 3.03 2.59
2048.00 4 7.96 1 74 25 3.72 3.03 2.59
2048.00 26 51.73 1 77 22 3.72 3.03 2.59
0.00 0 0.00 1 76 23 3.72 3.03 2.59
0.00 0 0.00 2 76 22 3.74 3.04 2.60
0.00 0 0.00 2 76 22 3.74 3.04 2.60
0.00 0 0.00 2 76 22 3.74 3.04 2.60
1517.04 23 34.07 2 76 22 3.74 3.04 2.60
2048.00 65 129.95 2 67 31 3.74 3.04 2.60
2048.00 19 37.98 2 68 31 3.68 3.04 2.60
2048.00 25 49.94 1 76 23 3.68 3.04 2.60
2048.00 25 49.94 2 76 22 3.68 3.04 2.60
0.00 0 0.00 2 72 26 3.68 3.04 2.60
2048.00 26 51.73 2 69 29 3.68 3.04 2.60
2048.00 25 49.74 1 61 38 3.71 3.06 2.61
2048.00 24 47.95 2 56 43 3.71 3.06 2.61
2048.00 47 93.94 2 69 29 3.71 3.06 2.61
2048.00 29 57.89 2 78 21 3.71 3.06 2.61
0.00 0 0.00 1 76 22 3.71 3.06 2.61
2048.00 25 49.98 1 77 22 3.65 3.06 2.61
2048.00 26 51.74 1 75 23 3.65 3.06 2.61
0.00 0 0.00 1 75 23 3.65 3.06 2.61
0.00 0 0.00 1 74 25 3.65 3.06 2.61
0.00 0 0.00 1 72 27 3.65 3.06 2.61
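
For anyone who wants to quantify that, a few lines of Python will tally how many of the samples above sat at 0 MB/s; the filename and the nine-column layout are my assumptions about how the log was saved:

```python
# Tally how many of the iostat samples above sat at 0 MB/s. The filename and
# the nine-column "KB/t tps MB/s us sy id 1m 5m 15m" layout are assumptions
# about how the log was saved to disk.
stalled = total = 0
with open("drobo_iostat.log") as log:          # hypothetical filename
    for line in log:
        fields = line.split()
        if len(fields) != 9:                   # skip headers and blank lines
            continue
        try:
            mbps = float(fields[2])            # third column is MB/s
        except ValueError:                     # the repeated header row
            continue
        total += 1
        if mbps == 0.0:
            stalled += 1

if total:
    print(f"{stalled} of {total} samples ({100 * stalled / total:.0f}%) showed 0 MB/s")
```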

Disks using SMR are regarded as “archive disks”…high density storage meant to be written once and read many times. It may physically fit and be recognized, but I wouldn’t even think about using one of those in a Drobo!

I’m with AzDragonLord. I don’t want to know what could happen to my data if I put one of those in the Drobo when they haven’t approved it, and with it slowing down like that the Drobo could mess up or put my data at risk.

Those periods of low or zero throughput are seen by the controller as a failing drive, which will eventually be ejected from the array.