Drobo

DroboPro - Firmware design limit for volume size?

Hi,
I’ve looked through all the info I can find, but can’t see a specific answer to this question.

DR - Is the DroboPro firmware intended to support a maximum volume size of 16TB or 8TB for the ext3 filesystem?

Older Linux systems did not support volumes above 8TB on x86 hardware, apparently due to a limit related to the memory page size(?)

As of RHEL 5.1 and CentOS 5.1, 16TB volumes are fully supported.
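(Back-of-envelope arithmetic, if it helps frame the question: ext3 addresses blocks with 32-bit numbers, so with the usual 4 KiB block size the on-disk ceiling works out to 16 TiB, and a signed 32-bit interpretation would halve that to 8 TiB. This is my own reasoning, not anything from DRI:)

echo $(( 2**32 * 4096 / 2**40 ))   # unsigned 32-bit block numbers -> 16 TiB
echo $(( 2**31 * 4096 / 2**40 ))   # signed interpretation -> 8 TiB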

I want to use the maximum possible volume size, but I don’t want to find out after loading terabytes of data that it falls into a black hole once past 8TB…

Has anyone got a definitive answer?

Thanks.

http://groups.google.com/group/drobo-talk/browse_thread/thread/34f08da1708d2f7b?hl=en

Summary of thread:
– one guy claims it works.
– another guy says it ought to…
– I claim it doesn’t and indicate how to make it fail (but I only have a v1 Drobo).
– someone reproduces all the behaviors on a DroboPro with the latest firmware.
– he then tried HFS and NTFS and found the same limitations.
– someone claims that 16 TiB LUNs work just fine… but it turns out he’s on a Mac…

I’d say it’s pretty definitive that the firmware’s 2 TiB limitation for ext3 is real. I have talked with DRI about it in the past; they did not really understand that the problem existed. Now that multiple people can reproduce it, I think they should be more aware of it.

Hi Philobyte,
I’m the guy who reproduced the bug with ext3 on an 8TB volume, testing on a DroboPro with V1.1.1 firmware.

I’m hoping this has been corrected in the V1.1.3 release, but I want to know the design limit of the volume size. I can’t find or report bugs with large volumes if I am limited to the (known working) 2TB volume size, and at 2TB the ’pro gives me no advantage over my existing Drobo, so I may as well send it back.

I hope my previous bug-report details and logs were of use and that the new firmware has some fixes!

(There is a definite hard limit at 2TB when using FireWire with Linux, but that is strictly an addressing limit in the current Linux FireWire driver and has nothing to do with the Drobo end. USB does not have this limitation.)

Just to update this: apparently the 2TB FireWire limit is gone as of Linux kernel 2.6.31, recently released. I haven’t tested it yet, but it’s on my list.
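(A sanity check I plan to run first, assuming the distro uses the newer “juju” FireWire stack; the module names below are from my own machine and may differ on yours:)

uname -r                            # want 2.6.31 or later
lsmod | egrep 'firewire_sbp2|^sbp2' # firewire_sbp2 = new stack, sbp2 = old stack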

Also, based on my experience so far with SuSE and the DroboPro, I’m not convinced the issues going over 2TB with ext3 are the DroboPro’s fault. The cause may actually be out-of-date ext3 tools or partition-handling tools in the Linux distro. I’m preparing to retry with Gentoo and all the latest tools. I’ll let you know how it goes.
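(If anyone retraces this, it’s worth recording tool versions first; tune2fs prints its version banner on every invocation, so something like this captures the relevant ones:)

tune2fs 2>&1 | head -1   # e2fsprogs version banner
parted --version         # partitioning tool version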

I don’t know whose fault it is, but I have just repeated the testing on Ubuntu 9.10 with kernel 2.6.31. All I do is fill it to about 60%, then rm -rf the whole lot. Rather than the blue capacity lights going out, they gradually fill up over a few days, and eventually the Drobo asks for another drive.

This was after a fresh reset. I dumped diagnostics, which I can send on request.
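(For reference, a sketch of the fill-and-delete cycle; the mount point is from my setup and the fill amount is approximate:)

for i in $(seq 1 600); do   # ~600 GiB, roughly 60% of the physical space
  dd if=/dev/zero of=/drobo10/fill_$i.bin bs=1M count=1024
done
rm -rf /drobo10/fill_*
# then watch the blue capacity lights and "drobom status" over the next few days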

Here’s a look at some parameters:

root@pepino:~/drobo/drobo-utils# tune2fs -l /dev/sdf1
tune2fs 1.41.9 (22-Aug-2009)
Filesystem volume name: Drobo01
Last mounted on:
Filesystem UUID: 03f14325-530f-4d99-9080-4fd70f8e94dc
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr dir_index filetype needs_recovery sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 33554432
Block count: 2147483639
Reserved block count: 0
Free blocks: 1984339288
Free inodes: 33554389
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 512
Inode blocks per group: 32
Filesystem created: Tue Oct 20 08:43:33 2009
Last mount time: Sun Nov 1 21:47:24 2009
Last write time: Sun Nov 1 21:47:24 2009
Mount count: 1
Maximum mount count: 27
Last checked: Tue Oct 20 08:43:33 2009
Check interval: 15552000 (6 months)
Next check after: Sun Apr 18 08:43:33 2010
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: da5040f5-351c-467f-a151-a3428322112c
Journal backup: inode blocks
root@pepino:~/drobo/drobo-utils#
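(Worth noting in that dump: the block count is just under 2^31, so with 4 KiB blocks the volume sits right at the 8 TiB boundary being discussed:)

echo $(( 2147483639 * 4096 ))   # 8796092985344 bytes, a hair under 8 TiB (2^31 x 4096)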

root@pepino:~/drobo/drobo-utils# drobom info slots

Info about Drobo Name: MyDrobo Devices: /dev/sdf

query slotinfo result: number of slots: 4
[(0, 500107862016, 0, 'green', 'ST3500830AS', 'ST3500830AS'), (1, 0, 0, 'yellow', '', ''), (2, 500107862016, 0, 'green', 'WDC WD5000AAKS-00C8A0', 'WDC WD5000AAKS-0'), (3, 500107862016, 0, 'green', 'WDC WD5000AAKS-00C8A0', 'WDC WD5000AAKS-0')]

root@pepino:~/drobo/drobo-utils# drobom status
/dev/sdf /drobo10 MyDrobo 88% full - (['Yellow alert'], 0)
root@pepino:~/drobo/drobo-utils# df -h /drobo10
Filesystem Size Used Avail Use% Mounted on
/dev/sdf1 8.0T 584G 7.5T 8% /drobo10
root@pepino:~/drobo/drobo-utils#

It’s trivially repeatable. Note the mismatch above: df sees the filesystem as only 8% full, while the Drobo reports 88% of physical capacity used; the space freed by rm -rf apparently never gets reclaimed.

Updating again: I freed some space, and a few days later it decided to take the yellow alert away, and it now lists 54% full…

sudo ./drobom status
[sudo] password for peter:
/dev/sdf /drobo10 MyDrobo 54% full - ([], 0)
peter@pepino:~/drobo/drobo-utils$ df -h /drobo10
Filesystem Size Used Avail Use% Mounted on
/dev/sdf1 8.0T 307G 7.7T 4% /drobo10
peter@pepino:~/drobo/drobo-utils$

which is wrong. It should have around 1 TB of physical space available (3 x 500 GB, less one drive for redundancy), so I should only be about 30% full.
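(The arithmetic, assuming single-drive redundancy across the three 500 GB disks:)

# usable physical space: (3 - 1) x 500 GB = 1000 GB
# data still stored, per df above: 307 GB
echo $(( 307 * 100 / 1000 ))   # -> 30, i.e. ~30% full, not the 54% drobom reports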