Linux -- can I get it "right"?

Hi All,

After hunting down (often contradictory) pieces of information about Drobo and Linux, I’ve finally managed to get it working the way I want.

I know that the manual says “As you drag the slider across the different volume size options, note the tradeoffs to consider when making your selection.” Since I don’t run Windows, could somebody be so kind as to tell me what those tradeoffs are?

This post is both for information and to get verification of my method. A word from DRI would also be appreciated in case I’ve made a major mistake there.

(this is a copy of w+JLWNv5MZC)

[size=medium]16TB Drobo on Linux using ext3 and thin provisioning.[/size]

4 x 2TB Western Digital Green disks in a Drobo.

drobo-utils installed with “python setup.py install” (version 0.6.1). The GUI is useless for this – it caps the LUN size at 2TB.
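For the record, the install itself was just the usual setup.py routine (tarball name assumed here):

[font=Courier]# tar xzf drobo-utils-0.6.1.tar.gz
# cd drobo-utils-0.6.1
# python setup.py install[/font]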

[font=Courier]# drobom setlunsize 16 PleaseEraseMyData
WARNING: lun size > 2 TiB known not work in many cases under Linux
You asked nicely, so I will set the lunsize to 16 as you requested
set lunsize to 16 TiB
Done… Drobo is likely now rebooting. In a few minutes, it will come back with the new LUN size
[/font]
[font=Courier]# dmesg | tail -n 25
usb 8-6: new high speed USB device using ehci_hcd and address 9
usb 8-6: configuration #1 chosen from 1 choice
scsi19 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 9
usb-storage: waiting for device to settle before scanning
usb 8-6: New USB device found, idVendor=19b9, idProduct=4d10
usb 8-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 8-6: Product: Drobo
usb 8-6: Manufacturer: Data Robotics Inc.
usb 8-6: SerialNumber: 0DB0DEADBEEF
scsi 19:0:0:0: Direct-Access TRUSTED Mass Storage 2.00 PQ: 0 ANSI: 5
sd 19:0:0:0: [sdr] Very big device. Trying to use READ CAPACITY(16).
sd 19:0:0:0: [sdr] 34359738368 512-byte hardware sectors (17592186 MB)
sd 19:0:0:0: [sdr] Write Protect is off
sd 19:0:0:0: [sdr] Mode Sense: 03 00 00 00
sd 19:0:0:0: [sdr] Assuming drive cache: write through
sd 19:0:0:0: [sdr] Very big device. Trying to use READ CAPACITY(16).
sd 19:0:0:0: [sdr] 34359738368 512-byte hardware sectors (17592186 MB)
sd 19:0:0:0: [sdr] Write Protect is off
sd 19:0:0:0: [sdr] Mode Sense: 03 00 00 00
sd 19:0:0:0: [sdr] Assuming drive cache: write through
sdr: unknown partition table
sd 19:0:0:0: [sdr] Attached SCSI disk
sd 19:0:0:0: Attached scsi generic sg21 type 0
usb-storage: device scan complete[/font]
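A quick way to double-check what the kernel sees (same device; this should report 17592186044416 bytes, i.e. 2^44 = 16 TiB, matching the 34359738368 sectors above):

[font=Courier]# blockdev --getsize64 /dev/sdr[/font]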

Yup, the single LUN is visible. Let’s partition it. fdisk is no good here – it can’t handle GPT – so parted it is.

[font=Courier]# parted /dev/sdr
GNU Parted 1.8.8
Using /dev/sdr
Welcome to GNU Parted! Type ‘help’ to view a list of commands.
(parted) mklabel
New disk label type? [gpt]? gpt
(parted) mkpart primary
File system type? [ext2]? ext2
Start? 0
End? 100%
(parted) print
Model: TRUSTED Mass Storage (scsi)
Disk /dev/sdr: 17.6TB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 17.4kB 17.6TB 17.6TB primary
(parted) quit
[/font]
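(For reference, the same partitioning can be done non-interactively with parted’s script mode, mirroring the answers above:)

[font=Courier]# parted -s /dev/sdr mklabel gpt mkpart primary ext2 0 100%[/font]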

Now for the file system:

[font=Courier]# mkfs.ext3 /dev/sdr1
mke2fs 1.40.8 (13-Mar-2008)
mkfs.ext3: Filesystem too large. No more than 2**31-1 blocks
(8TB using a blocksize of 4k) are currently supported.
[/font]

Right, that’s consistent with the 1.3.5 Drobo firmware release notes: up to 8TB. But it is possible to use a single 16TB ext3 volume with DroboShare – clearly an artificial limitation, then!

The problem seems to be an old distribution running on that host (openSUSE 11.0).
[font=Courier]# uname -a
Linux motmot 2.6.25.20-0.5-default #1 SMP 2009-08-14 01:48:11 +0200 x86_64 x86_64 x86_64 GNU/Linux[/font]

Let’s try a newer kernel – my laptop runs openSUSE 11.2.
[font=Courier]# uname -a
Linux ciri 2.6.31.5-0.1-desktop #1 SMP PREEMPT 2009-10-26 15:49:03 +0100 i686 i686 i386 GNU/Linux[/font]

Plugging in Drobo:
[font=Courier]# dmesg | tail -n 24
[ 6957.322069] usb 2-4: new high speed USB device using ehci_hcd and address 3
[ 6957.436959] usb 2-4: New USB device found, idVendor=19b9, idProduct=4d10
[ 6957.436970] usb 2-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 6957.436979] usb 2-4: Product: Drobo
[ 6957.436986] usb 2-4: Manufacturer: Data Robotics Inc.
[ 6957.436993] usb 2-4: SerialNumber: 0DB0DEADBEEF
[ 6957.437133] usb 2-4: configuration #1 chosen from 1 choice
[ 6960.289082] scsi6 : SCSI emulation for USB Mass Storage devices
[ 6960.290129] usb-storage: device found at 3
[ 6960.290131] usb-storage: waiting for device to settle before scanning
[ 6961.290673] scsi 6:0:0:0: Direct-Access TRUSTED Mass Storage 2.00 PQ: 0 ANSI: 5
[ 6961.290845] sd 6:0:0:0: Attached scsi generic sg2 type 0
[ 6961.293622] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 6961.294203] sd 6:0:0:0: [sdb] 34359738368 512-byte logical blocks: (17.5 TB/16.0 TiB)
[ 6961.294231] usb-storage: device scan complete
[ 6961.295257] sd 6:0:0:0: [sdb] Write Protect is off
[ 6961.295265] sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
[ 6961.295268] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[ 6961.297389] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 6961.298516] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[ 6961.298526] sdb: sdb1
[ 6961.314895] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 6961.316141] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[ 6961.316149] sd 6:0:0:0: [sdb] Attached SCSI disk

# mkfs.ext3 /dev/sdb1

mke2fs 1.41.9 (22-Aug-2009)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1073741824 inodes, 4294967287 blocks
214748364 blocks (5.00%) reserved for the super user
First data block=0
131072 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432

Writing inode tables: 120/131072
[/font]

This took over 4 hours. I wish Drobo supported XFS…

I am now populating this storage with data; here’s a look at the df output:

[font=Courier]# df -h /media/Drobo
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1              16T  1.2T   15T   8% /media/Drobo[/font]

One can issue [font=Courier]tune2fs -m 0 /dev/sdb1[/font] to remove the 5% reservation for root-owned files.
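To confirm the change took, something like this works:

[font=Courier]# tune2fs -l /dev/sdb1 | grep -i 'reserved block count'[/font]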

Comments? Will it work or will I be re-populating the disks again?

DroboShare can do a “new NTFS” (GPT) 16TB volume, but EXT3 is still limited to 2TB, at least in the Drobo Dashboard’s format GUI. Not saying what you posted won’t work (I have no clue, to be honest), just commenting on DroboShare’s usage of EXT3.

On the NTFS side…
XP-compatible NTFS is MBR and has a 2TB limit, but it is usable on all NTFS-compatible Windows versions (NT, 2000, XP, Vista, 2000 Server, Server 2003, Server 2008, Windows 7).

The “new” NTFS is GPT (thanks Docchris) and has a 16TB limit, which I believe is artificially capped, and is compatible with OSes using the Server 2003 core or newer (Vista, Server 2003, Server 2008, Windows 7).

DroboShare can use either.

So that’s the trade-off, NTFS-wise: smaller volume size limit with greater OS compatibility vs. larger volume size with less compatibility.

For ext3 support, the 2TB hard limit was a limitation of the Linux kernel. If you are past a certain quite recent revision (2.6.31 or later), then you can have an ext3 volume larger than 2TB. I’m uncertain how the DroboShare handles this.
Older kernels may let you create a larger-than-2TB volume but then refuse to re-mount the volume later on.

Having said that, have a read through the following two links – especially the comments sections:
http://www.norio.be/blog/2008/11/setting-drobo-linux-right-way
http://groups.google.com/group/drobo-talk/browse_thread/thread/34f08da1708d2f7b?hl=en

What people have found is that while you can format a Drobo to ext3 greater than 2TB, it does not necessarily work properly. When removing files the Drobo does not release the space back - so eventually your Drobo is going to run out of space even if your ext3 volume is empty.
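An easy way to see this for yourself (mount point assumed; drobom status comes from drobo-utils): write a chunk of throwaway data, delete it, then watch whether Drobo’s own used-space figure ever drops back:

[font=Courier]# dd if=/dev/zero of=/media/Drobo/testfile bs=1M count=10240
# rm /media/Drobo/testfile
# df -h /media/Drobo
# drobom status[/font]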

I’m not sure if any recent Drobo firmware releases have enhanced the ext3 support.

[quote=“ajspencer, post:3, topic:901”]
When removing files the Drobo does not release the space back - so eventually your Drobo is going to run out of space even if your ext3 volume is empty.[/quote]
Is this with DroboShare, or with a non-DroboShare Linux box?

If it’s the latter, this concerns me, as it implies that Drobo is somehow managing the EXT3 filesystem, versus simply peeking at the amount of free/used space.

Then again, Drobo just might not “know” how to handle what it sees when it “peeks” at a >2TB EXT3 volume, in which case it’s still not managing the filesystem, but is simply confused by the results it gets when the volume is >2TB.

@grok - GUI limit to 2 TiB

The GUI refuses to work past 2 TiB because I have multiple reports of bizarre behaviour when file systems bigger than that were used. I have reproduced these behaviours at 3, 4, 8 and 16 TiB. When someone on the Google group specifically tested, he saw the same bad behaviour when using NTFS and HFS+ > 2 TiB as well (see various posts by ChrisW in the thread already referred to). After hearing complaints from people who used the GUI in good faith and lost data, I decided to limit the GUI to 2 TiB until the issues are resolved.

Command-line mode was left alone, so that more adventurous users can continue to test. Hi there, adventurous user!

If you have a Pro: the DroboPro firmware 1.1.4 is supposed to resolve the issue, but no one has verified that yet. It would be great to hear about such tests.

Recently I received a report of someone with an 8 TiB LUN that has worked fine since the spring. This is the only such positive report I have heard; I am following up. A number of the positive reports have been of the form “works fine, doesn’t reclaim space” – which is just the start. After a while things get messy, it runs out of space, etc. Worst case, it just halts and you cannot get the data out. That is my experience, corroborated by several others.

I have not specifically tried with 2.6.31, though the fix there is only for FireWire, and I only use USB.
In summary: I suggest you ensure you have backups, then try to reproduce the tests in the Google Groups thread mentioned. If you really have no problem using a larger LUN, I would love to hear about your configuration, so I can figure out how to reproduce it.

The 2TB limit may or may not apply to DroboShare – I don’t know what OS or version it runs.

The issue with not being able to reclaim space is with Drobo, ext3 and >2TB volumes – so it applies whether or not you use DroboShare.

As philobyte posted, test it out if you like; just make sure you have a backup of your data. Remember, Drobo Linux support is beta, and even then compatibility is only listed up to 2TB in the knowledge base.

I have a regular, v2 Drobo (4bay). I do not have a DroboShare.

Looking at the procedure now. I do have a full backup that is normally kept in a different physical location.

Well, as of now, I’ve got 1.7TB onto the unit (and counting). That took two days (the laptop used to copy the data only has a 100Mbps network connection). I suppose I should rm -rf the whole thing now and see whether Drobo reclaims the free space, at the cost of those two days. Alternatively, I can re-approach this after the holiday break.

There is this little thing at the back of my mind regarding 8TB volumes in the 1.3.5 release notes (correction: that’s in the latest firmware for the Pro, my mistake). I’d really like a DRI tech to explain this. Is the 8TB limitation caused by DRI’s experience with the host OS, or is Drobo unable to read/understand the FS bitmap beyond that?

Knowing this would really help in risk assessment for this configuration. I guess we can expect an answer between Xmas and the New Year – I doubt DRI is operating today :).

Current DRI support information is inconclusive as regards 16TB volumes on Linux hosts.

I’m going to go with the 16TB setup for now. All the data is backed up, and other than the inconvenience of having to take the Drobo to the other location and wait a week to repopulate it, the scope for an epic fail is limited. The reason I’d like to stick with 16TB is to ensure a smooth transition to 3TB disks (provided Advanced Format will be supported – I hope it will), which will create a 9TB volume; clearly an 8TB thin provision won’t cut it. I would rather use it as a single volume. The alternative (2TB LUNs + LVM + XFS, sketched below) does allow for easy extension and can be done should the 16TB setup not work – but then, of course, I could’ve bought any other 4-bay storage.
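For reference, that alternative would look roughly like this (device names are made up, one per 2TB LUN):

[font=Courier]# pvcreate /dev/sdb1 /dev/sdc1
# vgcreate drobo /dev/sdb1 /dev/sdc1
# lvcreate -l 100%FREE -n data drobo
# mkfs.xfs /dev/drobo/data[/font]

Growing it later would be a matter of pvcreate on the new LUN, then vgextend, lvextend and xfs_growfs.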

Regards,

grok

(I stumbled upon your thread while searching the forums for information on how to create a volume >2TB on my Drobo v2, so I could reproduce a working 1.3.1 disk pack, and try to replicate the disk pack header information from that working 1.3.1 disk pack to my currently-broken 1.3.5 disk pack, basically “downgrading” the disk pack, since the device, firmware and dashboard do not allow this.)

Based on my recent experience with DRI over my firmware upgrade issues (going from 1.3.1 to 1.3.5, breaking my Drobo’s ability to boot and recognize the 1.3.5 disk pack), their statement (as yet undocumented, but I’m pushing for them to make this VERY clear in their marketing and web collateral), is that any volume > 2TB on Linux is completely unsupported, period.

It’s not just that you’re out there on your own, without support. You’re actually putting your data at great risk if you store it on a Drobo with > 2TB volumes.

In my case, the only reason I went with ext3 on the underlying filesystem itself, was because the ext fs is the only one of the 3 possible filesystems that supports the kinds of filenames I need to store on it. NTFS can’t do it and HFS+ can’t do it. ext3 was the only filesystem that made sense here.

Right now, Linux is not supported by DRI at all, and when they do “officially” support it, that very likely won’t cover anything larger than 2TB volumes.

Back up your Drobo often, because you will very likely lose the data itself or lose access to the data, when the Drobo locks it up and decides to stop serving up the block device to your Linux host, as it did in my case (with an 8TB volume).
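If you need a starting point for those backups, even a plain rsync to another machine covers it (host and path here are made up):

[font=Courier]# rsync -a /media/Drobo/ backuphost:/backup/drobo/[/font]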

This post is almost a year old. Grok, are you still using the 16TB LUN Size without issue?