Drobo

B800i and ZFS

Drobo users:

I have a Drobo B800i with 8 1TB drives.

ext3 is no longer a viable choice for me, as I sometimes have files larger than 2TB (ext3 limits individual files to 2TB, and Drobo engineering seems to be ignoring both ext4 and ZFS support). Thus, I am curious whether the following scenario would work and what downsides there might be to it:

a) delete all volumes on the drobo
b) login to the drobo via iSCSI from the Linux box
c) I seem to see all 8 1TB HDs show up as raw 1TB iSCSI devices

Can I then take the 8 1TB raw devices and make them part of a ZFS pool? Granted, the cuteness of the Drobo’s BeyondRAID would ostensibly be lost (I’d have to manually remove a 1TB HD from the zpool, physically replace the drive, and then add it back to the zpool to put a larger drive in), but it would let me use a more modern filesystem than ext3 with both Linux and MacOS (since both OSes have open-source ZFS implementations available).
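
In rough shell terms, here is what I picture (completely untested; the target address 192.168.1.50 is just a placeholder, and I would still need to confirm the Drobo really exposes each disk as its own target):

```
# Discover and log in to the targets the Drobo exposes
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m node --login

# If each disk then appears as its own /dev/sd* device,
# build one pool (say, raidz2) out of all eight
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde \
                         /dev/sdf /dev/sdg /dev/sdh /dev/sdi
```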

Comments anyone?

Thanks,

Stuart

hi stuart, while i only use some DAS models here myself (and with 2TB max volumes),
maybe something could be done with a drobo working as a drobo, but with a 3rd-party tool sitting on top as well.

this is all just ideas floating around, but for example, another user on the forum, bhiga (though he is quite busy nowadays, in case you had specific questions for him), has successfully used multiple drobos (with the usual redundancies, as drobo intended) with a home server to essentially merge all of the large volumes on all of the drobos into 1 huge virtual drive.

maybe, just maybe, something similar could be done if you are no longer able to use the usual setup.
(please do more research or tests though as i have never used it, just an idea) :slight_smile:

Paul,

I do not see a reason to layer on more protocols than required, if avoidable (that seems to be more popular in the Windows world, where inefficiency is expected and even welcomed with great grandeur).

I do have the Drobo B800i currently configured with HFS+ (which I did successfully mount under Linux, though it is not optimal), with an encrypted container that has ext4 on it.

However, I have a second Drobo B800i (in a different city; I need to retrieve it). Once I pick up that B800i, I am going to try the following configuration:

a) Create no volumes upon the B800i (thus, when Linux logs into it via iSCSI, I expect to see 8 individual 1TB raw volumes)
b) I will then use LUKS to encrypt each of the 8 raw volumes
c) I will then create a ZFS pool from the encrypted volumes (see the sketch below)
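
In shell terms, roughly this (a sketch only; the pool name “tank” and the “b800i-*” mapper names are placeholders of my own invention):

```
# b) Encrypt each raw iSCSI volume with LUKS, then open it
#    (luksFormat prompts for confirmation and a passphrase)
for d in b c d e f g h i; do
    cryptsetup luksFormat /dev/sd$d
    cryptsetup luksOpen /dev/sd$d b800i-$d
done

# c) Build the pool out of the decrypted mapper devices
zpool create -o ashift=12 tank raidz2 \
    /dev/mapper/b800i-b /dev/mapper/b800i-c /dev/mapper/b800i-d \
    /dev/mapper/b800i-e /dev/mapper/b800i-f /dev/mapper/b800i-g \
    /dev/mapper/b800i-h /dev/mapper/b800i-i
```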

This would then mean that if any DASD failed, it could be removed from the ZFS pool (and from the Drobo B800i, for that matter) without requiring any data cleansing, since it was encrypted to start with, and then merely replaced, encrypted again, and brought back into the ZFS pool.
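
The replacement dance would then be something like this, I believe (again untested; names carried over from my sketch above, and the new drive may well come back under a different /dev/sd* node):

```
# Take the failed member out of the pool and tear down its mapping
zpool offline tank /dev/mapper/b800i-d
cryptsetup luksClose b800i-d

# ...physically swap the drive in the Drobo, then encrypt the new one...
cryptsetup luksFormat /dev/sdd
cryptsetup luksOpen /dev/sdd b800i-d

# Resilver onto the replacement and watch progress
zpool replace tank /dev/mapper/b800i-d
zpool status tank
```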

Once I have both Drobo B800i units here, I could do that with both of them and use ZFS to provide my redundancy and volume management, with LUKS for encryption.

With the Drobo B800i being a “legacy unit” now, it seems rather doubtful that Drobo will create ext4 firmware for it, and using it as a pure iSCSI SAN lets me apply any filesystem I desire, such as Btrfs or ReFS, on top of the iSCSI targets.

If I use RAIDZ2, then I can lose up to 2 individual DASDs before I am SOL. Moreover, ZFS, Btrfs, ReFS, etc. all have a very easy mechanism for growing filesystems, something that was not so easily done at the time Drobo started producing the units I now own. Now, if Drobo were to offer firmware with ZFS on it for the legacy units, I am quite sure they could get people to pay something for that!
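
Growing really is about as simple as it gets; as best I understand it, once every member of a raidz2 vdev has been swapped for a larger drive (and each resilver allowed to finish), the pool picks up the extra space:

```
# Let the pool expand automatically once all members are larger
zpool set autoexpand=on tank

# Swap each 1TB member for a 4TB drive, one at a time
zpool replace tank /dev/mapper/b800i-b
zpool status tank   # wait for resilvering to complete before the next swap
```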

When I first got the Drobos I was not sure how I would use them, and a friend who had used them in the past suggested I rely on their internal firmware capabilities. He mostly uses Macs (as do I), but I also wanted a filesystem accessible from Solaris, MacOS, Linux, and potentially Linux running on System p.

As best I can understand, as long as any additional DASDs I obtain (presuming they are higher-density 4K types, 4TB or 6TB) can do 512e (512-byte sector) emulation and the sum total of the drives does not exceed 32TB, I ought to be fine. Having two B800i Drobos, that is potentially 64TB of space; not nearly what I project my space needs to be in the far term, but plenty for the near term.
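
Checking whether a given drive (or the volume the Drobo presents) does 512-byte emulation should be straightforward from Linux; something like this ought to tell the story:

```
# Logical vs. physical sector size for every block device;
# a 512e drive shows LOG-SEC 512 with PHY-SEC 4096
lsblk -o NAME,SIZE,LOG-SEC,PHY-SEC

# Or per device
blockdev --getss --getpbsz /dev/sdb
```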

I believe I am going to start to purchase 4TB DASDs (as they are coming down in price now, given the higher density choices coming to the market) and buying 8 of them works out well to give 32TB in single unit fully balanced across all the DASDs. Although, if I use ZFS that will be less critical as it will figure out balancing them not the Drobo firmware.

Thanks,

Stuart

Disclaimer: I’m only familiar with the more recent B810i, so take my comments with a grain of salt.

There seems to be a misunderstanding about how the Drobo handles storage here. Drobos do not run some kind of RAID software on top of Linux. The storage pool is abstracted away under a SCSI device interface. This means the Linux kernel that runs on Drobos just sees one large storage device, which is thin-provisioned. Just to make sure we’re on the same page: there are no “raw volumes” that you can access.

This large storage pool is not formatted using ext3, ext4, or any traditional filesystem. It uses the BeyondRAID algorithms to perform replication across disks at the block level. What you get is literally a huge block device that can be allocated, partitioned, and formatted any way you choose.
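
To make that concrete: once you create a volume in Drobo Dashboard and attach it over iSCSI, the host simply sees an ordinary disk and you can treat it as such (illustrative only; the device name is assumed):

```
# The attached volume shows up as a plain block device
lsblk /dev/sdb

# Partition and format it however you like
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%
mkfs.ext4 /dev/sdb1
```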

As I explained above, this isn’t relevant to iSCSI. iSCSI volumes are not stored on an ext3 filesystem at all, nor are they mounted or formatted locally. Furthermore, I’d like to mention for the record that Drobo NAS devices (5N, B810n) have been using ext4 since they were launched.

iSCSI devices allocate space directly against the SCSI storage abstraction. Any restrictions on the size of a volume stem from the limitations of the platform and the BeyondRAID version. If I’m not mistaken, the B800i is based on a 32-bit ARMv5 processor whose kernel was not able to handle volumes larger than 16TB.

The 16TB limitation is not because the B800i has to mount and format the iSCSI volume locally. It is just related to the kernel addressing capabilities. The B810i, for example, is capable of allocating volumes of up to 64TB.
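
If memory serves, the 16TB figure falls straight out of the 32-bit page cache index: 2^32 pages × 4 KiB per page = 2^44 bytes = 16 TiB, the largest file or block device a 32-bit Linux kernel can address through its page cache.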

No. Disks are abstracted away as a single thin-provisioned SCSI device.

As far as I know you can format iSCSI volumes with any filesystem you like. In fact, you could probably format all of them as ZFS partitions and aggregate them under a single ZFS pool.

Maybe I’m missing something, but it seems to me that you just want a simple LVM setup over these two B800i units. BeyondRAID already offers a protection level equivalent to RAIDZ1 (or RAIDZ2 if you enable dual-redundancy).

Again, I’m not quite sure why you think you cannot use a B800i as a pure iSCSI device. Maybe this is something specific to the B800i that changed with the B810i, but I was fairly sure it was possible.

I never tried it, but my impression is that you could format B800i volumes as LVM partitions, and use LVM to expand the total size beyond the 16TB barrier.
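
Roughly like this, I imagine (untested, and the volume group and device names are invented for the example):

```
# Turn each iSCSI volume into an LVM physical volume
pvcreate /dev/sdb /dev/sdc

# Pool them into one volume group and carve out a logical volume
vgcreate drobo_vg /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n data drobo_vg
mkfs.ext4 /dev/drobo_vg/data

# Later, grow past 16TB by adding a further volume
vgextend drobo_vg /dev/sdd
lvextend -l +100%FREE /dev/drobo_vg/data
resize2fs /dev/drobo_vg/data
```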

I honestly do not think that it would help at all. ZFS is infamous for its ridiculously high CPU and memory requirements, and BeyondRAID offers the same level of protection as ZFS, as far as I have been able to study it. In fact, it is amazing how efficient BeyondRAID is when you take into consideration the kind of hardware that it uses.

hi stuart, i think you’re right about the windows world (all the windows computers i’ve used often had memory leaks and needed a memory optimisation tool run on a regular basis, especially before launching any games) :slight_smile:

(thanks ricardo, am learning a lot more about things from your last post too)