Drobo FS ultimate data security

I have just upgraded from a 1st generation Drobo with DroboShare to a new Drobo FS. All is well, but I started to wonder what would happen if the FS ever fails. I’m committing a lot of archive material to the Drobo and, although I have other backups of current data, I could lose out if the FS failed. Up to now I have naively assumed that I could simply swap the disks into a new Drobo, but now I realise that would result in the disks being reformatted and the data lost.

No doubt this issue has been discussed endlessly, but I would appreciate any views on what happens in the event of unit failure (however unlikely…).

In one phrase: follow the 3-2-1 strategy.

In summary:
- 3 copies of any important file (a primary and two backups)
- 2 different media types (such as hard drive and optical media, or flash)
- 1 copy stored offsite (or at least offline)
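As a rough illustration (the paths and the helper names here are hypothetical, not any official tool), the 3-2-1 idea can be sketched as a small script that keeps a primary copy plus two backups and verifies each backup against the original by checksum:

```python
import hashlib
import shutil
from pathlib import Path


def sha256(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def backup_321(primary: Path, local_backup: Path, offsite_backup: Path) -> bool:
    """Copy `primary` to two backup locations (ideally on different media,
    one of them offsite or offline) and verify both copies by checksum."""
    reference = sha256(primary)
    for dest in (local_backup, offsite_backup):
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(primary, dest)
    # You only really have three copies if both backups match the primary.
    return all(sha256(d) == reference for d in (local_backup, offsite_backup))
```

In practice you would point the second destination at genuinely different media (another NAS, optical, cloud), not just another folder; the verification step is the part people most often skip.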

A Drobo is not a backup device. It is a storage pool that implements some fault tolerance. That is it, nothing more, nothing less.

A Drobo does not replace the need for backup, it only lessens the odds of having to resort to a backup.

[quote=“ricardo, post:2, topic:2486”]
In one phrase: follow the 3-2-1 strategy.[/quote]

Thanks, that’s useful. I understand the need for a backup strategy. My only thought was whether the Drobo is recoverable in the event of a system failure. I think you’ve answered that. I may well use my old Drobo to mirror the new one, and I must read up on how to do that effectively. Thanks again.


It also depends on the failure. If a disk fails, the design of the Drobo lets you replace it with no loss of data. If the DroboFS unit itself were to fail, you can transfer the disk pack to a new DroboFS and all of your data will be intact. However, if anything goes wrong with the underlying filesystem on the Drobo, then you’re pretty much hosed.

Until a great Ars Technica article went into the details of BeyondRAID, I had no idea that it was done at the file level and required tight integration with the filesystem. Seriously, no one designs a RAID like that - for better or worse.

[quote]If the DroboFS unit itself were to fail, you can transfer the disk pack to a new DroboFS and all of your data will be intact.[/quote]

That’s encouraging. But how does the new FS recognise the disk pack as containing data, rather than reformatting the disks as it would with new ones? Sorry if this is a dumb question, but it’s important to understand the procedure.


Nice catch there, this is a very interesting article. In fact, it answers a lot of questions about how the disk pack is put together and why ext3 was chosen.

But I have to say that the way they organize the chunks is a really interesting algorithm. I don’t think it is fair to evaluate it as a RAID system, which it clearly isn’t. I see it as a fault-tolerance extension to some filesystems, instead of fault-tolerance for devices (i.e. RAID).

This tells me that the slowness of the FS is really due to limited CPU power (since the algorithm seems way more complex than RAID), although the constant re-layout to improve striping may be affecting performance.

Yes, it’s very similar to when firewalls and load balancers first offered deep packet inspection (e.g. cookie-based load balancing, or HTTP-based filtering and routing). All of that “deep” processing takes a huge hit on an otherwise simple (and easy to accelerate) algorithm. For a block-based RAID, parity calculations or straight mirroring are simple, and the increased parallelism of multiple drives is where you get a lot of the speed benefits. Since BeyondRAID has to query the filesystem and then read/write from several disks in different spots, the overhead of figuring everything out wipes out the performance gains from multiple disks.
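To make the contrast concrete, here is a minimal sketch of conventional block-level parity (plain RAID-5-style XOR, not Drobo’s BeyondRAID code): the parity of a stripe is just the byte-wise XOR of its data blocks, and any single lost block can be rebuilt by XOR-ing the survivors with the parity, with no filesystem knowledge needed at all:

```python
from functools import reduce


def xor_blocks(blocks: list[bytes]) -> bytes:
    """Byte-wise XOR of equal-length blocks (RAID-5-style parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))


def rebuild_lost(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the single missing data block in a stripe.

    XOR is its own inverse, so:
    missing = parity XOR (XOR of all surviving data blocks).
    """
    return xor_blocks(surviving + [parity])
```

For example, with blocks `b"AAAA"`, `b"BBBB"`, `b"CCCC"`, XOR-ing the first and third blocks with the parity reconstructs `b"BBBB"`. This per-stripe arithmetic is trivially cheap and parallelises across drives, which is the point above: the cost in BeyondRAID is not the parity maths but the file-aware bookkeeping layered on top of it.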

A faster processor may help some, but typically the only way such deep inspection improves significantly is to design custom silicon optimized for the task. I don’t see DRI moving in that direction.