Drobo

Drives being marked as 'failed' incorrectly when Drobo nearly full?

I have TWO 5D3 units in use, one in warranty, one just outside warranty…

On Christmas Day, the Drobo that was out of warranty detected that a drive had been removed, when in fact the drive was still there and the latch was secure… the Drobo then detected a new drive had been inserted… after a tense period of rebuilding, it reported the drive as failed… so I quickly ordered a replacement 8TB disk to be delivered via Amazon…

When that arrived, I removed the ‘failed’ drive from Bay 0… only for the Drobo light on Bay 1 to go off as well, and the Drobo reported a missing drive in Bay 1 as WELL as Bay 0… thankfully I had Dual Drive Redundancy enabled, so I didn’t lose any data… but now it was telling me that I had over 120 hours to wait for the rebuild, and not to remove any more drives…

The first unit to fail was almost at capacity, with about 14% free space… so it was already low. Then, when the drive failed, I had to delete some data because free space became critically low, showing only 2%, AND I had no spare redundancy… so I started transferring my archive to the other Drobo, which had about 78% free space, just in case another drive failed…

A few hours into that transfer, as space started to get low on the second Drobo, the same thing happened to that one!
It reported a drive missing in Bay 0, then discovered it again and marked it as failed…
This unit only has single drive redundancy though, so instead of using the replacement drive I’d ordered for the other Drobo 5D3, I put it in this one instead.

A few minutes later, the Drobo did the same thing with the brand-new drive… it detected the drive had been removed and reinserted… then marked it as failed… within MINUTES of installation. (The drive had been settling at room temperature for over 24 hours at that point.)

So - within 3 days, THREE suspicious hard drive failures, each while the Drobo unit was close to full…
I’ve had to prune a lot of data because both units kept repeatedly rebuilding… each rebuild taking hours, then restarting after 10 minutes… the only way I’ve been able to stop them constantly rebuilding is by freeing up space, deleting files I didn’t really want to lose… but I’ve had no choice, given the reported failure of three 8TB disks…

I’m still waiting for support to come back with results from the diagnostic files I uploaded over a week ago… and I can’t use the Drobos properly without working disks…

Has anyone else had false drive failures or ‘phantom’ ejected disks when the Drobo is close to full capacity?

I have basically the same problem with a 5C. In 8 months I’ve replaced 3 of 5 new drives, and now one of the replacements plus the 4th original drive are “failing”.

I also have the added fun of losing the USB connection as well. Support is telling me it’s the drives, but they test fine when taken out of the 5C. That doesn’t explain the dropped connections, though.