Drobo 5N drive failures gone after reset & FW upgrade?

I have a Drobo 5N that I wanted to upgrade to the 64TB firmware. I had an iTunes library in excess of 10TB that I simply deleted, since I can re-download it, and then moved the remaining data to various drives with free space. I have had two power failures recently that I know of. The first time, the Drobo was still on and active as if nothing had happened; the internal backup battery held on for the short period the power was out. The second time, the Drobo was off when I checked it. I have a whole-home surge protector installed on my meter, so it wasn't a power surge.

While moving my data around, one drive showed "WARNING", but I wasn't notified; I only knew because I selected each drive one by one to view its stats. Near the end of removing all the data, another drive showed as failed. I have all of my data secured elsewhere. I have manually upgraded to the most recent firmware and set the entire thing up how I want it. ALL OF THE DRIVES ARE HEALTHY AND GREEN! Do I dare put data back on my Drobo? Is it possible that deleting 10TB of data while copying another 5TB to my desktop corrupted the file system? If these drives had indeed gone bad, wouldn't the newly updated Drobo have detected it and said something? I will remove these drives and run sector-by-sector scans if necessary, but that will take days per drive. Is this a glitch that anyone else has experienced?

OK, so now the drive that first showed "WARNING" is showing it again. That drive is going in the garbage after I take it apart and play with it. What about the second drive that supposedly failed completely? If these drives had really failed, or begun to fail, why wasn't I notified sooner? I'm starting to think they really are bad, and that my faith in Drobo is misplaced.

Hi, usually there can be a delay in reclaiming space (while the Drobo frees up the blocks for reuse), and 10TB might have needed time to fully reclaim, but I think that would only have contributed to a bit of slowdown, rather than hard failures.

It is certainly possible for a hard drive to fail at some point while in use (just as it is theoretically possible for two or more drives to fail), though power outages, spikes, etc. are not going to do electronic devices any favours either.

I think seeing a "warning" or "healed" message usually indicates that some trouble was noticed and rectified, or that the Drobo was able to continue its normal operations, but it may ultimately flag those drives as bad later down the line.

Can I check the current status of your Drobo? Are all the drive lights green, both on the Drobo and in Dashboard, and is any data you had there still accessible? If so, that's not a bad situation to be in, from one perspective.

(When one of my WD15EADS drives failed fairly recently, I had noticed sluggishness and Windows Explorer crashes/computer reboots shortly beforehand - this is the post from back in August last year, just FYI in case it's handy:)
http://www.drobospace.com/forums/showthread.php?tid=144519

While in the last week or two my Gen 1 has had some problems that I still need to resolve (along with the computer it was attached to, which happened to stop working too), I've had power outages before and the Drobo always pulled through.

In your case, if you already have backups of your data, a few ways forward could be:

  • if you are still covered by support, raise a ticket with them, as the Dashboard diagnostic logs could help the support team find out more about what happened.

  • if the Drobo is still green (both in Dashboard and on the physical unit), you could try shutting the Drobo down via Dashboard, then powering everything off and rebooting both the Drobo and the computer, to see if you still have the same situation.

  • if you do, then it is probably not a bad idea to shut down the Drobo and computer, remove all the drives (ideally remembering their order for now), and then power it up empty. Does Dashboard still see the empty Drobo, though with one bay light showing solid red?

(These steps could continue further, with a few more things to try to get the Drobo back to how it was, or at least to better ascertain whether the unit is still OK. But since you already have all your data safely backed up, and since you might reformat the Drobo at some stage to get to 64TB anyway, it is probably not a bad idea to run those surface scans at some point, via an external or separate adaptor. Something like SpinRite could also help, as it can show the number of soft errors per interval, which can be a good indicator of a drive that is struggling but has not yet had any outright physical failures - but check with its makers first, as I think it only works with drives up to a certain size.)
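For what it's worth, the errors-per-interval idea behind that kind of read-only surface scan is conceptually simple. Here is a rough Python sketch, using an ordinary file path as a stand-in for a raw block device (on Linux you would point it at something like `/dev/sdX` as root - a hypothetical device name); this only illustrates the counting idea and is not how SpinRite or any particular tool actually works:

```python
import os

def surface_scan(path, chunk_size=1024 * 1024, interval_chunks=1024):
    """Read-only surface scan sketch: read the device (or file) front
    to back in fixed-size chunks and count read errors per interval.
    Returns a list of error counts, one entry per interval."""
    errors_per_interval = []
    errors = 0
    chunks_read = 0
    # buffering=0 gives unbuffered raw reads, closer to what a
    # device scan would do.
    with open(path, "rb", buffering=0) as dev:
        while True:
            try:
                chunk = dev.read(chunk_size)
            except OSError:
                # An unreadable region: count it and skip past it.
                errors += 1
                dev.seek(chunk_size, os.SEEK_CUR)
                chunk = b"\x00"
            if not chunk:
                break  # end of device/file reached
            chunks_read += 1
            if chunks_read % interval_chunks == 0:
                errors_per_interval.append(errors)
                errors = 0
    errors_per_interval.append(errors)  # partial final interval
    return errors_per_interval
```

A drive that is struggling would show a rising error count in some intervals long before it fails outright; real tools add retries, remapping and SMART queries on top of this, which is left to them here.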

I have managed to fix this problem. It would be nice to know what caused it, to prevent it from happening again. I updated my firmware to support the expanded 64TB capacity, but the drives continued to appear to malfunction, and none of the disks were readable by a Windows PC. I then used my MacBook to run diagnostics, and all the drives checked out. The only thing I did other than diagnostics was reinitialize them using the Mac's built-in Disk Utility, which was required before they would mount for testing. So if anyone can offer any suggestions as to why my Drobo thought these drives were going bad/failed, that would be great. I had reset and reformatted the disk pack multiple times with the Drobo before I altered my approach. At this time I have only useless data on the Drobo; I plan on putting it through its paces for another few weeks before placing anything I care about back on it.

hi kaipus,
I was thinking: if the drives needed to be initialised again on the Mac before they could be mounted for diagnostics, maybe something similar was needed on Windows before Windows could see the drives. (I don't think any computer would be able to see the actual data on a drive that was part of a disk pack, though, as only Drobos can natively access a pack.)

If your drives are currently in the Drobo, can I check whether they are all green?

Did you actually run those sector-by-sector scans on each bare drive? (If you are still going to be running tests, it may be worth powering down to remove them and running a full scan of each drive via a separate adaptor, before putting them back into your powered-off Drobo and powering back up.)