Recently one of the drives in my 5D indicated a failure, with a red light and a warning in Dashboard. In the process of ordering a replacement, I turned off the unit and removed the “failed” drive to get its details.
After replacing the “failed” drive and rebooting the Drobo, a data protection cycle initiated. When it finished the following day, the “failed” drive was no longer indicated as failed. The 5D now indicates that a different drive should be replaced for additional storage capacity to get out of the “yellow” zone.
I did purchase a larger-capacity replacement drive, but should I be concerned about swapping the replacement in for the drive Drobo now indicates, rather than the drive initially indicated as failed?
Is there a way to run another data protection cycle in order to replace the originally “failed” drive instead?
Hi, can I check: when you first had that red light (just before you went to order a replacement), was it a solid red light that stayed on, or was it flashing on and off?
If the light was solid red (which would suggest your Drobo got quite full at that time), it’s possible the drive encountered an error after you booted it up again. Can you have a look in Dashboard to see if any of the drives (or drive bay areas) can be clicked on to bring up more info, and whether any drive details show a label such as Warning or Healed?
BTW, can I check how much data you have on the Drobo (used and free values / %), which drive sizes you have in which slots, and whether you are using single drive redundancy (SDR) or dual drive redundancy (DDR)?
The red light was initially blinking, but then became solid.
I think what happened is that I did max out my capacity, and at some point between ordering and receiving the replacement drive, I deleted a large quantity of temporary cache files. I think this change in available space, plus removing and reinserting the drive, initiated the data protection cycle. Since the drive previously indicated as failed is now shown as green after the data protection cycle, I think some kind of glitch produced the initial “failed” indicator.
Luckily, the data protection cycle did its job after I freed up some space, and whatever error occurred was rectified, though I have not seen any sort of “Healed” label.
From what I can tell from the Dashboard, the drive indicated as failed is healthy and does not require replacement. I am going to replace the drive now indicated as yellow to increase capacity.
I am not sure if I am using single drive redundancy (SDR) or dual drive redundancy (DDR), and I have yet to stumble across those settings or options in the Dashboard.
Thanks for the clarifications.
On redundancy mode: if you have not checked or changed anything there, you are most probably using SDR, as it is the default mode.
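If it helps with planning the larger drive, here is a rough Python sketch of the commonly cited BeyondRAID rule of thumb (usable space is roughly total capacity minus the largest drive for SDR, or minus the two largest for DDR). The drive sizes in it are hypothetical examples, not your actual layout:

```python
# Rough usable-capacity estimate for a Drobo under SDR vs DDR.
# Commonly cited BeyondRAID rule of thumb:
#   SDR: usable ~= total capacity minus the largest drive
#   DDR: usable ~= total capacity minus the two largest drives
# Drive sizes here are hypothetical examples.

def usable_tb(drives_tb, dual_redundancy=False):
    """Approximate usable space in TB for the given drive sizes."""
    reserved = 2 if dual_redundancy else 1
    largest = sorted(drives_tb, reverse=True)[:reserved]
    return sum(drives_tb) - sum(largest)

drives = [4, 4, 3, 2, 2]  # a hypothetical 5-bay layout, sizes in TB
print(usable_tb(drives))                        # SDR: 15 - 4 = 11 TB
print(usable_tb(drives, dual_redundancy=True))  # DDR: 15 - 8 = 7 TB
```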
Am glad the data protection cycle completed OK. It could be that overfilling caused a glitch before, but it was good to check like you did, and if there wasn’t any Warning or Healed message then the drive appears to be OK. (If you can, try to avoid going above 94% used according to Dashboard.)
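As a quick way to keep an eye on that, here’s a tiny Python check against that ~94% guideline; the used/total figures are made-up examples, not values read from Dashboard:

```python
# Quick check of Drobo fill level against the ~94% guideline above.
# The used/total values are hypothetical examples, not Dashboard output.

def fill_percent(used_tb, total_tb):
    return 100.0 * used_tb / total_tb

used_tb, total_tb = 10.4, 11.0  # hypothetical values in TB
pct = fill_percent(used_tb, total_tb)
status = "over the ~94% guideline, consider freeing space" if pct > 94 else "ok"
print(f"{pct:.1f}% used ({status})")
```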
If you replace a drive, the rebuild will usually take about 1 day per 1 TB of data that you have on your Drobo. (This can be a bit quicker on newer models, or a bit slower if data is still being accessed, but please let us know how things go.)
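And for a back-of-envelope on the rebuild, here’s a small sketch of that 1 day per 1 TB rule of thumb; the data amount is a hypothetical example:

```python
# Back-of-envelope rebuild duration from the ~1 day per 1 TB rule of thumb.
# The data amount is a hypothetical example; actual speed varies by model
# and by how much the Drobo is being accessed during the rebuild.

def rebuild_days(data_tb, days_per_tb=1.0):
    return data_tb * days_per_tb

data_tb = 6  # hypothetical amount of data stored on the Drobo, in TB
print(f"estimated rebuild: ~{rebuild_days(data_tb):.0f} day(s)")
```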