I’ve just had both 2TB WD Caviar Green (WDC WD20EADS) drives fail. This looks to be the cause of the flaky “it’s critical, now it’s OK” behavior I’ve been seeing for the last couple of months. The bottom two drives have bad sectors mapped or are sometimes taking 20 seconds to respond.
It looks like the Green drives, or desktop drives in general, are not suitable for RAID service: either they rack up too many head load cycles, or their deep error recovery on bad sectors takes too long and causes other problems.
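For what it’s worth, the head load cycle theory is checkable: it shows up as SMART attribute 193 (Load_Cycle_Count), though you’d have to pull a drive and attach it directly to a SATA port, since the Drobo doesn’t pass per-drive SMART data through to the host. A rough sketch using smartmontools from Python; the /dev/sdb path is just an example:

```python
# Rough sketch: read Load_Cycle_Count (SMART attribute 193) with smartctl.
# Assumes smartmontools is installed and the drive is attached directly
# (e.g. as /dev/sdb) -- drives sitting inside the Drobo are not visible this way.
import subprocess

DEVICE = "/dev/sdb"  # example path, adjust for your system

out = subprocess.run(
    ["smartctl", "-A", DEVICE],
    capture_output=True, text=True, check=False,
).stdout

for line in out.splitlines():
    fields = line.split()
    # SMART attribute table lines start with the attribute ID.
    if fields and fields[0] == "193":
        # The last column is the raw value: total head load/unload cycles.
        print(f"Load_Cycle_Count on {DEVICE}: {fields[-1]}")
        break
else:
    print("Attribute 193 not reported by this drive.")
```

WD Green drives of this era were reported to park their heads very aggressively when idle, so a count in the hundreds of thousands after only a year or two would lend some weight to that theory.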
Hmm…DRI sells (or used to) WD Green drives bundled with Drobo, and I’ve had three 1TB WD Green drives in my Drobo v2 for over a year now with nary a problem. I’m skeptical that this is a systemic problem with these drives.
Same here. I’m running three 1TB drives (two almost 2 years old and one a year old) and one 2TB drive that is approximately 6 months old. No problems whatsoever reported. Were both of your 2TB drives purchased at the same time from the same vendor? Perhaps a ‘bad batch’?
The serial numbers are pretty far apart and I bought them separately. I’d power it down to check the firmware levels, but I’m scared to touch the thing while it’s rebuilding.
[quote=“bnewport, post:1, topic:1090”]
I’ve just had both 2TB WD Caviar Green (WDC WD20EADS) drives fail. This looks to be the cause of the flaky “it’s critical, now it’s OK” behavior I’ve been seeing for the last couple of months. The bottom two drives have bad sectors mapped or are sometimes taking 20 seconds to respond.[/quote]
I also had to return my 2 Caviar Green 2TB WD20EADS disks which had too many spurious errors and were causing too many dynamic rebuilds.
Those 2 were WD20EADS-00R6B0 S/N WD-WCAVY02xxxxx.
The new ones don’t have the same problem, and are WD20EADS-32S2B0 S/N WD-WCAVY09xxxxx.
So it is possible that WD quietly corrected some hardware or firmware problem on the early 2TB Caviar Greens.
Two Drobos with 4x WD20EADS drives here, and so far no problems. All drives are from a fairly wide spread of manufacture dates and sources, but all were purchased mid/late 2009.
I recently learned something about Mean Time Between Failure (MTBF) and life expectancy.
The MTBF of an average 20-year-old European citizen is about 500 years. That means roughly one in 500 will die between the ages of 20 and 21, yet their life expectancy is still closer to 70 years.
It is the same with hard drives: it is very tempting to derive a life expectancy from the MTBF, but you can end up very far from the truth.
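To put numbers on that: MTBF really tells you the expected failure rate during the service life, not how long a drive will last. A quick back-of-the-envelope calculation (the 1,000,000-hour figure is just an illustrative spec, not taken from any WD datasheet):

```python
# MTBF gives an annualized failure rate (AFR) during the service life,
# not a life expectancy. Illustrative numbers only.
import math

HOURS_PER_YEAR = 8766          # average year, including leap years
mtbf_hours = 1_000_000         # example drive spec, roughly 114 "MTBF years"

# Simple approximation: fraction of a large fleet that fails per year.
afr_simple = HOURS_PER_YEAR / mtbf_hours

# Slightly more careful version assuming a constant (exponential) failure rate.
afr_exp = 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

print(f"MTBF {mtbf_hours:,} h  ~= {mtbf_hours / HOURS_PER_YEAR:.0f} years")
print(f"Annualized failure rate: {afr_simple:.2%} (approx), {afr_exp:.2%} (exponential)")
# A ~114-year MTBF still goes with a 3-5 year design service life,
# just like the 500-year MTBF of a 20-year-old goes with a ~70-year life expectancy.
```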
Yep, I have a NetApp filer in the office with 28 Fibre Channel drives, and about one fails every 18 months. But the NetApp is absolutely amazing from a reliability point of view, and it’s pretty much zero admin.
We had a Netapp at our old office. They’d even send replacement drives before a drive failed, as it monitored recoverable drive errors (similar to how Drobo does, I believe). It is quite amazing, but you definitely pay for it.
I wonder if TLER has anything to do with it? Maybe you could try WD RE3 series drives; they’re all I use in the RAID 5 setup on my server. They have features beyond the firmware that make them outclass any other drives WD sells as far as long-term reliability goes, though of course you pay a premium for that.
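TLER is what stops an enterprise drive from spending 20+ seconds on deep error recovery before answering the controller, which fits the symptoms above. On drives that support it, you can query (and sometimes set) the equivalent SCT Error Recovery Control timeouts with smartmontools; many desktop Greens simply report the command as unsupported, and as with the SMART check earlier, this only works with the drive attached directly rather than through the Drobo. A hedged sketch (the device path is just an example):

```python
# Query, and optionally cap, the drive's error-recovery timeout via SCT ERC
# (the standardized equivalent of WD's TLER). Needs smartmontools; the drive
# must be attached directly and must actually support the SCT ERC command.
import subprocess

DEVICE = "/dev/sdb"  # example path, adjust for your system

def run_smartctl(*args):
    result = subprocess.run(
        ["smartctl", *args, DEVICE],
        capture_output=True, text=True, check=False,
    )
    return result.stdout + result.stderr

# Show the current read/write recovery timeouts (reported in tenths of a second).
print(run_smartctl("-l", "scterc"))

# Uncomment to cap recovery at 7.0 seconds for reads and writes, the usual
# TLER-style setting for RAID use. Desktop drives may refuse this.
# print(run_smartctl("-l", "scterc,70,70"))
```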