Drobo

2 of 3 new drives marked as "failed"

I purchased 3 Western Digital 6TB drives to replace the older 1TB and 2TB units in my 5N. I removed a 1TB unit from slot 0 (top) and waited the full 17 hours for the green/amber lights to stop flashing before inserting one of the 6TB units in slot 0.

Next, I pulled a 1TB drive out of slot 2 (middle) and this time waited 9 hours for the green/amber cycle to complete before inserting a new 6TB drive. After about 5 minutes, the Drobo said the drive was bad and flashed the LED red. I pulled that one out of slot 2 and inserted the third 6TB drive - again, about 5 minutes and then “flash Red”.

Can these new drives really have a 66% failure rate (2 of 3)? What exactly is the error that was detected?
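As a rough back-of-envelope sanity check (the 2% out-of-box failure rate below is just an assumed number for illustration, not anything measured), the odds of two out of three brand-new drives being genuinely dead on arrival should be tiny:

from math import comb

# Probability that at least 2 of 3 new drives are truly defective,
# assuming an out-of-box failure rate p (the 2% is an assumption).
p = 0.02
prob = sum(comb(3, k) * p**k * (1 - p)**(3 - k) for k in (2, 3))
print(f"P(>=2 of 3 defective) = {prob:.4%}")   # roughly 0.12%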
The email report just says…
Your Drobo: “Drobo-001” has reported the following critical alert.

Red Alert. Drobo detected a hard drive failure. Replace the hard drive indicated by the blinking red light. 

======== Drive 2 ========
Health: Drive failure



Message sent from:
Host Name: Drobo-001 
Host IP Address(es): 192.168.2.103

There are no messages in /var/log/nasd.log, but in /var/log/messages I see:

Jul 17 09:20:21 Drobo-001 user.notice kernel: [260144.996979] EXT4-fs (sda1): error count since last fsck: 6
Jul 17 09:20:21 Drobo-001 user.notice kernel: [260145.002578] EXT4-fs (sda1): initial error at time 1590183424: ext4_mb_generate_buddy:739
Jul 17 09:20:21 Drobo-001 user.notice kernel: [260145.010811] EXT4-fs (sda1): last error at time 1594743129: ext4_mb_generate_buddy:739

but these seem to show up every day with the same timestamp values!
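Decoding those epoch values (a quick sketch; the dates are just the Unix-timestamp conversions of the numbers in the log above) shows both recorded errors are in the past and identical from one day's report to the next:

from datetime import datetime, timezone

# The EXT4 "initial error" / "last error" fields are Unix epoch seconds.
for label, ts in (("initial error", 1590183424), ("last error", 1594743129)):
    print(label, datetime.fromtimestamp(ts, tz=timezone.utc))
# initial error 2020-05-22 21:37:04+00:00
# last error 2020-07-14 16:12:09+00:00

Both timestamps predate the Jul 17 log entry and never change, which fits a stale error counter being re-logged daily rather than a fresh failure.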

So, this is clearly not a problem with the drives.
I removed the 6TB disk from the operational array and re-added the 1TB disks; the 1+2+1+4+3 TB array rebuilt itself OK. Then I shut the system down, removed all 5 drives, and inserted the 3 “brand new” 6TB drives (one had already been used successfully in slot 0, while the other two had been flagged as failed).

After a “reset/factory reset” of the 5N, all 3 of the 6TB drives spun up and, after 30 minutes, were operational.

It seems that the “disk check” is less robust, and more likely to fail a drive, when the drive is added to an operational system.