iSCSI disconnection problems

Hi all,

We recently purchased a Drobo Pro to be connected by iSCSI to our virtualisation solution (it’s a Debian/KVM setup running open-iscsi).

The Drobo resets itself periodically while the drives are being written to. This makes all the VMs panic, since they all stop seeing their disks. When the Drobo finally decides to restart and comes back online, all the guest filesystems are mounted read-only (which is the default error handling in most Linux installs).
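For what it's worth, the read-only remount comes from the ext3/ext4 "errors=remount-ro" mount option that Debian puts in /etc/fstab by default; the line below is only an illustration with a made-up device name, assuming the guests use ext4 roots. Switching it to errors=continue or errors=panic only changes how a guest reacts to the I/O errors, it does nothing about the resets themselves.

    # /etc/fstab inside a guest -- device name is illustrative
    # errors=remount-ro (the Debian default) is what flips the filesystem
    # to read-only as soon as the virtual disk starts returning I/O errors
    /dev/vda1  /  ext4  errors=remount-ro  0  1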

Obviously, this disqualifies the Drobo as anything remotely close to usable. We are very disappointed by this. Sure, we didn’t expect blazing speeds (it’s only iSCSI), but we would at least expect it to work overnight without having to reboot VMs in the morning because they failed during the nightly cron run (backups!).

Does anybody know of a magic “it makes the iSCSI work as advertised” solution? I have tried tweaking the iSCSI queue size and timeout behavior, with no luck so far.
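For the record, these are the kind of knobs I have been playing with in /etc/iscsi/iscsid.conf on the host. The values below are examples rather than a recommendation, and they only apply to sessions that are (re)logged in after the change.

    # /etc/iscsi/iscsid.conf -- example values only
    # How long the initiator queues I/O for a dropped session before
    # failing it up to the block layer (default is 120 seconds).
    node.session.timeo.replacement_timeout = 180
    # iSCSI NOP-Out "ping" interval/timeout used to detect dead connections.
    node.conn[0].timeo.noop_out_interval = 5
    node.conn[0].timeo.noop_out_timeout = 10
    # Per-session outstanding command and queue-depth limits.
    node.session.cmds_max = 128
    node.session.queue_depth = 32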

Also, opening a support case hasn’t proved very helpful so far (still waiting for an answer).

Are you using static or automatic (DHCP) IP addresses?

The Pro will work best with static IP addresses on both the Pro and the computer.

Do you have it directly connected to your computer or are you going through a switch?

I assigned it a static IP, and I’m using a direct cable connection (using the cable that came with the unit, even). The computer’s interface was also assigned a static IP address.
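For completeness, the Debian side of a dedicated storage interface looks roughly like this in /etc/network/interfaces; the interface name and addresses here are placeholders, not my actual values.

    # /etc/network/interfaces -- placeholders, adjust to your own subnet
    auto eth1
    iface eth1 inet static
        address 192.168.10.2
        netmask 255.255.255.0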

If you have not been contacted by support, you can call the Tech Center.

If you are in the US you can call 1-866-426-4280.

If you are in the European region you can find a list of tech support numbers here http://www.drobo.com/support/phone-support.php.

Asia Pacific region: +65.6270.2653.

Not familiar w/ Debian/KVM, but I’m using a DroboPro w/ one of our ESXi 4.1 hosts (now called vSphere Hypervisor) and an Elite w/ 2 vSphere 4.1 hosts, with relatively stable & acceptable read/write performance. We use them primarily for testing & demos of some block-level, disk-based backup/restore/replication DR software we resell.
The Pro is directly connected to one of our D*ll PE R4xx servers with a crossover Cat-6a cable, and the Elite sits on a “smart” 1 Gigabit switch, but we didn’t bother to configure VLANs, jumbo frames or QoS (see the note below if you want to try jumbo frames).
In short, we don’t experience any performance or network issues. Last but not least - no reboot loop.
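For anyone who does want to experiment with jumbo frames on the Debian/open-iscsi side, it is just a matter of raising the MTU on the dedicated interface. The interface name below is a placeholder, and the Drobo end plus any switch in the path have to support the same MTU, otherwise you will only make the disconnects worse.

    # Temporary change, lost on reboot -- eth1 is whatever interface faces the Drobo.
    ip link set dev eth1 mtu 9000
    # Confirm the new MTU took effect.
    ip link show eth1

To make it stick across reboots, an "mtu 9000" line can be added to the interface's stanza in /etc/network/interfaces.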