Drobo

Performance

I’ve got my DroboPro with four 2TB drives set up and a single 16TB volume presented to a Windows 2003 server running on an HP DC7900 ESXi 4 host. I use the Windows 2003 server for file sharing and domain services (DNS etc.). I should add this is just for home use.

I also have a Windows 7 VM on the same ESX host, which I use for downloading torrents - some are HD/1080p torrents, so pretty big.

I have an issue whereby the torrent client reports my drive as unavailable. The drive is the shared volume from the Windows 2003 server, and it momentarily drops off the network whenever I try to add a new 1080p torrent - i.e. when the torrent client tries to write the 10-20+GB of files to the drive (the Drobo), the drive drops off and all the torrents stop and fail because the drive isn’t there. The drive IS there; it’s just dropping out while the large files are written.

I have 4 drives, 4 spindles, a Gb network, and 2GB of RAM and 2 vCPUs for each of the two VMs - I shouldn’t have any issues. All of the ESX monitoring shows the resources are very under-utilised - including CPU, network and I/O. My VMs show no issues with paging, memory or network etc.
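With nothing coming back from the Drobo itself, one way to pin down the dropout is to probe the share from a second thread while a large sequential write is in flight - roughly what the torrent client does when it pre-allocates a 1080p download. A rough diagnostic sketch (the share path passed in would be a placeholder like `\\server\drobo`, not anything from this thread):

```python
import os
import threading
import time


def probe(path, interval, results, stop):
    """Poll the share every `interval` seconds; record a timestamped up/down flag."""
    while not stop.is_set():
        results.append((time.time(), os.path.isdir(path)))
        time.sleep(interval)


def big_write(path, total_mb, chunk_mb):
    """Write `total_mb` of zeros in chunks, mimicking a torrent pre-allocating files."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    with open(os.path.join(path, "probe.bin"), "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force the data out to the share each chunk


def run_test(share, total_mb=10240, chunk_mb=64):
    """Hammer the share with a big write; return timestamps where it vanished."""
    results, stop = [], threading.Event()
    watcher = threading.Thread(target=probe, args=(share, 0.5, results, stop))
    watcher.start()
    try:
        big_write(share, total_mb, chunk_mb)
    finally:
        stop.set()
        watcher.join()
    return [ts for ts, up in results if not up]
```

Pointing `run_test()` at the Drobo-backed share and then at another share on the same switch should show whether the disappearance really only happens on the Drobo path during sustained writes.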

I’ve been through the Microsoft Windows 2003 performance tuning guidance for a file server (I’m a virtualisation architect, so I know what I’m talking about) and checked everything - I can’t find any reason for this issue.

Importantly, if I point my torrent client at my D-Link DNS-323, which has two 2TB drives (the same model as the Drobo’s) mirrored in it - no issues. That too is on the same Gb network - connected to the same switch etc.

There is VERY little information available from the Drobo software about what’s going on under the hood - it’s a big miss when you’re marketing the device as an SMB storage SAN and you can’t tell what’s going on.

Anyone else got any thoughts?

Darren

Is the 'pro connected directly to the server or through a switch?

Did you set the heartbeat token timeout to 14 seconds?

Good point - it’s via a switch. I guess I could get a second network card and have it connected directly. That is a very good point!

I assume that’s a drobo setting, so where do I set it?

OK, so I now have a dedicated Gb network card for the storage, and while copying a 1GB file from the 2008R2 server hosting my file sharing - the Drobo rebooted.

Oh…

At least 2008 resets the shares - that’s one battle won.

I am not impressed with what I’m seeing so far from the Drobo.

My setup - just for clarity (and this is for home use, not a business):
DroboPro - latest firmware - 4 x 2TB Western Digital WD20EADS drives
HP DC7900 (small-form-factor PC) - dual-core Intel vPro 3GHz
4GB RAM
2 x 1Gb NICs - one dedicated to the storage network, the other to a 1Gb Cisco switch attached to my home network
ESXi 4 installed on a 1TB local drive
2008R2 server with the file server, Active Directory and DNS roles installed - 2 vCPUs and 2GB RAM
Windows 7 client with 2 vCPUs and 2GB RAM
ESX has no memory or CPU reservations set
All monitoring (ESX, Process Explorer etc.) shows memory and CPU are WELL within limits. Unfortunately I CANNOT monitor the I/O of the Drobo (a major shame considering the device is marketed as a small/medium business storage appliance - not even SNMP is an option).

I also have a D-Link DNS-323 attached to my 1Gb switch. Copying from my Drobo volume to it is also slow, yet copying to or from the DNS-323 anywhere else is MUCH faster - i.e. the D-Link NAS is outperforming the Drobo by a mile. That too has two of the same WD20EADS drives in it.
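With no SNMP or appliance-side counters available, a crude substitute is to time identical sequential writes to each box and compare the MB/s yourself. A minimal sketch - the UNC paths in the commented example are placeholders, not paths from this thread:

```python
import os
import time


def write_throughput(path, total_mb=512, chunk_mb=16):
    """Sequentially write `total_mb` to `path` and return the rate in MB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    target = os.path.join(path, "bench.bin")
    start = time.time()
    with open(target, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually hit the share
    rate = total_mb / (time.time() - start)
    os.remove(target)  # clean up the benchmark file
    return rate


# e.g. compare the two NAS boxes (placeholder share names):
# for share in (r"\\server\drobo", r"\\dns323\Volume_1"):
#     print(share, round(write_throughput(share), 1), "MB/s")
```

Running the same write against both shares from the same client at least turns "outperforming by a mile" into a number, even if it says nothing about what the Drobo is doing internally.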

The biggest issue isn’t so much the actual Drobo performance as the inability to monitor it, so that issues can be identified or, at the very least, potential bottlenecks pinpointed.

It’s on the ESXi side. Have you followed our VMware best practices guide?

The guide is for 3.5 - but sure, I can try that. It wouldn’t explain the reboot though.

My situation is also a little different from the token heartbeat that ESX would normally use - I’m using a paravirtualized NIC driver inside Windows, not attaching the iSCSI target directly to ESX (which would use a Linux driver). I’m not convinced the heartbeat token applies to a paravirtualized network interface driver.

I’ll report back…thanks Jennifer.

You are using 4.0 on a Pro?

The pro is only certified for 3.5.

When will it be certified for 4? 3.5 is no longer supported by VMware.

Jennifer - is there any update regarding the ESX4 support/certification?

Darren

The pro is only certified for 3.5 at this time. I can see if we will be certified for 4.0.