I have a DroboPro with 4 1TB drives. I have 2 2TB volumes allocated on the Drobo. I formatted them both as NTFS in Dashboard during setup.
The first volume I have reformatted as VMFS and mapped as storage in ESX. The second volume I have allocated as a mapped raw LUN to the VM that is our primary file server.
The primary intent here is to have nearline storage with other uses for free space, so speed isn’t so much an issue. The iSCSI is not disconnecting, but I am getting burst transfers and the slowdowns others have described.
Now, to the question: does this technically qualify as having multiple hosts access the Drobo at one time (the VM with direct access to the hardware, plus the ESX server itself)?
Interesting question! Before I start throwing out answers or speculation, let me ask a few more questions:
Did you use the Smart Volume option when you formatted the two 2TB NTFS partitions?
Which version of VMware ESX are you running?
How is your Pro connected to your host?
Based on your brief description, my answer is no, because the current firmware (1.1.3) still does NOT support MPIO. I also remember the VMware vSphere 4 performance guide mentioning a CLI command that lets you adjust the queue depth of your iSCSI initiator, among other best practices. I will post the URL of that informative article later today.
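For what it's worth, on ESX/ESXi 4.x the software iSCSI queue depth is adjusted from the service console with esxcfg-module. The module and parameter names below are what I recall from VMware's documentation, so treat this as a sketch and verify against the performance guide before running it; the change also requires a reboot to take effect:

```shell
# Set the software iSCSI LUN queue depth on ESX/ESXi 4.x (example value: 64).
# Module/parameter names as I recall from VMware's docs -- verify before use.
esxcfg-module -s iscsivmk_LunQDepth=64 iscsi_vmk

# Show the currently configured options for the module to confirm the setting.
esxcfg-module -g iscsi_vmk

# Reboot the host for the new queue depth to take effect.
```

Lowering the queue depth can smooth out the burst-then-stall pattern you described, since the DroboPro falls behind when too many commands are outstanding at once.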
To answer your question: there would be two separate iSCSI connections between the DroboPro and the physical ESX host, even if both go through the same NIC in the ESX host. To answer the question in your post subject: the DroboPro supports multiple connections, so this approach should be fine. In reality, the DroboPro has no idea that one of those connections originates from a virtual machine, nor would it care.
On to some rambling…
I just recently purchased a DroboPro, added it to my ESX 3.5 and 4.0 environments, and I also notice significant slowdowns. As a side note, I think VMware takes an unusually aggressive approach to the iSCSI protocol. Of all the software initiators I have tried, including Linux variants and the Microsoft iSCSI initiator, VMware's has given me the most difficulty. The DroboPro performs quite well with Microsoft's initiator and not so well with VMware's. I have also experimented with iSCSI targets on Linux (Red Hat, Ubuntu) and OpenSolaris: VMware shows drops, disconnects, and performance problems against them, while the other software initiators are flawless.
My point is: (a) iSCSI is a sensitive protocol that is affected by many factors, such as disk partition alignment, disk controller compatibility, motherboard bus speeds, NICs, switches, et cetera; and (b) VMware is apparently sensitive in its own right, which compounds all the others.
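On the partition-alignment point: a quick sanity check is whether each partition's start sector is divisible by 2048 (i.e. 1 MiB aligned on a 512-byte-sector disk), since a misaligned partition forces read-modify-write cycles across RAID stripe boundaries. A minimal sketch, assuming the start sector has been read from `/sys/class/block/<partition>/start` on a Linux guest:

```shell
# Check whether a partition's start sector is 1 MiB aligned.
# On a live system, read the value with e.g.:
#   start=$(cat /sys/class/block/sda1/start)
start=2048   # example value; substitute your partition's real start sector

if [ $((start % 2048)) -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned"   # expect extra read-modify-write overhead on the array
fi
```

Older Windows and Linux tools defaulted to a start sector of 63, which is misaligned by this test; newer partitioners default to 2048.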
I certainly hope that Data Robotics perfects its iSCSI implementation; however, I suspect there is a reason that quality iSCSI targets cost $25,000 and up.
A VM guest has its own MAC address and IP address, separate and distinct from its host's. So yes, this qualifies as multiple hosts.
However, if I read your post correctly, the VM guest and the VM host are accessing different volumes on the Drobo. So that means this does NOT qualify as a cluster.
According to the best practices document, the DroboPro can handle multiple ESX hosts accessing the exact same volume (up to four hosts, though only two are recommended). That is a true cluster, but it is NOT what you described in your post.