I recently introduced ESXi to my environment and am chomping at the bit to deploy my DroboPro as a datastore for my virtual machines; I’m out of storage capacity on the server’s direct-attached drives but I have the DroboPro locked and loaded full of new drives.
On my first attempts to mount the DroboPro volumes in ESXi, I noticed rescan and refresh functions within vSphere were slow to respond. I got everything connected and talking without further issue, until I tried to transfer some unused files from my server’s internal storage to the DroboPro (local SCSI drives to DroboPro on iSCSI)…the speed was beyond dismal and the connection completely dropped several hours (!) into the 40GB transfer. My Drobo was unresponsive and I could only get it to remount in vSphere after a hard reboot/power cycle.
The bottom line is I’m hoping the community might be able to provide me with some ESX/ESXi tips and tricks to get things running smoothly. I know Data Robotics just got certified with VMware within the past week, but the latest firmware and dashboard had no effect/improvement…I know multi-host iSCSI support is still forthcoming, but I’m just after reliable speeds and reliable connections on a single host at this time.
I just tried to pull a diagnostic file from the unit and it froze again.
Dashboard: 1.5.1
Firmware: 1.1.2
ESXi: 4.0.0
vSphere: 4.0.0
Disks: 4xWD Green 1TB, 4xWD 160GB
Format: VMFS, 8MB block size
iSCSI: Dedicated NIC, dedicated iSCSI vSwitch, connected to ESXi by crossover cable, manually assigned IP (CLI checks for this setup are below the list)
Dual-Drive Redundancy: Enabled
Spin-Down: Disabled
Volumes: 2x 2TB
Volume Usage: 0% on both volumes (I have gone through multiple resets to default settings)
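For reference, here’s roughly how I’ve been double-checking the setup above from Tech Support Mode / the vSphere CLI – the vmhba number and addresses are just what they happen to be on my host, so substitute your own:

esxcfg-vswitch -l (confirm the dedicated iSCSI vSwitch and its uplink NIC)
esxcfg-vmknic -l (confirm the VMkernel port and its manually assigned IP)
vmkping 192.168.10.2 (verify the VMkernel stack can actually reach the Drobo)
esxcfg-rescan vmhba33 (rescan the software iSCSI adapter after any change)
esxcfg-scsidevs -m (list the VMFS volumes that actually mounted)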
Thanks in advance for any help…VMware and DR should have more info coming now that they’ve formalized a partnership/certification but I need to make my $2k+ investment start returning, like, yesterday
Still my subjective opinion: as the Drobo’s speed is so unreliable (check the posts in the forum), running VMs from it is asking for trouble…
We’re running ESX from an HP blade enclosure connected to an HP EVA SAN. Very expensive, yet we still get speed problems sometimes. Add to this the question of whether the Drobo is 100% reliable.
I’m looking for backup storage space for now…somewhere to unload VMs in my inventory that aren’t running live, or to drop sandbox copies of a VM for recovery/use later. At the moment I can’t even copy files from my local storage to the Drobo without it losing its iSCSI connection completely…forget about running live/active VMs, I can’t get that far.
I fully recognize I’m an early adopter here – the DroboPro is new, ESXi 4 is in its infancy, etc – but giving up wouldn’t be in the spirit of IT and this forum. With the new certifications and partnership with VMware I’m confident a permanent solution is forthcoming…there’s even a kb article that acknowledges the upcoming multi-host support. This could mean I’m waiting on VMware, not Data Robotics, for my solution but I’m guessing I’m not the only one in this forum with this setup…
I thought MPIO support was available with the latest firmware update, but my test results in our Windows lab show the opposite! We could only get one DroboPro target accessed by one host at a time. Additional hosts detect the DroboPro target status as “Inactive”. So I guess it’s forthcoming after all. Next project - vSphere 4, or maybe I’ll give ESXi a try first.
Thanks for the confirmation! And I was trying different 3rd-party initiators and solutions.
We can hardly wait for the MPIO support so that we can start testing MSCS. OTOH, we don’t have any connectivity issues with our Drobo connected via 1 Gig iSCSI. Well, performance is not considered an issue - at least by my definition.
One’s worst nightmare, we call it fun! So how is your Pro connected to your 3.5 or 4? We are using 1 Gig iSCSI and it is working fine, although we haven’t quite started to pound on the Drobo due to a shortage of larger 3.5" drives. I need to steal some 1TB drives from my co-workers’ drobos, but they would notice, since I’d be thrashing their drives. That’s one of the annoyances we have to bear with until the next firmware release, hopefully one with MPIO support.
You can’t treat a Drobo like a regular disk device. It has intelligence that understands certain file system layouts so that it can determine when the OS deletes a file. It uses this information to scavenge free space back. Don’t fall into the trap of thinking this is simple RAID technology. This is thin provisioning stuff.
The Drobo works very hard to fool the OS into thinking that there is a huge volume attached and to do this it needs to understand how the OS interacts with the disk. It does this by understanding the file system and the OS behaviour towards that file system.
Give it a FS that it doesn’t understand and at best you will fill up the disks to capacity in zero time and have something that grinds to a halt because this is how Drobo deals with all its storage being full. At worst it could start deleting bits of live data… probably not though because I imagine the scavenger just refuses to do anything if the FS type is unknown.
“File System and Operating System Agnostic
DroboPro and its underlying BeyondRAID technology currently support the Windows, Mac and Linux¹ platforms, with file system support for NTFS, HFS Plus, EXT3 and FAT32. Since DroboPro is a block level system, it easily adapts to almost any environment.”
I am using all Intel NICs, and have tried the onboard as well as PCI ones…all four in my server are on the HCL from VMware. I’m not sure if it’s possible to enable jumbo frames on the cards themselves, but I’m using a crossover so there’s no switch involved.
If it is purely down to performance, it is worth recognising that the hypervisor drivers don’t support the iSCSI TOE features present in most server-class cards these days, so the processor has a lot of work to do. Also, IIRC the iSCSI drivers don’t support jumbo frames.
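If you do want to experiment with jumbo frames anyway, the MTU has to be set from the command line rather than the vSphere Client - something along these lines from the vSphere CLI or Tech Support Mode, where vSwitch1 and the “iSCSI” portgroup are placeholders for your own names, and with no promises the 4.0 software initiator actually honours the larger MTU:

esxcfg-vswitch -m 9000 vSwitch1 (raise the MTU on the dedicated iSCSI vSwitch)
esxcfg-vmknic -d "iSCSI" (the VMkernel port can’t be changed in place, so remove it first)
esxcfg-vmknic -a -i 192.168.10.1 -n 255.255.255.0 -m 9000 "iSCSI" (recreate it with a 9000-byte MTU)

And of course the NIC and the Drobo end of the crossover have to cope with 9000-byte frames as well.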
For the best iSCSI performance I seem to recall you have to buy one of the QLogic adapters that is a dedicated iSCSI HBA and doesn’t double as a NIC for general use.
Hope this helps, at least there’s some additional stuff to google ^^
Yeah, my server is sitting at essentially 0% load across the board…this isn’t so much a performance issue as it is a functional issue…once I can get the unit to stay connected via iSCSI I can start performance testing…until then my DP is a boat anchor.