Drobo

Windows and "unsupported" Hyper-V? Anyone?

Is anyone using a Drobo[Elite/Pro] with Hyper-V?

I know it’s not supported, but I’m wondering if anyone else is trying it. I am.

The only thing I’m seeing right now is that you can’t boot a Hyper-V VM off a Drobo volume. Other than that, no issues (yet).

So far, my results aren’t so great.

Even though the installation seems to work, when the virtual machine tries to boot it says it can’t load the operating system.

Has anyone else seen this?[hr]
K, here’s an update.

I created the volume, mounted it in the host OS, then took the disk offline.

Then, under Hyper-V (I’m running 2008 R2, btw), in the VM’s settings I chose the first IDE controller. From there I could see the iSCSI disk under two locations (Location 0 and Location 1). I selected Location 1, then the iSCSI volume under Physical hard disk.

It appears to be working. I’m trying to install XP SP3. The install went through its initial extraction, rebooted, and presented the GUI for me.

This is very promising.

I’m using a DroboElite, FTR.

K, it works.

Here’s how to do it (or at least, how I did it).

  1. Create the volume (don’t format it)
  2. Mount the volume; when it prompts you to format, just hit Cancel
  3. Verify the disk is offline in Disk Management
  4. Create your VM; choose to attach a disk later
  5. Before turning on the VM, go to the VM settings
  6. Select the first IDE controller
  7. Choose Location 1 (important!)
  8. Select your volume
    – Note on #8: Be careful to select the proper volume. The Drobo software does a poor job of helping you identify it in the host OS.

You should be good to go after that. I’ve successfully installed and booted an XP VM and a 64-bit 2008 R2 Standard VM.

Performance is sorta choppy, but I’m using a dual-core with 4GB of RAM - this particular box is slow. Network utilization is reported at below 5% on the server; I’ll post what the switch says.

I’ve not done many optimizations yet, but the next step is to try jumbo frames.

HTH someone.

But yes, it is technically possible to boot a Hyper-V VM off a DroboElite.[hr]
Another update -

Step 7 - you need to choose the location NOT in use.

When I was enabling jumbo frames, after the reboot Location 1 was in use.

This is wreaking havoc.

So - VMs booting off a DroboElite may not gracefully recover after a host reboot. Still working on it.

Closing in on 5PM here.

I rolled back jumbo frames, and was able to restart all my VMs.

However, the drives did not reconnect in the same order.

This is a critical issue.

So, currently (brain dumping) I need to:

  1. Make all the Hyper-V services depend on the Drobo service (simple enough).
  2. Make a PowerShell script or a .NET app that will correct the drive mappings in the VM file before restarting the VM. I think I can do this with a VBScript, which would be my quick-and-dirty solution.[hr]
Yep, I think that’s the case - on each boot the Drobo connects the drives in random order.

That blows, ESPECIALLY since there are no CLI tools.

Gonna have to figure out how to mount these guys up using pure iSCSI utilities and bypass the dashboard.
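In the meantime, the remapping idea from the brain dump above can at least be sketched. This is a minimal Python sketch, assuming each volume exposes some stable serial or IQN we can key on; the data shapes and names here are hypothetical, not anything from the Drobo software:

```python
def build_remap(expected, current):
    """Map old disk numbers to new ones by matching volume serials.

    expected: {serial: disk_number} recorded while the VMs were healthy
    current:  {serial: disk_number} as enumerated after this reboot
    Returns {old_disk_number: new_disk_number} for every serial present
    in both snapshots, so a script can patch each VM's disk reference.
    """
    remap = {}
    for serial, old_number in expected.items():
        if serial in current:
            remap[old_number] = current[serial]
    return remap

# Example: serials VOL-A/VOL-B kept their identity but swapped disk numbers.
before = {"VOL-A": 3, "VOL-B": 4}
after = {"VOL-A": 4, "VOL-B": 3}
print(build_remap(before, after))  # {3: 4, 4: 3}
```

The actual patching of the VM configuration is the hard part; this only computes which disk ended up where.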

Anyone have any experience with this?[hr]
Testing a theory - I noticed under the iSCSI initiator for W2K8 that the Drobo mounted the drives as id3, id4, management.

I’m going to test this, but perhaps it mounts them by ID first, then management last.

If so, then the mount order shouldn’t change each reboot.

But it DOES mean you have to reboot the server (or somehow redo the mounting process) each time you add a volume to the server so it “binds” in the correct order.
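To make the theory concrete, here’s how that binding order would look if modeled in Python. The volume names are made up, and the (name, id) pairs just stand in for what the initiator shows:

```python
def predicted_order(volumes):
    """Sort iSCSI volumes the way the Drobo appears to bind them:
    data volumes ascending by their numeric ID, management volume last."""
    def key(vol):
        name, vol_id = vol
        return (1, 0) if name == "management" else (0, vol_id)
    return [name for name, _ in sorted(volumes, key=key)]

# IDs as seen in the initiator; the discovery order itself is irrelevant.
vols = [("management", None), ("vm-store", 4), ("backups", 3)]
print(predicted_order(vols))  # ['backups', 'vm-store', 'management']
```

If this model holds, the order is stable across reboots - but, per the caveat above, it shifts whenever a new volume’s ID lands in the middle of the sequence.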

My experience with drive letters and plug-and-play devices has been similar.

I had a case with multiple Firewire drives daisy-chained. They’d come up with random drive letters on each boot.

I ended up connecting drives to multiple controllers, because the controller iteration was fixed (so drive connected to controller #1 was always E:, drive on controller #2 was always F:, etc) even if the discovery on each controller wasn’t.

Controller? I’m not clear on what you mean. Since they’re all iSCSI I’m not sure how to force them to connect in a particular order. :frowning:

Ok, here’s an update.

I was having issues enabling jumbo frames. Under 2008 R2, I had to download the drivers from Intel, then enable jumbo frames on the physical NIC AND the virtual NIC.

I probably could have found a way to do it in the registry, etc. - but not important.

Once this was done, Hyper-V VMs booted cleanly off the DroboElite.[hr]
Doing some basic tests, jumbo frames seem to have made an incremental difference in performance.

Not as dramatic as I’d hoped, but some difference.[hr]
Has anyone found a way to manually mount a Drobo volume using Microsoft’s native iSCSI tools?

My goal here is to do this:

  1. Let the Drobo mount the management volume automatically
  2. Then mount specific volumes via the command line
  3. Start the Hyper-V service
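A rough sketch of steps 2 and 3 using Microsoft’s stock `iscsicli` tool, glued together with Python. The IQNs are made up, and I’m assuming `QLoginTarget` is enough for targets the initiator has already discovered - double-check the `iscsicli` syntax before relying on this:

```python
# Hypothetical IQNs -- substitute whatever "iscsicli ListTargets" shows you.
DATA_VOLUMES = [
    "iqn.2005-06.com.datarobotics:drobo-elite.vm-store",
    "iqn.2005-06.com.datarobotics:drobo-elite.backups",
]

def login_command(iqn):
    """Quick-login an already-discovered target (default portal, no CHAP)."""
    return ["iscsicli", "QLoginTarget", iqn]

def start_hyperv_command():
    """Start the Hyper-V Virtual Machine Management service (vmms)."""
    return ["net", "start", "vmms"]

def mount_plan(iqns):
    """Steps 2 and 3 of the plan: log in each data volume, then start Hyper-V."""
    return [login_command(iqn) for iqn in iqns] + [start_hyperv_command()]

# Print the plan for review; on the host you'd run each via subprocess.
for cmd in mount_plan(DATA_VOLUMES):
    print(" ".join(cmd))
```

Step 1 (the management volume) would still be left to the Drobo software here; the point is only to control the order the data volumes come up in.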

Appendix C (page 113) may help?
iSCSI User Guide from Microsoft

Ok, well here’s a kicker.

I got my new DroboElite in, updated the firmware, and moved the drives.

I was instructed by Drobo to update the firmware first, then move the drives over.

It seemed to be OK. I was running Config 2 for testing, and installed a 2nd NIC in my server to move to Config 1.

Yeah right.

The Drobo and the server can no longer see each other.

I think it’s some type of residue from the old IP scheme. Dunno.

I’ve paved my 2008 R2 machine and am reinstalling it from the ground up this time, with the 2nd NIC in from the start.

Grrr. :frowning:

But I’m sure I’ll get it working again! :slight_smile:

Oooh, I hate it when there are “leftovers” that muck up the works. I had that issue with Windows Firewall once…

Keep us updated on your progress!

Well, here’s what I’ve learned.

If I disable the “external” NIC, Drobo Dashboard finds the Drobo.

The NICs are on different subnets: 172.16.8.0/24 externally, and 172.16.250.0/24 internally.

Having to disable a NIC for the software to see my Drobo isn’t a solution. :([hr]
Woohoo. Ok, just got off the phone.

Here’s an important GEM if you have a DroboElite and are doing Config 1 (dedicated switched network for connectivity to Drobo).

Set your IPs etc. just like the manual tells you.

If the dashboard can’t see the DroboElite, open the MS iSCSI initiator.

Now, I’m running 2008 R2, so your dialogs / descriptions may vary.

In the initiator, you should see the IQN for the management volume. Make sure you select that volume.

Click Connect, and make sure “Add this connection to the list of favorite targets” is checked (it is by default).

Click Advanced.

Set the local adapter to the Microsoft iSCSI initiator.
Set the initiator IP to the IP of the NIC facing the Drobo (the internal NIC, if you will).
Set the target portal IP to that of the Drobo.

Enable CHAP login
Set the username to: management
Set the password to: Drobo Dashboard

Click OK and the volume should connect. Mine did immediately.

Now, I’ll test its persistence across reboots.[hr]
Persistence is good. Big improvement.

Just an important note: make sure you pick the right IP addresses before doing anything production-related with the DroboElite.

I just changed the IP on iSCSI-2 and the server couldn’t see the drobo after it reset.

Restarting the iscsi initiator service and the drobo service didn’t help.

I could have manually reconnected management by entering the above information again - but who wants to do that every time?

So, I rebooted the server and it still couldn’t see it.

So, it looks like you may have to re-set up the connection to the management volume every time you reboot the Drobo. And you’ll probably have to do this on every server that connects to the Drobo.

Hopefully, this is a minor SW bug.[hr]
Here’s another thing I’ve noticed. If you are unmounting a volume, the dashboard just spins.

If you fire up the initiator dialog, you can disconnect the volume and the dashboard immediately reflects the disconnection.

I think the mounting process is critical, because I can’t get my VMs to boot off the drobo after the rebuild.

I created a volume in the dashboard, connected it, then immediately tried to install an XP VM to it - and it wouldn’t boot.

I will test different processes and reply with a detailed creation process.

Well, so far no luck. I can’t get my VMs to boot off the DroboElite now.

ARG!

However, knowing I had it working at one point means I will have to do this step by step.

Perhaps it only works in the “config 2” setup?

Ok, nothing seems to work, so I’m rolling back to Config 2 (single NIC) to see if that’s it.

Does anybody have any experience or suggestions?

Oh duh - I just re-read this. Yes, that makes sense.

So try swapping the roles of your two NICs (i.e., make the external one the internal one, and vice versa) - if it is dependent on the interface iteration, then you should get a different result.

I recently retired the multi-FireWire system that I referenced; out came 4 FireWire add-in cards and 4 IDE-FireWire bridge boards with Master/Slave, hehehe.

I’m falling back to the single-NIC option to try to recreate the original scenario that worked.

So far, no luck with that, either! GAH!

I’m sure I’m missing some basic step - something I did but didn’t write down.

In my notes, I have “remove rdc” between disabling autotuning and enabling DCA.

Bad notes - because I can’t figure out what RDC is in this context. It’s not RDP, obviously. :)[hr]
Doh. RDC = Remote Differential Compression.

'Tis what I would do - both the “falling back to the working configuration” part as well as the “forgetting to write something down” part.

Once you do get back to a working config, I recommend imaging the drive with Clonezilla or another tool.

Well folks, after an extensive call with Microsoft, still no luck.

I don’t know what I did before to make this work; but I’ve not been able to recreate it since then.

At this point, I have to give up.

Nobody out there has done this I guess.

There’s one last item of note; and I’m not sure if it’s relevant - but initially, I had two 160GB WD Drives + a handful of WD 1TB Green drives (non-advanced format). I now only have WD 1TB drives.

Basically - here’s what I know.

When you mount the drive under Hyper-V, everything seems to work fine.

Except the MBR: the recovery console says the boot record on the drive is non-standard.

If I could solve that, then Hyper-V VMs would boot off the DroboElite.
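For what it’s worth, the cheapest sanity check on a “non-standard boot record” claim is the MBR signature: a standard boot sector is 512 bytes ending in 0x55 0xAA. A Python sketch (the raw-disk path is hypothetical, and this checks only the signature, not the boot code that loaders actually complain about):

```python
def has_mbr_signature(sector):
    """True if a first-sector read carries the standard MBR boot signature."""
    return len(sector) >= 512 and sector[510] == 0x55 and sector[511] == 0xAA

# On the host you would read the raw disk, e.g.:
#   with open(r"\\.\PhysicalDrive3", "rb") as disk:  # hypothetical disk number
#       sector = disk.read(512)
# Synthetic demo: a blank sector with only the signature bytes set.
sector = bytearray(512)
sector[510], sector[511] = 0x55, 0xAA
print(has_mbr_signature(bytes(sector)))  # True
```

If the signature checks out, the problem is in the boot code or partition entries, which at least narrows down what the Drobo path is mangling.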

Again, I have zero clue why this was working before. Perhaps it was the drives. Perhaps it was how I’d configured the NIC.

Either way - an entirely frustrating effort, but an educational one.