Mounting an NFS share on ESXi 5.5.0

Has anyone had issues mounting an NFS share in ESXi? I figured I should finally just mount my ISO directory so I didn't have to keep copying ISOs to my datastore. I installed the NFS app (version 1.2.8) on my 5N.

I've been through every NFS thread in the forum, but no one seems to have reported this issue.

I can mount the share just fine in OS X 10.8.5, but the mount fails from ESXi. I've tried it on 4 different hosts (all running ESXi 5.5.0).

Here’s what I get in ESXi:

~ # esxcfg-nas -a Drobo -o -s /mnt/DroboFS/Shares/Share

Connecting to NAS volume: Drobo

Unable to connect to NAS volume Drobo: Sysinfo error on operation returned status : Unable to connect to NFS server. Please see the VMkernel log for detailed error information

~ # tail -f /var/log/vmkernel.log
2014-04-27T01:20:25.873Z cpu15:10960878)NFS: 157: Command: (mount) Server: ( IP: ( Path: (/mnt/DroboFS/Shares/Share) Label: (Drobo) Options: (None)
2014-04-27T01:20:25.873Z cpu15:10960878)StorageApdHandler: 698: APD Handle 533845b7-036908e5 Created with lock[StorageApd0x411133]
2014-04-27T01:20:30.764Z cpu13:32809)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e8b8c3540, 0) to dev "mpx.vmhba33:C0:T0:L0" on path "vmhba33:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE
2014-04-27T01:20:30.764Z cpu13:32809)ScsiDeviceIO: 2337: Cmd(0x412e8b8c3540) 0x1a, CmdSN 0x8d587 from world 0 to dev "mpx.vmhba33:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
2014-04-27T01:20:56.476Z cpu0:10960878)StorageApdHandler: 745: Freeing APD Handle [533845b7-036908e5]
2014-04-27T01:20:56.476Z cpu0:10960878)StorageApdHandler: 808: APD Handle freed!
2014-04-27T01:20:56.476Z cpu0:10960878)NFS: 168: NFS mount failed: Unable to connect to NFS server.
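For anyone retracing these steps, this is roughly the esxcfg-nas workflow I was using. It's only a sketch: `NAS_IP` is a placeholder for the 5N's address (redacted above), and the helper prints each command instead of running it when the ESXi tools aren't present.

```shell
# Sketch of the esxcfg-nas workflow. NAS_IP is a placeholder -- substitute
# the Drobo 5N's actual address.
NAS_IP="10.0.0.50"

run_or_print() {
    # Run ESXi-specific tools only if they exist on this system;
    # otherwise print the command so the sketch stays runnable anywhere.
    if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "ESXi shell: $*"; fi
}

# List NAS datastores already configured on the host.
run_or_print esxcfg-nas -l

# Add the share: -a datastore label, -o NFS server host, -s exported path.
run_or_print esxcfg-nas -a Drobo -o "$NAS_IP" -s /mnt/DroboFS/Shares/Share

# Remove a half-configured datastore before retrying, if needed.
run_or_print esxcfg-nas -d Drobo
```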

Looks like it was a switch issue. I had the 5N connected to an 8-port workgroup switch that was trunked to the switch the ESXi host was on. I plugged the 5N into the same switch as the ESXi host and it mounted right up.

In hindsight, the indicator was that a vmkping with a payload over 1472 bytes was failing.
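For reference, a quick way to check for this kind of path problem from the ESXi shell is vmkping with the don't-fragment bit set. This is a sketch: the IP below is a placeholder for the 5N's address, and it only invokes vmkping when the tool is actually available.

```shell
# Path-MTU sanity check sketch. NAS_IP is a placeholder -- substitute
# the Drobo 5N's actual address.
NAS_IP="10.0.0.50"

# A standard 1500-byte Ethernet MTU leaves 1472 bytes of ICMP payload:
# 1500 - 20 (IP header) - 8 (ICMP header) = 1472.
MTU=1500
PAYLOAD=$((MTU - 20 - 8))

# -d sets the don't-fragment bit, -s sets the payload size. If this
# fails while smaller payloads succeed, something in the path (here,
# the trunked workgroup switch) is dropping full-size frames.
if command -v vmkping >/dev/null 2>&1; then
    vmkping -d -s "$PAYLOAD" "$NAS_IP"
else
    echo "run from the ESXi shell: vmkping -d -s $PAYLOAD $NAS_IP"
fi
```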

Well done for finding the problem, and for posting back the solution!