I have to disagree with this statement. If the DroboFS implements file transfers even remotely efficiently (which, since it runs a Linux kernel, I assume it does), a larger MTU should have a significant impact on network throughput.
The reason is simple: sending any data over the network has per-packet overheads (from the context switch between the file server process and the kernel, all the way down to grabbing the physical layer without a collision).
However, the bulk data transfer itself should not be one of those overheads. Most Linux file servers minimize the number of in-memory copies and, whenever possible, perform zero-copy transfers straight from the page cache to the network socket.
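For illustration, here is roughly what such a zero-copy send looks like from user space. This is a minimal sketch using Python's `os.sendfile()` wrapper on Linux; I can't say whether the DroboFS daemons do exactly this, but Samba, for one, can use sendfile() when configured to:

```python
import os
import socket

def serve_file(conn: socket.socket, path: str) -> None:
    """Send a file over a connected socket without copying it through
    user space: the kernel moves page-cache pages straight into the
    socket buffer. Linux-only; hypothetical helper, for illustration."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            # os.sendfile() returns how many bytes the kernel actually sent
            sent = os.sendfile(conn.fileno(), f.fileno(), offset, size - offset)
            if sent == 0:  # peer closed the connection
                break
            offset += sent
```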
Therefore, if you configure a larger MTU, you automatically reduce the ratio of overhead to payload. In other words, you spend (proportionally) less time managing the transfer and more time doing the transfer.
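As a back-of-the-envelope check, here is the per-frame payload efficiency at the standard MTU versus jumbo frames (a quick sketch assuming plain IPv4/TCP with no options and untagged Ethernet):

```python
# Per-frame payload efficiency for a TCP transfer, assuming the common
# case: IPv4 and TCP without options, untagged Ethernet.
ETH_OVERHEAD = 18   # 14-byte Ethernet header + 4-byte FCS
IP_HEADER = 20      # IPv4, no options
TCP_HEADER = 20     # TCP, no options

def payload_efficiency(mtu: int) -> float:
    payload = mtu - IP_HEADER - TCP_HEADER   # TCP payload per frame
    on_wire = mtu + ETH_OVERHEAD             # bytes actually on the wire
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload")
# MTU 1500: 96.2% payload
# MTU 9000: 99.4% payload
```

The raw header savings look modest (about 3%), but the bigger win is that jumbo frames cut the packet count by a factor of six, and with it the per-packet interrupts and context switches mentioned above.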
Unless the server implementation is awfully inefficient, that overhead should never be the bottleneck, especially when no encryption is involved (e.g., an SSL or SSH tunnel). Since the OP was talking about straight LAN transfers, I assume none was. In my opinion, in an ideal case the bottleneck will always be the disks and/or filesystems.
That being said, the DroboFS is quite the black box. For instance, the type and size of the files being copied should have a strong impact on performance. It is well known that copying a large number of small files is always slower than copying one single big file of the same total size, because each file adds its own metadata work (open, close, attribute updates) on top of the actual data transfer. That is true for any system, not only the DroboFS.
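If you want to see that effect in isolation, a purely local (and admittedly crude) benchmark makes it obvious; both directories below hold the same 100 MB in total:

```python
import os
import shutil
import tempfile
import time

def timed_copy(src: str, dst: str) -> float:
    start = time.perf_counter()
    shutil.copytree(src, dst)
    return time.perf_counter() - start

with tempfile.TemporaryDirectory() as root:
    many = os.path.join(root, "many")  # 10,000 files of 10 KB each
    one = os.path.join(root, "one")    # a single 100 MB file
    os.makedirs(many)
    os.makedirs(one)
    chunk = os.urandom(10 * 1024)
    for i in range(10_000):
        with open(os.path.join(many, f"f{i:05d}"), "wb") as f:
            f.write(chunk)
    with open(os.path.join(one, "big.bin"), "wb") as f:
        f.write(os.urandom(100 * 1024 * 1024))

    print(f"10,000 small files: {timed_copy(many, root + '/many.copy'):.2f} s")
    print(f"1 big file:         {timed_copy(one, root + '/one.copy'):.2f} s")
```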
The way the share is mounted remotely also has a strong impact on performance. Since the OP did not mention which protocol he/she is using, we can't know for sure (my DroboFS supports SMB, AFP and NFS). With NFS you can specify whether file writes should be synchronous, i.e., whether the client waits until the server confirms that the data was actually written to disk. That option brings any disk performance measurement to its knees, and it is not really necessary on a Drobo device. I haven't checked for SMB and AFP, but I am pretty sure it is something you can configure as well, most likely on the client side.
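On a Linux client you can at least verify which options a share ended up mounted with; here is a small sketch (it reads /proc/mounts, so it is Linux-only):

```python
# List the mount options of every NFS/CIFS share on a Linux client,
# so a 'sync' flag (or a tiny wsize) that hurts write speed stands out.
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype, options = line.split()[:4]
        if fstype in ("nfs", "nfs4", "cifs"):
            print(f"{mountpoint} ({fstype}): {options}")
```

If `sync` shows up in the options, remounting with `async` (e.g., `mount -o remount,async <mountpoint>`) is the usual first thing to try.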
Finally, I know that some users have Firefly installed on their DroboFS. If I remember correctly, that app indexes all the media files on the device, which means concurrent disk access, which in turn means a serious performance drop.
So, in summary, we need to know: the kind of files being copied and their average size, the network protocol being used, and whether any other (file-indexing) apps are installed.