I’ve been setting up a new Drobo B800fs to use as a network backup drive (like a Time Capsule) for our network of 30+ Macs.
I’ve had mixed results with Drobos in the past. Recently I’ve been having good luck with Drobo 5D’s connected via Thunderbolt. I tried the DroboFS unit a while back and had terrible experiences with it, particularly when it came to speed. I decided to try the B800fs now, though, because it’s been a couple of years, and in light of the recent good experience with the 5D’s I thought things might have improved. Honestly, if there were an 8-bay Thunderbolt Drobo I would have bought that, plugged it into our server, and let OS X Server do the file sharing; it does a wonderful job of it.
As has been pointed out elsewhere, the Drobo reports the file system as much larger than its actual capacity, so you simply can’t let Time Machine use the whole thing; it will crash and burn. Since I’m backing up to a network device, Time Machine creates a sparsebundle for each machine. It’s been pointed out that you can try to change the sparsebundle size, but the next time Time Machine runs it changes it right back to the maximum capacity of the volume.
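Incidentally, you can see the inflated capacity the Drobo advertises just by checking the mounted share (assuming it’s mounted at /Volumes/Data, as in the examples below); df reports the thin-provisioned size, not the real disk space behind it.

df -h /Volumes/Data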
Resizing the sparsebundle is done with something like this, which in this case sets the size to 950GB:
hdiutil resize -size 950g /Volumes/Data/Niunia.sparsebundle
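If you want to sanity-check the size before and after, hdiutil can report the minimum, current, and maximum sizes for the image (in 512-byte sectors):

hdiutil resize -limits /Volumes/Data/Niunia.sparsebundle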
I tried this, and you can in fact see from the logs that Time Machine tries to change the size right away.
I watch what Time Machine is doing in the logs with something like this:
tail -n 1000 -F /var/log/system.log | grep backupd
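If you only care about the resize attempts, you can narrow it down further (the exact wording of the log message may vary by OS version, so adjust the pattern to taste):

tail -n 1000 -F /var/log/system.log | grep backupd | grep -i resiz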
Anyway, you can Google around and see that the workaround for this is to lock certain files in the sparsebundle, like this:
chflags uchg /Volumes/Data/Niunia.sparsebundle/Info.*
I tried this and it works; you can see it by watching the logs. When Time Machine tries to resize the sparsebundle, it reports a failure doing so but continues on with the backup.
If you ever need to resize the sparsebundle later, you can unlock it like this:
chflags nouchg /Volumes/Data/Niunia.sparsebundle/Info.*
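With 30+ Macs all writing sparsebundles to the same share, locking them one at a time gets old fast. Here’s a minimal sketch that locks the Info files in every sparsebundle on the volume, assuming they all sit at the top level of /Volumes/Data:

for b in /Volumes/Data/*.sparsebundle; do
    chflags uchg "$b"/Info.*    # locks Info.plist and Info.bckup
done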
I kept digging and found what seems to be a more elegant approach. You are supposed to be able to limit the size of a Time Machine backup by changing its defaults in Terminal. The command looks like this (in this example limiting it to 300GB; the value is in megabytes, so 300 × 1024 = 307200):
sudo defaults write /Library/Preferences/com.apple.TimeMachine MaxSize 307200
I’m trying this now to see if it works, but I’m pointing it out to see if anyone else has tried this approach, and what your experience has been.
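You can confirm the setting took with:

defaults read /Library/Preferences/com.apple.TimeMachine MaxSize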
I’m also using TimeMachineEditor to change the default backup schedule. In my case I’m setting each machine to a time after 5pm, staggering them by half an hour to avoid too many computers trying to back up to the Drobo at the same time. I’ve already found that when that happens with large backups, it crashes the Drobo.
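If you’d rather skip the third-party tool, one alternative (a sketch, not something I’ve run fleet-wide) is to turn off Time Machine’s automatic scheduler on each client with tmutil and fire the backup from root’s crontab at a staggered time:

sudo tmutil disable    # stop the automatic hourly backups
sudo crontab -e        # then add an entry for this machine’s slot, e.g. 5:30pm:
30 17 * * * /usr/bin/tmutil startbackup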
I’ve also noticed that once the backup exceeds about 100GB the speeds drop way down.
You can see it in these log entries, taken at roughly one-hour intervals during a single overnight backup:
Jan 7 00:39:20 Fitzgerald.local com.apple.backupd[64653]: Copied 50.91 GB of 168.38 GB, 1439652 of 1586364 items
Jan 7 01:39:40 Fitzgerald.local com.apple.backupd[64653]: Copied 62.12 GB of 168.38 GB, 1448584 of 1586364 items
Jan 7 02:39:45 Fitzgerald.local com.apple.backupd[64653]: Copied 74.46 GB of 168.38 GB, 1459979 of 1586364 items
Jan 7 03:39:53 Fitzgerald.local com.apple.backupd[64653]: Copied 82.13 GB of 168.38 GB, 1464567 of 1586364 items
Jan 7 04:40:03 Fitzgerald.local com.apple.backupd[64653]: Copied 92.03 GB of 168.38 GB, 1465255 of 1586364 items
Jan 7 05:40:04 Fitzgerald.local com.apple.backupd[64653]: Copied 101.16 GB of 168.38 GB, 1465588 of 1586364 items
Jan 7 06:40:12 Fitzgerald.local com.apple.backupd[64653]: Copied 109.03 GB of 168.38 GB, 1466017 of 1586364 items
So the actual data transfer rate (in the middle of the night, with nothing else on the network) works out to about 9.7GB/hr: 109.03 - 50.91 = 58.12GB over the six hours shown. Pretty slow to me.
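If you’d rather not do that math by hand, you can pull the deltas straight out of the log. This assumes the syslog format shown above, where the “Copied” figure is the seventh whitespace-separated field:

grep 'backupd.*Copied' /var/log/system.log | \
    awk '{ if (prev != "") printf "%s  +%.2f GB\n", $3, $7 - prev; prev = $7 }'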