Losing space

Ok, it’s unusual for me to post asking for help / opinions. I’ll try and keep it short but sweet:

yesterday my drobopro was set up as:

8 x 2TB

Volume 1 - 1TB
Volume 2 - 8TB
Volume 3 - 8TB

free space: between 650GB and 700GB (so yellow warning territory)

volume 3 was getting a bit full, so i thought, i’ll make volume 4 - 16TB - and move everything from Vol 3 to Vol 4 … then delete the empty Vol 3

good plan, yes?

made vol 4 - 16TB - it’s totally empty, freshly formatted.

nothing else has changed with the other volumes.

I now have 550GB free - 100GB lower than i had before (and now into red warning territory).

i have not added any more data.

either - adding a 4th volume of 16TB has caused drobo to “lose” 100GB of free space

or rebooting drobo (the first time this year) has made it realise it had less free space than it thought.

yes, i ran chkdsk on the 3 volumes before and after i created the 4th volume and they are all perfect.

anyone, thoughts / ideas?

I’m tempted to delete the still empty volume 4 and see if i get my free space back?

Depending on configured blocksize, it could be filesystem metadata.

What filesystem did you format with? I’ll see if I can replicate.


but i think over 100GB seems excessive?

So, let’s create a sparse file:

hypothesis test # dd if=/dev/zero bs=1G count=0 seek=16384 of=foo
0+0 records in
0+0 records out
0 bytes (0 B) copied, 1.1402e-05 s, 0.0 kB/s
hypothesis test # ls -las foo
0 -rw-r--r-- 1 root root 17592186044416 May 25 17:29 foo

Then format it with NTFS:

hypothesis test # mkfs.ntfs -fF foo
foo is not a block device.
mkntfs forced anyway.
The sector size was not specified for foo and it could not be obtained automatically.  It has been set to 512 bytes.
The partition start sector was not specified for foo and it could not be obtained automatically.  It has been set to 0.
The number of sectors per track was not specified for foo and it could not be obtained automatically.  It has been set to 0.
The number of heads was not specified for foo and it could not be obtained automatically.  It has been set to 0.
Cluster size has been automatically set to 4096 bytes.
To boot from a device, Windows needs the 'partition start sector', the 'sectors per track' and the 'number of heads' to be set.
Windows will not be able to boot from this device.
Creating NTFS volume structures.
mkntfs completed successfully. Have a nice day.
hypothesis test # ls -las foo
590276 -rw-r--r-- 1 root root 17592186044416 May 25 18:01 foo

Hmm, only 590M worth of metadata for an NTFS partition. Checking the filesystem is good:

hypothesis test # mkdir /mnt/ntfs
hypothesis test # mount -o loop foo /mnt/ntfs
hypothesis test # df -k /mnt/ntfs/
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/loop0           17179869180    590272 17179278908   1% /mnt/ntfs

Looks to me like 100GB would be excessive too.
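To put a number on that, here’s a quick sketch in shell arithmetic, reusing the figures from the session above (the ~100GB is the loss the Drobo reported, taken here as decimal GB):

```shell
# Compare the observed NTFS metadata (~590 MiB on a 16 TiB volume,
# from the 'ls -las foo' output above) to the ~100GB the Drobo lost.
meta_kib=590276                       # metadata blocks used, in KiB
vol_kib=$((16 * 1024 * 1024 * 1024))  # 16 TiB expressed in KiB
lost_kib=$((100 * 1000 * 1000))       # ~100 GB (decimal) in KiB
ratio=$((lost_kib / meta_kib))        # how many times larger the loss is
echo "the missing space is ~${ratio}x the NTFS metadata overhead"
```

So plain NTFS metadata is more than two orders of magnitude too small to explain the loss.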


Ok… minor problem… troubleshooting steps:

  1. remove newly created volume 4 - i did not get my space back - i still only have 598GB free space

  2. recreate vol 4 - lose another 100GB instantly - i now have 489GB free space

errr… suggestions?

Contact support. That space is being eaten somewhere in the device abstraction.

i’ve just realised the one thing i haven’t tried… a proper powerdown of the drobo - so far it’s only been doing the “soft” reboots it does when it’s adding volumes

it did not help

ok… it is losing the space somewhere “inside” the drobo - with 8 x 2TB drives i ought to have 12.45TB available (2.06TB used for protection) - i have 12.12TB available (2.43TB used for protection!).
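A quick sanity check on those figures (TB values exactly as quoted above; just shell + awk arithmetic):

```shell
# Gap between what drobulator predicts and what the DroboPro reports.
avail_gap=$(awk 'BEGIN { printf "%.2f", 12.45 - 12.12 }')  # TB of "available" shortfall
prot_gap=$(awk 'BEGIN { printf "%.2f", 2.43 - 2.06 }')     # TB of extra "protection"
echo "available short by ${avail_gap}TB, protection up by ${prot_gap}TB"
```

i.e. almost all of the missing space shows up as extra protection overhead, not as filesystem usage.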

Ok, getting nowhere with support.

He’s looked at my log and blames filesystem corruption.

To me, that would explain how you lose space within the volumes themselves, but i am losing space to drobo’s “protection” - drobulator says it should take 2.06TB to protect 8 x 2TB drives. my space for protection is 2.43TB - and THIS is what changes when i add/remove volumes - the space DROBO is using for protection.

I’ve pointed out that i have run chkdsk /f on all three volumes, over iSCSI from my x64 windows box and over USB from my x64 windows box, and they all say “no errors found”.

he has now suggested i connect it to a mac and repair it using the mac disk utilities (i don’t own a mac - but i thought they could only READ NTFS filesystems - so how could they repair the volumes he is claiming are corrupt?)

thoughts/opinions guys?

bhiga - I’m looking at you! :wink:

It’s formatted to NTFS or HFS+?

NTFS - i don’t even own a mac!

all three volumes are NTFS

Incident: 100525-000128

in case you want to have a look yourself :smiley:

At first I thought maybe it was just some of the hidden stuff Windows does…
Volume Shadow Copy, System Volume Information, MFT growth…

But given that you didn’t regain the space after deleting the volume, the evidence seems to point elsewhere in how Drobo presents volumes - all the filesystem-related stuff lives inside the presented volume, at least as far as the OS is concerned.

Seems almost as if there’s some kind of size offset/discrepancy that’s making it progressively think the volume is smaller and smaller. Does Drobo do any “alignment” in its virtualization of the physical volumes? That could potentially cause some “creeping” slack space (start/end must begin/end on some even-multiple boundary). But 100GB seems awfully large for “slack” space.
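For scale, here’s a hypothetical worst-case on alignment slack - the boundary sizes below are pure guesses, since whatever alignment Drobo uses internally isn’t documented:

```shell
# Worst case for per-volume alignment slack: just under one boundary
# wasted at the start plus one at the end of each volume.
for boundary_mib in 1 4 64 1024; do
  max_slack_mib=$((2 * (boundary_mib - 1)))
  echo "${boundary_mib} MiB boundary: at most ${max_slack_mib} MiB slack per volume"
done
```

Even a 1GiB boundary tops out around 2GiB per volume - nowhere near 100GB.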

I’ve seen mention of “protective MBR” on GPT volumes, but again it seems highly unlikely that it’d eat 100GB or even 1GB, at that.

What does the volume look like in Disk Management (diskmgmt.msc)?

I don’t think the Mac utilities can fix NTFS either, but it might be useful to just see if there are weird partitions that Windows is hiding from you… but in that case I’d just boot a GParted LiveCD. Don’t modify anything (luckily in GParted you have to Apply to make changes anyway), just curious to see what each device looks like.

The volumes all just look like regular volumes in Disk Management. The loss of space is only in drobo’s reported free space - the volumes’ reported data and free space do not change. It’s just that when I create a new empty volume in Dashboard, the available space reported in Dashboard goes down (both the free space and the total available for data reduce by 100GB) and the space used for protection increases by 100GB!

Even weirder, deleting the new volume doesn’t reclaim that 100GB.

I’ve done it twice now and frankly I’ll run out of room soon!

@Docchris… can you let the drobo sit over the weekend and not add any data to it or create any more new volumes?



Well, following jennifer’s advice (both on here and in a private message): she said my drobo was showing signs of heavy fragmentation (it has been hammered with data recently and was very, very full) and that it was trying to reclaim the free space currently flagged as being used for redundancy.

Sure enough, after 24 hours of non-use the redundancy had fallen from 2.43TB to 2.18TB; however, i then had to use it again, so it is still sitting at 2.18TB. i wonder, if i leave it alone some more (from tomorrow onward), whether it will go right the way down to drobulator’s estimate of 2.02TB?

hmm… playing with drobulator…

only using single disk redundancy, and 2TB drives: if you put two disks in, it uses 1.82TB for protection (i.e. one whole drive)

this is what i would expect.

if you add more disks - 3 or 4 or 5 in total - it still uses 1.82TB for protection (one disk equivalent). this is exactly what i would expect…

add a 6th 2TB drive - and the space used for redundancy jumps to 2.19TB…

a 7th drive… and it goes back down to 1.86TB…

an 8th drive… and it goes back up to 2.18TB.

this is NOT what i would expect, i would have thought it would only use the equivalent of 1 drive of capacity.

unless something to do with the way the parity is calculated makes it less efficient with even numbers of drives (despite having a computer science degree i would be a bit out of my depth discussing the exact intricacies of the various methods of parity calculation - it’s been a good 8 years since i studied them!)
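Tabulating the drobulator figures quoted above against the naive “one whole drive” expectation (1.82TB being the formatted capacity of a marketed 2TB drive; the drives:overhead pairs are the numbers reported above):

```shell
# Extra protection overhead beyond one drive-equivalent, per drive count.
for entry in 2:1.82 5:1.82 6:2.19 7:1.86 8:2.18; do
  drives=${entry%%:*}     # number of 2TB drives installed
  reported=${entry##*:}   # drobulator's protection figure, in TB
  extra=$(awk -v r="$reported" 'BEGIN { printf "%.2f", r - 1.82 }')
  echo "${drives} drives: ${reported}TB protection (+${extra}TB beyond one drive)"
done
```

The pattern is striking: the even counts (6 and 8) each carry roughly an extra third of a terabyte, while the odd counts sit at or very near the one-drive ideal.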

jennifer - can you please confirm that drobulator is accurate, and that with 6 or 8 drives installed, drobopro will use an additional 300ish GB for redundancy compared to having 5 or 7 drives installed?

I am inclined to think it is correct, since with a fair bit of free space (2TB+) and having been left alone, my 8-drive drobopro is now using 2.18TB for redundancy - although i am baffled as to why (i.e. rather than 1.82TB).

Ok checking on the capacity calculator.

thanks :smiley:

Maybe there are additional bugs beyond the one found in the previous thread?

BTW: Great to hear Drobo’s reclaiming the space in its housekeeping. The magic, sometimes it is delayed. :wink:

Indeed it is

It’s extra funny since that’s usually the first thing we say to people: “give it time”