So my backup routine is as follows:
*iMac Time Machined to a Lacie (bootable)
*Primary iTunes files stored on 5D (about 5.5TB)
*Onsite Back-up of 5D to a Drobo 2nd Generation using CCC
*Offsite Back-up of 5D to a G-tech Thunderbolt Raid using CCC
*iPhoto, iMovie and Documents also backed-up to Crashplan & Amazon
So far so good; I'm feeling pretty good. My problem is: how often do I back up my 5D? Is there a balance between backing up too often and waiting too long between backups, and if so, what is it? My concern is that if I had a virus or corrupted files and I back up too frequently, all my backups could be infected as well. However, if I back up too infrequently, I might save myself from the virus/corruption, but lose important data that had appeared since my last backup.
It would only be a problem if your backup history is too short, i.e., less than a month. If you are keeping snapshots that are older than a few months, there is nothing wrong with updating as often as you feel comfortable doing. CrashPlan does it every 15 minutes, if I’m not mistaken.
[quote=“Albion, post:1, topic:32053”]
how often do I back up my 5D? Is there a balance between backing up too often and waiting too long between backups, and if so, what is it? My concern is that if I had a virus or corrupted files and I back up too frequently, all my backups could be infected as well. However, if I back up too infrequently, I might save myself from the virus/corruption, but lose important data that had appeared since my last backup.[/quote]
My view is that backups have a cost, and data loss has a cost. The important piece is to identify those costs and find a reasonable balance.
I love online backups because they reduce the marginal cost of performing regular backups to almost zero: I don’t have to remember to plug things in or click on things. It runs automatically.
If you are concerned that something like a virus could affect all your backups, I suggest that you need a different backup technique. A client/server backup system is not usually vulnerable to such a problem. In the past, I’ve used amanda and Retrospect. Now I use CrashPlan for this.
I do not consider a copy to be equal to a backup. A copy is just a point-in-time snapshot without the concept of versions, and it will silently inherit data corruption that occurs on the source. For this reason, I do not consider services like Dropbox to be true “backups.”
In short, I ask 3 questions about a backup:
[list]
[*]What data is backed up?
[*]How far back in time?
[*]How many dates are saved?[/list]
If it’s just a copy, answers 2 and 3 are a disconcerting “these specific dates” and “once or twice.”
Regular “snapshots” are the way to go to be able to “roll back” from a non-failure (technically the backup is still working, it’s just backing up unusable/undesirable data) situation.
It’s analogous to System Restore Points or Time Machine backups compared to a drive mirror.
Too frequently/infrequently depends on how much and how often you use your system (essentially, how quickly you would notice there’s a problem) and how much space you are willing to dedicate to backup snapshots.
If you have a backup solution that stores previous versions of files (like paid Dropbox) that will take you a step closer to a snapshot copy, but it’s still not the same since you can’t (AFAIK) grab all the corresponding versions from a set point in time.
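To make the snapshot idea concrete, here is a minimal sketch in Python of a dated, hard-link-based snapshot: each run creates a new timestamped folder, copying only changed files and hard-linking unchanged ones against the previous snapshot, so old versions stay browsable without using much extra space. The paths and layout are placeholders, not how Time Machine or CrashPlan actually store their data.

```python
# Hypothetical dated-snapshot sketch; SOURCE and SNAP_ROOT are placeholders.
import os, shutil, filecmp
from datetime import datetime

SOURCE = "/data"                      # assumption: folder to protect
SNAP_ROOT = "/backups/snapshots"      # assumption: snapshots live on one drive

def take_snapshot(source=SOURCE, snap_root=SNAP_ROOT):
    os.makedirs(snap_root, exist_ok=True)
    previous = sorted(os.listdir(snap_root))              # earlier snapshots, oldest first
    dest = os.path.join(snap_root, datetime.now().strftime("%Y-%m-%d_%H%M%S"))
    last = os.path.join(snap_root, previous[-1]) if previous else None

    for dirpath, _dirnames, filenames in os.walk(source):
        rel = os.path.relpath(dirpath, source)
        target_dir = os.path.normpath(os.path.join(dest, rel))
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src_file = os.path.join(dirpath, name)
            new_file = os.path.join(target_dir, name)
            old_file = os.path.join(last, rel, name) if last else None
            if old_file and os.path.isfile(old_file) and filecmp.cmp(src_file, old_file, shallow=False):
                os.link(old_file, new_file)        # unchanged: hard-link to the prior version
            else:
                shutil.copy2(src_file, new_file)   # new or changed: copy it
    return dest

if __name__ == "__main__":
    print("snapshot written to", take_snapshot())
```

Each snapshot folder then looks like a full copy of the source at that point in time, which is exactly the “roll back to a set point” behaviour a plain mirror can’t give you.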
hi,
also, if you are regularly installing new programs, or using programs to automatically scan and process/update multiple files on your computer, then it’s probably more likely that something undesirable might happen
in which case probably good to take full snapshots, (and keep the older snapshots for a while, space permitting)
but,
if you are just using your drobo for “data files”, eg as a dump of files/file store, then maybe another type of backup is ok, such as file synchronisation/mirroring, or another tool which can just update the destination with files that are on the source.
(as long as you don’t use the destination as the source)
Thanks for the confirmation on the requirement for snapshots. Can you recommend software that would be able to perform this functionality? I think I saw CrashPlan mentioned?
I think you have highlighted the huge problem with backing up large volumes of data. Too often, not often enough… there is no right answer.
For my working Windows laptop I use ShadowProtect to do ShadowCopy incremental backups. That laptop has a 500GB data partition that is fairly active, though most of the data just sits there unchanged, plus the OS drive.
Despite that, I find that I need at least 2x to 3x the space used by those partitions to store any reasonable amount of incremental backups. I am constantly filling drives and, as a result, deleting incremental and monthly differential backups to resolve out-of-space problems.
I would not even try to do that with the 4TB of data I have on my Drobo. Although the data in principle does not change very often the reality is that it would take enormous backup volumes to hold those Shadow Copies, or any other incremental versioning strategy.
I consider this an unsolvable problem. I resolve it as best I can by staggering the sync frequency of my 3 backup copies. YMMV, depending on the volatility of the data.
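For what it’s worth, the culling NeilR describes can be thought of as a retention policy. Here is a rough sketch (it assumes the timestamped snapshot folders from the earlier example, not ShadowProtect’s own mechanism): keep everything from the last 30 days, and beyond that keep only the first snapshot of each month.

```python
# Hypothetical retention sketch for timestamped snapshot folders under SNAP_ROOT.
import os, shutil
from datetime import datetime, timedelta

SNAP_ROOT = "/backups/snapshots"      # assumption: same layout as the earlier sketch
KEEP_ALL_DAYS = 30

def prune_snapshots(snap_root=SNAP_ROOT, keep_all_days=KEEP_ALL_DAYS):
    cutoff = datetime.now() - timedelta(days=keep_all_days)
    kept_months = set()
    for name in sorted(os.listdir(snap_root)):             # oldest first
        try:
            when = datetime.strptime(name, "%Y-%m-%d_%H%M%S")
        except ValueError:
            continue                                        # ignore anything that isn't a snapshot
        if when >= cutoff:
            continue                                        # recent: always keep
        month = (when.year, when.month)
        if month not in kept_months:
            kept_months.add(month)                          # keep the first snapshot of each older month
            continue
        shutil.rmtree(os.path.join(snap_root, name))        # otherwise reclaim the space
```

How aggressively you prune is exactly the trade-off discussed above: more retained snapshots means more protection against slowly discovered corruption, at the cost of disk space.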
hi neil, i know what you mean…
with a slight exaggeration, almost half of my drobo-s is filled up with a huge Acronis dump of my win7 machine.
i didn’t partition it into os/data, just a full dump.
with the main idea being that if anything goes wrong, i can use the acronis feature to restore the full drive to its last working config.
i heard somewhere that since xp, windows had some secret data tied to hardware IDs or something to stop you replicating an o/s, but it seems that acronis sells a version which works.
(ideally it will also work not just if i have to get a replacement blank drive, but if i get another machine - otherwise i’d be extremely cheesed off)
hopefully i won’t find myself in the position of having to try the restore, but that’s not the point
Couple of reasons I always partition my OS install separate from a data partition and try, as best possible, to relocate as much data as possible to the data drive (my documents for sure, as well as some other things)…
The chance of my wanting to restore DATA to an earlier point in time is remote. But there are many good reasons to roll the OS partition back to an earlier point in time, and I have done it many times over the years; for a while on XP I was doing it as more or less preventative maintenance once a year or so. It’s a PITA to get all the apps updated and whatnot, but it helps to keep the crud under control and “refreshes” Windows, making it run better.
I currently have about 400GB of data on my main laptop but only about 40-50GB on the OS partition, without regard for swap files and hiber files, which are not saved or restored (using Shadowprotect).
The restores go much quicker.
Since I am typically trying to just restore and refresh the OS, it minimizes the amount of data that needs to be updated. And it simplifies the identification of data that needs to be restored from all the various data folders hanging off the root drive, whichever drive that is.
For example, I might go back 6-12 months or more on the OS partition, and if I did that, not only would I have to unnecessarily restore 400GB of data, but then I would also need to restore the last backup of my data, or somehow diff or sync the differences to a current state. I would have to deal with this any time I went back to some point prior to the very last backup.
Plus I would lose any work I did since the last backup, which may only be a part of a day (I do dailies), but it could be a problem in the event that I HAD to do a restore and could not do an “emergency” intra-day backup.
I also cull the old incremental and differential backups differently, simply because the data backups are so large (that partition is far more volatile in terms of gigabytes changed over any given period of time).
I also suspect that in the event of a power failure or other abnormal termination of the OS (BSODs) there are far more files open on the OS side of things, and therefore the OS partition is more likely to get killed than the data partition. That makes it less likely I would lose some part of a day’s work, or even more in the event my backups were failing for some reason, and that happens from time to time when I am not paying attention to things.
Some people argue against splitting partitions under the theory that splitting is done to somehow increase performance. I understand that the overall performance across both partitions cannot benefit from that, and in fact the data partition will not perform as well as the OS partition…
But I think that is not a bad thing, because most of the disk activity is done by the OS. If the OS partition is the first partition, then all its tracks are on the faster outer cylinders of the drive. If a single mixed partition is used, the initial OS install should still get laid down first, on the speedier tracks. But if the OS is loaded first, and then a huge slug of data, all the subsequent programs will be installed to slower inner tracks, as well as any volatile files maintained by the OS.
Just to say that the performance issue is complicated but overall I think there is a minor benefit to splitting the partitions (in the real world).
I recently had a legitimate reinstall of Windows end with the install refusing to activate. I made a decision to just reload windows rather than pursue it and don’t remember the details. It’s hard to predict what Windows activation will do. In general though, I have not had problems.
ShadowProtect has a “Hardware Independent Restore” (HIR) feature that used to be its claim to fame versus Acronis, where it supposedly would at least get a new machine functional such that you could then get all the drivers straightened out and updated. If you use OEM Windows then you still have to buy a new key. It may be that Acronis has added that feature, but I am not that familiar with current Acronis features, and it may vary depending on version.
Just following the Shadowprotect forums over the years, I’m not convinced it is useful in the consumer world. It is really intended for the Enterprise side where activation is not an issue and enough machines are involved in order to make it worth the time to slog through it.
IOW it is not usually a good one-off solution. I think I’ve done it successfully once or twice, just to see if it would work, and many years ago I tried it with XP and ended up with the new machine blue screening.
It is probably better to start a new machine with a clean install, assuming you are in a position to reinstall all your software.
Those are just my thoughts on that matter, with little personal need or experience with an HIR reinstall.
If you have never done an Acronis reinstall you should seriously consider doing it as a test, on a spare hard drive, even if you need to buy one to do it. Assuming the ability to do an OS reinstall is important to you. I have always tested my ShadowProtect machines because I don’t want to find problems when I am stressed out and trying to get a reinstall done under the press of time.
In particular, if you usually need to access the Acronis files over your network, as I do, it is very important to work through the network driver issues while you have a good working machine so you can download drivers or do any required internet research, if necessary…
In my case, for my machines I need to keep working network drivers accessible some place I can access while doing a ShadowCopy reinstall under the optical disk’s special boot-up. With ShadowProtect I have to manually load those drivers from some place I can access.
I keep those files on the Drobo, which I can always plug into any machine and access in the special boot environment (and I have tested that many times with each machine doing ShadowCopy backups). I also keep them on my data partitions, which is most convenient in the case where I am just restoring the OS partition. A Flash Drive is also a good place to keep those important drivers (and any other bits and pieces required to do the reinstall).
I have, on occasion, moved the required ShadowCopy OS partition backups to my data partition, and then directly restored them from there. That is in cases where it is planned in advance and I have plenty of time to move that data, plus space available. It simplifies things versus scrounging up backups and drivers over the network, where it is easy to run into catch-22s where the network drivers are not accessible until those drivers are loaded :-). I also store those drivers in my data partition. Just another good reason to split the partitions.
I bought ShadowProtect many years ago. At that time, my research suggested that Acronis is far more likely to have restore problems, and the support was not as good. It was a “get what you pay for” thing, because ShadowProtect is relatively much more expensive (depending on the era and versions).
Some of that may have shifted over time, but I’m only suggesting you test an OS restore, especially with Acronis. It is a much less expensive app, with commensurately less support when you critically need it, assuming you have time to deal with any complex problems in a crunch situation.
hmm, initially i went with acronis in Full drive dump mode (as the single hard drive actually has a couple of other tiny pre-configured partitions, though 1 main bootable partition), mainly because there were so many files (or info/configs) dotted throughout the windows profiles, pics/docs/etc that it seemed easier at the time to do a full dump.
(and the full dumps seem to be browsable via the dump file explorer)
plus some tech savvy friends recommended it based on their experiences
and it was cheap on offer at the time
but there’s a lot to consider here neil (give me some time) :)
My understanding is that Acronis True Image uses the same Windows VSS (Volume Shadow Copy Service) as my StorageCraft ShadowProtect software. So they both use the same underlying “black box” to do the backups and manage incremental and differential tracking of changes. This is a sector-based backup strategy versus a file/folder-based strategy (you probably know that already).
My basis for that is this Acronis knowledgebase article, as well as things I have generally read on the subject.
There are differences between ShadowProtect and Acronis, but those differences are solely in how they each implement Windows VSS. So everything I have said about my ShadowProtect app is generally applicable to Acronis although the details may differ, particularly in how each might resolve driver problems when doing a Hardware Independent Restore, or whatever Acronis might call it, assuming it supports that feature.
I suspect your “disk image” is just a simple way to create multiple independent partition images with “one click”; Windows VSS itself works at the partition level. Someone more familiar with Acronis might want to weigh in on that.
One of the features of VSS is that the resulting backups can be mounted as standard partitions, they can be modified, and the modifications can be saved as independent incremental files. It is a nifty feature that I have used a number of times in order to temporarily “blow away” Windows security settings in order to access VSS images.
Just as an aside, there is a tendency for Windows to prevent you from accessing certain folders on those mounted image partitions, especially anything on the OS drive created by Windows (including, in some cases, basic folders like “My Documents”). You end up in a situation where you cannot easily get into a folder, but you apparently always have the authority to change the object ownership such that you can then get back into it, or at least in every case I have been able to do that.
I mention that as just one reason you want to test all this. The security problem I mentioned happens when you try to mount those images on a different machine than the machine that was the source of the image, even if the machines have a common log-on user name and password. It is a “feature” of Windows that I personally despise, because it only keeps honest and unknowledgeable people out of their backups, but that is the state of things.
(the root cause probably being some security crippling of Windows desktop editions that are run as peer to peer rather than run under a Windows Domain via some flavor of Windows Server, and few of us go to the expense of running Domain Servers in our home networks)
I’m surprised that the Acronis TrueImage 2013 user guide I just downloaded does not even mention the term “VSS” or “Shadow[anything]”. Perhaps they are trying to create the illusion that it is all a custom-designed secret sauce that they developed at great expense :-). Me being always the cynic!
The main point is simply that you really need to FULLY test a restore to an empty drive. And even basic access to mounted image files (backed up and then mounted on different machines) needs to be tested.
I recently ran into a closely related and very insidious Windows security problem. On my main working laptop I manage both a ShadowProtect backup and a simple file/folder-based backup via SecondCopy. That file/folder backup is similar to what any backup program would do where the target is a simple readable folder, no different from any other data folder. I do this in order to “spread my eggs among different backup baskets”, the theory being that a problem in the VSS backup or the file-based backup will not be fatal.
Some time ago I apparently changed the security on a couple of folders that I wanted to share with different authority than the main share of the data partition’s root.
I recently, totally by accident, discovered that the SecondCopy file/folder backup was missing half the data on the data drive. All that data was within a couple of root level folders.
When I checked my ShadowProtect VSS copies, via image mounts, I found I could not access those folders without changing ownership and that told me it was a security issue.
What happened is that SecondCopy was running from my file server, accessing the source folder on my remote laptop over the wired network. SecondCopy did not have access to some sub-folders in its main source folder target (the root of the partition volume). It failed silently, simply ignoring any source subfolders for which it did not have read authority. If I had run that backup directly from the laptop I would not have had this problem (food for thought for any file/folder-level backup app strategy).
I thought I kept up with checking things, but I suspect it had been going on for months. I’m glad I looked at the backup contents, and I now know I need to diff the basic file/folder contents to ensure that my backup actually has all the data specified in the source target folder specification of the backup. This stuff is really (way too) easy to screw up!!!
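A quick sanity check along those lines can be scripted. The sketch below (placeholder paths; run it from a machine/account with full access) walks the source and reports both files missing from the backup and source folders the scan could not even read, which is exactly the silent-permission failure mode described above.

```python
# Hypothetical backup-completeness check; SOURCE and BACKUP are placeholders.
import os

SOURCE = r"D:\data"                  # assumption: source data partition
BACKUP = r"\\server\backup\data"     # assumption: file/folder backup target

def check_backup(source=SOURCE, backup=BACKUP):
    unreadable, missing = [], []
    def on_error(err):               # os.walk skips unreadable dirs silently unless we listen
        unreadable.append(err.filename)
    for dirpath, _dirnames, filenames in os.walk(source, onerror=on_error):
        rel = os.path.relpath(dirpath, source)
        for name in filenames:
            if not os.path.exists(os.path.join(backup, rel, name)):
                missing.append(os.path.join(rel, name))
    print(f"{len(unreadable)} unreadable source folders, {len(missing)} files missing from backup")
    for path in unreadable + missing[:50]:
        print("  ", path)

if __name__ == "__main__":
    check_backup()
```

Even a crude count of files and total bytes on each side, compared after every run, would have caught the missing folders long before an accidental discovery did.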
I still run Windows Home Server for system backups. It does cluster-level de-duplication for backups, so they don’t take up a ton of redundant space for the OS, similar to differential backups.
NeilR is totally right that backups are way too easy to screw up! I ran a long time before I realized that some (luckily not-so-important) data had been “orphaned” in the backup process, and it took even longer for me to figure out how and why it happened.
It’s good to have some kind of “sanity check” for your backups, if you can come up with one, whether it’s number of bytes, number of files, or a quick secondary scan/difference.
I use BeyondCompare to verify backups, or any check or comparison of supposedly identical folder structures, or at the file level to compare differences. The mistake I mentioned above was due to not having checked my laptop data drive file level backup for quite some time.
I like the sector level partition VSS copies from ShadowProtect because they are all or nothing. In particular, it doesn’t care about file or folder level security; it just needs access to the partition.
The downside is that one flipped bit would make an entire partition unreadable (it uses an MD5 checksum for verification). Although I’ve never actually had an SP backup come up unreadable, it is at the top of the list of why I also do the file/folder-level backup with SecondCopy. I don’t know if Acronis does the same level of verification and would react as poorly to a single flipped bit.
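For illustration only (ShadowProtect and Acronis do their own internal verification), this is the kind of whole-file MD5 check being discussed; it also shows why a single flipped bit anywhere in a large image fails the entire check:

```python
# Hypothetical chunked MD5 check for a backup image file.
import hashlib

def md5_of(path, chunk_size=1024 * 1024):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# usage: record md5_of("backup.img") at backup time and re-check before a restore;
# any mismatch means at least one bit somewhere in the image has changed, even
# though the check cannot say where or how much of the image is affected.
```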
you are both right regarding the need for checking all the various parts of the backup process. (extra hardware and a proven process are half the battle; the other half is finding the time)
ACRONIS
at the time it was a very good offer that i couldn’t refuse (without finding a horse’s head in bed next to me)
something like £20 for acronis home, and a bit extra for a plus pack addon which makes it machine independent. (a bit like the windows 95 or 98 plus pack which arguably should have probably been part of windows to begin with)
the cheap price was based on a condition that a certain number of people have to pre-order the product to get the price down and luckily lots did.
BIT-FLIPs
btw for the flipped bit, i can understand a crc/hash check failing to verify as being the same, but unless it took place in a vital part of the image (such as the MFT or something), then i wouldn’t have thought that a full sector-by-sector dump image could be rendered completely unusable if a single bit wasn’t set properly.
for example if you have a floppy disk with files, you can still read part of the data (or part of the picture up to the bad area), and if a compressed/zip file is corrupt you can still read the rest of it - remember pkzipfix.exe?
EXCESSIVE DROBO BACKUPS?
On another note about “can someone backup too often”… here’s something i started doing:
=
The idea is that i run SET 1 quite often, with full SyncBack (slower, but with CRC/verifications), as well as manually reading through the changelog popups to be sure that the files & folders that SyncBack is intending to change, remove, copy etc are the correct ones.
&
i run SET 2 in a similar way (though usually also ‘before’ any subsequent SET 1s), to make sure that nothing on (b) & (d) above has changed via mysterious means.
(Ideally i would have a 2nd drobo, with a similar setup to my other Drobo-4slot > to > Drobo-4slot, but is my above scenario actually beneficial in some way, maybe to compensate for the fact i don’t have a 2nd drobo-s, or is it an example of what the title of this post is all about?)
Regarding the bit flipping killing a ShadowProtect backup… it may not be a matter of what technically would happen if one or more flipped bits were encountered. My understanding is that the issue is purely the fact that the software will refuse to restore a volume that does not pass an MD5 match.
It would be easy to test by flipping a bit in an SP image file and restoring to a spare drive. I’ve just never done it. But my understanding is that it will totally fail, although it may not be that simple.
Since the restore procedure would not know if the image file failed md5 or crc verification until it had completed the restore (to the extent it could!), I’m uncertain what actually happens, especially in the case of a data drive that does not need a lot of integrity to just “boot up”.
In real life I have never had a problem with a corrupt image, except that I own an old WD MyBook that laid corrupted data on the drive (from SP and another backup program, both of which failed file verification routines that do CRC or MD5 checking). But on good drives I’ve never had a problem. The problem would happen on a JBOD drive that had one or more sectors come up unreadable, likely during the actual restore process.
Image backups are different from file/folder-level backups, since in the latter case each file “stands alone” unless the corruption is in the file system table. In the case of an image backup, think of your entire partition as one big file that needs to be right.
I guess you would have to search Acronis support resources to find real world user experiences with that kind of problem, or perhaps the standard documentation.
If all your backups are on one physical Drobo… do I need to tick off all the things that can go wrong with an entire Drobo pack? How many people have come on here with totally failed arrays? Don’t the packs usually totally fail (all partitions) when they do fail? Plus fire, theft, power surges, user error… the list goes on and on…
I would think you’ve read enough horror stories here to have your backups spread among physically separate (and usually offline) drives/packs?
I count your backup as ONE backup copy: the one on the Drobo.
The verbiage in those links suggests it is similar in concept to SP’s HIR. It (hopefully) gets the target machine functional enough to actually boot. Based on my experience, though, and much of what I have read, your mileage may vary on this, and it definitely needs testing before being relied upon; in our consumer environments it is not something that is likely to be, or even can be, tested in advance. Just because a feature exists does not necessarily mean it works, and I think this is an extreme case of that principle.
I want to further clarify my comments about potential problems with VSS image files that have “flipped bits” or other minor file corruption. Even if a full drive image restore is not successful, it may be possible (and I suspect likely) that the image could be mounted such that the uncorrupted files could be copied from the mounted image to a new home (or the old source home).
It would all depend on what part of the image is corrupted, of course. In real life it is such a rare event that it is probably difficult or impossible to make positive and accurate generalizations. However, I am sure that when an image is mounted the entire image is not read; otherwise it would take hours to mount a large image, and I know that is not the case. If it does not read the entire image, it cannot possibly know whether it checksums correctly. That’s my theory anyway!
you’re right neil, it’s hard to (absolutely) test for the exact unforeseen event, but one thing is for sure, i will need to arrange a more thorough test of my image backups, as currently the full dump is more of a (somewhat thought-about) peace of mind than an actually proven one