Let’s face it – right here, right now in 2012, we’re still applying backup strategies that were established in the 1980s. Not only that, but those strategies were originally developed for backing up big iron, huge systems in Fortune 500 companies. Given all that has changed in IT in the last 25 years, the concept of backup needs a major rethink.
There are two significant changes to data that make traditional strategies irrelevant: data size and data speed. Not only are files dramatically larger, we’re also processing far more of them than ever. This impacts firms in multiple ways. On the size front, old-school thinking with new-school applications means a constant conflict of quotas, server “purges” and an ongoing war with the users to “keep your project folders controlled”. The administrative overhead and ill will created by this conflict are among the worst things an IT department can inflict on itself. Let’s be brutally honest: a 3TB hard drive is $200 now. Throwing insane amounts of space at users is dirt cheap. There’s *zero* excuse to have a quota-enforced file system in this day and age.
But! But! I can hear the 1980s folks scream. Backups! Organization! Clutter! Nah. Irrelevant, for the reason of change #2: speed of data. Back in 1985, when you received one email a day and worked on one Word document and one DWG file, the concept of not only oversight but historical recall was relevant. No more. Today’s user can touch hundreds of files in a day, and can’t remember what a file looked like yesterday, let alone two weeks ago. This concept of “WE MUST HAVE HISTORICAL BACKUPS” is only around because it has been chanted like a mantra, not because of any real user need. Today’s user is not going to come to you and say “hey, I need this Word document, but I need how it looked two weeks ago, on the 14th. Not the 15th, not the 13th, the 14th”. No. Not. Going. To. Happen.
Now, will a PM come to you and say “I need to know how this project folder looked on the 14th”? Possibly. But this business need – point-in-time views of project status – is *far* better accomplished by live, in-project-folder snapshots than it is via backup. Hit a milestone? Want a snapshot? Great, create an Archive/June14 subfolder and copy everything into it. Immediate creation, and immediate retrieval, *without* involving IT. Uses a lot of disk space? Yup. Easy for the users? Yup. Easy to back up? Yes.
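The milestone snapshot above is simple enough that it barely needs tooling – it’s just a copy – but a minimal sketch shows how little is involved. This is a hypothetical helper (the `snapshot_project` name and paths are made up for illustration), using only Python’s standard library; in practice users would do the same thing with a drag-and-drop copy.

```python
# Hypothetical sketch: a milestone snapshot is just a plain subfolder copy.
# The function name and "Archive/<label>" layout are illustrative, not a real tool.
import shutil
from pathlib import Path

def snapshot_project(project_dir, label):
    """Copy a project's working files into Archive/<label> inside the project itself."""
    project = Path(project_dir)
    archive = project / "Archive" / label
    archive.mkdir(parents=True, exist_ok=False)  # refuse to clobber an existing snapshot
    for item in project.iterdir():
        if item.name == "Archive":               # don't recurse into earlier snapshots
            continue
        if item.is_dir():
            shutil.copytree(item, archive / item.name)
        else:
            shutil.copy2(item, archive / item.name)  # copy2 preserves timestamps
    return archive
```

Because the snapshot lives inside the project folder, the PM can open it straight from the file share, and the nightly backup picks it up along with everything else.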
Max Headroom comes back and says “but now we’re backing up TERABYTES! TERABYTES!”. Yeah, so what? The headaches and the insane volume of backups come from the demands of historical retrieval. Dailies for two weeks. Weeklies for five weeks. Monthlies for 13 months. Yearlies for 7 years. Who cares? If project data – the only thing that is historically relevant – is snapshotted within the project directory, do you really care that the odd Word document is available from 8 months ago? No. The time, effort and energy put into making that file retrievable *far* outweighs the business benefit.
If you’re getting rid of quotas and allowing self-snapshots, all you need is one or two nights of backups. That’s it. Let Windows’ built-in VSS handle minor file deletions and restores. Nightly, offsite backups to the cloud or your colo. Use bit-level changes, use deduplication, whatever. For anything beyond this week’s data, the expense doesn’t match the need.
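To make “use bit-level changes, use deduplication” concrete, here’s a toy sketch of content-addressed dedup. Every name in it (`backup`, the `blobs` folder, the per-run manifest) is made up for illustration – a real deployment would use an actual backup product that does this far more robustly – but it shows why terabytes of mostly-unchanged project data aren’t scary: unchanged content is stored once, and each nightly run only pays for what actually changed.

```python
# Toy illustration of deduplicated nightly backups: file contents are hashed
# and stored once in a blob store keyed by digest, so repeated runs only add
# content that actually changed. All names here are invented for the sketch.
import hashlib
import json
import shutil
from pathlib import Path

def backup(source_dir, repo_dir):
    """Copy unique file contents into repo/blobs and record a per-run manifest."""
    source, repo = Path(source_dir), Path(repo_dir)
    blobs = repo / "blobs"
    blobs.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for f in source.rglob("*"):
        if not f.is_file():
            continue
        digest = hashlib.sha256(f.read_bytes()).hexdigest()
        blob = blobs / digest
        if not blob.exists():        # dedup: identical content is never re-stored
            shutil.copy2(f, blob)
        manifest[str(f.relative_to(source))] = digest
    # Each run gets its own manifest, so any backed-up night is restorable.
    runs = sorted(p.name for p in repo.glob("run-*.json"))
    run_file = repo / f"run-{len(runs):04d}.json"
    run_file.write_text(json.dumps(manifest, indent=2))
    return run_file
```

With only one or two nights of retention, the repo stays small no matter how many in-folder Archive snapshots the users pile up, since those snapshots are mostly duplicate content.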
And I’ve never understood the “clutter” argument. When did IT become housekeeping? Who are we to tell a project what files they need, or more importantly, that they have “too many” files? People will self-organize when they start feeling the pain of clutter. To have an external entity force arbitrary restrictions on the number of files in a directory is an exercise in futility.
It’s really pretty simple. Primary file server for active projects. Cheap, huge storage for inactive projects. Onsite backup if you want immediate access, but cloud or offsite disk-based backup for DR.
Stop fighting your users. Meet the business needs. Be efficient. And realize it isn’t 1985 any more.