16 Basic Truths About Data Management, Backup and Archiving

By Dr. Marc M. Batschkus

Some things never seem to change and it helps to consider them before diving into a new setup or expansion of your workflow.


Before expanding your setup, do an inventory of your files, storage devices, capacity, connections and requirements.


You always need more storage than you think. Plan ahead before anything fills up, or you risk a storage emergency. Storage should never reach 99% capacity (or even close), because it might become (almost) inaccessible or even corrupted. At the very least, performance might degrade to the point where it feels like a standstill.


The capacity and speed of storage matter much more than the CPU of the Backup/Archive server (at least in most cases). Overspecify both capacity and speed: if you think you need 30 TB, buy 50 TB; if you think you need 400 MB/s, aim for 600 MB/s or more.
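The rule of thumb above can be written as a simple headroom calculation. This is an illustrative sketch, not part of the original text; the headroom factor of 1.5 is an assumption derived from the article's own numbers (400 MB/s → 600 MB/s; 30 TB → 50 TB is closer to 1.7):

```python
def provision(estimated_need: float, headroom: float = 1.5) -> float:
    """Return the capacity (or throughput) to buy, given an estimated need
    and a headroom factor. Works for TB and MB/s alike."""
    return estimated_need * headroom

# 400 MB/s estimated throughput need with 1.5x headroom -> 600 MB/s
print(provision(400))
# 30 TB estimated capacity; a 1.7x factor matches the "buy 50 TB" advice
print(provision(30, headroom=1.7))
```

The right factor depends on your growth rate; the point is to decide it deliberately rather than buying exactly what you need today.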


Backups are essential. There is a saying: all data without a Backup is considered worthless. The more Backup copies you have, and the more locations they are in, the better. Three copies is the minimum, and one should be kept offsite (or in the Cloud) for disaster recovery. See the 3-2-1 Backup rule.
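The 3-2-1 rule (at least 3 copies, on 2 different media types, with 1 copy offsite) can be expressed as a trivial check. A minimal sketch, not from the original text:

```python
def satisfies_3_2_1(copies: int, media_types: int, offsite_copies: int) -> bool:
    """Check the 3-2-1 Backup rule: at least 3 copies of the data,
    on at least 2 different media types, with at least 1 copy offsite."""
    return copies >= 3 and media_types >= 2 and offsite_copies >= 1

print(satisfies_3_2_1(copies=3, media_types=2, offsite_copies=1))  # True
print(satisfies_3_2_1(copies=2, media_types=1, offsite_copies=0))  # False
```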


Segment your data and ask the question: how long can I be without this data? Time-critical data might be better off with a data availability/cloning/replication solution than with a regular Backup. This way, no restore is necessary: just switch to the secondary storage, which contains the identical file set that has been cloned over. All completed or finalised files should migrate to a long-term Archive to free up space on production storage.


There is no one perfect storage for all purposes. Disk, SSD/flash, tape and cloud storage all have their own set of strengths and weaknesses. Each might only fit a portion of your files. It all depends on your requirements, setup and budget.


Using an outdated machine for a demanding Backup or Archive is a bad idea and leads to trouble sooner or later. The server used for data management, Backup and Archive should have enough CPU cores for the jobs running in parallel (one for each).


The individual file size might matter more for throughput than the size of the data set. A large number of very small files might transfer (also to Backup and Archive) far slower than a small number of very big files, even if the total size is identical.
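A toy model makes this concrete: each file carries a fixed cost (open/close, metadata, seeks) on top of the raw streaming time, so file count can dominate. The throughput and per-file overhead figures below are illustrative assumptions, not measurements from the article:

```python
def transfer_seconds(file_count: int, total_mb: float,
                     throughput_mb_s: float = 400.0,
                     per_file_overhead_s: float = 0.05) -> float:
    """Estimate transfer time as raw streaming time plus a fixed
    per-file cost. The per-file cost dominates for many small files."""
    return total_mb / throughput_mb_s + file_count * per_file_overhead_s

# Same 100 GB total, very different wall-clock time:
print(transfer_seconds(10, 100_000))         # 10 big files: ~250 s
print(transfer_seconds(1_000_000, 100_000))  # a million small files: ~14 hours
```

Real overheads vary with protocol and storage type, but the asymmetry itself is what matters when planning Backup windows.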


Storing hard drives on a shelf is not an Archive. Disks are made to spin, not to be stored. Use LTO tape or Cloud storage for long-term archiving instead.


Document the restore process. The admin who set up and runs your system might not be available when an important restore is necessary. Keep step-by-step instructions available and up to date, so that someone else can step in.


Regularly test the restore process. This is the only way to ensure that all components of the whole chain work together as expected. Restoring single files once a month is a good procedure.
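A restore test should confirm not just that a file comes back, but that it comes back bit-identical. One way to do that is to compare checksums of the original and the restored copy. A minimal sketch (the file names are stand-ins for a real production file and its restored counterpart):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in chunks so even very large media files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """A restore only counts as successful if the content is bit-identical."""
    return sha256(original) == sha256(restored)

# Demo with stand-in files; in practice, compare a production file
# against the copy pulled back from your Backup or Archive.
with tempfile.TemporaryDirectory() as d:
    original = Path(d) / "clip.mov"
    restored = Path(d) / "clip_restored.mov"
    original.write_bytes(b"fake video data")
    restored.write_bytes(b"fake video data")
    ok = verify_restore(original, restored)
    print("restore verified:", ok)  # restore verified: True
```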


Logs and notifications from the storage and data management software are made to be checked and acted upon. The same goes for the lights on the storage hardware (RAID, tape library). Bigger incidents and data loss can only be avoided if these messages and signals are checked. Assign responsibility and have a regular process for checking.


Keep an older/replaced workstation with all the (maybe now outdated) production tools as part of your Archive. If an older project needs changes, it might be the only reliable way to have the OS, NLE, plugins and tools ready to use exactly as they were when the production was made (maybe years ago).


Create a file naming convention and stick to it. This is extremely helpful in locating, discovering and understanding files, their contents and their relation to specific projects. It is also an excellent basis for later archiving. Assign responsibility for keeping it consistent and document it for all users. This cannot be overemphasised: keep the naming convention consistent at all times.
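One way to keep a convention consistent is to make it machine-checkable. The pattern below is a hypothetical example convention (project, shoot date, scene, take, version), not one prescribed by the article; the point is that any convention worth having can be validated automatically:

```python
import re

# Hypothetical convention: PROJECT_YYYYMMDD_sNN_tNN_vNN.ext
# e.g. SUNRISE_20240131_s03_t07_v02.mov
NAME_RE = re.compile(
    r"^(?P<project>[A-Z0-9]+)_"
    r"(?P<date>\d{8})_"
    r"s(?P<scene>\d{2})_t(?P<take>\d{2})_"
    r"v(?P<version>\d{2})\.(?P<ext>[a-z0-9]+)$"
)

def is_valid_name(filename: str) -> bool:
    """Return True if the filename follows the convention."""
    return NAME_RE.match(filename) is not None

print(is_valid_name("SUNRISE_20240131_s03_t07_v02.mov"))  # True
print(is_valid_name("final_FINAL_new2.mov"))              # False
```

A check like this can run during ingest, so badly named files are caught before they reach production storage or the Archive.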


Before going on holiday or vacation, check the available space on storage, Backup and Archive targets well in advance if you want to avoid emergency calls and catastrophes.
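Such a check is easy to script. A minimal sketch using the standard library; the 20% free-space threshold is an assumed value, chosen to stay well clear of the near-full danger zone described earlier:

```python
import shutil

def free_fraction(path: str = "/") -> float:
    """Fraction of the volume at `path` that is still free (0.0 to 1.0)."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def needs_attention(path: str = "/", min_free: float = 0.20) -> bool:
    """True if free space has dropped below the chosen headroom threshold.
    The 20% default is an assumption; pick what fits your growth rate."""
    return free_fraction(path) < min_free

print(f"free: {free_fraction('.'):.1%}, needs attention: {needs_attention('.')}")
```

Run against each storage, Backup and Archive mount point, this can feed a scheduled notification instead of relying on someone remembering to look.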


No storage setup is ever perfect. Don't stretch the planning phase trying to achieve perfection; aim for "done" instead. Running a setup always builds experience and shows where to improve and expand next.

How to Protect Valuable Video Data in All Stages of the Workflow

Data Management, Backup and Archive for Media Professionals

Protecting media through all stages of the workflow is a challenge. This ebook offers practical advice, information and field experience to help you make better decisions. From ingest to long-term archiving, relevant technologies for data protection are explained and methods discussed. The aim is to support media professionals in their data management decisions and make backup, cloning and archiving transparent and easy.

Download the eBook here!