By David Fox

P5 version 5.6 and later includes a new feature that enables large datasets to be archived to cloud storage by first obtaining a disk storage device from the cloud provider, archiving to it locally across your LAN, and then shipping the device back to the cloud storage provider.

Read our deeper dive into using the Amazon Snowball here.

In this article, we’ll look in detail at how to use the Backblaze B2 Fireball device to quickly transfer data up into the cloud. 


There are a number of steps involved in this process, including interactions with the Backblaze Fireball:

1. Backblaze sends you a Fireball (Synology NAS storage device). You make the storage available to your P5 server machine via a network mount. You configure a ‘P5 Disk Library’ on the storage, create a Pool and Volumes and archive your data to this disk library via a P5 Archive plan. Your archived data now resides on the Backblaze Fireball, but not yet on B2 cloud storage.

2. The Pool is ‘detached’ in P5, pending later re-attachment. A local ‘staging’ folder is assigned which will later act as a temporary cache through which cloud data will be read and written.

3. The Fireball is shipped back to Backblaze. Backblaze copies its contents to a bucket within your cloud storage account.

4. Within P5’s configuration, a Cloud Service is created, providing P5 with access to the cloud storage bucket where the Volumes are stored.

5. The Pool is ‘attached’ in P5, and linked to the Cloud Service. The folder for each volume is now located on the cloud storage and re-linked so that future access will be via the cloud. 

Let’s now go through the whole process in more detail.


A – Configuring storage outside of P5

A1. Configure a share on the Fireball, using the Synology NAS device web admin interface. This share will be used exclusively by P5 during the initial archive process.


A2. Configure your share with appropriate permissions so that it can be mounted and written to from your P5 Server.


A3. Now that you have your NAS device configured, go to the host running P5.

Mount the share so that you can read and write to it. Make sure you begin with a completely empty share.

Note the ‘mount point’ where your OS has mounted the NAS share. You can use NFS/SMB/AFP.
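As a minimal sketch (the NAS address, share name and mount point below are assumptions, substitute your own), the mount and a quick sanity check might look like this. The checks themselves are demonstrated against a temporary directory standing in for the real mount point:

```shell
# Mounting the Fireball share (illustrative only -- substitute your NAS
# address, share name and mount point; mounting requires root privileges):
#   mount -t nfs fireball.local:/volume1/p5share /mnt/fireball       # NFS
#   mount -t cifs //fireball.local/p5share /mnt/fireball -o user=p5  # SMB

# Sanity checks before handing the mount point to P5. For this demo a
# temporary directory stands in for the real mount point:
MOUNT_POINT="$(mktemp -d)"

# The share must start out completely empty
[ -z "$(ls -A "$MOUNT_POINT")" ] && echo "share is empty"

# Confirm the share is writable and readable
echo "p5-write-test" > "$MOUNT_POINT/.p5_write_test"
cat "$MOUNT_POINT/.p5_write_test"
rm -rf "$MOUNT_POINT"
```

If either check fails, fix the share permissions (step A2) before proceeding.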


B – Configuration within P5 Archive

In this section, we configure an archive workflow, via creation of a Pool and Volumes, that will allow us to archive all the data that we wish to have hosted in the cloud. Initially our disk based P5 Volumes will be written to the NAS storage.

B1. Within P5 Archive, create a new pool for Archive use, with the Disk media type selected. We will create volumes in this pool and archive to it the data we wish to have uploaded to the cloud.

IMPORTANT: Do not ‘assign to a cloud service’ at this stage – this happens later. For now we’re just writing to disk storage.


B2. Configure a new ‘Disk Library’, hosted directly on the share you created and mounted. Configure this library with enough space to accommodate everything you wish to archive. Don’t exceed the available space on the storage device you’re writing to.

Label volumes into the pool you created in the previous step. You can do this now, while you’re creating the disk library, or manually later.


B3. Create an archive plan, selecting as Target Storage the pool you have created and have labelled volumes into. If needed, create your own index (rather than the ‘Default – Archive’ index), and configure meta-data fields and preview generation according to your requirements.

Indexes, previews and meta-data will be stored within P5’s index databases and ‘Clips’ folders and remain part of the P5 installation on local storage within the installation directory. Only the Volume folders are migrated to cloud storage.


B4. Using the Archive Plan that you just created, archive your data.


B5. Since the data being archived is written to the volumes hosted on the storage device, you can confirm the volumes are growing in size, as expected, by viewing the mounted share and checking the Volume list.
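P5 disk volumes appear on the share as numbered folders (10001, 10002 and so on), so a quick way to watch them grow is with `ls` and `du`. A stand-in sketch, using a temporary directory in place of the mounted share:

```shell
# Stand-in for the mounted share; replace with your real mount point
SHARE="$(mktemp -d)"
mkdir -p "$SHARE/10001" "$SHARE/10002"
printf 'sample archive payload' > "$SHARE/10001/chunk.0"

# List the volume folders and report a volume's on-disk size
ls "$SHARE"
du -sk "$SHARE/10001"
rm -rf "$SHARE"
```

Re-running `du -sk` on the real share while an archive job runs should show the numbers climbing.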


B6. Once all archiving is complete, highlight the Pool and select ‘Detach’ from the cog menu at the bottom of the window.

You will be prompted for a local folder that will serve as a ‘Staging Area’. This area is used as a cache where data is stored temporarily before upload to cloud and volumes are cached during restores. The recommended size is 1/1000 of the transfer capacity.
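As a rough worked example of that sizing rule (the figures here are illustrative, not from the original article):

```shell
# Staging area sizing: roughly 1/1000 of the data being transferred.
# e.g. archiving 20 TB to the Fireball suggests a ~20 GB staging area.
TRANSFER_GB=20000                     # 20 TB expressed in GB
STAGING_GB=$(( TRANSFER_GB / 1000 ))
echo "Recommended staging area: ${STAGING_GB} GB"
```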


B7. Verify the pool is now detached – it will show within the P5 UI as disabled, along with all the volumes contained within it. Later, when the volumes have been moved to cloud storage, we will re-attach the pool, causing the volumes to become enabled and available to use.


C – Sending data to cloud storage provider

C1. You’ll now return the Backblaze Fireball. All your data will be copied to your selected bucket within your B2 account and you’ll be notified when this is completed.

Here, we’re replicating the same operation using the ‘Cloud Sync’ tool available within the Synology OS, copying everything we wrote to our share up to a B2 bucket. Slower, but it achieves the same end result!

IMPORTANT: The volume folders, named 10001, 10002 etc., must reside in the root of the bucket; no sub-folder should be used.


C2. Again, with our Backblaze example, we wait for the copy to complete.

NOTE: In reality, Backblaze performs this step for you once they have received the Synology.


C3. Once the copy is complete, or the cloud provider has notified you that your data is now available via the B2 bucket, connect to the bucket and verify you can see your volumes.

Below we show the web admin interface of Backblaze B2 where we can see our volume folders, numbered 10001, 10002 etc.

Again, confirm that the volumes are in the root folder of the bucket.
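If you prefer the command line to the web interface, and assuming the Backblaze `b2` CLI is installed and authorized (the bucket name below is a placeholder), listing the bucket root should show the volume folders directly. A simple pattern check against a sample listing:

```shell
# With the b2 CLI (placeholder bucket name -- substitute your own):
#   b2 ls your-bucket-name
# The output should show the volume folders 10001/, 10002/, ... at the
# top level, with no intermediate sub-folder.

# Simulated check against a sample listing:
listing="10001/
10002/
10003/"
echo "$listing" | grep -qE '^1000[0-9]+/$' && echo "volume folders found at bucket root"
```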


D – Back to P5 Archive, re-attach the pool to connect to cloud service

D1. Back in the P5 web admin interface, create a ‘Cloud Service’ by providing the login credentials and bucket name for the type of cloud service where your data now resides.

Use the ‘Test’ button to verify P5 can connect to the cloud service before proceeding.


D2. Highlight the previously detached pool, and select ‘attach’ from the cog menu.

You will be prompted to select the Cloud Service that you set up in the previous step.


D3. The attach process that you just started will run a couple of jobs, visible in the Job Monitor window. Monitor execution of these jobs to completion.

This process connects to the volumes stored in the cloud, reads their metadata, and copies some of it locally.


D4. Verify that the pool is now enabled, and all the volumes within it are also now enabled and ready for use.


D5. The staging area, where the cache of the cloud volumes is located, will store data temporarily on its way to/from the cloud bucket.


D6. Perform a test restore. Data will be downloaded from cloud. Monitor the running restore job and confirm data is being downloaded from the cloud bucket.
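A good way to validate the test restore is to compare checksums of the original and restored files. A stand-in sketch using local temporary files (in practice, compare your source file against the copy P5 restored from the cloud pool):

```shell
# Stand-in files; cp stands in for the P5 restore from the cloud pool.
ORIGINAL="$(mktemp)"
RESTORED="$(mktemp)"
printf 'project-footage-frame-data' > "$ORIGINAL"
cp "$ORIGINAL" "$RESTORED"

# Checksums must match for a successful restore. cksum is POSIX and
# universally available; sha256sum or shasum are stronger alternatives.
if [ "$(cksum < "$ORIGINAL")" = "$(cksum < "$RESTORED")" ]; then
    echo "restore verified: checksums match"
fi
rm -f "$ORIGINAL" "$RESTORED"
```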

Transfer to Cloud Storage using a Backblaze B2 Fireball