Archive

Archive provides an additional method of protecting your data by copying it to a cloud provider such as AWS S3, Azure, Backblaze B2, Cloudian, DataCore, StorageGRID, Alibaba, MinIO, Scality RING, Wasabi, or your own custom cloud. You can create a one-time or scheduled job in either of the management interfaces to archive Block or VMware data to a cloud provider. An Archive-to-cloud job must first run as a base job; additional associated jobs can then run as either incremental or differential jobs.

Archive jobs can only be run for data protected with Block backup or VMware Agentless backup. Other backup types, such as File or Agentless Hyper-V, are not supported.

Prerequisites for Archiving Backup Data to Cloud Storage

A cloud provider must already be configured as a Device Cluster with an associated Media Pool in DPX. For more information, see Adding a Device Cluster and Adding a Media Pool.

The cloud provider must already have one or more buckets defined. Consult your cloud provider's documentation. If you are using MinIO, see MinIO Buckets in Catalogic vStor in the vStor 4.11 Documentation for more information.
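
If your provider is S3-compatible (for example, MinIO or Wasabi), you can verify that the required bucket exists before configuring the archive job. The following is a minimal sketch using the boto3 SDK; the endpoint URL, credentials, and bucket name are placeholders to adapt for your environment and are not DPX-specific values.

```python
import boto3
from botocore.exceptions import ClientError

# Placeholder endpoint and credentials; substitute your provider's values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com:9000",  # S3-compatible endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

bucket = "dpx-archive"  # hypothetical bucket name

try:
    # head_bucket succeeds only if the bucket exists and is accessible.
    s3.head_bucket(Bucket=bucket)
    print(f"Bucket '{bucket}' exists and is reachable.")
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "404":
        print(f"Bucket '{bucket}' does not exist; create it via your provider.")
    else:
        print(f"Cannot access bucket '{bucket}': {code}")
```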

A backup job instance must already exist and have completed before you run an archive job. This backup job instance serves as the source for the archive job.

Restrictions. Archive-to-cloud jobs are supported only when the backup repository is a Catalogic vStor server. Other backup repositories, such as the Catalogic DPX Open Storage Server, are not supported for archiving block data to a cloud provider.

A base archive must be created first. After the base archive job completes, you can create an incremental or differential job by following the same set of steps. The base remains locked until the last incremental or differential archive for that base has expired; the base is then cleaned. If the base itself expires, its recovery point is removed, but the media is not cleaned up as long as incremental or differential archives for that base still exist.
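
To make the retention rule concrete, the sketch below models the dependency in plain Python: a base archive's media cannot be cleaned while any unexpired incremental or differential archive still depends on it. The class and field names are illustrative only and are not part of any DPX API.

```python
from dataclasses import dataclass, field

@dataclass
class ArchiveBase:
    """Illustrative model of a base archive and its dependents (not a DPX API)."""
    expired: bool = False
    dependents: list = field(default_factory=list)  # incremental/differential archives

    def can_clean_media(self) -> bool:
        # Media for the base is cleaned only when the base has expired
        # AND no incremental or differential archive still depends on it.
        return self.expired and all(d["expired"] for d in self.dependents)

# Base has expired (its recovery point is gone), but one differential remains.
base = ArchiveBase(expired=True)
base.dependents = [
    {"type": "incremental", "expired": True},
    {"type": "differential", "expired": False},
]

# Media stays on the cloud target until the remaining differential also expires.
print(base.can_clean_media())  # False
```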

Note. The incremental and differential job options are not disabled even for the first job instance. If no base archive is detected, the selection is ignored and a base archive is run instead. All subsequent instances of this archive run as selected.

The maximum object size for S3 and S3-compatible storage buckets is 1 TB. The maximum object size for MinIO is 250 GB. The object size is based on the task and includes the object data, a small .CTL file, and possibly a .BAK file, which is a backup of the .CTL file.
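
If you want to confirm that archived objects stay within these limits, you can list the bucket contents and flag any oversized objects. This is a minimal sketch against an S3-compatible endpoint using boto3; the endpoint, credentials, bucket name, and limit value are assumptions to adapt for your provider.

```python
import boto3

# Placeholder endpoint and credentials for an S3-compatible provider.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com:9000",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# 250 GB limit for MinIO; use 1 TB (1024**4) for other S3 targets.
MAX_OBJECT_BYTES = 250 * 1024**3

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="dpx-archive"):  # hypothetical bucket name
    for obj in page.get("Contents", []):
        if obj["Size"] > MAX_OBJECT_BYTES:
            print(f"Oversized object: {obj['Key']} ({obj['Size']} bytes)")
```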

Restrictions. To use vStor as a source for Archive, you must add your vStor server to DPX as a Client node, even if it already exists as a vStor node. Otherwise, the archive job will fail.
