Prerequisites for Archiving Block Data to Cloud Storage

The cloud provider must already be registered as a Device Cluster, with an associated Media Pool, in Catalogic DPX Enterprise.

The cloud provider must already have one or more buckets defined. Consult your cloud provider's documentation. If you are using MinIO, see the topic MinIO Buckets in Catalogic vStor for more information.

A backup job instance must already exist before you run an archive job; this backup job instance is the source for the archive job.

Archive-to-cloud jobs are supported only when the backup repository is a Catalogic vStor server. Other backup repositories, such as the Catalogic DPX Open Storage Server, are not supported for archiving block data to a cloud provider.

A base archive must be created first. After the base archive job is created, you can create an incremental or differential job by following the same set of steps. The base remains locked until the last incremental or differential archive for that base has expired; the base is then cleaned. If the base expires first, its recovery point is removed, but the media is not cleaned up as long as incremental or differential archives still exist for that base.
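The retention rule above can be summarized as a small illustrative model. This is not a DPX API; the function name, inputs, and return shape are assumptions made purely to restate the documented behavior:

```python
def base_archive_state(base_expired: bool, live_dependents: int) -> dict:
    """Illustrative model of the documented base-archive lifecycle.

    base_expired    -- whether the base archive's retention has expired
    live_dependents -- count of unexpired incremental/differential
                       archives that depend on this base
    """
    return {
        # The recovery point is removed as soon as the base expires.
        "recovery_point_available": not base_expired,
        # The base media is cleaned only after the base has expired AND
        # the last dependent incremental/differential has also expired.
        "media_cleaned": base_expired and live_dependents == 0,
    }
```

For example, an expired base with one unexpired incremental keeps its media on the cloud target even though its recovery point is gone.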

The maximum object size for S3 and S3-compatible storage buckets is 1 TB. The maximum object size for MinIO is 250 GB. The object size is based on the task and includes the object data, a small .CTL file, and possibly a .BAK file, which is a backup of the .CTL file.
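As a sketch of how these ceilings apply, the check below adds up the per-task components described above and compares them against the documented limit. The function name is hypothetical, and binary units (1 TB = 1024⁴ bytes) are an assumption; the product may count sizes differently:

```python
GB = 1024 ** 3
TB = 1024 ** 4

# Documented per-task object size ceilings.
MAX_OBJECT_SIZE = {
    "s3": 1 * TB,       # S3 and S3-compatible buckets
    "minio": 250 * GB,  # MinIO buckets
}

def task_fits(provider: str, data_bytes: int,
              ctl_bytes: int, bak_bytes: int = 0) -> bool:
    """Return True if the task's combined object size (data + .CTL,
    plus the optional .BAK copy of the .CTL) stays within the
    documented limit for the given provider type."""
    return data_bytes + ctl_bytes + bak_bytes <= MAX_OBJECT_SIZE[provider]
```

For instance, a 200 GB task fits a MinIO bucket, while a 251 GB task does not.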
