Catalog Backup
The Catalog is a database on the master server that stores all job information, retention settings, and information about all backed-up data. Protecting and maintaining the Catalog is vital to the success of your Catalogic DPX data protection implementation. The Catalog contains the information needed to manage resources such as tape media and on-disk storage, to keep track of available protected data, and to display that data in the management console. All Enterprise nodes, tapes, schedules, retentions, user accounts, and backup job definitions are stored in the Catalog. The Catalog backup process is a vital part of protecting and recovering the master server, and the Catalog Condense process prunes and maintains the Catalog.
See also. For more information about the Catalog Condense job, see Condense.
The following are strongly recommended best practices for maintaining and protecting the Catalog; a simple monitoring sketch follows the list:
Perform a Catalog backup at least once a day.
Retain a Catalog backup for at least as long as the longest retained archive.
Run a Catalog Condense at least once a day.
Schedule Catalog backup and Condense jobs for a time when system activity is relatively low.
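To make these practices easier to monitor, the following is a minimal sketch of an external check. It is not part of Catalogic DPX; the backup timestamps and retention values it reads are hypothetical inputs that you would supply from your own job reporting or notifications.

```python
from datetime import datetime, timedelta

# Hypothetical inputs: in practice these would come from your own job
# reports or e-mail notifications, not from any Catalogic DPX API.
last_catalog_backup = datetime(2024, 5, 20, 1, 30)     # most recent Catalog backup
last_catalog_condense = datetime(2024, 5, 20, 2, 15)   # most recent Condense run
catalog_backup_retention_days = 400                    # retention of the Catalog backup
longest_archive_retention_days = 365                   # longest retained archive

now = datetime.now()

# Best practice: run Catalog backup and Condense at least once a day.
if now - last_catalog_backup > timedelta(days=1):
    print("WARNING: no Catalog backup in the last 24 hours")
if now - last_catalog_condense > timedelta(days=1):
    print("WARNING: no Catalog Condense in the last 24 hours")

# Best practice: retain the Catalog backup at least as long as the
# longest retained archive, so old archives can still be restored.
if catalog_backup_retention_days < longest_archive_retention_days:
    print("WARNING: Catalog backup retention is shorter than the longest archive retention")
```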
Each Enterprise has its own Catalog, located on the master server as a set of files within the directory where Catalogic DPX is installed. The important Catalog resources are contained in the following subdirectories (a quick verification sketch follows the list):
db
sched
cat
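As a quick illustration of this layout, the sketch below checks that the three core subdirectories exist under the installation directory. The installation path shown is a hypothetical example; substitute the directory used by your own installation.

```python
from pathlib import Path

# Hypothetical installation directory; adjust to your environment
# (the directory you chose when installing Catalogic DPX).
install_dir = Path("/opt/DPX")

# Core Catalog subdirectories described above.
for name in ("db", "sched", "cat"):
    subdir = install_dir / name
    status = "present" if subdir.is_dir() else "MISSING"
    print(f"{subdir}: {status}")
```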
The logs folder is not a vital part of the Catalog but is an important resource to add to your data protection plan. Logs are used by the management console for reviewing the state of jobs that previously ran, and in some cases, logs can be used to help find and recover data that is missing in the Catalog. Job log files are typically trimmed on a 30-day cycle; other diagnostic logging data is trimmed on a 7-day cycle. Logs are not included in a Catalog backup, and the Catalogic DPX installation directory is usually excluded from file-level backups. Therefore, it is strongly recommended to do one of the following to preserve the logs: set up e-mail job notifications, periodically copy the job logs to an alternate location, or protect the job logs with a master server Block backup.
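One simple way to follow the "copy the job logs to an alternate location" recommendation is a small script run on a schedule. The sketch below is an illustration only, not a supplied tool: the logs and destination paths are hypothetical, and copying the folder does not change the 30-day and 7-day trim cycles applied to the live logs directory.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Hypothetical paths: the DPX logs folder and an alternate location
# outside the installation directory. Adjust both for your environment.
logs_dir = Path("/opt/DPX/logs")
archive_root = Path("/backup/dpx-log-archive")

# Copy the logs into a dated folder so older copies survive the
# trim cycles that prune the live logs directory.
destination = archive_root / datetime.now().strftime("%Y-%m-%d")
shutil.copytree(logs_dir, destination, dirs_exist_ok=True)
print(f"Copied {logs_dir} to {destination}")
```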
Attention! Hosting the master server installation, including the Catalog, on a CIFS or NFS share is not recommended. It is also not recommended to use soft links (Linux) or junction points (Windows) to redirect access to the core master server database files, which include the db, cat, and sched folders. A network outage or latency can disrupt master server functionality. Hosting the Catalog data on CIFS/NFS and attempting to use Catalog backup with the option Skip NFS Volumes set to No might be successful, but that operation is strongly discouraged and is not supported. Note that the Skip NFS Volumes option is set to Yes by default.
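If you want to verify where the Catalog actually resides, a rough check like the one below can flag network filesystems. This is a Linux-only sketch with hypothetical paths, not a Catalogic DPX tool; it only inspects the mount table and does not read or change the Skip NFS Volumes setting.

```python
from pathlib import Path

# Hypothetical Catalog locations; adjust for your installation.
catalog_dirs = [Path("/opt/DPX/db"), Path("/opt/DPX/cat"), Path("/opt/DPX/sched")]

# Read the mount table (Linux only) to map mount points to filesystem types.
mounts = []
for line in Path("/proc/mounts").read_text().splitlines():
    _, mount_point, fs_type, *_ = line.split()
    mounts.append((mount_point, fs_type))
# Sort longest mount point first so the most specific match wins.
mounts.sort(key=lambda m: len(m[0]), reverse=True)

for directory in catalog_dirs:
    resolved = str(directory.resolve())  # also resolves soft links
    fs_type = next((t for mp, t in mounts if resolved.startswith(mp)), "unknown")
    if fs_type.startswith(("nfs", "cifs", "smb")):
        print(f"WARNING: {directory} is on a network filesystem ({fs_type})")
    else:
        print(f"{directory}: {fs_type}")
```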
Whenever you perform a backup or make a configuration change to your Enterprise, the Catalog is updated. In particular, a backup job updates the Catalog with information identifying the location on tape of every file backed up. This information is crucial during restores.
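Purely as an illustration of the kind of per-file location information described above, the sketch below models a single record mapping a backed-up file to the tape that holds it. The field names are hypothetical; the actual Catalog schema is internal to Catalogic DPX.

```python
from dataclasses import dataclass

# Illustrative only: shows the kind of information the Catalog tracks
# (which tape holds which backed-up file), not the real schema.
@dataclass
class CatalogEntry:
    node: str            # Enterprise node the file came from
    file_path: str       # path of the backed-up file
    backup_job: str      # job that produced the copy
    tape_media_id: str   # tape (or disk volume) holding the data
    backup_time: str     # when the copy was made

entry = CatalogEntry(
    node="fileserver01",
    file_path="/data/reports/q1.xlsx",
    backup_job="NIGHTLY_FILE_BACKUP",
    tape_media_id="TAPE0042",
    backup_time="2024-05-20T01:30:00",
)
print(f"Restore {entry.file_path} from {entry.tape_media_id}")
```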