The following considerations apply to Block backup to the legacy Catalogic Open Storage Server, vStor, or NetApp storage:
For any volume backed up in a Block backup job, the initial backup is a base backup, and all subsequent backups are incremental, where only changed blocks are transferred.
An instance of a Block backup (base or incremental) is a virtual image of the entire source volume at a particular point in time.
Defining a new Block backup job with a new job name directs Catalogic DPX to perform a base backup.
Some conditions may require a base backup instead of an incremental one. See Forcing a Base Backup.
A Block backup of a volume that is currently being backed up by an Image backup or a different backup may fail. Ensure you do not run Image and Block backups of the same volume at the same time.
System State and System Table are supported for Block backups. See Backing Up System State in the DPX 4.9.x Reference Guide.
Note. By default, the Catalogic DPX Block Data Protection System State restore does not restore SPF files because of the time and storage required. For SPF restores, BMR is recommended. If you must restore SPF files without using BMR, contact the Catalogic Software Data Protection Technical Support team.
For Block backup jobs, the status of a job must be Waiting before you can suspend it.
After five minutes (300 seconds), if there is no progress on a file backup job, the job will be canceled. This is to prevent file hangups from holding up additional jobs that may be waiting to run.
If your backup source (primary system) is in a remote location and is to be backed up over a slow link, contact Catalogic Software Data Protection Technical Support for possible alternatives to improve performance.
In NetApp environments, Catalogic DPX provides two levels of backup instance verification for the Catalogic DPX Block Data Protection. These are described in Verifying a Block Backup by Using iSCSI Mapping in the DPX 4.9.x User’s Guide. Verification procedures for applications are described in Verifying Application Backups in the DPX 4.9.x User’s Guide.
A failure, such as a system error or communication interruption, that occurs during the Catalogic DPX Block Data Protection processing on a node may cause all the Catalogic DPX Block Data Protection application backups (Oracle, SQL Server, Exchange, Bare Metal Recovery) to fail on that node. If the problem occurs during the preliminary phase of the Catalogic DPX Block Data Protection processing, all backup tasks on the node will fail. If it occurs during backup processing, other backup tasks on the node will generally run successfully. In either case, backups on other nodes are not affected.
The following considerations apply only to Block backup to the NetApp storage:
The number of concurrent backups is limited by the number of NDMP connections supported by the destination NetApp storage system. Each volume in a backup job requires a connection. Consult the NetApp documentation to determine the maximum connections supported by your NetApp storage system.
Due to the NetApp storage system limitations, Block backup job names should be kept short. The job name becomes part of the qtree name on the NetApp storage system. If a qtree name is too long, you may be prevented from seeing volumes on the NetApp storage system.
There is a one-to-one relationship between a backup job and a volume on the NetApp storage system. Each defined backup job must be associated with a destination volume dedicated to that job, even though the job may back up multiple servers. Do not use a single destination volume to store backups from more than one Catalogic DPX backup job. For additional details on best practices related to configuring and using the NetApp volumes and creating the Catalogic DPX backup jobs, see the DPX Best Practices Guide in the Catalogic Knowledge Base.
When you open a previously saved Block backup job and change either the source or destination, the management console is aware of changes to the job definition and automatically attempts to delete qtrees through the Catalogic DPX condense operation. For additional information, see Considerations When Modifying a Block Backup Job in the DPX 4.9.x User’s Guide and read the knowledge base article 45784.
DPX 4.4.0 or later is required for Block backup support of Clustered Data ONTAP 8.3.1 RC1 and later.
To run a Block backup to the Catalogic Open Storage Server, storage must be made available to the DPX open storage server as physical or logical volumes. A typical setup provisions storage at one or more Windows drive letters, such as D: and E:. Do not use system drives, such as the C: drive, for the Catalogic DPX Open Storage Server.
Snapshot verification is available for DPX open storage. By running snapshot verification, you can quickly detect snapshot corruption. For more information, read the knowledge base article 46745.
The Catalogic DPX Archive destinations are optionally selected when the Catalogic DPX Block Data Protection job is scheduled. See Protecting Block Backups in the DPX 4.9.x User’s Guide.
In DPX open storage environments, occasionally there is the need to transfer data to a different open storage volume or server. For procedural details, read the knowledge base article 46746. The process facilitates the transfer of data on DPX open storage to a different volume or storage server and provides the ability to replace or upgrade the Catalogic DPX Open Storage Server itself.
Catalogic DPX Block Data Protection assures minimal business interruption and maximum data integrity in the event of any data resource loss – large or small. The Catalogic DPX Block Data Protection utilizes proprietary Catalogic technologies for block-level backup and fast, reliable recovery.
Block backup is the backup component of the Catalogic DPX Block Data Protection. In the desktop interface, block backups are implemented through the Block Backup Wizard. In the web interface, block backups are defined in the Job Manager section just like any other backup type.
Block backups interact with the NetApp deduplication (Advanced Single-Instance Storage) for the NetApp storage systems.
The Catalogic DPX Block Data Protection processing automatically invokes block deduplication after the data transfer; all data at the root of the volume is deduplicated at that time. Due to differences in deduplication support between Data ONTAP versions, it is strongly recommended that Data ONTAP 8.0.1 or later be used with 64-bit aggregates to maximize deduplication and processing efficiency.
Deduplication is a feature that is enabled per volume. It is recommended to start deduplication backup jobs on newly created empty NetApp volumes as the NetApp deduplication will not deduplicate data already retained in existing snapshots. For volumes with A-SIS enabled, deduplication processing is automatic and occurs immediately after the backup job completes. Scheduled volume deduplication should be disabled within Data ONTAP as the backup automatically invokes the deduplication process, and any additional scheduling is redundant and an unnecessary drain on system resources. There is no option to defer deduplication until a later time.
NetApp deduplication requires a NetApp A-SIS license.
The first time you run a backup job, it will be a base backup. Subsequent runs are automatically incremental, and only changed blocks are backed up. A change journal on the client node tracks block-level changes.
In rare cases, such as disruption caused by a sudden power failure or virus, the change journal may stop or become inconsistent. The change journal then does not precisely track changed blocks. A repeatable backup failure may be evidence of damage to the change journal.
To recover from damage to a change journal, a base backup is required. That is, you must force a base backup. This forced base is sometimes referred to as a re-base.
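The change-journal mechanism described above can be sketched conceptually. This is an illustrative model only, not Catalogic DPX code; the function and variable names are invented for the example:

```python
# Conceptual sketch: a change journal records which blocks changed since the
# last backup. An incremental transfers only those blocks; a missing or
# cleared journal (e.g. after damage, or after a bexsnapmgr disable/enable
# cycle) forces a base backup of the whole volume.

def blocks_to_transfer(total_blocks, journal, have_base):
    """Return the set of block indices that the next backup must send."""
    if not have_base or journal is None:
        # No base yet, or journal unusable: re-base (send every block).
        return set(range(total_blocks))
    # Incremental: send only the blocks the journal marked as changed.
    return set(journal)

# First run of a job: no base exists, so all blocks are sent.
assert blocks_to_transfer(8, journal=set(), have_base=False) == set(range(8))

# Later run: the journal says blocks 3 and 5 changed since the last backup.
assert blocks_to_transfer(8, journal={3, 5}, have_base=True) == {3, 5}

# Journal cleared: the next run is a forced base backup (a "re-base").
assert blocks_to_transfer(8, journal=None, have_base=True) == set(range(8))
```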
A forced base is usually done in communication with a support analyst. However, it can be done by knowledgeable users. A base backup can be forced in the following ways:
Defining a new backup job. This method does not permit the selection of backup sources within the backup job; it forces a base backup of every source defined in the job. This method is practical only for jobs that do not back up multiple sources containing large amounts of data.
Using the bexsnapmgr utility. This is the preferred method and is described below. It enables the selection of backup sources requiring a base backup. The utility clears change journaling for the selected source, which forces a base backup of that source the next time the backup job runs. This method is supported only for Windows clients; for other clients, contact Catalogic Software Data Protection Technical Support.
Take the following steps to force a base backup with bexsnapmgr:
Examine the job log to identify sources that failed to be backed up.
On the Windows client where the backup of a source failed, launch the DPX command prompt from the Windows Start menu:
For Windows earlier than 2008, click Start > All Programs > DPX > DPX Command Prompt.
For Windows 2008 and later, click Start > All Programs > DPX. Then right-click DPX Command Prompt. From the context menu, select Run as Administrator.
The DPX command prompt appears.
Enter bexsnapmgr. The user interface appears.
Click Change Journal Configuration. The Change Journal Configuration window appears.
Next to Disable, select from the pull-down list the affected backup source (node, drive, or volume). Click Disable. A message indicates that the source is disabled. At this point, you cannot run a backup of the disabled source. Repeat this step if there are additional sources to disable.
Next to Enable, select from the pull-down list the same backup source disabled in the previous step. Click Enable. At this point, you can run a backup of the source. Repeat this step for any additional sources disabled in the previous step.
Close the user interface.
Change journaling for the selected backup source or sources is now cleared. The next time a backup job using the source runs, the backup of that source is a base backup. Backups of all other sources defined in the job continue to run as incrementals.
Attention! Do not use other functions of the utility except under the direction of Catalogic Software Data Protection Technical Support.
You may add a schedule to an existing Block backup job and save it, save the modified job under another name (creating a new job without affecting the original), or create a completely new job with a schedule.
To add a schedule to a Block backup job, create a new job or open an existing one from the Job Manager.
In the job creation/editing view, scroll down to the SCHEDULES pane and click Add Schedule. The Schedule dialog will open. Depending on the selected frequency, the dialog will display slightly different parameters to select. Below, the Weekly schedule view is shown.
Select all required parameters, such as the schedule starting time and date, day of the week (if applicable), repetition period, backup retention time, etc.
Click Add.
Important. The schedule is now added to the job, but the job is not saved yet. Continue with the procedure to save the job.
Save the job. If you have added the schedule by modifying an existing job, two options will be available.
Click Save to apply changes to the existing job; or
Click Save As to create a copy of the modified job, with the schedule added.
Attention! When editing an existing Block backup job, bear in mind that any change to the backup source or destination will result in creating a new base backup rather than an incremental or differential backup, as the relation between the job definition and the previous base backup job will be lost.
This also requires running a Condense job before running the edited Block backup job. See also Condense job.
In the desktop interface, block backups are defined and edited exclusively through the Block Backup Wizard. This also applies to creating and editing job schedules.
Go to the Backup tab.
From the Backup Modes section in the side panel, choose Block.
In the Job Tasks section of the task panel, find and click Block Backup Wizard.
Select an existing Block backup job you want to add a schedule to and click Edit Job.
Click Next through the remaining steps until you reach the Save screen. Click Schedule Job.
In the Job Schedule dialog, define all the schedules you require for the job. You may add schedule exceptions, e.g. for holidays, in the Exceptions tab.
Click OK to save the schedule.
Note. If you select Cancel, the job is still saved, but it is not run and the retention period remains at the default 90 days.
In the web interface, block backups are defined in the Job Manager section. In the desktop interface, block backups are implemented through a backup wizard.
In the main web interface view, go to Job Manager in the sidebar. Then select the New Backup Job button in the upper right corner.
Specify the Job Name (this field may contain up to 16 characters). Add an optional, brief description (this field may contain up to 48 characters).
Select Job Type – Block and the Job Folder to store the job in (see the Job Manager section for more information). By default, all jobs are stored in the SS_DEFAULT job folder.
Click Add Source in the SOURCES pane to specify which volumes you want to back up. The Source selection dialog will appear. Select the desired volumes and click Select.
Tip. If you want to perform a Bare Metal Recovery (BMR) backup which allows for further recovery of the entire machine, select the BMR volume.
For more information, see Bare Metal Recovery (BMR) Backup and Bare Metal Recovery (BMR) Restore.
Tip. You may view your current selection at any moment, using the Selected Items button next to the search field.
You can clear each item using the “X” symbol next to the item, or clear all items at once using the Clear All button.
Click Set Destination in the DESTINATION pane to choose the destination for the backup.
You may also add a schedule in the SCHEDULES pane for the backup to be run regularly. See Scheduling a Block Backup Job.
If you want to add an Archive backup to your backup job, click Add Archive in the ARCHIVE pane. The Add Archive dialog will appear.
Important. Before using this functionality, read the Archive section in the Backup chapter.
Specify Advanced Options. For details, see Job Options for Block Backup.
Click Save. The Run Job prompt will be shown, where you may determine the retention period (default: 90 days) and choose whether to run the job immediately. Either way, the job will be available in the Job Manager section.
Go to the Backup tab.
From the Backup Modes section in the side panel, choose Block.
In the Job Tasks section of the task panel, find and click Block Backup Wizard.
The Block Backup Wizard window will appear.
By default, the wizard lets you select an existing job from the drop-down list and edit it. If you want to create a new block backup job, click the New Job button in the lower right corner.
In the Select Source step, choose the volumes you want to include in the backup. Click Next.
Tip. If you want to perform a Bare Metal Recovery (BMR) backup which allows for further recovery of the entire machine, select the BMR volume.
For more information, see Bare Metal Recovery (BMR) Backup and Bare Metal Recovery (BMR) Restore.
In the Select Destination step, choose the destination for the backup job. Click Next.
In the Job Options step, define the job options for the backup job. See details below.
See also. For more information about block backup job options, see Job Options for Block Backup.
In the final Save step, you must enter the Job Name (max. 16 characters) and specify the Job Folder to store the job in (the default folder is SS_DEFAULT). You may also add a comment to the job definition or set up a schedule (See Scheduling a Block Backup Job).
In the Job Schedule dialog, you can also schedule Archive jobs for the Block backup.
Important. Before using this functionality, read the Archive section in the Backup chapter.
Catalogic DPX offers a variety of job options for Block backup. All of them are available from both interfaces, but the access thereto may differ. See details for each interface below.
In the web interface, backup job options are defined in the Advanced Options section. To access them, do the following:
Go to Job Manager in the sidebar.
Open an already existing backup job. Or create a new Block backup job, by clicking the New Backup Job button in the upper right corner and then selecting Backup Type Block.
Go to the Advanced Options section at the bottom (scroll down if necessary) and expand it. Click any of the following section headers to expand it. Each field and the available choices are explained below.
Tells DPX to run a consistency check on SQL Server before backing up a SQL Server database. The check runs three utilities that Microsoft recommends before a backup: DBCC CHECKDB, DBCC CHECKALLOC, and DBCC CHECKCATALOG.
This option controls the truncation of SQL Server Transaction Logs during a Block backup. This option must be set to On in order to enable Point-in-Time restore.
This option controls the truncation of Microsoft Exchange Logs during a Block backup.
Determines whether DPX will back up Exchange DAG from a passive node or an active node.
For Oracle backups, this option determines whether to synchronize the RMAN catalog after the job is completed. Yes is the default.
This section includes the following radio button selection:
Disable File History
Process File History on Local Client (default selection)
Process File History on Master Server
This section includes the following options:
Task Data Retry Count (default: 5)
Task Retry Interval (default: 1)
Throttle (default: 0)
Resolution Auto Cancel Timeout [Minutes] (default: 180)
Resolution Retry Count (default: 3)
Wait Interval Between Replies [Minutes] (default: 1)
Data Transfer Auto Cancel Interval [Minutes] (default: 0)
And one toggle:
Backup IA Mapped Drives (default: disabled)
The Notification Options section controls who receives messages pertaining to the current job when it is run.
This section includes the Job e-mail notification toggle.
Subject
The subject of your message. The subject line usually contains a combination of straight text and variable elements. Variables, which must begin with %, are replaced with actual corresponding values. If you enclose variables in double quotation marks, those variables are treated as literal values. You can embed the following variables:
%JOBNAME
%JOBID
%JOBTYPE
%RC
Use %RC to include the return code in the message for this run of the job, when applicable.
To
The email address of the primary recipient of your message. Only one “To” address is permitted.
Cc
Carbon Copy. The email address(es) of the secondary recipient(s) of your message. Use a semicolon to delimit multiple email addresses.
Bcc
Blind Carbon Copy. The email address(es) of the secondary recipient(s) not identified to other recipients. Use a semi-colon to delimit multiple email addresses.
Note. The following characters are invalid in all fields: <, >, ;, and '.
Note. DPX emailing must be enabled when you first configure your Enterprise. At that time, you supply general system information, including SMTP Host Name and SMTP Port. See the Administrator E-mail Settings section.
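As an illustration of how the %-variables in the Subject line expand, consider the following sketch. The substitution logic is a simplified assumption (the real expansion, including the double-quote literal rule, is internal to DPX), and the job values are invented for the example:

```python
# Simplified sketch of %-variable expansion in a notification subject line.

def expand_subject(template, values):
    out = template
    for name, value in values.items():
        out = out.replace("%" + name, str(value))
    return out

subject = expand_subject("Job %JOBNAME (id %JOBID, type %JOBTYPE) ended, RC=%RC",
                         {"JOBNAME": "NIGHTLY_BLK", "JOBID": 1234567,
                          "JOBTYPE": "BLOCK", "RC": 0})
assert subject == "Job NIGHTLY_BLK (id 1234567, type BLOCK) ended, RC=0"
```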
Enter the name of a script to execute prior to the actual job.
Basic usage: <script>@<node_name> <argument_list>
The action to be taken if the Pre-Job Script fails to complete successfully:
Run Job/Run Post-Job Script
Skip Job/Run Post-Job Script
Skip Job/Skip Post-Job Script
The action to be taken if the Job fails to complete successfully:
Run Post-Job Script
Skip Post-Job Script
Enter the name of a script to execute after the actual job.
Basic usage: <script>@<node_name> <argument_list>
See also. For detailed information about pre- and post-job scripts, including all valid definitions, see Pre-Scripts and Post-Scripts.
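For illustration, a filled-in Pre-Job Script field might look like the following. The script name, node name, and arguments are hypothetical examples, not values from this documentation:

```
prejob_quiesce.bat@FILESERV01 /verify /quiet
```

Here prejob_quiesce.bat is a script on the node FILESERV01, and the items after the node name are passed to the script as its argument list.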
In the desktop interface, backup jobs are defined exclusively through the Block Backup Wizard. See Creating a Block Backup Job. The penultimate step of the Wizard, the Job Options screen, contains five tabs. Each of them is explained below.
Task Data Transfer Retry Count
Controls the number of checkpoint retries of an interrupted Block backup. The retries attempt to resume the job from the last successfully backed-up data block. Enter the number of checkpoint restart attempts.
Task Retry Intervals (Minutes)
Determines how long to wait before retrying failed tasks.
When a task fails (due to permission problems, open files, interim job changes, etc.), it waits the number of minutes specified in this field before attempting that task again. Because the same failure might occur if the task is retried too soon, it is better to allow some time for an error to be corrected before retrying the task. A task is only retried once. Failing tasks appear in error message lists in the Job Log. All tasks are subject to retry.
Throttle
Enter a value in KB/s (kilobytes per second) to set the maximum transmission rate per backup task. The value 0, the default, allows the task to use the maximum bandwidth available. See Catalogic DPX Block Data Protection in the DPX 4.9.x User’s Guide.
Resolution Auto Cancel Interval (Minutes)
This option comes into play when DPX attempts to retry a failed snapshot. If the retry is unresponsive, DPX initiates job auto-cancel after this interval, in minutes, has lapsed.
Resolution Retry Count
Determines how many times to retry a failed snapshot attempt.
Wait Interval Between Retries (Minutes)
Determines the amount of time (in minutes) to wait between retry attempts for a failed snapshot attempt.
Data Transfer Auto Cancel Interval (Minutes)
This option comes into play if a job does not get an indication of “active” status during the data transfer phase of the job. DPX initiates job cancellation after this interval, in minutes, has lapsed.
Backup IA-Mapped Drive
This option backs up IA-mapped drives from DPX snapshots. Note that the default setting is No; change it to Yes to back up these drives.
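As a back-of-the-envelope illustration of how the Throttle value above bounds transfer time per task (the 20 GB and 5120 KB/s figures are arbitrary examples, not DPX defaults):

```python
# Estimate the minimum transfer time for one backup task under a throttle.
changed_mb = 20 * 1024                       # 20 GB of changed blocks, in MB
throttle_kb_s = 5120                         # Throttle value: 5120 KB/s (~5 MB/s)
seconds = changed_mb * 1024 / throttle_kb_s  # total KB divided by KB/s
assert seconds == 4096.0                     # about 68 minutes at this rate
```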
MSSQL DB Consistency Check
Tells DPX to run a consistency check on SQL Server before backing up a SQL Server database. The check runs three utilities that Microsoft recommends before a backup: DBCC CHECKDB, DBCC CHECKALLOC, and DBCC CHECKCATALOG.
Backup and Truncate SQL Logs
This option controls the truncation of SQL Server Transaction Logs during a Block backup. This option must be set to Yes to enable Point-in-Time restore.
Exchange DAG Passive Copy
Determines whether DPX will back up Exchange DAG from a passive node or an active node.
Truncate Exchange Logs
This option controls the truncation of Microsoft Exchange Logs during a Block backup.
Oracle RMAN Cataloging Control
For Oracle backups, this option determines whether to synchronize the RMAN catalog after the job is completed. Yes is the default.
NDMP File History Handling
Controls file history generation for NDMP and Block backup tasks. For Block backups, recovery through the use of Instant Access provides for granular file-level restore regardless of whether file history was generated or not. Instant Access allows Block backups to be run very frequently by eliminating the need to include file histories during backup. For information on using Instant Access for file-level restore, see Instant Access as a File History Alternative in the DPX 4.9.x User’s Guide.
Enable NDMP Server Logging
Controls the routing of NDMP server-generated log messages to the job log file.
Additional NDMP Environment
This option allows you to introduce any additional NDMP environment variables that are necessary for the backup task. Specify your environment variables as an ASCII string of name and value pairs, using the syntax env1name=value,env2name=value,...
Note. Syntax validation is not performed on the specified value at job definition time, but rather at run time. Only valid entries are added to the NDMP operation environment.
Note. Alternative syntaxes, e.g. env1name value;env2name value;... (semicolon-delimited, no equal sign) or env1name valueenv2name value... (no delimiter, no equal sign), may be displayed in the interface. However, for the sake of clarity, the env1name=value,env2name=value,... form is strongly recommended.
Attention! Do not specify any of the following NDMP environment variables in your variable string because DPX controls these specifically:
BASE_DATE
DEBUG
DIRECT
DUMP_DATE
EXTRACT
FILES
FILESYSTEM
HIST
LEVEL
PREFIX
RECOVER_FILEHIST
SINCE_TIME
TYPE
UPDATE
VERBOSE
Specifying the variables above may cause unexpected results due to the unpredictability of the order in which they are evaluated.
NDMP servers from different vendors may support different NDMP environment variables. Except for a few well-known environment variable names, there is currently no standardized set of such variables. This option allows you to add environmental variables specific to your NDMP server.
Enter the name of a script to execute prior to the actual job.
Basic usage: <script>@<node_name> <argument_list>
The action to be taken if the Pre-Job Script fails to complete successfully:
Run Job/Run Post-Job Script
Skip Job/Run Post-Job Script
Skip Job/Skip Post-Job Script
The action to be taken if the Job fails to complete successfully:
Run Post-Job Script
Skip Post-Job Script
Enter the name of a script to execute after the actual job.
Basic usage: <script>@<node_name> <argument_list>
See also. For detailed information about pre- and post-job scripts, including all valid definitions, see Pre-Scripts and Post-Scripts.
Two sets of mail information can be specified:
The email address of the primary recipient of your message. Only one “To” address is permitted.
Carbon Copy. The email address(es) of the secondary recipient(s) of your message. Use a semicolon to delimit multiple email addresses.
Blind Carbon Copy. The email address(es) of the secondary recipient(s) not identified to other recipients. Use a semi-colon to delimit multiple email addresses.
The subject of your message. The subject line usually contains a combination of straight text and variable elements. Variables, which must begin with %, are replaced with actual corresponding values. If you enclose variables in double quotation marks, those variables are treated as literal values. You can embed the following variables:
%JOBNAME
%JOBID
%JOBTYPE
%RC
Use %RC to include the return code in the message for this run of the job, when applicable.
Selecting this check box option temporarily disables notifications for the job without deleting the currently defined job notification data.
Note. The following characters are invalid in all fields: <, >, ;, and '.
Note. DPX emailing must be enabled when you first configure your Enterprise. At that time, you supply general system information, including SMTP Host Name and SMTP Port. See Editing an Enterprise Configuration.
The System State feature of Windows can be backed up using DPX. System State comprises the following components:
Active Directory (domain controllers only)
Boot files
COM+ class registration database
Registry
System volume (SYSVOL) (domain controllers only)
Certificate server (Certificate Authority only)
System protected files
Performance counter configuration
IIS metabase
Cluster Quorum
The Backup window displays System State and all its components. You can select System State for backup as if it were an ordinary disk. You cannot select individual components. System State components are always backed up collectively, never individually.
Consider backing up the entire node, which includes the BMR object. You can use BMR to recover the entire node, including System State and its components.
For information about backing up and restoring Active Directory, see Microsoft article
For DPX, User Profile is not a component of the Registry within System State. User Profile is part of the System Table. As such, it can be individually backed up and restored. For more information, see .
The System Table is a collection of the following Windows components, which can be backed up and restored individually:
Event Logs
Removable Storage Manager
User Profiles
Terminal Server
Windows Management Instrumentation Repository
Back up these components using the ordinary backup job definition procedures: clicking the components to back up in the Backup source tree structure.
The following is a sample section of a backup definition window showing both System State and System Table selected in the desktop interface:
The components listed under System State cannot be selected individually, but components listed under System Table can be selected individually. System Table is a DPX concept, unlike System State, which is a Windows concept.
Editing a Block backup job allows you to change some parameters of an already defined job (Save), or to create a new job based on the definition of an existing one (Save as…). The procedure is similar to creating a backup job.
In the main web interface view, go to Job Manager in the sidebar. Then select the block backup job you want to edit from the list.
Tip. You can control this view by ordering items by Job Name, Type, Created Date, Description, or Job Folder. Just click the column header to enable ascending/descending ordering.
Note also the Items per page value and navigation buttons at the bottom of the list, which can be useful when managing the display of many jobs.
Save the job by clicking Save (the changes will be saved under the current job’s name, overwriting previous settings), or Save As (you will be prompted to provide a new name for the job).
Restrictions. The new job name must be unique throughout the entire DPX, regardless of the folder the job is stored in.
In the desktop interface, block backups are defined and edited exclusively through the block backup wizard.
Go to the Backup tab.
From the Backup Modes section in the side panel, choose Block.
In the Job Tasks section of the task panel, find and click Block Backup Wizard.
The Block Backup Wizard window will appear:
Select an existing Block backup job you want to edit and click Edit Job.
Restrictions. The new job name must be unique throughout the entire DPX, regardless of the folder the job is stored in.
Note. If you select Cancel, the job is still saved, but it is not run and the retention period remains at the default 90 days.
Disabling the change journal requires correct permissions. If attempting to disable the change journal displays an error, read the knowledge base article .
Click Finish to save changes made to the job. The Final Job Run Settings dialog box will appear. You may choose to run the job immediately or save it without running. You may also change the retention period (default: 90 days).
Click Finish. The Final Job Run Settings dialog box will appear. You may choose to run the job immediately or save it without running. You may also change the retention period (default: 90 days).
See also. For information about restoring System State, see in the DPX 4.9.x Reference Guide.
Make all required changes to the job definition. The workflow is the same as in .
Attention! Changing a Block backup job's name forces DPX to create a new base backup when the job is run, rather than an incremental or a differential backup. See also .
Make all required changes to the job definition. The workflow is the same as in .
In the final Save screen, you may either save the changed job under the same name, or specify a new name to save the changes as a new job. You may also specify the Job Folder to store the job in (the default folder is SS_DEFAULT), as well as add a comment to the job definition or set up a schedule (See ).
Attention! Changing a Block backup job's name forces DPX to create a new base backup when the job is run, rather than an incremental or a differential backup. See also .
Click Finish. The Final Job Run Settings dialog box will appear. You may choose to run the job immediately or save it without running. You may also change the retention period (default: 90 days).
Toggle on
Performs the consistency check.
Toggle off
Does not perform the consistency check.
Toggle on
SQL Server transaction logs are truncated on the source database server after the backup completes. To find the backed-up SQL logs, refer to the message “BACKUP LOG <database-instance> TO DISK...” in the Job Log or to Event 18265 in the Application Event Log.
Toggle off
SQL Server transaction logs are not truncated and will therefore continue to grow on the source database server. To truncate transaction logs, run SQL Server maintenance on the source machine. No is the default.
Toggle on
After the backup is complete, DPX deletes the old Exchange logs. Yes is the default.
Toggle off
DPX does not delete any Exchange logs.
Toggle on
Default setting. Back up Exchange DAG from a passive node.
Toggle off
Back up Exchange DAG from an active node.
Toggle on
Synchronize the RMAN catalog after the job completes.
Toggle off
Do not synchronize the RMAN catalog after the job is completed. If you choose this option, the job is not cataloged in RMAN.
Toggle on
The notification is sent as specified below the toggle (additional fields will appear – see below).
Toggle off
The notification is sent to the default e-mail address configured in the Administrator E-mail Settings section.
No
Does not perform the consistency check.
Yes
Performs the consistency check.
Yes
SQL Server transaction logs are truncated on the source database server after the backup completes. To find the backed up SQL logs, refer to the message “BACKUP LOG <database-instance> TO DISK...” in the Job Log or to Event 18265 in the Application Event Log.
No
SQL Server transaction logs are not truncated and will therefore continue to grow on the source database server. To truncate transaction logs, run SQL Server maintenance on the source machine. No is the default.
Yes
Default setting. Back up Exchange DAG from a passive node.
No
Back up Exchange DAG from an active node.
Yes
After the backup is complete, DPX deletes the old Exchange logs. Yes is the default.
No
DPX does not delete any Exchange logs.
Yes
Synchronize the RMAN catalog after the job completes.
No
Do not synchronize the RMAN catalog after the job is completed. If you choose this option, the job is not cataloged in RMAN.
Disable File History
Disables NDMP server file history generation.
Process File History on Local Client
Enables NDMP server file history generation and processes the file history data on the NDMP client node. This is the default.
Process File History on Master Server
Enables NDMP server file history generation but transmits the file history data to the master server node for processing.
Yes
All NDMP server log messages will be routed to the master server’s job log file. Yes is the default.
No
The NDMP server log messages will be logged locally in the NDMP client node log file instead of in the master server’s job log file.
Output Email
Specifies that the subsequent fields apply to reports that are sent when a job has completed.
Operator Email
Specifies that the subsequent fields apply to mount requests, error messages, and informational messages that are sent during a job.