DPX 4.10 Guide
Pre-Scripts and Post-Scripts

This topic discusses how to use scripts that run before a job (Pre-Job Script) or after a job (Post-Job Script). It also discusses how to use scripts in conjunction with DPX Block Data Protection to support applications that DPX Block Data Protection does not support directly.

General Remarks

DPX allows you to automatically run scripts on UNIX, Linux, OES Linux, or Windows nodes before or after a job. On UNIX and Linux, scripts may be shell scripts (Bourne, csh, or Korn shell), Perl scripts, or any other program on your system that returns an exit status. On Windows, the script may be a batch file, a Perl file, a single system command, a program you have written, or anything else you can execute from the DOS command line that runs in text mode.

DPX does not parse the script to ensure that the syntax is correct. DPX merely passes it to the operating system. You can use scripts to change the state of a database, gather information at the time of a job, or perform manual tasks. Following are some common situations in which a script can be useful:

  • To keep a number of jobs from running simultaneously (see the lock-file sketch after this list).

  • To retrieve the return code of the job.

  • To launch a second job only if the first job succeeded.
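
For the first of these scenarios, a pre-job script can check for a lock file and refuse to run if one already exists. The following is only a minimal sketch; the script name checklock.sh and the lock file path /tmp/dpx-nightly.lock are hypothetical and not part of DPX:

#!/bin/sh
# checklock.sh - hypothetical pre-job script; fails if another job holds the lock
LOCKFILE=/tmp/dpx-nightly.lock
if [ -e "$LOCKFILE" ]; then
    echo "Another job is already running; skipping this job."
    exit 1    # nonzero exit code: DPX treats the pre-job script as failed
fi
touch "$LOCKFILE"
exit 0

A matching post-job script would remove the lock file so that the next scheduled run can proceed.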

Scripts may be located either on the master server or on any of your client nodes. The default is the master server. To use a script on a client node, use the format scriptname@hostname.

Important. For a script to be called by DPX, it must be located in the correct directory (or folder):

  • for the master server, this is /opt/DPX/sched/scripts;

  • for a Linux DPX client installed under the default directory, this is /opt/DPX/sched/scripts;

  • for a Windows DPX client installed under the default directory, this is C:\Program Files\DPX\sched\scripts.

Valid Syntax

Following are three separate examples of acceptable specifications for scripts on remote nodes:

downdb.sh@sundb
envget.pl@192.0.2.24
report.bat@node7.hq.uncle.gov

The hostname of the client node is either the name by which the node is known (perhaps through a DNS server) or the IP address, not necessarily the name assigned to the node during configuration of your Enterprise.

The ownership and permissions of the script must be appropriate for the account under which DPX runs. In other words, if you are logged in as that account, you should be able to execute the script successfully from the command line.
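
For example, on a UNIX or Linux node you can set the ownership and permissions and then test the script from the command line. The account name dpxuser below is hypothetical; use the account under which DPX actually runs on that node:

cd /opt/DPX/sched/scripts
chown dpxuser downdb.sh     # owner matches the account DPX runs under
chmod 755 downdb.sh         # make the script readable and executable
./downdb.sh                 # confirm the script runs successfully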

You can pass arguments to a script at the time you set the Source Options. The following examples pass the parameters disk1 and 5 to the unmount.sh script:

unmount.sh disk1 5 (on master server)
unmount.sh@sundb disk1 5 (on client node sundb)

Note. The elements of the argument list are separated by spaces, as shown in the example above.
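
Inside the script, the arguments are read positionally. A minimal sketch of what a script such as unmount.sh might look like on UNIX (the script body is illustrative only):

#!/bin/sh
# unmount.sh - illustrative only; receives the arguments passed by DPX
DISK=$1       # first argument, e.g. disk1
RETRIES=$2    # second argument, e.g. 5
echo "Unmounting $DISK (up to $RETRIES attempts)"
exit 0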

All valid forms for pre- and post-job script definitions are as follows:

  • <script> – run it on the local node

  • <script> <argument_list> – run it on the local node with arguments

  • <script>@<local_node> – run it on the local node

  • <script>@<local_node> <argument_list> – run it on the local node with arguments

  • <script>@<remote_node> – run it on a remote node

  • <script>@<remote_node> <argument_list> – run it on a remote node with arguments

A script may call another script. This may be useful in a number of circumstances:

  • You may need to execute script commands under a different user ID than the one under which DPX is running on the node. On UNIX, the su or rsh commands are ways to do this.

  • You may want to use a modular structure with common elements in different job scripts.

  • You may want to run a script with parameters assigned dynamically by a calling script.

Important. When using a script to call another script or program, make sure to provide the absolute path, as the script’s working directory when called by DPX will most likely be different from the script’s location.
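
A minimal sketch of such a wrapper on UNIX, which calls another script under a different user ID using an absolute path; the inner script stopdb.sh and the oracle account are hypothetical:

#!/bin/sh
# wrapper.sh - hypothetical; runs the database shutdown script as user oracle
su - oracle -c /opt/DPX/sched/scripts/stopdb.sh
exit $?    # pass the inner script's exit code back to DPX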

When a script initiates a process in the background that continues after the script completes, you must direct any output from the background process away from the script. If you do not, the job waits for the output, causing a hang condition. For example, the following pre-job script on UNIX keeps the job from executing:

#!/bin/sh
# Example of a script which does not redirect output properly
tail -f /etc/passwd &
exit 0

There are two ways to fix this problem:

  1. You can use the standard file descriptors to redirect all output.

    Example:

    #!/bin/sh
    # Example of a script which does redirect output properly
    tail -f /etc/passwd >/opt/bex/logs/tempfile 2>&1 &
    exit 0
  2. You can use the at or batch commands to queue the command.

    Example:

    #!/bin/sh
    # Example of a script which does redirect output properly
    echo 'tail -f /etc/passwd' | batch
    exit 0

In both examples, the output from the tail command is directed away from the script and retained until the background process is terminated later. The script can then exit, and the job starts normally.

If a script hangs, kill the script process manually outside the management console (through the operating system).

If a pre-job script ends with a nonzero exit code (ERRORLEVEL on Windows), DPX considers the script to have failed. The If Pre-Script Fails option determines what action DPX takes: you can direct DPX either to run the job anyway or to skip it.

If a post-job script ends with a nonzero exit code, this does not affect the completion status of the job. If you need to trap such error conditions, incorporate the handling into the script itself, for example, by sending an email to the administrator.
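
For example, a post-job script can trap its own errors and notify the administrator rather than relying on the job status. The copy command and mail address below are illustrative only:

#!/bin/sh
# postjob.sh - illustrative; notify the administrator if the post-job work fails
if ! cp /data/app.conf /backup/app.conf.copy; then
    echo "Post-job copy failed on $(hostname)" | mail -s "DPX post-job script error" admin@example.com
fi
exit 0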

Job Definition Scripting Fields

In the job definition Source Options pane (for the web interface) or dialog (for the desktop interface), four fields instruct DPX how to behave before, during, and after a job runs. They are:

  • Pre-Job Script field

  • If Pre-Script Fails field

  • If Job Fails field

  • Post-Job Script field

Editing Job Definition Scripting Fields

To edit the job definition scripting fields, do the following:

In the web interface:

  1. In the job definition view, scroll down to Advanced Options and click the section to expand it.

  2. Find the Script Options subsection and click it to expand it.

  3. Edit the fields as needed. See the valid syntax for calling scripts above.

  4. Continue with defining the job and select Save to save the changes.

In the desktop interface:

  1. Go to the Backup or Restore tab, depending on the type of the job definition you want to edit.

  2. Open the Set Job Source Options dialog by doing one of the following:

    • From the Other Tasks section in the left-hand side pane, choose Set Source Options.

    • From the menu bar, select Backup or Restore, respectively, then select Set Source Options.

    • Use the [Ctrl + I] keyboard shortcut.

  3. Depending on the job type, the tabs displayed within the dialog may differ. Select the Script tab.

  4. Edit the fields as needed. See the valid syntax for calling scripts above.

  5. Continue with defining the job and click OK to save the changes.

Pre-Job Script field

Tells DPX to run a script prior to an operation. On UNIX, the script may be a shell script (Bourne, csh, or Korn shell), a Perl script, or a script written in any scripting language on your system. On Windows, the script may be a batch file, a Perl file, a single system command, or anything executable from the DOS command line that runs in text mode.

Use this option if you want DPX to perform certain tasks, such as shutting down a database, before it performs the operation. Use the If Pre-Script Fails option to specify how DPX should behave if the pre-job script fails. If you use a pre-job script, it must be located in the SCHED/SCRIPTS directory under the directory in which DPX is installed on UNIX, or in SCHED\SCRIPTS under the directory in which DPX is installed on Windows.

Note. For Windows, the bexit command passes the exit status of batch execution to DPX. If this command is omitted, DPX receives an undefined exit status.

Sample DOS batch file using the bexit command

rem ** This is an example script for Windows **

rem Display the current date and time.
date /T
time /T

rem Run an executable named script_test.
script_test

rem Return the status of the execution.
rem This example assumes the user wants to test for and return error level 99.
if 99 GEQ 1 (
bexit.exe 99
exit
)
bexit.exe 0

If Pre-Script Fails field

Determines how DPX behaves if the pre-job script fails.

Run Job/Run Post-Job Script

Runs both the job and the post-job script if the pre-job script fails.

Skip Job/Run Post-Job Script

Skips the job and runs the post-job script.

Skip Job/Skip Post-Job Script

Skips both the job and the post-job script.

If Job Fails field

Determines whether DPX runs the post-job script if the job fails.

Run Post-Job Script

Runs the post-job script if the job fails.

Skip Post-Job Script

Skips the post-job script if the job fails.

Post-Job Script field

Tells DPX to run a script after the operation. On UNIX, the script may be a shell script (Bourne, csh, or Korn shell), a Perl script, or a script written in any scripting language on your system. On Windows, the script may be a batch file, a Perl file, a single system command, or anything executable from the DOS command line that runs in text mode.

Use this option if you want DPX to perform certain tasks, such as bringing up a database, after it performs an operation. Use the If Job Fails option to determine whether the post-job script runs if the job fails. If you use a post-job script, it must be located in SCHED/SCRIPTS (UNIX) or SCHED\SCRIPTS (Windows) under the directory in which DPX is installed.

Note. For Windows, the bexit command passes the exit status of batch execution to DPX. If this command is omitted, DPX receives an undefined exit status.

For a sample DOS batch file using the bexit command, see the Pre-Job Script field section above.

Retrieving Backup Information Using Pre-Script and Post-Script variables

DPX enables you to use variables in a pre-job or post-job script to retrieve general information about a backup job. The information is obtained by the scripts at runtime.

Variables

The following keywords, each preceded by a percent sign, are passed as parameters to the script. Within the script, the corresponding positional arguments are then used to read their values.

JOBNAME

The name of the job.

JOBID

The unique decimal ID of the job.

RC

The return code of the job.

JOBLOG

The filename of the main job log.

BACKUPTYPE

The backup type: BASE for base backups, INCR for incremental backups, DIFR for differential backups.

The following example is based on a Microsoft Windows batch script. Save the following as keyword_test.cmd in the sched\scripts folder under the DPX installation directory (for a default Windows installation, C:\Program Files\DPX\sched\scripts).

Example:

rem keyword_test.cmd
@echo "Name: " %1
@echo "ID: " %2
@echo "Return Code: " %3
@echo "Log File: " %4
@echo "Job Type: " %5

In the task panel, select Other Tasks > Set Source Options > Script Options and in the Pre-job script or Post-job script field, enter:

keyword_test.cmd %JOBNAME %JOBID %RC %JOBLOG %BACKUPTYPE

DPX then substitutes each variable with the value retrieved at runtime and writes the values to the job log file.
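
On a Linux client, a sketch of an equivalent script might look like the following, assuming the same variables are passed as positional arguments; the script name keyword_test.sh is hypothetical and would be placed in /opt/DPX/sched/scripts:

#!/bin/sh
# keyword_test.sh - hypothetical Linux counterpart of keyword_test.cmd
echo "Name: $1"
echo "ID: $2"
echo "Return Code: $3"
echo "Log File: $4"
echo "Job Type: $5"
exit 0

It would be entered in the Pre-job script or Post-job script field in the same way: keyword_test.sh %JOBNAME %JOBID %RC %JOBLOG %BACKUPTYPE.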


Warning! A post-job script that fails (i.e., returns a nonzero exit code) passes its failure code on, and the job fails with the same return code. For important additional information about post-job script return codes, see knowledge base article 42352.