DPX 4.11 Guide

Collecting Logs



There are several ways to generate log files from the Catalogic DPX environment. You can use either of the graphical interfaces or one of the built-in utilities available through SSH.

Note. The main purpose of collecting logs is to help diagnose issues, improve performance, and ensure system security. Logs provide valuable insights into the application’s behavior, allowing us to enhance user experience and maintain reliable service.

We recommend using log-collecting utilities upon request of the Catalogic Software Data Protection team.

Take the following steps to collect job logs in the DPX web interface:

  1. Go to the Job Monitor section.

  2. In the job list, click More Actions ⋯ next to the job whose logs you want to collect and select one of the following items:

    • Run instance log collection: downloads the log of the selected job. The log file is compressed and archived in .ZIP format. The .ZIP file name is instance-log-collection_<job_name>.zip.

    • Run job log collection: downloads a collection of logs for all jobs in the entire Catalogic DPX. The log files are compressed and archived in .TAR and .ZIP formats. The .ZIP file name is job-collection-<job_name>.zip. Typically, this task takes a few minutes to complete.

Take the following steps to collect job logs in the desktop interface:

  1. Open the Monitor Jobs tab or the Review Scheduled Jobs window from the task pane.

  2. Right-click a job whose status is Completed, Failed, or Canceled, and click one of the following options:

    • Collect Logs: collects and archives the logs of the selected job in .TAR format as log-collection-<timestamp>.tar, where <timestamp> represents the start time of the last repetitive job in UNIX time format.

    • Collect All Job Logs: collects and archives all job logs in .TAR format and stores them in the file Master-job-logs-only.tar.
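The <timestamp> in the archive name is a UNIX epoch time, so it can be converted to a human-readable date with standard tools. A minimal sketch (the file name and timestamp below are invented for illustration; the `date -d` syntax is GNU coreutils):

```shell
# Decode the UNIX-time <timestamp> embedded in a desktop-interface
# archive name such as log-collection-<timestamp>.tar.
# NAME is a hypothetical example, not a real archive.
NAME="log-collection-1718000000.tar"
TS=${NAME#log-collection-}   # strip the prefix
TS=${TS%.tar}                # strip the extension
date -u -d "@$TS"            # GNU date: print the timestamp as a UTC date
```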

You can find the resulting log archive files in the logs directory of the Catalogic DPX Master Server.

See also. For more information, refer to the following sections and documents:

  • the Command-line interface tab on this page;

  • the BEXCollect Usage article in the Catalogic Software Knowledge Base.

You can view a log file with any text editor, command-line or screen-oriented, that opens ordinary text files.

To enable the Node-Based Logging model, you must turn on the Boolean environment variable SSCOMMONLOG on each node. Module-Based logs are generated regardless of this setting.

See also. For more information about setting variables, see Setting Environmental Variables for Logging Model and BEXCollect in the Reference Guide.
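As a sketch only, enabling the variable for the current shell session might look like the following; the value "1" and the use of export are assumptions, and the Reference Guide is authoritative for the exact syntax and for persisting the setting on each node:

```shell
# Hypothetical sketch: enable node-based logging for the current shell
# session by exporting the Boolean variable named above. The value "1"
# and the export mechanism are assumptions, not confirmed DPX syntax.
export SSCOMMONLOG=1
env | grep '^SSCOMMONLOG='   # confirm the variable is set for child processes
```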

DPX provides log collection utilities, such as the DPX info collector and BEXcollect, for diagnostic purposes. These utilities gather informational files related to a specific job, along with other general information that may be required for diagnostics. If you contact Catalogic Software Data Protection Technical Support, you may be asked to run one of these utilities and send the results for analysis.

Running the DPX info collector

  1. Navigate to /opt/catalogic/scripts/ and run ./dpx_info_collector.sh. After a while, a .ZIP archive named dpx-info-collection.zip is created.

Running the BEXcollect utility

  1. Navigate to /opt/DPX/bin/

  2. Type one of the commands below.

| Command | Comment |
| --- | --- |
| ./bexcollect <job_id> | Use with the older module-based logging model |
| ./bexcollect -n <job_id> | The results will include common logs from the specified client node |
| ./bexcollect -x <client_node_name> | The results will encompass all logs from the specified client node (Windows only) |

Note. The 10-digit job ID can be found in the job details in the Job Monitor.

The result of bexcollect is a set of .tar files placed in the logs directory under the product directory on the Master Server. The logs directory contains two types of output files:

  • master-<jobid>.tar: the file for the Master Server.

  • <nodename>-<jobid>.tar: one file for each node involved in the job.
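As an illustration of this naming scheme, the following sketch builds a mock logs directory and lists every archive belonging to one job. Everything here is a stand-in: the directory, node names, and 10-digit job ID are invented, and on a real Master Server you would point LOGS_DIR at the product's logs directory instead.

```shell
# Mock the logs directory layout described above, then list all
# bexcollect archives for a single (made-up) job ID.
LOGS_DIR=$(mktemp -d)
JOB_ID=1234567890
touch "$LOGS_DIR/master-$JOB_ID.tar" \
      "$LOGS_DIR/node01-$JOB_ID.tar" \
      "$LOGS_DIR/node02-$JOB_ID.tar"
ls "$LOGS_DIR" | grep -- "-$JOB_ID\.tar$" | sort
```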

Both utilities require logging in to your DPX Master Server via SSH (see Connecting to DPX Master Server via SSH).

The options are further described in Catalogic Knowledge Base article BEXCollect Usage.
