Destination Site Procedures for Linux Remote Seeding

Restore the Base Image Backup at the Linux Destination Node

This procedure applies to DPX 4.2 or later. If you are using a release earlier than DPX 4.2, use the remote seeding process described in knowledge base article 42036.

At the destination site, use the spare Linux machine as the destination node. Ensure that this machine is defined as a node, and that it has a spare volume that matches the volume size of the Linux primary.

To restore the base Image backup to the spare Linux client machine:

  1. Ensure that restore partitions at the destination are not locked by an application or process.

Note. For this example, linux.spare is the Linux spare client, /lvm is the backed up volume, and /lvmtemp is the destination volume (spare partition) for the restored data.

  2. In the management console, create an Image restore job to restore data from the backup media to the volume /lvmtemp on the linux.spare machine.

  3. Click Save Restore Job on the task pane to name and save this job.

Note. For this example, use imagerestest for the job name.

  4. Click the Run Restore Job task pane option, then click OK to run the job.

Note. Each volume backed up from the remote primary must be restored to its own volume on the spare machine.

Create Linux DPX Block Data Protection Base Backups on the NetApp Storage System Manually

Perform the following procedure for each selected volume at the destination site. To fill in parameter values required for this procedure, re-use parameter value information gathered in the first stage of remote seeding, as discussed in Transport Linux Media to the Destination Site.

To create the Linux Block Data Protection base backups on the NetApp storage system manually:

  1. Log in to the NetApp storage system (for example, via telnet).

  2. Run the following SnapVault command, substituting the appropriate values for the <parameters>:

snapvault start -S <SpareClient's IP or resolvable hostname>:<SpareClient's Volume> <volumeonfiler>/[SnapVaultJobname]<Primary'sLogicalNodeName>@{volumesernumber}

If the command runs successfully, the following message appears:

qa-f270b> snapvault start -S rh55-231:/lvmtemp /vol/volbkup/[SNAPBAKTEST]linux_client@{6EEED25D5}  
Snapvault configuration for the qtree has been set.  
Transfer started.  
Monitor progress with "snapvault status" or the snapmirror log.  
qa-f270b>

Note. In the command example above, the value entered for the <Primary'sLogicalNodeName> parameter corresponds to the logical node name linux.client, but the value shown in the executed command is linux_client. When running this command, replace the dot in the logical node name with an underscore.
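The name substitution and command assembly described above can be made mechanical. The following POSIX shell sketch uses the example values from this procedure (all of them placeholders to replace with your own); it derives the underscore form of the logical node name with `tr` and echoes, rather than executes, the resulting snapvault start command:

```shell
# Example values from this procedure; substitute your own.
SPARE_CLIENT="rh55-231"        # spare client's resolvable hostname or IP
SPARE_VOLUME="/lvmtemp"        # restored volume on the spare client
FILER_VOLUME="/vol/volbkup"    # destination volume on the storage system
JOB_NAME="SNAPBAKTEST"         # SnapVault job name
NODE_NAME="linux.client"       # primary's logical node name
VOL_SERIAL="6EEED25D5"         # volume serial number

# Replace the dot in the logical node name with an underscore.
QTREE_NODE=$(printf '%s' "$NODE_NAME" | tr '.' '_')

# Echo the assembled command for review; run it on the storage system.
echo "snapvault start -S ${SPARE_CLIENT}:${SPARE_VOLUME} ${FILER_VOLUME}/[${JOB_NAME}]${QTREE_NODE}@{${VOL_SERIAL}}"
```

Echoed with these values, the command matches the example execution shown above.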

  3. After the qtree is set and initialized, periodically check the status of the SnapVault job using the following command:

snapvault status -l <volumeonfiler>/[SnapVaultJobname]<Primary'sLogicalNodeName>@{volumesernumber}

Note. In this example, the Transferring status appears in the output:

qa-f270b> snapvault status -l /vol/volbkup/[SNAPBAKTEST]linux_client@{6EEED25D5}
Snapvault secondary is ON.
Source: rh55-231:/lvmtemp
Destination: qa-f270b: /vol/volbkup/[SNAPBAKTEST]linux_client@{6EEED25D5}
Status: Transferring
Progress: 3128580 KB
State: Uninitialized
Lag: _
Mirror Timestamp: _
Base Snapshot: _
Current Transfer Type: Initialize
Current Transfer Error: _
Contents: Transitioning
Last Transfer type: _
Last Transfer Size: _
Last Transfer Duration: _
Last Transfer From: _
  4. Keep checking the status, and do not proceed to the next process until the Snapvaulted state appears:

qa-f270b> snapvault status -l /vol/volbkup/[SNAPBAKTEST]linux_client@{6EEED25D5}  
Snapvault secondary is ON.  
  
Source: rh55-231:/lvmtemp  
Destination: qa-f270b: /vol/volbkup/[SNAPBAKTEST]linux_client@{6EEED25D5}  
Status: Idle  
Progress: _  
State: Snapvaulted  
Lag: 00:06:52  
Mirror Timestamp: Tue Aug 9 16:47:11 EDT 2011  
Base Snapshot: qa-f270b<0084186047>_volbkup-base.248  
Current Transfer Type: _  
Current Transfer Error: _  
Contents: Replica  
Last Transfer type: Initialize  
Last Transfer Size: 78848 KB  
Last Transfer Duration: 00:00:11  
Last Transfer From: rh55-231:/lvmtemp  
qa-f270b>
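Rather than re-running the status command by hand, the polling in the last two steps can be scripted. The sketch below is a minimal example that only parses the State field from captured `snapvault status -l` output; the sample text is abbreviated from the output above, and in practice the status text would come from a live session with the storage system (ssh, rsh, or a console capture, depending on what your environment supports):

```shell
# Extract the State field from `snapvault status -l` output so a wrapper
# script can poll until the qtree reaches the Snapvaulted state.

get_state() {
    # Read status output on stdin; print the value of the first "State:" line.
    awk -F': *' '/^State:/ {print $2; exit}'
}

# Abbreviated sample of the status output shown in this procedure.
sample_output='Snapvault secondary is ON.
Source: rh55-231:/lvmtemp
Status: Idle
State: Snapvaulted
Lag: 00:06:52'

state=$(printf '%s\n' "$sample_output" | get_state)
echo "State: $state"
```

In a polling wrapper, the same function could gate a loop, for example: `until [ "$(run_status_command | get_state)" = Snapvaulted ]; do sleep 60; done`, where run_status_command stands for whatever remote invocation of the status command your environment allows.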

Schedule Linux Block Incremental Backups

After seeding the NetApp storage system with the initial base backup for the selected volume, catalog the base backup, and then schedule regular incremental backups.

To schedule Linux Block incremental backups:

  1. Launch the management console.

Note. For this example, /lvm on the linux.client primary node is the volume selected for backup, and /vol/volbkup is the selected destination volume on the qa-f270b NetApp storage system.

  2. Define a new Block backup job to back up all selected volumes to their destination NetApp storage systems.

Note. For this example, SNAPBAKTEST is the Block backup job’s name.

  3. Schedule and save the backup job.

Note. As defined in the schedule, this job runs as an incremental backup but is cataloged as a base backup.
