Supported Platforms Requirements

Nutanix AHV

Disk attachment (Nutanix AHV)

Connection URL: https://PRISM_HOST:9440/api/nutanix/v3 (Prism Central or Prism Elements)

Note: When connecting via Prism Central, the same credentials are used to access all Prism Elements.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Prism Elements (and optionally Prism Central, if used) | 9440/tcp | API access to the Nutanix manager |
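Connectivity problems usually surface later as API timeouts, so it can help to verify a required port from the Node first. A minimal TCP reachability check in Python (the host name below is a placeholder for your environment):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host): verify Prism API reachability from the Node.
# port_open("prism.example.local", 9440)
```

The same check applies to any of the TCP port requirements listed in the tables in this document.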

OpenStack

Disk attachment (OpenStack)

Connection URL: https://KEYSTONE_HOST:5000/v3

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Keystone, Nova, Glance, Cinder | ports defined in the endpoints for the OpenStack services | API access to the OpenStack management services, using the endpoint type specified in the hypervisor manager details |
| Node | Ceph monitors | 3300/tcp, 6789/tcp | If Ceph RBD is used as the backend storage - used to collect changed-block lists from Ceph |

SSH transfer (OpenStack)

Connection URL: https://KEYSTONE_HOST:5000/v3

Note: You must also provide SSH credentials for all hypervisors detected during inventory synchronization.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | netcat port range defined in the node configuration (16000-16999/tcp by default) | Optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | If Ceph RBD is used as the backend storage - used for data transfer over NBD |

Virtuozzo

SSH transfer (Virtuozzo)

Connection URL: https://KEYSTONE_HOST:5000/v3

Note: You must also provide SSH credentials for all hypervisors detected during inventory synchronization.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | netcat port range defined in the node configuration (16000-16999/tcp by default) | Optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | If Ceph RBD is used as the backend storage - used for data transfer over NBD |

OpenNebula

Disk attachment (OpenNebula)

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Manager Host | XML-RPC API port (2633/tcp by default) | API access to the OpenNebula management services |

oVirt/RHV/OLVM

Disk attachment (oVirt/RHV/OLVM)

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |

Disk Image Transfer (oVirt/RHV/OLVM)

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 54322/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer (primary source) |
| Node | oVirt/RHV/OLVM manager | 54323/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer (fallback to the ImageIO Proxy) |

SSH Transfer (oVirt/RHV/OLVM)

Connection URL: https://MANAGER_HOST/ovirt-engine/api

Note: You must also provide SSH credentials for all hypervisors detected during inventory synchronization.

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 22/tcp | SSH access for data transfer |
| oVirt/RHV/OLVM hypervisor | Node | netcat port range defined in the node configuration (16000-16999/tcp by default) | Optional netcat access for data transfer |

Changed-Block Tracking (oVirt/RHV/OLVM)

Connection URL: https://MANAGER_HOST/ovirt-engine/api

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | oVirt/RHV/OLVM manager | 443/tcp | oVirt/RHV/OLVM API access |
| Node | oVirt/RHV/OLVM hypervisor | 54322/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer |
| Node | oVirt/RHV/OLVM manager | 54323/tcp | oVirt/RHV/OLVM ImageIO services - for data transfer |

Oracle VM

Export storage domain (Oracle VM)

Connection URL: https://MANAGER_HOST:7002

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | OVM manager | 7002/tcp | OVM API access |
| Hypervisor | Node | If the Node hosts the staging space: 111/tcp, 111/udp, 2049/tcp, 2049/udp, plus the ports specified in /etc/sysconfig/nfs - variables MOUNTD_PORT (TCP and UDP), STATD_PORT (TCP and UDP), LOCKD_TCPPORT (TCP), LOCKD_UDPPORT (UDP); otherwise check the documentation of your NFS storage provider | If the staging space (export storage repository) is hosted on the Node - NFS access |
| Node and hypervisor | shared NFS storage | Check the documentation of your NFS storage provider | If the staging space (export storage repository) is hosted on shared storage - NFS access |
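When the staging space is hosted on the Node, the NFS helper daemons pick dynamic ports unless they are pinned, which makes static firewall rules impossible. The fragment below pins the variables named in the table above; the port values themselves are arbitrary examples, not values required by the product:

```shell
# /etc/sysconfig/nfs - pin the NFS helper ports so firewall rules can be static
# (port values are examples - pick free ports allowed by your firewall policy)
MOUNTD_PORT=20048
STATD_PORT=20049
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
```

After changing the file, restart the NFS services and open the chosen ports (together with 111 and 2049 over TCP and UDP) on the hypervisor-to-Node path.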

Citrix XenServer/xcp-ng

Note: All hosts in the pool must be defined.

Single image (Citrix XenServer/xcp-ng - XVA-based)

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 443/tcp | API access (the management IP is used for data transfer unless the transfer NIC parameter is configured in the hypervisor details) |

Changed-Block Tracking (Citrix XenServer/xcp-ng)

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 443/tcp | API access (the management IP is used for data transfer unless the transfer NIC parameter is configured in the hypervisor details) |
| Node | Hypervisor | 10809/tcp | NBD access (the data transfer IP is returned by the hypervisor) |

KVM/Xen stand-alone

SSH transfer (KVM/Xen stand-alone)

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | netcat port range defined in the node configuration (16000-16999/tcp by default) | Optional netcat access for data transfer |
| Node | Ceph monitors | 3300/tcp, 6789/tcp, 10809/tcp | If Ceph RBD is used as the backend storage - used for data transfer over NBD |

Proxmox VE

Export storage repository (Proxmox VE)

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Hypervisor | 22/tcp | SSH access |
| Hypervisor | Node | If the Node hosts the staging space: 111/tcp, 111/udp, 2049/tcp, 2049/udp, plus the ports specified in /etc/sysconfig/nfs - variables MOUNTD_PORT (TCP and UDP), STATD_PORT (TCP and UDP), LOCKD_TCPPORT (TCP), LOCKD_UDPPORT (UDP); otherwise check the documentation of your NFS storage provider | If the staging space (export storage domain) is hosted on the Node - NFS access |
| Node and hypervisor | shared NFS storage | Check the documentation of your NFS storage provider | If the staging space (export storage domain) is hosted on shared storage - NFS access |

Kubernetes

Connection URL: https://API_HOST:6443

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Kubernetes API host | 22/tcp | SSH access |
| Node | Kubernetes API host | 6443/tcp | API access |

OpenShift

Connection URL: https://API_HOST:6443

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Kubernetes API host | 6443/tcp | API access |
| Node | OpenShift workers | 2049/tcp, 2049/udp | NFS connection |
| OpenShift workers | Node | 2049/tcp, 2049/udp | NFS connection |
| Node | OpenShift workers | 30000-32767/tcp | Access to the service exposed by the vPlus plugin |

SC//Platform

Export Storage Domain (SC//Platform)

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | SC//Platform manager | 443/tcp | API access |
| Node | SC//Platform hosts | 445/tcp | SMB transfer |
| SC//Platform hosts | Node | 445/tcp | SMB transfer |

Disk Attachment (SC//Platform)

Connection URL: https://MANAGER_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | SC//Platform manager | 443/tcp | API access |

Huawei FusionCompute

Connection URL: https://MANAGER_HOST:8443

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Huawei FusionCompute VRM | 8443/tcp | API access |
| Node | Huawei FusionCompute hosts | 445/tcp | SMB transfer |
| Huawei FusionCompute hosts | Node | 445/tcp | SMB transfer |

Microsoft 365

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | Microsoft 365 | 443/tcp | Microsoft 365 API access |

You can find more detailed descriptions of the Office 365 URLs and IP address ranges in the Microsoft 365 documentation.

To be synchronized successfully, a Microsoft 365 user account must meet the following requirements:

  • it has an email address,

  • it is not filtered out by location, country, or office location (the user filter in the UI),

  • its user type field is set to Member,

  • it has a license or is a shared mailbox.
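The eligibility rules above amount to a simple predicate. The sketch below is illustrative only - the dictionary keys are made up for the example and do not come from any specific Microsoft Graph schema:

```python
def can_sync_user(user: dict, filtered_locations: frozenset = frozenset()) -> bool:
    """Return True if an M365 user account meets all synchronization requirements.

    The keys used here (email, office_location, user_type, has_license,
    is_shared_mailbox) are illustrative placeholders, not real API field names.
    """
    has_email = bool(user.get("email"))
    not_filtered = user.get("office_location") not in filtered_locations
    is_member = user.get("user_type") == "Member"
    licensed_or_shared = bool(user.get("has_license")) or bool(user.get("is_shared_mailbox"))
    return has_email and not_filtered and is_member and licensed_or_shared
```

An account failing any single rule (for example, a Guest user type, or no license on a non-shared mailbox) is skipped during synchronization.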

OS Agent

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| OS Agent | Node | 15900/tcp | Node - OS Agent communication |

Security Requirements

User Permissions

The vprotect user must be a member of the disk group.

Sudo privileges are required for the following commands:

vPlus Node:

  • /usr/bin/targetcli

  • /usr/sbin/exportfs

  • /usr/sbin/kpartx

  • /usr/sbin/dmsetup

  • /usr/bin/qemu-nbd

  • /usr/bin/guestmount

  • /usr/bin/fusermount

  • /bin/mount

  • /bin/umount

  • /usr/sbin/parted

  • /usr/sbin/nbd-client

  • /usr/bin/tee

  • /opt/vprotect/scripts/vs/privileged.sh

  • /usr/bin/yum

  • /usr/sbin/mkfs.xfs

  • /usr/sbin/fstrim

  • /usr/sbin/xfs_growfs

  • /usr/bin/docker

  • /usr/bin/rbd

  • /usr/bin/chown

  • /usr/sbin/nvme

  • /bin/cp

  • /sbin/depmod

  • /usr/sbin/modprobe

  • /bin/bash

  • /usr/local/sbin/nbd-client

  • /bin/make

vPlus server:

  • /opt/vprotect/scripts/application/vp_license.sh

  • /bin/umount

  • /bin/mount
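A sudoers drop-in file is a common way to grant exactly these commands. The fragment below is an illustrative sketch for the vPlus Server command list above (the file path and the passwordless form are assumptions; for a Node, substitute the Node command list):

```shell
# /etc/sudoers.d/vprotect - example drop-in for a vPlus Server (sketch)
# Grant only the commands listed above, without a password prompt.
vprotect ALL=(ALL) NOPASSWD: /opt/vprotect/scripts/application/vp_license.sh, /bin/mount, /bin/umount
```

Edit such files with `visudo -f /etc/sudoers.d/vprotect` so that a syntax error cannot lock out sudo.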

SELinux

Set SELinux to PERMISSIVE - the ENFORCING mode currently interferes with the mountable backups (file-level restore) mechanism. It can optionally be set to ENFORCING if file-level restore is not required.
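The persistent SELinux mode is set in /etc/selinux/config; a minimal fragment (standard RHEL-family layout, with the `targeted` policy assumed):

```shell
# /etc/selinux/config
SELINUX=permissive
SELINUXTYPE=targeted
```

The file takes effect on the next reboot; running `setenforce 0` switches an already running system to permissive mode immediately.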

HotAdd

Connection URL: https://VCENTER_HOST or https://ESXI_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | vCenter/ESXi host | 443/tcp | API access |
| Node | ESXi hosts | 902/tcp | NBD transfer |

NBD

Connection URL: https://VCENTER_HOST or https://ESXI_HOST

| Source | Destination | Ports | Description |
| --- | --- | --- | --- |
| Node | vCenter/ESXi host | 443/tcp | API access |
| Node | ESXi hosts | 902/tcp | NBD transfer |

  • Microsoft Windows 8 32-bit

  • Microsoft Windows 8 64-bit

  • Microsoft Windows 8.1 64-bit

  • Microsoft Windows 10 32-bit

  • Microsoft Windows 10 64-bit

Windows 7 (32-bit, 64-bit)

The following packages need to be installed on the operating system:

Windows 8 (32-bit, 64-bit)

The following packages need to be installed on the operating system:

Windows 8.1 (32-bit, 64-bit)

The following packages need to be installed on the operating system:

Windows 10 (32-bit, 64-bit)

The following packages need to be installed on the operating system:
