Ceph RBD
General
In order to connect to Ceph RBD, you need to provide the keyring and configuration files. The Ceph RBD storage provider should detect the volumes and pools in the environment and allow you to assign backup policies. vPlus uses the RBD-NBD approach to mount a remote RBD snapshot over NBD and read data.
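For illustration only - a minimal sketch of what the RBD-NBD mechanism boils down to. The pool, image and snapshot names below are placeholders, and in practice vPlus performs this mapping automatically:
# map a remote RBD snapshot to a local NBD block device (names are examples)
rbd-nbd map volumes/volume-1234@vplus-backup-snap
# the printed device (e.g. /dev/nbd0) can then be read like a regular block device
rbd-nbd unmap /dev/nbd0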
Example
Complete the following steps to add the Ceph RBD storage provider:
vPlus Node supports Ceph RBD, for which you need to install the Ceph libraries:
On vPlus Node, enable the required repositories:
For vPlus node installed on RHEL 7:
sudo subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms

For vPlus node installed on RHEL 8:
sudo subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms

For vPlus node installed on CentOS 7:
sudo yum install epel-release
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
sudo yum install https://download.ceph.com/rpm-octopus/el7/noarch/ceph-release-1-1.el7.noarch.rpm

For vPlus node installed on CentOS Stream 8:
sudo yum install epel-release
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
sudo yum install https://download.ceph.com/rpm-octopus/el8/noarch/ceph-release-1-1.el8.noarch.rpm

For vPlus node installed on CentOS Stream 9:
sudo yum install epel-release

Add the Ceph repository:
vi /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-reef/el9/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

Install the rbd-nbd and ceph-common packages, with all dependencies:
yum install rbd-nbd ceph-common
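Optionally, verify that the packages are in place - a simple check, assuming the rpm-based installation above:
rpm -q rbd-nbd ceph-common
rbd --version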
Go to Storage -> Storage Providers and click Create.

Choose Ceph RBD as the type and select the node configuration responsible for backup operations.

Click the Upload keyring file button and select the Ceph keyring file, which can be obtained from the Cinder host - for example, in /etc/ceph/ceph.client.admin.keyring.
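A keyring file typically contains the client name and its secret key, similar to the sketch below; the key value is a placeholder, and additional caps lines may also be present:
[client.admin]
    key = <base64-encoded key>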
Provide the Ceph configuration file content, for example:

[global]
cluster network = 10.40.0.0/16
fsid = cc3a4e9f-d2ca-4fec-805d-2c40605723b3
mon host = ceph-mon.domain.local
mon initial members = ceph-00
osd pool default crush rule = -1
public network = 10.40.0.0/16

[client.images]
keyring = /etc/ceph/ceph.client.images.keyring

[client.volumes]
keyring = /etc/ceph/ceph.client.volumes.keyring

[client.nova]
keyring = /etc/ceph/ceph.client.nova.keyring
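As an optional sanity check, you can confirm from the vPlus Node that the same credentials work by listing images in one of the pools with the rbd CLI - the file paths and the pool name volumes below are only examples:
rbd -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.admin.keyring --id admin ls volumes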
If you want to index only the Ceph pools of your choice, change Storage pool management strategy to INCLUDE and add the storage pool names.

Click Save - now you can initiate inventory synchronization (pop-up message) to collect information about available volumes and pools. Later, you can use the Inventory Synchronization button on the right of the newly created provider on the list.
Your volumes will appear in the
Instances section in the submenu on the left, from which you can initiate backup/restore/mount tasks or view volume backup history and its details.
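Note that rbd-nbd relies on the nbd kernel module on the vPlus Node. If mount or backup tasks fail with errors about missing /dev/nbd devices, you can check for the module and load it manually; this is a general troubleshooting hint rather than a vPlus-specific step:
lsmod | grep nbd
modprobe nbd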