3 Node Cluster
Overview
We have prepared three machines running Red Hat Enterprise Linux 8 on the same network:
```
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
```
We will use 10.1.1.5/23 as the floating IP of the cluster.
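The cluster nodes must be able to resolve each other's names. A minimal sketch of adding the entries above to a hosts file; `HOSTS_FILE` is our own dry-run parameter, not part of the official procedure — on a real node you would point it at `/etc/hosts`:

```shell
#!/usr/bin/env bash
# Append the cluster name-resolution entries to a hosts file.
# HOSTS_FILE defaults to a scratch file for a dry run; on a real node,
# set HOSTS_FILE=/etc/hosts before running.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

cat >> "$HOSTS_FILE" <<'EOF'
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
EOF

# Confirm the entries landed in the file.
grep -q 'vprotect1.local' "$HOSTS_FILE" && echo "hosts entries added"
```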
1. vPlus server installation
Run these steps on all machines in the Pacemaker cluster:
Add the vPlus repository:

```
vi /etc/yum.repos.d/vProtect.repo
```

```
# vPlus - Enterprise backup solution for virtual environments repository
[vprotect]
name = vProtect
baseurl = https://f002.backblazeb2.com/file/DPX-vPlus/current/el8
gpgcheck = 0
```

Add the MariaDB repository:
```
vi /etc/yum.repos.d/MariaDB.repo
```

```
# MariaDB 10.10 RedHatEnterpriseLinux repository list - created 2023-08-23 08:49 UTC
# https://mariadb.org/download/
[mariadb]
name = MariaDB
# rpm.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
# baseurl = https://rpm.mariadb.org/10.10/rhel/$releasever/$basearch
baseurl = https://mirror.creoline.net/mariadb/yum/10.10/rhel/$releasever/$basearch
# gpgkey = https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
gpgkey = https://mirror.creoline.net/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck = 1
```

Install the vPlus server:
```
dnf install -y vprotect-server
```

Initialize the vPlus server:
```
vprotect-server-configure
```

Redirect port 8181 to 443 on the firewall:
```
/opt/vprotect/scripts/ssl_port_forwarding_firewall-cmd.sh
```

Add a redirect rule so the local node can reach the server on the cluster IP:
```
firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8181
firewall-cmd --complete-reload
```

Open the firewall for MariaDB replication:
```
firewall-cmd --add-port=3306/tcp --permanent
firewall-cmd --complete-reload
```
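To verify the 443→8181 redirect was actually written, you can parse the output of `firewall-cmd --permanent --direct --get-all-rules`. A sketch — the `has_redirect` helper and the sample text are our own illustration, not part of the official tooling:

```shell
#!/usr/bin/env bash
# Check that the local 443 -> 8181 redirect rule appears in the
# output of "firewall-cmd --permanent --direct --get-all-rules".
has_redirect() {
    # "--" stops grep from treating the pattern as options.
    echo "$1" | grep -q -- '--dport 443 -j REDIRECT --to-ports 8181'
}

# Demonstration with sample output; on a real node run:
#   has_redirect "$(firewall-cmd --permanent --direct --get-all-rules)"
sample='ipv4 nat OUTPUT 0 -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8181'
has_redirect "$sample" && echo "redirect rule present"
```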
2. Custom SSL certificate configuration
Run all steps as the root user, on the first node of the cluster.
Follow the steps from Enabling HTTPS connectivity for nodes.
3. vPlus node installation
Execute on all Pacemaker nodes and on any other vPlus node machines.
Add the vPlus repository:

```
vi /etc/yum.repos.d/vProtect.repo
```

```
# vPlus - Enterprise backup solution for virtual environments repository
[vprotect]
name = vProtect
baseurl = https://f002.backblazeb2.com/file/DPX-vPlus/current/el8
gpgcheck = 0
```

Install the vPlus node:
```
dnf install -y vprotect-node
```

Initialize the vPlus node:
```
vprotect-node-configure
```

Perform the next step only if you want to back up Proxmox using the export strategy:
```
cd /opt/vprotect/scripts/vma
./setup_vma.sh vprotect-vma-20180128.tar
```

4. Backup destination configuration
For a multi-node/cluster environment, we suggest using NFS, object storage, or a deduplication appliance as the backup destination. In this example we use NFS.
Execute on all vPlus node machines.
Add an entry to `/etc/fstab` to mount the NFS share automatically:

```
10.1.1.1:/vprotect /vprotect_data nfs defaults 0 0
```

Create the directory for the NFS mount point:
```
mkdir /vprotect_data
```

Mount the NFS share:
```
mount -a
```

Create subdirectories for the backup destinations (run only on a single node):
```
mkdir /vprotect_data/backup
mkdir /vprotect_data/backup/synthetic
mkdir /vprotect_data/backup/filesystem
mkdir /vprotect_data/backup/dbbackup
```

Set ownership on the newly created directories:
```
chown vprotect:vprotect -R /vprotect_data
```
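Before pointing vPlus at the share, it can be worth sanity-checking that the subdirectory layout exists on the mounted filesystem. A sketch — `check_layout` is our own helper, and the demonstration uses a scratch directory rather than the real mount:

```shell
#!/usr/bin/env bash
# Verify a backup destination contains the subdirectories created above.
check_layout() {
    local base="$1" d
    for d in backup backup/synthetic backup/filesystem backup/dbbackup; do
        [ -d "$base/$d" ] || { echo "missing: $base/$d"; return 1; }
    done
    echo "layout OK: $base"
}

# Demonstration on a scratch directory; on a real node run:
#   check_layout /vprotect_data
demo=$(mktemp -d)
mkdir -p "$demo"/backup/synthetic "$demo"/backup/filesystem "$demo"/backup/dbbackup
check_layout "$demo"
```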
5. Cluster Configuration
The cluster is controlled by Pacemaker.
5.1 Prepare operating system
Run all steps as the root user, on all machines in the Pacemaker cluster:
Stop all services that will be controlled by the cluster, and disable their autostart:
```
systemctl stop vprotect-node
systemctl stop vprotect-server
systemctl disable vprotect-node
systemctl disable vprotect-server
```
5.2 Set up MariaDB replication
Run all steps as the root user, on all cluster nodes:
Create the MariaDB user `replicator` with the password `vPr0tect` for replication:

```
CREATE USER replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
```

Add the following to the `[mysqld]` section of /etc/my.cnf.d/server.cnf:

```
[mysqld]
lower_case_table_names=1
log-bin=mysql-bin
relay-log=relay-bin
log-slave-updates
max_allowed_packet=500M
log_bin_trust_function_creators=1
```

Also add a unique server ID to the `[mysqld]` section of /etc/my.cnf.d/server.cnf.

On vprotect1.local:
```
server-id=10
```

On vprotect2.local:
```
server-id=20
```

On vprotect3.local:
```
server-id=30
```

Restart the MariaDB service:
```
systemctl restart mariadb
```

On each host, show the output of:
```
SHOW MASTER STATUS;
```

Output from vprotect3.local:
```
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000006 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```

Output from vprotect1.local:
```
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000007 |      358 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```

Output from vprotect2.local:
```
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      358 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
```

Set up replication on each MariaDB server.
Execute on vprotect1.local:
```
CHANGE MASTER TO MASTER_HOST='10.1.1.4', MASTER_PORT=3306,
  MASTER_USER='replicator', MASTER_PASSWORD='vPr0tect',
  MASTER_LOG_FILE='mysql-bin.000006', MASTER_LOG_POS=374;
```

Execute on vprotect2.local:
```
CHANGE MASTER TO MASTER_HOST='10.1.1.2', MASTER_PORT=3306,
  MASTER_USER='replicator', MASTER_PASSWORD='vPr0tect',
  MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=358;
```

Execute on vprotect3.local:
```
CHANGE MASTER TO MASTER_HOST='10.1.1.3', MASTER_PORT=3306,
  MASTER_USER='replicator', MASTER_PASSWORD='vPr0tect',
  MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=358;
```

Start MariaDB replication. Execute on vprotect1.local:
```
START SLAVE;
```

Show the output of:
```
SHOW SLAVE STATUS\G
```

Wait until the output shows:
```
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
```

Repeat the last steps on vprotect2.local and vprotect3.local.
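Waiting for both flags can be scripted instead of re-running the query by hand. A sketch — `replication_ok` is our own helper, and the sample text stands in for real `SHOW SLAVE STATUS\G` output:

```shell
#!/usr/bin/env bash
# Report whether "SHOW SLAVE STATUS\G" output describes a healthy replica:
# both Slave_IO_Running and Slave_SQL_Running must be "Yes".
replication_ok() {
    local status="$1"
    echo "$status" | grep -q 'Slave_IO_Running: Yes' &&
    echo "$status" | grep -q 'Slave_SQL_Running: Yes'
}

# Demonstration with captured output; on a real node, something like:
#   until replication_ok "$(mysql -u root -p"$PW" -e 'SHOW SLAVE STATUS\G')"; do sleep 5; done
sample='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
replication_ok "$sample" && echo "replication healthy"
```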
5.2.1 Set the same password for the vprotect user in MariaDB
Copy the password from the file `/opt/vprotect/payara.properties`:

```
eu.storware.vprotect.db.password=SECRETPASSWORD
```

Log in to MariaDB:
```
mysql -u root -p
```

Set the password for the vprotect user:
```
SET PASSWORD FOR 'vprotect'@'localhost' = PASSWORD('SECRETPASSWORD');
quit;
```

Copy the following configuration files from vprotect1.local to the other cluster hosts:
All files are located in `/opt/vprotect/`:

```
keystore.jks
log4j2-server.xml
payara.properties
vprotect.env
vprotect-keystore.jks
license.key
```

Set ownership on the copied files:
```
chown vprotect:vprotect -R /opt/vprotect/
```
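The copy step above does not name a command; one way to sketch it is to generate the `scp` invocations first and inspect them before running. `gen_copy_cmds` is our own helper; the hostnames and file list come from this guide:

```shell
#!/usr/bin/env bash
# Print one scp command per file per target host, so the copy can be
# reviewed before execution.
gen_copy_cmds() {
    local host f
    for host in vprotect2.local vprotect3.local; do
        for f in keystore.jks log4j2-server.xml payara.properties \
                 vprotect.env vprotect-keystore.jks license.key; do
            echo "scp /opt/vprotect/$f root@$host:/opt/vprotect/$f"
        done
    done
}

gen_copy_cmds          # inspect the commands first
# gen_copy_cmds | sh   # then execute them on the first node
```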
5.3 Configure Pacemaker
Run all steps as the root user.
5.3.1 Run on every node in the cluster
Install the Pacemaker packages:

```
dnf install -y pcs pacemaker fence-agents-all
```

Generate SSH keys and distribute them to the other hosts, adding each host to known_hosts.
Open the required ports on the firewall:
```
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
```

Start the pcsd service:
```
systemctl start pcsd.service
systemctl enable pcsd.service
```

Set the same password for the hacluster user on all nodes:
```
passwd hacluster
```
5.3.2 Run only on the first node of the cluster
Authenticate the cluster nodes:
```
pcs host auth vprotect1.local vprotect2.local vprotect3.local
```

Create the cluster:
```
pcs cluster setup vp vprotect1.local vprotect2.local vprotect3.local
```

Start the cluster:
```
pcs cluster start --all
```

Disable STONITH:
```
pcs property set stonith-enabled=false
```

Create the floating IP in the cluster:
```
pcs resource create vp-vip1 IPaddr2 ip=10.1.1.5 cidr_netmask=23 --group vpgrp
```

Add vprotect-server to the cluster:
pcs resource create "vp-vprotect-server.service" systemd:vprotect-server.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrpAdd vprotect-node to cluster
pcs resource create "vp-vprotect-node.service" systemd:vprotect-node.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
6. Register the vPlus node on the server (on all hosts)
Add the server certificate to the trusted certificates:
```
/opt/vprotect/scripts/node_add_ssl_cert.sh 10.1.1.5 443
```

Register the node on the server:
```
vprotect node -r ${HOSTNAME%%.*} admin https://10.1.1.5:443/api
```
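The `${HOSTNAME%%.*}` expansion registers each node under its short hostname: `%%.*` strips the longest suffix matching `.*`, i.e. everything from the first dot onward. A quick illustration with a sample value:

```shell
#!/usr/bin/env bash
# ${VAR%%.*} removes the longest suffix matching ".*", leaving the
# short hostname before the first dot.
HOSTNAME_EXAMPLE="vprotect1.local"
echo "${HOSTNAME_EXAMPLE%%.*}"   # → vprotect1
```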
7. Useful commands to control the cluster
To update or service vPlus, set its resources to unmanaged:
```
pcs resource unmanage vpgrp
```

Return them to managed:
```
pcs resource manage vpgrp
```

Show the cluster status:
```
pcs status
```

Stop a single cluster node:
```
pcs cluster stop vprotect1.local
```

Stop all cluster nodes:
```
pcs cluster stop --all
```

Start all cluster nodes:
```
pcs cluster start --all
```
Clear old errors in the cluster:
```
pcs resource cleanup
```