3 Node Cluster
Overview
We have prepared three machines running Red Hat Enterprise Linux 8 on the same network:
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
We will use 10.1.1.5/23 as the floating IP of the cluster.
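If these hostnames are not resolvable via DNS, you can add them to /etc/hosts on every machine (a minimal sketch using the addresses above):

cat >> /etc/hosts <<'EOF'
10.1.1.2 vprotect1.local
10.1.1.3 vprotect2.local
10.1.1.4 vprotect3.local
EOF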
1. vPlus server installation
Run these steps on all machines in the pacemaker cluster:
Add vPlus repository
vi /etc/yum.repos.d/vProtect.repo
# vPlus - Enterprise backup solution for virtual environments repository
[vprotect]
name = vProtect
baseurl = https://f002.backblazeb2.com/file/DPX-vPlus/current/el8
gpgcheck = 0
Add MariaDB repository
vi /etc/yum.repos.d/MariaDB.repo
# MariaDB 10.10 RedHatEnterpriseLinux repository list - created 2023-08-23 08:49 UTC
# https://mariadb.org/download/
[mariadb]
name = MariaDB
# rpm.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
# baseurl = https://rpm.mariadb.org/10.10/rhel/$releasever/$basearch
baseurl = https://mirror.creoline.net/mariadb/yum/10.10/rhel/$releasever/$basearch
# gpgkey = https://rpm.mariadb.org/RPM-GPG-KEY-MariaDB
gpgkey = https://mirror.creoline.net/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck = 1
Install vPlus server
dnf install -y vprotect-server
Initialize vPlus server
vprotect-server-configure
Redirect port 443 to 8181 on the firewall (the server listens on 8181; this makes it reachable on the standard HTTPS port)
/opt/vprotect/scripts/ssl_port_forwarding_firewall-cmd.sh
Add a redirect rule so that the local node can communicate with the server on the cluster IP:
firewall-cmd --permanent --direct --add-rule ipv4 nat OUTPUT 0 -p tcp -o lo --dport 443 -j REDIRECT --to-ports 8181
firewall-cmd --complete-reload
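To confirm the redirect rules are in place, you can list the permanent direct rules (an optional check):

firewall-cmd --permanent --direct --get-all-rules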
Open firewall for MariaDB replication:
firewall-cmd --add-port=3306/tcp --permanent
firewall-cmd --complete-reload
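You can then verify the port is open:

firewall-cmd --list-ports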
2. Custom SSL certificate configuration
Run all steps as the root user, on the first node of the cluster.
Follow the steps from "Enabling HTTPS connectivity for nodes".
3. vPlus node installation
Execute on all pacemaker nodes and on any other vPlus node machines.
Add vPlus repository
vi /etc/yum.repos.d/vProtect.repo
# vPlus - Enterprise backup solution for virtual environments repository
[vprotect]
name = vProtect
baseurl = https://f002.backblazeb2.com/file/DPX-vPlus/current/el8
gpgcheck = 0
Install vPlus node
dnf install -y vprotect-node
Initialize vPlus node
vprotect-node-configure
Only required if you want to back up Proxmox using the export strategy:
cd /opt/vprotect/scripts/vma
./setup_vma.sh vprotect-vma-20180128.tar
4. Backup destination configuration
For a multi-node/cluster environment we suggest using NFS, object storage, or a deduplication appliance as the backup destination. In this example we use NFS.
Execute on all vPlus node machines.
Add an entry to /etc/fstab to mount the NFS share automatically:

10.1.1.1:/vprotect /vprotect_data nfs defaults 0 0
Create the directory used as the NFS mount point:
mkdir /vprotect_data
Mount NFS share
mount -a
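You can confirm the mount succeeded:

df -h /vprotect_data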
Create subdirectories for the backup destinations (run on a single node only):
mkdir /vprotect_data/backup
mkdir /vprotect_data/backup/synthetic
mkdir /vprotect_data/backup/filesystem
mkdir /vprotect_data/backup/dbbackup
Add privileges to the newly created directories:
chown vprotect:vprotect -R /vprotect_data
5. Cluster Configuration
The cluster is controlled by pacemaker.
5.1 Prepare operating system
Run all steps as the root user, on all machines in the pacemaker cluster:
Stop all services controlled by the cluster and disable their autostart:
systemctl stop vprotect-node
systemctl stop vprotect-server
systemctl disable vprotect-node
systemctl disable vprotect-server
5.2 Set up MariaDB replication
Run all steps as the root user, on all cluster nodes:
Create a MariaDB user replicator with the password vPr0tect for replication:

CREATE USER replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO replicator@'10.1.1.%' IDENTIFIED BY 'vPr0tect';
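To confirm the user exists, you can query the user table from the shell (a sketch, assuming root access to MariaDB):

mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User='replicator';"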
Add the following changes to /etc/my.cnf.d/server.cnf, in the mysqld section:

[mysqld]
lower_case_table_names=1
log-bin=mysql-bin
relay-log=relay-bin
log-slave-updates
max_allowed_packet=500M
log_bin_trust_function_creators=1
In the same mysqld section of /etc/my.cnf.d/server.cnf, set a unique server-id on each host. On vprotect1.local:
server-id=10
On vprotect2.local:
server-id=20
On vprotect3.local:
server-id=30
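For reference, the complete mysqld section on vprotect1.local would then look as follows (combining the settings above):

[mysqld]
lower_case_table_names=1
log-bin=mysql-bin
relay-log=relay-bin
log-slave-updates
max_allowed_packet=500M
log_bin_trust_function_creators=1
server-id=10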
Restart MariaDB service:
systemctl restart mariadb
On each host, show the output of the command:
SHOW MASTER STATUS;
Output from vprotect3.local:
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000006 |      374 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
Output from vprotect1.local:
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000007 |      358 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
Output from vprotect2.local:
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000004 |      358 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.000 sec)
Set up replication in a ring, with each MariaDB server replicating from the previous node: vprotect1 from vprotect3, vprotect2 from vprotect1, and vprotect3 from vprotect2. Use the File and Position values from the SHOW MASTER STATUS output of the respective master.
Execute on vprotect1.local:
CHANGE MASTER TO MASTER_HOST='10.1.1.4', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='vPr0tect', MASTER_LOG_FILE='mysql-bin.000006', MASTER_LOG_POS=374;
Execute on vprotect2.local:
CHANGE MASTER TO MASTER_HOST='10.1.1.2', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='vPr0tect', MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=358;
Execute on vprotect3.local:
CHANGE MASTER TO MASTER_HOST='10.1.1.3', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='vPr0tect', MASTER_LOG_FILE='mysql-bin.000004', MASTER_LOG_POS=358;
Start MariaDB replication. Execute on vprotect1.local:
START SLAVE;
Show output from command:
SHOW SLAVE STATUS\G
Wait until the output shows:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
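To check both flags at once, you can use a one-liner like this (a sketch, assuming root access to MariaDB):

mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running:"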
Repeat the START SLAVE and status check on vprotect2.local and vprotect3.local.
5.2.1 Set the same password for the vprotect user in MariaDB
Copy the password from the file /opt/vprotect/payara.properties, from the line:

eu.storware.vprotect.db.password=SECRETPASSWORD
Log in to MariaDB
mysql -u root -p
Set password for vprotect user:
SET PASSWORD FOR 'vprotect'@'localhost' = PASSWORD('SECRETPASSWORD');
quit;
Copy the configuration files from vprotect1.local to the other cluster hosts. The files to copy from /opt/vprotect/ are:

keystore.jks
log4j2-server.xml
payara.properties
vprotect.env
vprotect-keystore.jks
license.key
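One way to copy them is scp from the first node, assuming root SSH access between the nodes (a sketch; adjust the list to the files present on your system):

cd /opt/vprotect
for host in vprotect2.local vprotect3.local; do
  scp keystore.jks log4j2-server.xml payara.properties vprotect.env vprotect-keystore.jks license.key root@$host:/opt/vprotect/
done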
Set ownership of the copied files
chown vprotect:vprotect -R /opt/vprotect/
5.3 Configure pacemaker
Run all steps as the root user.
5.3.1 Run on every node in cluster
Install pacemaker packages
dnf install -y pcs pacemaker fence-agents-all
Create SSH keys and add them to the other hosts, so the nodes can reach each other without a password, as shown below.
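A minimal sketch, run on each node (ssh-copy-id appends the key to the peer's authorized keys; adjust the hostnames to the other two nodes in each case):

ssh-keygen -t ed25519
ssh-copy-id root@vprotect2.local
ssh-copy-id root@vprotect3.local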
Open ports on firewall
firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload
Start pcsd service
systemctl start pcsd.service
systemctl enable pcsd.service
Set same password for user hacluster
passwd hacluster
5.3.2 Run only on first node of cluster
Authenticate nodes of cluster
pcs host auth vprotect1.local vprotect2.local vprotect3.local
Create cluster
pcs cluster setup vp vprotect1.local vprotect2.local vprotect3.local
Run cluster
pcs cluster start --all
Disable STONITH (fencing)
pcs property set stonith-enabled=false
Create floating IP in cluster
pcs resource create vp-vip1 IPaddr2 ip=10.1.1.5 cidr_netmask=23 --group vpgrp
Add vprotect-server to cluster
pcs resource create "vp-vprotect-server.service" systemd:vprotect-server.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
Add vprotect-node to cluster
pcs resource create "vp-vprotect-node.service" systemd:vprotect-node.service op start on-fail="stop" timeout="300s" op stop timeout="300s" op monitor timeout="300s" --group vpgrp
6. Register vPlus node on server (on all hosts)
Add the server certificate to the trusted certificates
/opt/vprotect/scripts/node_add_ssl_cert.sh 10.1.1.5 443
Register node on server
vprotect node -r ${HOSTNAME%%.*} admin https://10.1.1.5:443/api
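To confirm the registration, you can list the nodes known to the server (assuming the CLI's -l list switch; check vprotect node --help if your version differs):

vprotect node -l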
7. Useful commands to control the cluster
To update or service vPlus, set its services to unmanaged in the cluster:
pcs resource unmanage vpgrp
Return them to managed:
pcs resource manage vpgrp
Show status of cluster:
pcs status
Stop cluster node:
pcs cluster stop vprotect1.local
Stop all nodes of cluster:
pcs cluster stop --all
Start all nodes of cluster:
pcs cluster start --all
Clear old errors in cluster:
pcs resource cleanup
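To move the whole vpgrp group to another node (for example before maintenance on the active one):

pcs resource move vpgrp vprotect2.local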