STEP 1 -- Prepare. On both hosts.
Check the hostname of each host:
uname -a
These names must match the entries in /etc/hosts and in the DRBD resource config below.
Edit /etc/hosts and add both nodes:
nano /etc/hosts
192.168.88.9 itc-life
192.168.88.250 human-K73SM
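A quick sanity check: from each host, make sure the other is reachable by name before continuing.
ping -c 1 itc-life
ping -c 1 human-K73SM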
Install DRBD via apt:
apt-get update
apt-get install -y linux-image-extra-virtual
depmod -a
apt-get install -y ntp
apt-get install -y drbd8-utils pv
Create the volume group and the backing volume with LVM:
pvcreate /dev/sdb4
vgcreate itc-life-vg /dev/sdb4
lvcreate -n drbd -L 20G itc-life-vg
Optionally zero the new volume; pv only displays progress, so its size should match the 20G LV:
dd if=/dev/zero bs=1M | pv -s 20G | dd of=/dev/mapper/itc--life--vg-drbd bs=1M
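To confirm the physical volume, volume group and logical volume came out as expected:
pvs
vgs
lvs itc-life-vg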
STEP 2 -- Configure nodes. Create the same config on both nodes. Note that this config allows two primaries (allow-two-primaries, become-primary-on both), but ext4 used below is not a cluster filesystem, so never mount /dev/drbd0 on both nodes at the same time.
nano /etc/drbd.conf
global { usage-count no; }
resource r0 {
    protocol C;
    startup {
        wfc-timeout 10;
        degr-wfc-timeout 10;
        outdated-wfc-timeout 10;
        become-primary-on both;
    }
    net {
        max-buffers 8192;
        max-epoch-size 8192;
        allow-two-primaries;
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
        unplug-watermark 128;
    }
    disk {
        on-io-error detach;
        no-disk-barrier;
        no-disk-flushes;
        no-md-flushes;
    }
    syncer {
        al-extents 3389;
        rate 10M;
    }
    on itc-life {
        device /dev/drbd0;
        disk /dev/mapper/itc--life--vg-drbd;
        address 192.168.88.9:7788;
        meta-disk internal;
    }
    on human-K73SM {
        device /dev/drbd0;
        disk /dev/mapper/itc--life--vg-drbd;
        address 192.168.88.250:7788;
        meta-disk internal;
    }
}
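Before going further it is worth letting drbdadm parse the file; if the syntax is valid it prints the resource back without errors:
drbdadm dump r0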
Load the kernel module on each system.
modprobe drbd
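To confirm the module is loaded, and (assuming you want DRBD available after a reboot) to have it loaded at boot on Debian/Ubuntu:
lsmod | grep drbd
echo drbd >> /etc/modules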
STEP 3 -- Initialize nodes. On both nodes create the DRBD metadata:
drbdadm create-md r0
Start DRBD on both nodes:
/etc/init.d/drbd start
Next, make itc-life the primary. On itc-life, execute:
drbdadm -- --overwrite-data-of-peer primary all
On human-K73SM, watch until the initial synchronization finishes:
watch -n1 cat /proc/drbd
STEP 4 -- Format and mount. On the primary node only:
mkfs.ext4 /dev/drbd0
mkdir -p /blockstorage
mount /dev/drbd0 /blockstorage
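At this point /proc/drbd on either node should report the resource as connected and in sync, roughly cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate:
cat /proc/drbd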
FAQ.
Promote a node to primary:
drbdadm primary r0
Demote a node to secondary:
drbdadm secondary r0
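A manual failover, as a sketch (this assumes the /blockstorage mountpoint also exists on human-K73SM): unmount and demote on the current primary, then promote and mount on the other node.
On itc-life:
umount /blockstorage
drbdadm secondary r0
On human-K73SM:
drbdadm primary r0
mount /dev/drbd0 /blockstorage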
Pacemaker -- configure HA
We will use Pacemaker as our cluster resource manager; as with DRBD, commercial support for it is available from Linbit. Installing Pacemaker also brings in Corosync, which handles cluster membership and messaging between the nodes. First, make sure the DRBD service is not enabled at boot on either node:
sudo systemctl disable drbd
We should also ensure that the directory is not mounted and the DRBD device is not in use on either node:
sudo umount /blockstorage
sudo drbdadm down r0
Then we can install Pacemaker on both nodes:
sudo apt-get install -y pacemaker
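Depending on the release, the crm shell used below may be packaged separately from Pacemaker; if the crm command is missing, it is usually available as crmsh:
sudo apt-get install -y crmsh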
Configure Corosync:
nano /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: debian
    secauth: off
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.88.0
        mcastport: 5405
    }
}
nodelist {
    node {
        ring0_addr: 192.168.88.9
        name: itc-life
        nodeid: 1
    }
    node {
        ring0_addr: 192.168.88.250
        name: human-K73SM
        nodeid: 2
    }
}
quorum {
    provider: corosync_votequorum
    two_node: 1
    wait_for_all: 1
    last_man_standing: 1
    auto_tie_breaker: 0
}
Use your own IP addresses, and note that bindnetaddr must be the network address (here 192.168.88.0), not a host address.
We can then restart corosync and start pacemaker on both nodes:
systemctl restart corosync
systemctl start pacemaker
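To check that both nodes have joined and the cluster is quorate:
corosync-quorumtool -s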
Use the command crm status to see the cluster come online.
crm status
Configure the cluster interactively with crm configure:
crm configure
> property stonith-enabled=false
> property no-quorum-policy=ignore
> primitive drbd_res ocf:linbit:drbd params drbd_resource=r0 op monitor interval=29s role=Master op monitor interval=31s role=Slave
> ms drbd_master_slave drbd_res meta master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
> primitive fs_res ocf:heartbeat:Filesystem params device=/dev/drbd0 directory=/blockstorage fstype=ext4
> colocation fs_drbd_colo INFINITY: fs_res drbd_master_slave:Master
> order fs_after_drbd mandatory: drbd_master_slave:promote fs_res:start
> commit
> show
> quit
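A quick failover test, as a sketch: put the current master into standby, watch the resources move to the other node with crm status, then bring the node back online.
crm node standby itc-life
crm status
crm node online itc-life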
Profit