DRBD9 | How to set up a basic three node cluster

This guide assumes that you already have the DRBD9 kernel module, drbd-utils and drbdmanage installed on the servers.
For this setup we need three servers, as identical as possible, each with a dedicated network and a dedicated storage backend for DRBD replication.

HostA: drbd-host1
ip addr: 10.1.1.3
netmask: 255.255.255.0

HostB: drbd-host2
ip addr: 10.1.1.4
netmask: 255.255.255.0

HostC: drbd-host3
ip addr: 10.1.1.5
netmask: 255.255.255.0

* make sure you set up /etc/hosts on each node so that it resolves the hostnames and IP addresses of the other nodes. This is a requirement for DRBD.
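
For example, using the addresses from the top of this guide, the relevant /etc/hosts entries on each node would look like this:

10.1.1.3    drbd-host1
10.1.1.4    drbd-host2
10.1.1.5    drbd-host3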

* all hosts must be able to connect to each other via SSH without a password. To do so, execute the following commands on each node:

ssh-keygen # Follow the wizard; make sure you don’t set a passphrase!
ssh-copy-id <node name> # where <node name> is the hostname of each of the other nodes, e.g. if you are on drbd-host1 then the other hosts are drbd-host2 and drbd-host3. Do the same on all 3 nodes.
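
If you prefer, you can push the key to both of the other nodes in one go with a small shell loop (shown here from drbd-host1; adjust the hostnames accordingly on the other nodes):

drbd-host1:~# for h in drbd-host2 drbd-host3; do ssh-copy-id "$h"; done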

* make sure you can connect to each node without being prompted for a password:

drbd-host1:~# ssh drbd-host2 hostname && ssh drbd-host3 hostname # both commands should print the remote hostname without asking for a password

OK, now that you can reach each node without entering a password, let’s configure DRBD.

First, we must select the underlying storage that DRBD will use. In this example each host has a /dev/sdb device dedicated to DRBD, where /dev/sdb corresponds to a RAID10 disk array on each host.
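
Before handing the disk over to LVM, it is a good idea to double-check that /dev/sdb really is the empty array you intend to dedicate to DRBD (the device name is just the one this example assumes; yours may differ):

drbd-host1:~# lsblk /dev/sdb # size and type should match your RAID10 array
drbd-host1:~# blkid /dev/sdb # should print nothing if the disk carries no existing filesystem signature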

Now let’s connect to the first host, drbd-host1, and create the required LVM volume group (VG). The VG must be named ‘drbdpool’, which is how drbdmanage recognises it and uses it to allocate space:

drbd-host1~# pvcreate /dev/sdb
drbd-host1~# vgcreate drbdpool /dev/sdb
drbd-host1~# lvcreate -L 4T --thinpool drbdthinpool drbdpool # We create a thin-provisioned LV inside the drbdpool VG. It must be called drbdthinpool, otherwise later operations will fail!

* repeat the steps above on the remaining nodes (drbd-host2, drbd-host3).
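
To confirm that the volume group and thin pool exist on every node before continuing, you can use the standard LVM reporting commands, for example:

drbd-host1:~# vgs drbdpool # the VG should show the full size of /dev/sdb
drbd-host1:~# lvs drbdpool # should list the drbdthinpool thin pool LV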

Now we will use the drbdmanage utility to initialise the DRBD cluster. On drbd-host1 execute the following:

drbdmanage init 10.1.1.3 # where 10.1.1.3 is the IP address on drbd-host1 that is dedicated to DRBD replication (see the top of this guide).

If successful, proceed to add the remaining two nodes to the cluster (again from drbd-host1!):

drbdmanage add-node drbd-host2 10.1.1.4 # note that here you need to specify the node’s hostname as well! You should be able to auto-complete each parameter (even the IP) by pressing the TAB key.

drbdmanage add-node drbd-host3 10.1.1.5

Now verify that everything is good:

drbdmanage list-nodes # all nodes should appear with OK status

Next, let’s create the first resource with a test volume within it:

drbdmanage add-resource res01

drbdmanage add-volume res01 40G --deploy 3 # the first argument is the resource the volume is added to; with --deploy we additionally specify on how many nodes the volume should reside, in this case all 3 nodes.

Verify that the above were created successfully:

ls /dev/drbd* # Here you should see /dev/drbd0 and /dev/drbd1, which belong to the control volumes that drbdmanage automatically creates during “drbdmanage init”. Additionally there should be /dev/drbd100, which corresponds to the volume we added to res01 above. You can handle it as a usual block device, e.g. partition it with fdisk, create a filesystem with mkfs, and finally mount it and write data to it (see the example after the status commands below). All writes will be automatically replicated to the rest of the nodes.

drbdmanage list-volumes

drbdmanage list-resources

drbd-overview # Shows the current status of the cluster. Please take note of the node which is elected to be in the Primary state: that is the only node which can mount the newly created DRBD volume!

drbdadm status
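
As an example of using the new device, here is a minimal sequence for creating a filesystem and mounting it; run these on whichever node is currently Primary (the ext4 filesystem and the /mnt/drbdvol mount point are just illustrative choices):

mkfs.ext4 /dev/drbd100
mkdir -p /mnt/drbdvol
mount /dev/drbd100 /mnt/drbdvol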

Proxmox VE users: If you set up the DRBD9 cluster on PVE nodes, make sure you add the following entry to /etc/pve/storage.cfg. Note that you don’t need to create volumes manually as we did previously; the Proxmox storage plugin will create them automatically for each VM you create:

drbd: drbd9-stor1   # where drbd9-stor1 can be any arbitrary label to identify the storage
        content images
        redundancy 3   # This is the volume redundancy level, in this case it’s 3.
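
After saving the entry, you can quickly verify that Proxmox has picked up the new storage, for example with:

pvesm status # the drbd9-stor1 storage should appear in the list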

OK, so if everything went right, you should now have a working three-node DRBD9 cluster. Of course, you will need to spend some time familiarising yourself with the drbd-utils command line and DRBD in general. Have fun!