High Availability, RAID10, and 3-2-1 Backups with FreeBSD.
What is this?
This is a guide on how to set up a primary and secondary FreeBSD server that enables high availability and 3-2-1 backups using a few key components: FreeBSD's HAST (Highly Available Storage), RAID10, and ZFS. No package manager or port required.
Definitions.
High availability: a characteristic of a system which aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.
3-2-1 backups: a backup guideline that states there should be at least three copies of data, stored on two different types of storage media, and one copy should be kept offsite, in a remote location. Two or more different media should be used to help eliminate data loss. An offsite copy protects against fire, theft of physical media and natural disasters like floods and earthquakes.
RAID10: a RAID level using a stripe of mirrors, achieving both replication and sharing of data between disks. The usable capacity of a RAID10 array is the same as in a RAID 1 array made of the same drives, in which one half of the drives is used to mirror the other half.
Who is this for?
This is for UNIX herders that value simplicity and self-reliance for their computing needs. During this guide we’ll begin from two FreeBSD 13 Release installations and will not invoke the package manager or compile a single port.
The target audience should be familiar with ZFS and rc.d scripting. The FreeBSD handbook is a great place to begin if you’re not familiar with any concepts presented.
Why?
I'm writing this for personal, philosophical reasons. Most people are losing their online freedom. Their data is tied up in centralized services controlled by companies that do not have their best interests at heart. Even if these companies were good, they still have to abide by domestic and foreign law. A world with more people who have control and autonomy over their data is a good thing. Achieving that is hard, which leads people to centralized solutions where it's somebody else's problem. It should be easy to have reliability and confidence over your computing needs, by yourself, for yourself. FreeBSD makes it easy. Unfortunately, FreeBSD's tools for this haven't reached a wide audience. They should. We need a data hygiene popularizer. What Carl Sagan is to science is what we need for data.
I hope I can illuminate how easy it is to have high confidence in FreeBSD replication and backup solutions.
Assumptions.
You have two physically separated servers running a fresh installation of FreeBSD 13 Release.
Each server has an identical disk layout. Five disks on each. Four disks in RAID10 where root resides. This is set up through the FreeBSD installer. One additional disk/separate medium to store backups. Technically, we'll have something greater than 3-2-1 backups by the end.
Goal.
We'll have each FreeBSD server set up with ZFS on root. The root zpool, zroot, will be in a RAID10 configuration using four disks. This is set up through the FreeBSD installer.
zroot will have a zvol created, zroot/mirror, exposing a block device. This zvol will be replicated between the two servers in real time. There is a performance hit depending on the internet connectivity between the geographically isolated locations where the servers reside. I've found it isn't much. At least, not enough to annoy me. Occasionally, I'll notice a slow :wq in Vi. There are options to adjust what performance hit you're content with. I use the default configuration of HAST. See man hast.conf.
The zvol will be passed to FreeBSD's HAST. HAST will expose yet another block device. This block device will have another zpool created on it. It could be a good idea to encrypt or compress this zpool; ZFS has knobs to do so. I do.
Any data placed in the zpool created on the HAST block device will be highly available, redundant, and abide by at least a 3-2-1 backup strategy.
At the time of this writing, I have 1.28TB of data compressed and encrypted in my shared zpool on top of HAST.
The zvol backing the HAST block device will be snapshotted via ZFS and sent to the fifth disk/separate medium occasionally. Each server will do this. Old snapshots and backups will be purged on occasion.
Therefore, there will be five copies of your data:
- Data within the HAST block device (shared between servers).
- Data within the ZFS snapshots on the primary server.
- Data within the ZFS snapshots on the secondary server.
- Data within the separate medium on the primary server.
- Data within the separate medium on the secondary server.
Drawbacks.
Only two servers may be part of a HAST configuration at one time.
Only the primary server may have the highly available block device mounted.
What can the data be?
Whatever you want, I hope. FreeBSD jails, virtual machines via bhyve, backups from other machines, media. Anything for which redundancy, reliability, accessibility, and disaster recovery are crucial.
The guide.
I’ll perform this guide using VirtualBox on my desktop computer. You can adjust based on your environment’s topology and needs. I’ve also tested this on Raspberry Pis a number of times.
Create a new VM. I called mine fbsd3. This VM will have four SATA disks of the same size. Add a fifth USB disk. Ideally, the USB disk is at least half the size of the four previous disks combined. Use the Bridged Adapter network mode. This will give you a local IP from your router.
Place a FreeBSD 13 Release .iso in the VM's optical drive. Install FreeBSD. Use the four SATA disks in a RAID10 configuration. Leave the fifth disk untouched.
Do the same for another VM. I called mine fbsd4.
Don't forget to set the hostnames in the FreeBSD installer. Mine are fbsd3 and fbsd4, respectively. The hostnames are important for FreeBSD's HAST.
fbsd3 will be the primary server and has a local IP of 192.168.1.109.
fbsd4 will be the secondary server and has a local IP of 192.168.1.110.
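HAST matches the on sections in /etc/hast.conf (created below) against the system hostname, so it's worth confirming each machine reports the name you set in the installer:
fbsd3 $ hostname
fbsd3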
Now we can get on to the terminal bits. First, let's create a zvol on zroot within fbsd3. You'll want the maximum size you can get away with. My server's four disks total 16TB, resulting in 8TB usable in RAID10. My zvol is 7TB in size. I'll name the zvol mirror.
fbsd3 $ zfs create -V 7TB zroot/mirror
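To sanity-check that the zvol exists before handing it to HAST:
fbsd3 $ zfs list -t volume zroot/mirror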
You'll now have a block device exposed at /dev/zvol/zroot/mirror. This zvol will be given to HAST and replicated between fbsd3 and fbsd4. Create the HAST configuration file.
# /etc/hast.conf
resource mirror {
    local /dev/zvol/zroot/mirror
    on fbsd3 {
        remote 192.168.1.110
    }
    on fbsd4 {
        remote 192.168.1.109
    }
}
Notice both servers will each have a zvol at the same path, given by local. The remote line in each on section tells that server where to find its peer. Now, initialize HAST.
fbsd3 $ hastctl create mirror
Let's start HAST and promote fbsd3 to primary.
fbsd3 $ service hastd onestart
fbsd3 $ hastctl role primary mirror
fbsd3 $ hastctl status
Name Status Role Components
mirror degraded primary /dev/zvol/zroot/mirror 192.168.1.110
We're in a degraded state as the HAST data hasn't been replicated to the secondary server yet. That's expected at this stage. We'll now have a block device exposed at /dev/hast/mirror. Let's create another zpool there. It could be a good idea to compress and encrypt this zpool. If the block device is not available, give it a minute. It will be at /dev/hast/mirror shortly.
fbsd3 $ zpool create hast /dev/hast/mirror
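If you want the compression and encryption suggested above, here's a minimal sketch using OpenZFS's native properties (the passphrase prompt is my choice for illustration; see the zfsprops manual page for the full set of knobs):
fbsd3 $ zpool create -O compression=lz4 -O encryption=on \
    -O keyformat=passphrase -O keylocation=prompt \
    hast /dev/hast/mirror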
You'll now have a new directory, /hast/. Let's connect the secondary server. We'll begin by creating a zvol on fbsd4 of the exact same 7TB size. This is important. HAST will not function otherwise.
fbsd4 $ zfs create -V 7TB zroot/mirror
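A quick way to verify the two zvols really are the same size, run on each server:
$ zfs get -H -o value volsize zroot/mirror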
Each server in a HAST deployment must have the same HAST configuration. Create the following /etc/hast.conf on fbsd4 just as we did above for fbsd3.
# /etc/hast.conf
resource mirror {
    local /dev/zvol/zroot/mirror
    on fbsd3 {
        remote 192.168.1.110
    }
    on fbsd4 {
        remote 192.168.1.109
    }
}
Begin hastd and set the role on fbsd4.
fbsd4 $ hastctl create mirror
fbsd4 $ service hastd onestart
fbsd4 $ hastctl role secondary mirror
Give it a few moments. Soon, executing hastctl status will report complete on both servers.
fbsd3 $ hastctl status
Name Status Role Components
mirror complete primary /dev/zvol/zroot/mirror 192.168.1.110
fbsd4 $ hastctl status
Name Status Role Components
mirror complete secondary /dev/zvol/zroot/mirror 192.168.1.109
If your two hastd services can't reach the complete state, ensure port 8457 is open on both servers so each server can connect to the other. This port can be changed. See man hast.conf.
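If pf is your firewall, a sketch of a rule that would let the peer reach hastd on fbsd3 (addresses from the examples above; adapt to your own ruleset):
# /etc/pf.conf on fbsd3
pass in quick proto tcp from 192.168.1.110 to any port 8457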
If you're still having issues connecting your two hastd services, try running hastd in the foreground. See man hastd. You'll see logs of any errors that may be occurring.
$ # Rather than...
$ # service hastd onestart
$ # Try...
$ hastd -F -ddd
Cool. Now let’s shut down hastd on both servers and properly script this setup with rc.d.
fbsd3 $ zpool export -f hast
fbsd3 $ hastctl role init mirror
fbsd3 $ service hastd onestop
fbsd4 $ hastctl role init mirror
fbsd4 $ service hastd onestop
We’ll create two custom rc.d scripts.
- A script to start and stop the primary server.
- A script to start and stop the secondary server.
$ mkdir -p /usr/local/etc/rc.d/
Create the following two rc.d scripts on both servers in the directory above. Don't forget to chmod +x 'em.
#!/bin/sh

# /usr/local/etc/rc.d/zhast_primary
# Promote this server to HAST primary and import the shared zpool.

# PROVIDE: zhast_primary
# REQUIRE: LOGIN DAEMON NETWORKING zfs hastd
# KEYWORD: nojail shutdown

. /etc/rc.subr

name=zhast_primary
rcvar=zhast_primary_enable

start_cmd="zhast_primary_start"
stop_cmd="zhast_primary_stop"

zhast_primary_start()
{
    # Give HAST a moment to settle before importing the pool on top of it.
    hastctl role primary mirror ; sleep 5
    zpool import -a -f
}

zhast_primary_stop()
{
    # Export the pool before demoting; the device vanishes once the role changes.
    zpool export -f hast
    hastctl role init mirror
}

load_rc_config $name
run_rc_command "$1"
If you need to start jails, VMs, or anything else that depends on the data in HAST, this script is the place to do so. Within zhast_primary_start you would put another sleep after zpool import, then begin your services, such as service jail onestart yourjailname01. Any services started within zhast_primary_start should be stopped in reverse order within zhast_primary_stop before the zpool is exported.
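As a sketch, with the hypothetical jail yourjailname01 living on the hast zpool, the two functions might become:
zhast_primary_start()
{
    hastctl role primary mirror ; sleep 5
    zpool import -a -f ; sleep 5
    # Start services that depend on /hast/ only after the pool is imported.
    service jail onestart yourjailname01
}

zhast_primary_stop()
{
    # Stop dependent services first, in reverse order of starting.
    service jail onestop yourjailname01
    zpool export -f hast
    hastctl role init mirror
}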
#!/bin/sh

# /usr/local/etc/rc.d/zhast_secondary
# Set this server's HAST role; the secondary never mounts the shared pool.

# PROVIDE: zhast_secondary
# REQUIRE: LOGIN DAEMON NETWORKING zfs hastd
# KEYWORD: nojail shutdown

. /etc/rc.subr

name=zhast_secondary
rcvar=zhast_secondary_enable

start_cmd="zhast_secondary_start"
stop_cmd="zhast_secondary_stop"

zhast_secondary_start()
{
    hastctl role secondary mirror
}

zhast_secondary_stop()
{
    hastctl role init mirror
}

load_rc_config $name
run_rc_command "$1"
Let’s test ’em.
fbsd3 $ service hastd onestart
fbsd3 $ service zhast_primary onestart
fbsd4 $ service hastd onestart
fbsd4 $ service zhast_secondary onestart
Give it a few moments. Run hastctl status on both servers to confirm these scripts are working as intended.
fbsd3 $ hastctl status
Name Status Role Components
mirror complete primary /dev/zvol/zroot/mirror 192.168.1.110
fbsd4 $ hastctl status
Name Status Role Components
mirror complete secondary /dev/zvol/zroot/mirror 192.168.1.109
Great. Let's enable hastd and our rc.d scripts at boot.
fbsd3 $ service hastd enable
fbsd3 $ service zhast_primary enable
fbsd4 $ service hastd enable
fbsd4 $ service zhast_secondary enable
Confirm this is functioning as desired once more. Reboot the servers.
$ shutdown -r now
Please don't use reboot. rc.d scripts won't have their stop commands called. This is especially bad for our HAST configuration. You'll likely hang on both reboot and poweroff, requiring a manual, hard shutdown.
When you get back into the servers, hastctl status on each should report complete. We're done with the HAST setup at this point.
What about the backups and that fifth disk/USB device? We’ll create another zpool there, on each server.
$ zpool create backup /dev/da0
/dev/da0 is my USB device.
The backup zpool is also where I store my backup script. Let's create that. Once again, on each server.
$ mkdir -p /backup/bin/
#!/bin/sh
# /backup/bin/backup
# Snapshot the HAST backing zvol and replicate it to the backup pool.

# Name each backup run by the current UNIX timestamp.
SEC=$(date +'%s')
zfs snap "zroot/mirror@${SEC}"
# Parent dataset for this run; with -d, the received zvol lands at backup/$SEC/mirror.
zfs create "backup/${SEC}"
zfs send -cwv "zroot/mirror@${SEC}" | zfs receive -vdF "backup/${SEC}"
Don't forget to make /backup/bin/backup executable: chmod +x /backup/bin/backup.
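Run it once by hand to confirm the snapshot and send/receive succeed before automating it:
$ /backup/bin/backup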
Additionally, I like to copy all user rc.d scripts, /etc/hast.conf, and /etc/rc.conf to both the backup and hast zpools. I have a script that does this for me that I run whenever changes are made to any of the files listed.
Finally, make running the backup script a cron task.
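For example, a root crontab entry that runs the backup daily at 02:00 (the schedule is my assumption; pick what suits your data):
$ crontab -e
0 2 * * * /backup/bin/backup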
That's all there is to it. Well, you'll have to do more in practice, such as purging old backups and ZFS snapshots. Otherwise, the backup and zroot zpools would fill and your cron task would fail. I do that manually; a Nagios notification tells me when I should. I'll automate that someday.
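When a purge is due, the manual cleanup looks roughly like this (the 1700000000 timestamp is illustrative; zfs destroy is irreversible, so double-check names first):
$ zfs list -t snapshot zroot/mirror
$ zfs destroy zroot/mirror@1700000000
$ zfs destroy -r backup/1700000000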
How do I make use of the backup zpool if disaster strikes?
It depends on what failed. I'll assume the failure is:
- The primary server's RAID10 configuration is inoperable (the primary server cannot boot), and
- The secondary server is unreachable.
The high-level steps to get up and running on a third server would be (see the sketch after this list):
- Have FreeBSD 13 Release installed on the third server.
- Unplug the USB device from the primary server (where the backups are stored) and plug it into the third server.
- Import the backup zpool on the third server.
- Create a dummy /etc/hast.conf. The remote server specified within doesn't have to be reachable or even exist. The local device in /etc/hast.conf would be the zvol within the imported backup zpool.
- Begin the hastd service and promote the third server to primary.
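A sketch of those steps on a hypothetical third server named fbsd5, assuming the newest backup landed at backup/1700000000/mirror (the timestamp comes from the backup script; yours will differ):
fbsd5 $ zpool import backup
fbsd5 $ # Write the dummy /etc/hast.conf: reuse the resource above, but in an
fbsd5 $ # "on fbsd5" block point local at /dev/zvol/backup/1700000000/mirror.
fbsd5 $ service hastd onestart
fbsd5 $ hastctl role primary mirror
fbsd5 $ zpool import -f hast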
The third server will be in a degraded HAST state. That's fine. Your data is still there. If you have a fresh FreeBSD 13 Release server on standby at the same location as your primary server, this task should take a couple of minutes. Test your disaster recovery plans, please. It will give you confidence.
EOF