Categories
FreeBSD Hardware Linux ZFS

Fixing ZREPL after restoring a filesystem

ZREPL is a great tool for creating snapshots of your ZFS filesystems and then copying them to a remote ZFS storage server. I have been using ZREPL for some time and I recently relied upon it to restore filesystems. However, the experience revealed some gotchas.

The first time I had to roll back a filesystem from a local snapshot, I found that ZREPL didn’t want to continue replicating that host afterwards. I got into a mess with ZREPL’s ZFS holds and bookmarks, and I ended up deleting all the snapshots on both sides of the replication, losing historical data that I wanted to keep.

A few weeks later I had to use a remote ZREPL snapshot to restore a laptop that needed a new hard drive. The restoration was successful, but getting ZREPL to continue replicating this machine was not. I manually trawled through the ZFS holds and bookmarks, clearing them as needed, and in the process lost a few more historical snapshots that I wanted to keep. Using ZREPL after recoveries was proving to be less than ideal, and I thought there must be a better way. I have since found that better way, and it is so simple that you will remember it.

Yesterday, I had to do another recovery, this time on an experimental server that I really should never have built using old SATA hard drives. The zpool had three drives: two mirrored and an online spare. When one of the mirrored drives failed, ZFS successfully resilvered to the spare. Before I had an opportunity to replace the failed drive, the other mirrored drive failed. The pool ran OK for a while on the single remaining drive, but when I shut the machine down to replace the original failed drive, the server would not boot.

As I had hourly ZREPL replicas stored remotely and three more old replacement drives available, I was confident that this problem would not be a disaster. I replaced all three drives, reinstalled the OS and used ZFS send/recv to restore the filesystems from my remote ZREPL replica. The experimental server was operational, but ZREPL was no longer creating remote replicas as it should.

The following describes the simple method that I now use to do a full zpool restore. It is equally relevant if you have to restore an individual filesystem that has been replicated remotely by ZREPL, whether by rolling back a local snapshot or by sending from a remote replica.

How To Restore Using ZREPL

Example Scenario

jewel.example.com
This is the host that needs a filesystem restoration. Hostname abbreviated to ‘jewel’.

zroot
This is the ZFS pool on ‘jewel’ that has a Root on ZFS filesystem containing the important data that must be restored.

safe.example.com
This is the ZREPL storage server. Hostname abbreviated to ‘safe’.

zdata
This is the ZFS pool on ‘safe’ that contains the ZFS snapshots that were replicated remotely by ZREPL.

Tip 1:

Always have a bootable recovery image available. It can be a USB datastick, a live CD or a PXE boot image. It should have the same OS (and version) as the system to be recovered, and it must contain all of the necessary ZFS components so that you can import your ZFS pool without mounting the filesystems. This enables you to restore a Root on ZFS system. Remember to update this recovery boot image whenever you upgrade the OS or ZFS features.
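
For example, a recovery boot of a Root on ZFS system might import the pool without mounting anything, using an alternate root (a sketch; the pool name zroot is an assumption):

root@jewel$ zpool import -f -N -R /mnt zroot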

Tip 2:

Use a script that is run from Cron or Periodic to regularly create a text file report about your ZFS filesystems and their mountpoints. If you have access to a private git repository, have the script ‘add, commit and push’ the report to the repository. If you don’t have a private git repo, have the machine email you the reports instead. You will be thankful that you have the mountpoint information when you most need it!
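
A minimal sketch of such a script (the repository path and report layout are assumptions; adjust to suit):

#!/bin/sh
# Hypothetical ZFS report script: record the pool layout, datasets and
# mountpoints, then push the report to a private git repository.
REPO=/root/zfs-reports
HOST=$(hostname -s)
{
  zpool status
  zfs list -o name,used,avail,mountpoint
} > "${REPO}/${HOST}.txt"
cd "${REPO}" && git add "${HOST}.txt" \
  && git commit -m "ZFS report for ${HOST}" \
  && git push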

After a fresh operating system install and the recreation of the required zpool(s) (you kept that information safe in your private git repo), boot the host using the recovery disk image.

Establish that you can ssh as root between the remote ZREPL storage server and the host to be recovered using passwordless key-based login. Prove that this works in both directions before doing anything else. When preparing your recovery image, consider using a hardware token authenticator such as a YubiKey to store the private keys; this is safer than leaving private keys for root on a bootable USB stick!
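
A quick way to prove both directions (hostnames as in the example scenario below):

root@safe$ ssh root@jewel.example.com hostname

root@jewel$ ssh root@safe.example.com hostname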

Login as root on jewel.example.com and import the empty ZFS pool that you recently created as part of the OS install. The -f option is required to import a pool that was previously mounted by another system.

root@jewel$ zpool import -f zroot

Login as root on safe.example.com and identify the snapshot(s) that you wish to restore:

root@safe$ zfs list -t snapshot -H -o name \
| grep jewel.example.com \
| grep 20260212_11

-t snapshot : Lists only snapshots
-H : Strips the headers from the output
-o name : Includes only the snapshot names in the output
grep jewel.example.com : Reduces the output to reference only the machine of interest
grep 20260212_11 : Reduces the output to snapshots taken on 12 February 2026 in the hour of 11am

Select the latest snapshot that you know contains everything that you want to restore; this will be one that completed successfully before the disaster. Use ZFS Send/Recv over SSH to restore the filesystem.

root@safe$ zfs send -R zdata/zrepl/jewel.example.com/zroot@zrepl_20260212_1132_000 \
| pv \
| ssh root@jewel.example.com "zfs recv -Fu zroot"

Piping through ‘pv’ is optional. It shows the progress of the data transfer; otherwise you just have to wait for the prompt to return on your terminal to know that it has finished.

If you cannot or do not want to perform a recursive restore as shown above, omit the -R option and send the filesystems individually.
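
An individual send looks something like this (a sketch; the child dataset name ROOT/default is an assumption):

root@safe$ zfs send zdata/zrepl/jewel.example.com/zroot/ROOT/default@zrepl_20260212_1132_000 \
| ssh root@jewel.example.com "zfs recv -Fu zroot/ROOT/default"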

When the filesystem restoration has completed, shutdown the restored host and remove the USB live image recovery datastick.

root@jewel$ poweroff

Boot the restored host and check that all is OK and as expected.

Temporarily stop the ZREPL service at the end that drives the replica transfer: if ‘jewel’ is configured to push replicas to ‘safe’, stop ZREPL on jewel; otherwise, stop ZREPL on safe.
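
On FreeBSD, assuming a push setup on jewel, that is typically:

root@jewel$ service zrepl stop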

The simple action that prevents ZREPL problems

Login as root on safe.example.com and rename the dataset holding the replicas for jewel:

root@safe$ zfs rename zdata/zrepl/jewel.example.com/zroot zdata/zrepl/jewel-20260212/zroot

All of that dataset’s snapshots will automatically be renamed to correspond with the new name of their parent.

Restart ZREPL and manually wake up a replication. The process will automatically create a new dataset called zdata/zrepl/jewel.example.com/zroot. This allows ZREPL to continue operating as normal, effectively starting afresh.
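
With a push setup, the restart and wakeup might look like this (the job name jewel_to_safe is an assumption; check your zrepl.yml):

root@jewel$ service zrepl start

root@jewel$ zrepl signal wakeup jewel_to_safe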

All of your historical snapshots for jewel will now be in zdata/zrepl/jewel-20260212/zroot and its descendants. These will not be pruned automatically. You can still use them for restores, but they are now outside of ZREPL.

If you want to recover disk space, delete the snapshots that you no longer need from zdata/zrepl/jewel-20260212/zroot.
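
The % range syntax destroys a span of snapshots, and a dry run first is a good habit (the snapshot names are assumptions):

root@safe$ zfs destroy -nv zdata/zrepl/jewel-20260212/zroot@zrepl_20260101_0000_000%zrepl_20260201_0000_000

Remove the -n option when you are happy with the list of snapshots that will be destroyed.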

Re-enable and restart ZREPL and you should find that replication is back to normal.

Categories
OPNsense

apcupsd not starting on OPNsense

From the firewall console, open a shell (Option 8), then enter the following commands:

rm -r /var/spool/lock
mkdir /var/spool/lock


Restart the apcupsd service or reboot the firewall.

Categories
Linux Void Linux

Void Linux: UK Keyboard

To configure a Void Linux host located in the United Kingdom to make correct use of a British (UK) keyboard, do the following:

Edit /etc/rc.conf and set the following values:

KEYMAP="uk"
TIMEZONE="Europe/London"

Then save your changes.

Check which locales are installed:

$ locale -a

For the UK, you will need en_GB.utf8 and en_GB.iso88591.

These are likely to be commented out in the default install. Uncomment these in the locales configuration…

$ sudo nano /etc/default/libc-locales
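
Alternatively, a one-liner such as this sed command (a sketch; check the file’s format first) uncomments both entries:

$ sudo sed -i 's/^#\(en_GB\)/\1/' /etc/default/libc-locales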

When you have saved your changes, force regeneration of the required locales…

$ sudo xbps-reconfigure -f glibc-locales

You will get confirmation on screen. Log out, then log in again and test your £ sign on the keyboard.

Categories
FreeBSD Linux Windows

SSH: Copy keys without ssh-copy-id

I should really remember this as I have to use it often. Posting it here in the hope that it will stick eventually. I guess using it all the time instead of ssh-copy-id would do the trick.

$ cat ~/.ssh/id_ed25519.pub | ssh vincent@vlara.co.uk "mkdir -p ~/.ssh && \
cat >> ~/.ssh/authorized_keys"
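
If key login still fails afterwards, it is often a permissions problem on the remote side; tightening them usually helps:

$ ssh vincent@vlara.co.uk "chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"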

Categories
FreeBSD Linux

Encrypt 7-Zip Archive List

I noticed a change in the default behaviour of 7-Zip 25.01 when using password protection, though I could be wrong. It now leaves the content list unencrypted and only encrypts the files in the archive.

Add the following option to your 7-Zip command line sequence to encrypt the archive list when using the -p option.

-mhe=on
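
For example (the archive name and file list are assumptions):

$ 7z a -p -mhe=on backup.7z ~/Documents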

Categories
FreeBSD iocage

Migrate a thick jail to another host

Migrating a jail

Thick iocage jails can be safely migrated between FreeBSD hosts using ZFS Send/Recv over SSH.

In the following example:

  • src$ is the original host
  • dst$ is the destination host
  • ‘myapp’ is the name of my jail to migrate
  • Everything is done with root privileges

Stopping processes

Stop the jail and any ZFS replication processes.

src$ iocage stop myapp

src$ service zrepl stop

dst$ service zrepl stop

Create a snapshot

src$ zfs snapshot -r zroot/iocage/jails/myapp@migration

Send the snapshot

src$ zfs send -R zroot/iocage/jails/myapp@migration | ssh root@dst 'zfs recv -F -v zroot/iocage/jails/myapp'

Testing

Check that the jail exists and that it can be started.

dst$ iocage list

dst$ iocage start myapp
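
Once the jail is confirmed working on the destination, the migration snapshots can be cleaned up on both hosts (a sketch):

src$ zfs destroy -r zroot/iocage/jails/myapp@migration

dst$ zfs destroy -r zroot/iocage/jails/myapp@migration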

Categories
FreeBSD iocage

Convert a ‘thin’ iocage jail to ‘thick’

I have been using iocage for a number of years as the tool to manage my FreeBSD jails. There are two types of iocage jail installation that I use: thin jails (clone jails) and thick jails.

To create your first jail, you have to fetch a FreeBSD operating system release installation using ‘iocage fetch’. This is used in both types of jails and is separate from the host’s own operating system.

Thin jails share the ZFS dataset that contains the working copy of a fetched FreeBSD release. They save a lot of space, and upgrading the shared FreeBSD release upgrades them all at once.

Thick jails have their own independent copy of the chosen fetched FreeBSD. They use more disk space and take more time to upgrade as each must be done individually.

The extra time and disk space used by thick jails are worth it: being able to move jails between hosts and perform backups and restores using ZFS Send and Receive is a significant advantage.

The requirement

I needed to migrate a thin jail from one host to another, but at the destination I wanted it to run as a thick jail instead, as I have found that thick jails suit me better when it comes to upgrades. A thin jail cannot be migrated using ZFS Send/Recv as-is, because its dataset is a clone of the shared release dataset.
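
You can confirm this: a thin jail’s root dataset reports a non-empty origin, showing that it is a clone (the dataset name is an assumption):

$ zfs get origin zroot/iocage/jails/myapp/root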

How to do it

Create a new thick jail using iocage called ‘thickjail’.

$ iocage create -T -r 14.3-RELEASE -n thickjail dhcp=on

Check that the new thick jail works by starting and stopping it.

$ iocage start thickjail

$ iocage stop thickjail

$ iocage stop myapp

Copy the files from the thin jail to the thick jail using rsync. Make sure both jails have been stopped beforehand!

$ rsync -a /zroot/iocage/jails/myapp/ /zroot/iocage/jails/thickjail/

When this completes, you will notice two jails with the same name when you use ‘iocage list’. However, if you look at the ZFS datasets, they will be correctly named.

To correct the listing in iocage, do the following:

$ iocage rename myapp thinjail

$ iocage rename thickjail myapp

$ iocage start myapp

Test that the new thick jail ‘myapp’ is working OK before deleting the old thin jail.

$ iocage list

$ iocage destroy thinjail

Categories
WordPress

WordPress: Find out what files are using your storage

I recently hit the quota limit on a WordPress site. I found a plug-in that works similarly to the GNOME Disk Usage Analyzer (baobab) that I have been using for many years.

The WordPress plug-in is called ‘Disk Usage Sunburst’; it makes it very easy to quickly find large files within your WordPress system.

Categories
FreeBSD

PF: Read the log

I often forget this command to examine what has been logged by the PF filter.

$ tcpdump -n -e -ttt -r /var/log/pflog

To look at what is being filtered in real-time use the following command instead:

$ tcpdump -n -e -ttt -i pflog0

Categories
Hardware

Replacement fans for GSM7224 switches

I have three old Netgear GSM7224 Ethernet switches that I use from time to time in my lab network. These switches run at 1Gbps on each port, which is still plenty fast enough for my needs.

I purchased them second-hand on eBay some years ago. Soon after my acquisition, I replaced the 40mm fans in two of them and upgraded the firmware. Now all three of my GSM7224 switches needed new fans: one had stopped altogether, while the others had become very noisy.

Having done this repair before, this time around I carefully selected replacement fans with the correct 2-pin plug already installed; last time I had to cut the plugs off the old fans and solder them onto the replacements. Lesson learned. With the new fans I could have glued M3 nuts into the screw receptacles, but for ease of installation, and to avoid the glue, I prefer brass inserts like the OEM fans had.

Brass inserts for replacement fans

Six economically priced 5Vdc 40x40x10mm fans were purchased on eBay for £2.85 each, plus a bag of 50 M3 brass inserts for £2.19. The total cost of this repair was £19.29 in October 2024.

So why did I buy no-name fans? A single Noctua fan costs £20; six of them would have set me back £120. I couldn’t justify spending an extra £100 on repairing these old switches. I would have to repair them five more times with cheap fans before breaking even on the cost of Noctua fans. If the switches were going to be used in a 24×7 production network, then better quality fans would make more sense. These switches are for a development lab and are only powered on when needed.

Modification

I compared the fans on the switch that still had its original fans to those that I had repaired previously. All of the fans were installed the same way, sucking air out of the case. I have always been doubtful of the manufacturer’s choice to install them this way, so I decided to install the replacement fans blowing air onto the heatsinks instead. I am hoping that this will keep the case temperatures lower and make the fans last longer.

The heatsinks are in the air flow of the fans

The new fans have all been installed and are still quiet. Only time will tell if I should have bought Noctua fans. So far, I am happy with my cheap repair.

Replacement fans installed

A Future Project

The next project for these switches is to replace the awful configuration web app and the equally awful text-mode configuration. The older FSM726 switch had an easy-to-use terminal interface for setup. I still have a couple of them, and it is far quicker to set one of those up over a serial terminal.

I am considering setting up a bastion host with SSH2 access over Ethernet and RS232C serial to the switch console port. I could create my own text mode interface that mimics the FSM726 which runs the appropriate sequence of commands on the switch to make the required changes. Alternatively, this could be a web app or an Ansible module. My Ansible controller could become the bastion host with the addition of a few more serial ports.

Update September 2025

All of my Netgear 7224 switches were scrapped this summer and replaced with Netgear GS108Tv2 switches.

The 7224 switches had been set up on a bench for a project one evening, but I couldn’t get back to completing it for some time. What I had not realised was that they were in direct sunlight on the hottest days of the year, in a room with the windows and doors shut.

When I got back to the project I was having a lot of weird problems in data transmission, including using the serial interfaces. Slipping the cases off revealed that almost all of the electrolytic capacitors were showing signs of failure with the tops bulging. As newer switches use a lot less power, I decided to buy some used GS108Tv2 switches to replace them instead of replacing the capacitors in the 7224s.

I stripped out the new fans, separated the PCBs from the cases and dropped the parts off at our local recycling. Sometimes you just have to let go of old kit.
