ZFS

nagios03: drive recovery

After zpool upgrade was blocked by gpart: /dev/da0p1: not enough space, I decided to create a new Azure VM, snapshot the now-faulty drive, attach it to the new host, and use zfs replication to copy the data to the new drive. Or something like that. The existing drive needs to be imported with a checkpoint rollback, then copied to a drive with different partition sizes. Here’s the new host. My first impression: why only 174K for the […]
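The replication step can be sketched roughly like this. The pool names (`faulty` for the pool imported from the old drive, `zroot` for the new one) and the snapshot name are placeholders I made up, not from the post, and the script only prints the plan rather than executing anything:

```shell
#!/bin/sh
# Sketch of the replication step. Pool names are placeholders:
#   faulty = pool imported from the snapshot of the old drive
#   zroot  = pool on the freshly partitioned new drive
# The plan is printed, not executed, so it can be reviewed first.
set -u
SRC_POOL=faulty
DST_POOL=zroot
SNAP=migrate

PLAN="zfs snapshot -r ${SRC_POOL}@${SNAP}
zfs send -R ${SRC_POOL}@${SNAP} | zfs receive -u -F ${DST_POOL}/copy"
printf '%s\n' "$PLAN"
```

The `-R` flag sends the whole dataset tree with its snapshots, and `-u` on the receive side keeps the copied filesystems unmounted until you are ready for them.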

zpool upgrade blocked by gpart: /dev/da0p1: not enough space

This seems inconvenient. Now I have to roll back to that checkpoint.

```
[20:52 nagios03 dvl ~] % sudo zpool checkpoint zroot
[20:52 nagios03 dvl ~] % sudo zpool upgrade zroot
This system supports ZFS pool feature flags.

Enabled the following features on 'zroot':
  redaction_list_spill
  raidz_expansion
  fast_dedup
  longname
  large_microzap
  block_cloning_endian
  physical_rewrite

Pool 'zroot' has the bootfs property set, you might need to update
the boot code. See gptzfsboot(8) and loader.efi(8) for details.
[20:52
```
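For reference, a pool checkpoint is rolled back at import time, not with a rollback subcommand. A minimal sketch, with the commands printed rather than run (on a root pool like zroot this has to happen from rescue media, since the pool must be exported first):

```shell
#!/bin/sh
# Sketch: a pool checkpoint is rewound at import time with
# --rewind-to-checkpoint; there is no "zpool rollback" subcommand.
# Printed rather than executed; a root pool must be exported
# from rescue media first.
set -u
POOL=zroot
PLAN="zpool export ${POOL}
zpool import --rewind-to-checkpoint ${POOL}"
printf '%s\n' "$PLAN"
```

Note that rewinding discards everything written to the pool since the checkpoint was taken, including the feature-flag upgrade itself.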

Upgrading a jail by replacing it

This new post replaces one on another blog: https://news.freshports.org/2025/09/13/updating-a-jail-by-replacing-it-1-0-1/

I’m placing it over here because:

- I’m changing how I do it
- I couldn’t find it when I went searching – I was only looking here

In this post:

- Old jail – FreeBSD 14.3
- Host has been updated to 15.0
- That old jail is in a jail already updated to 15.0
- mkjail-0.0.8

The problem: FreshPorts has three mail jails on each node: ingress01 –

I broke my FreeBSD MySQL jail; got it working again by using a snapshot

I was upgrading a host the other day, and the mysql02 jail (that’s the hostname) broke. I suspect I did something like upgrading the package while the server was in use. I noticed LibreNMS wasn’t running:

```
*** /var/log/librenms/librenms-service.log ***
Feb 10 18:37:30 besser librenms[23957]: MainThread(CRITICAL):ERROR: Could not connect to MySQL database! (2003, "Can't connect to MySQL server on 'mysql02.int.unixathome.org' ([Errno 61] Connection refused)")
Feb 10 18:37:30 besser librenms[23957]: (2003, "Can't connect to MySQL server
```
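The snapshot-based recovery can be sketched as below. The dataset, snapshot, and jail names are hypothetical (not from the post), and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: stop the jail, roll its dataset back to a last-known-good
# snapshot, restart. Dataset, snapshot, and jail names are hypothetical.
# Printed rather than executed.
set -u
JAIL=mysql02
DATASET=zroot/jails/${JAIL}
SNAP=autosnap_daily
PLAN="service jail stop ${JAIL}
zfs rollback -r ${DATASET}@${SNAP}
service jail start ${JAIL}"
printf '%s\n' "$PLAN"
```

The `-r` on `zfs rollback` destroys any snapshots newer than the one being rolled back to, so it is worth checking `zfs list -t snapshot` first.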

Copying everything off a zpool, destroying it, creating a new one, and copying everything back

I have a problem with a zpool. To be clear, this really isn’t a problem: I’m not aware of any I/O throttling, etc. It is just something I would like to change.

```
[18:26 r720-02 dvl ~] % zpool status data01
  pool: data01
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the configured block size,
```
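The evacuate / destroy / recreate / restore cycle the title describes can be sketched like this. Pool layout and device names are hypothetical, and the commands are printed rather than executed; `ashift=12` recreates the pool with a 4K-native block size, which is what the status message is complaining about:

```shell
#!/bin/sh
# Sketch of the off-and-back-again plan. Pool and device names are
# hypothetical; ashift=12 recreates the pool 4K-native.
# Printed rather than executed.
set -u
SRC=data01
TMP=scratch
PLAN="zfs snapshot -r ${SRC}@evacuate
zfs send -R ${SRC}@evacuate | zfs receive -u -F ${TMP}/evac
zpool destroy ${SRC}
zpool create -o ashift=12 ${SRC} mirror da0 da1
zfs send -R ${TMP}/evac@evacuate | zfs receive -u -F ${SRC}"
printf '%s\n' "$PLAN"
```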

data04: moving data01/bacula/volumes over

After finally creating data04, it’s time to relocate some data. In this post: FreeBSD 14.3

The source: the current zpool, which is straining, is data01. I plan to move data01/bacula/volumes over first, then data01/bacula/working. Let’s modify the source for better results on the move. What do we have now?

```
[20:22 r730-01 dvl /jails/bacula-sd-03/usr/local/bacula/volumes] % zfs get compression,recordsize data01/bacula/volumes
NAME                   PROPERTY     VALUE  SOURCE
data01/bacula/volumes  compression  on     inherited from data01
data01/bacula/volumes  recordsize   1M     local
```

Let’s
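Inspecting the source and preparing the destination before the move might look like this. The destination dataset and the property values (zstd, 1M) are illustrative assumptions, not what the post actually changes, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Sketch: inspect the source, then set properties on an assumed
# destination so copied data lands with the desired layout.
# zstd and 1M are illustrative choices, not taken from the post.
# Printed rather than executed.
set -u
SRC_DS=data01/bacula/volumes
DST_DS=data04/bacula/volumes
PLAN="zfs get compression,recordsize ${SRC_DS}
zfs set compression=zstd recordsize=1M ${DST_DS}"
printf '%s\n' "$PLAN"
```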

creating a raidz2 from 8x 4TB devices – here. we. go.

It’s time. Time to create the 8x 4T raidz2 zpool. These are the devices:

```
disks="nda0 nda1 nda2 nda3 nda5 nda8 da12 da13"
```

In this post: FreeBSD 14.3

Check for existing partitions. Using the list above:

```
[18:15 r730-01 dvl ~] % gpart show | grep -E "nda0|nda1|nda2|nda3|nda5|nda8|da12|da13"
=>        40  7814037088  nda5  GPT  (3.6T)
=>        34  7814037101  nda8  GPT  (3.6T)
```

Note I could not do this command because it fails at the first error:

```
[18:05
```
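A per-device loop avoids the stop-at-first-error problem, since each gpart invocation gets its own exit status. A sketch; it degrades gracefully when a device has no partition table or is missing:

```shell
#!/bin/sh
# Sketch: run `gpart show` once per device so a device with no
# partition table (non-zero exit status) doesn't abort the rest.
disks="nda0 nda1 nda2 nda3 nda5 nda8 da12 da13"
checked=0
for d in $disks; do
  if gpart show "$d" 2>/dev/null; then
    :  # device has a partition table; output already printed
  else
    echo "no partitions on $d (or device missing)"
  fi
  checked=$((checked + 1))
done
echo "checked $checked devices"
```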

Making room to create a raidz2 from 8x 4TB devices – here. we. go.

It’s time. Time to create the 8x 4T raidz2 zpool. That doesn’t happen in this post; hopefully it will in the next one. But first, I have to move some stuff around to free up all the 4TB devices. The host has 4 NVMe devices, and I moved in two more: 4TB SSDs. From /var/log/messages:

```
Dec  9 00:24:29 r730-01 kernel: mrsas0: System PD created target ID: 0xc
Dec  9 00:24:29 r730-01 kernel: mrsas0: System PD created
```
