ZFS

Renaming and replacing zroot filesystems using mfsBSD

I am replacing one zroot with another because of missing binaries and a suspect layout. The existing zroot is an old system predating current zroot layouts. This work is being performed on a test system (that link is not safe for young eyes). In this post: FreeBSD 12.1, mfsBSD 12.1. The failed attempt: I tried once, and failed, with old zroot_old and new zroot_new mounted. zroot_new became unusable and I started again. This […]
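
The core of a rename like this is ZFS's export/import rename, performed from mfsBSD so that neither pool is the running root. A minimal sketch, assuming the post's pool names zroot_old and zroot_new; the altroot of /mnt is my assumption, not from the post:

# Booted from mfsBSD, neither pool is mounted as the live root.
zpool import -f -R /mnt zroot zroot_old    # import the old pool under a new name
zpool import -f -N zroot_new               # import the replacement without mounting
# ... copy/verify data, fix bootfs, then give the new pool the canonical name:
zpool export zroot_new
zpool import -f -R /mnt zroot_new zroot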

ZFS: adding a drive back into the zpool

Today I was updating some servers. One of them was rebooted three times. On the third reboot, one of the drives went missing. This is how I tracked down which drive it was, and which slot in the server, and fixed it. I'm writing it mostly so I can remember how to flash the light on the drive. There is also an off-by-one issue to avoid. In this post: FreeBSD 12.1, sesutil, ZFS, Supermicro 846
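
For the record, the light-flashing is done with sesutil(8). A sketch of the sequence, with da14 and tank standing in for whichever device and pool are involved (both names are placeholders):

zpool status -v tank       # identify the UNAVAIL/REMOVED member
sesutil map                # map device names to enclosure slots
sesutil locate da14 on     # blink the LED on da14's bay
# Beware: the enclosure may number slots from 0 while the chassis bay labels
# start at 1 -- likely the off-by-one issue the post mentions.
sesutil locate da14 off
zpool online tank da14     # bring the drive back; resilvering starts automatically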

using zfs rollback for cache clearing

I’m in the final stages of the FreshPorts packages project. One of the last tasks is clearing the packages cache from disk when new package information is loaded into the database. See also: Clearing the packages cache, and ZFS when you’re not doing ZFS. Several of the configuration items were learned from putting my poudriere instance into a jail. In this post: FreeBSD 12.1, py37-iocage-1.2_5. If I were to do this cache clearing
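
The rollback trick itself is simple to sketch: snapshot the cache dataset while it is empty, then roll back to that snapshot instead of deleting files. Dataset and snapshot names below are hypothetical:

zfs snapshot tank/cache/packages@empty   # taken once, while the cache is empty
# ... the cache fills up as pages are served ...
zfs rollback tank/cache/packages@empty   # discard everything written since @empty
# (add -r if snapshots newer than @empty exist; rollback refuses otherwise)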

bectl details

This is just a placeholder for me to find this later. I was cleaning up some old snapshots. I’m not sure I should have removed those snapshots, and I’m saving this here for the next time I use bectl.

[dan@slocum:~] $ grep -v autosnap ~/tmp/snapshots/snapshots
zroot/bootenv/default@2020-01-11-18:16:51
zroot/bootenv/default/tmp@2020-01-11-18:16:51
zroot/bootenv/default/usr@2020-01-11-18:16:51
zroot/bootenv/default/usr/local@2020-01-11-18:16:51
zroot/bootenv/default/usr/obj@2020-01-11-18:16:51
zroot/bootenv/default/usr/src@2020-01-11-18:16:51
zroot/bootenv/default/var@2020-01-11-18:16:51
zroot/bootenv/default/var/audit@2020-01-11-18:16:51
zroot/bootenv/default/var/empty@2020-01-11-18:16:51
zroot/bootenv/default/var/log@2020-01-11-18:16:51
zroot/bootenv/default/var/tmp@2020-01-11-18:16:51
[dan@slocum:~] $ sudo zfs destroy zroot/bootenv/default/var/tmp@2020-01-11-18:16:51
[dan@slocum:~] $ sudo zfs destroy zroot/bootenv/default/var/log@2020-01-11-18:16:51
[dan@slocum:~] $ sudo zfs destroy zroot/bootenv/default/var/empty@2020-01-11-18:16:51
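
Not from the post, but a step worth taking before running those destroys: list the boot environments together with their snapshots, to see whether any BE still depends on what is about to be removed.

bectl list -s    # list boot environments and their snapshots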

knew

This post has been replaced by a newer post. For future reference, this is the knew server … oh wait, I think it’s this server, which was mounted in the 4U chassis mentioned in this post. It runs a few jails, including Bacula regression testing services. It is now mounted in a SuperChassis 846E16-R1200B. This is the previous post for this system configuration. Photos of the assembly. File systems. Partitions. zpools. Those

zpool degraded – one drive missing from system

I rebooted knew yesterday for upgrades. When it came back, the main storage zpool was degraded. Is the drive alive? The drive is not listed at all in /var/run/dmesg.boot. I keep a list of the expected drives in /etc/periodic.conf, for use by a Nagios check:

[dan@knew:~] $ /usr/sbin/sysrc -nf /etc/periodic.conf daily_status_smart_devices
/dev/da22 /dev/da21 /dev/da20 /dev/da19 /dev/da18 /dev/da17 /dev/da16 /dev/da15 /dev/da14 /dev/da13 /dev/da12 /dev/da11 /dev/da10 /dev/da9 /dev/da8 /dev/da7 /dev/da6 /dev/da5 /dev/da4 /dev/da3 /dev/da2 /dev/da1
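
With the expected list kept in periodic.conf, spotting a vanished drive can be scripted. A small illustrative check (not the actual Nagios plugin from the post):

#!/bin/sh
# Report any expected drive that the kernel does not currently see.
for drive in $(/usr/sbin/sysrc -nf /etc/periodic.conf daily_status_smart_devices); do
    [ -c "$drive" ] || echo "missing: $drive"
done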

Creating a drive-bay map

When the time comes to replace a drive, it is very nice to know which drive is missing. I created this drive-bay map to help me figure out which drive disappeared. I created it using a combination of: zpool status, sesutil map, lsblk, camcontrol, and /var/run/dmesg.boot. I have not included /var/run/dmesg.boot here. If you click on this image, you’ll see a larger version which is easier to read. I went through sesutil
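
Building such a map amounts to joining those outputs on the device name and serial number. The commands below are one way to gather the pieces (the pool name storage and device da14 are examples, not from the post):

zpool status storage    # pool layout, by device name
sesutil map             # enclosure slot -> device name
camcontrol devlist      # device name -> controller bus/target and model
geom disk list da14     # per-drive details, including the serial number (ident)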

zfstools & sanoid – snapshots on the local host

I’m going to implement zfstools on all my ZFS-based hosts today. I first started using this tool in July 2019. In this post: FreeBSD 12.0 and 12.1, zfstools 0.3.6_1, sanoid-2.0.1_2. Local snapshots only: I will be using zfstools only for creating local snapshots. If I wanted snapshots for sending to other hosts, I would probably use a tool such as sysutils/sanoid, which is a policy-driven solution for snapshot management and replication. For now, there
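
For reference, zfstools is driven by cron plus a per-dataset ZFS property, whereas sanoid reads a policy file. A sketch of the zfstools side, based on my reading of its documentation; the intervals and keep-counts below are illustrative:

zfs set com.sun:auto-snapshot=true zroot   # opt a pool or dataset into snapshots
# /etc/crontab entries driving the snapshots:
15,30,45 * * * *  root  /usr/local/sbin/zfs-auto-snapshot frequent  4
0        * * * *  root  /usr/local/sbin/zfs-auto-snapshot hourly   24
7        0 * * *  root  /usr/local/sbin/zfs-auto-snapshot daily     7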

zfstools & sanoid – snapshots on the local host Read More »

How allow.mount.zfs affects mountpoints for ZFS

I noticed this the other day, and thought it was interesting. When the jailed property is set on a ZFS fileset, it affects the mountpoint within the jail. If your jail uses allow.mount.zfs (known as allow_mount_zfs when using iocage), the mountpoints become relative to the jail. For example, in my poudriere jail, this is what I see for the main dataset:

[dan@pkg01:~] $ zfs list tank_fast/poudriere
NAME                 USED   AVAIL  REFER  MOUNTPOINT
tank_fast/poudriere  51.1G  452G
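
The mechanics behind that behaviour, sketched from the host side; the jail and dataset names follow the post’s pkg01 and tank_fast/poudriere example:

# On the host: mark the dataset as jailed and attach it to the jail.
zfs set jailed=on tank_fast/poudriere
zfs jail pkg01 tank_fast/poudriere
# Inside the jail (allow.mount.zfs), the dataset can now be mounted, and its
# mountpoint is interpreted relative to the jail's root, not the host's.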

Migrating drives and the zpool from one host to another.

Today is the day. Today I move a zpool from an R710 into an R720. The goal: all services on that zpool start running on the new host. Fortunately, that zpool is dedicated to jails, more or less. I have done some planning for this, including moving a poudriere on the R710 into a jail. Now it is almost noon on Saturday, and I am sitting in the basement (just outside the server room),
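
The core of a drive migration like this is a clean export on the old host and an import on the new one. A minimal sketch (the pool name tank is a placeholder):

# On the R710: stop the jails using the pool, then export it cleanly.
zpool export tank
# Move the drives to the R720, then import; add -f only if the export was skipped.
zpool import tank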
