ZFS

smartctl output TOSHIBA MD04ACA500 653AK2MXFS9A

It is time to replace /dev/da17 in knew. The replacement drive and the replacement procedure are both documented, and an upcoming blog post will cover the replacement itself. I’m seeing these messages:

Dec 12 09:23:03 knew smartd[2124]: Device: /dev/da17 [SAT], 40 Currently unreadable (pending) sectors
Dec 12 09:53:04 knew syslogd: last message repeated 1 times
Dec 12 10:23:03 knew syslogd: last message repeated 1 times
Dec 12 10:53:04 knew syslogd: last message […]
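Before pulling the drive, it is worth confirming the counts smartd is reporting. A minimal sketch, assuming the same /dev/da17 device (the grep pattern is only illustrative):

# Full SMART report for the complaining drive
smartctl -a /dev/da17

# Or just the sector-health attributes
smartctl -A /dev/da17 | egrep 'Pending|Reallocated'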

smartctl output TOSHIBA MD04ACA500 653AK2MXFS9A Read More »

Duplicating a zpool

I have these boot drives, and I want to copy them to another. They both happen to be zroot and bootable. I booted from mfsBSD and imported both the new and old zroot, using the pool ID shown by zpool import -N.

root@mfsbsd:~ # zpool import -N
   pool: data01
     id: 2668514456528412656
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	data01                        ONLINE
	  mirror-0                    ONLINE
	    gpt/S59VNS0N809087J_S00   ONLINE
	    gpt/S59VNJ0N631973D_S01   ONLINE
	  mirror-1
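For context, importing a pool by its numeric ID also lets you hand it a different name, which is how two pools that are both called zroot can coexist. A rough sketch with a hypothetical ID and names (not the data01 pool shown above), and a recursive send/receive as one way to do the copy:

# Import the old pool under a temporary name, without mounting
zpool import -N 1234567890123456 zroot_old

# Snapshot everything, then replicate it into the new pool
zfs snapshot -r zroot_old@copy
zfs send -R zroot_old@copy | zfs receive -Fdu zroot_new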

Duplicating a zpool Read More »

Creating a 4 drive ZFS zpool: stripe over mirrors

In this post, I am creating a 4 drive ZFS pool consisting of two mirrors. In this post: I’m booting from mfsBSD 12.1; ZFS.

Creating partitions: On FreeBSD, it is conventional to create partitions. It is not necessary, but I don’t use all the space, in case a future replacement drive is slightly smaller than these drives. NOTE: I am told, but have not confirmed, that this is no longer a problem in
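As a sketch of the end result, under hypothetical device names and labels (the post walks through the real ones):

# One GPT partition per disk, sized a little under the disk's
# capacity to leave slack for a smaller replacement drive
gpart create -s gpt da0
gpart add -t freebsd-zfs -a 1m -s 4900G -l disk00 da0
# ...repeat for da1 through da3...

# Two mirrors striped together into one pool
zpool create tank \
    mirror gpt/disk00 gpt/disk01 \
    mirror gpt/disk02 gpt/disk03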

Creating a 4 drive ZFS zpool: stripe over mirrors Read More »

Changing from one dataset to another within a FreeBSD [iocage] jail

ZFS has the ability to share itself within a jail. That gives the jail some autonomy, and I like that. I’ve written briefly about that, specifically for iocage. More recently, I started using a ZFS snapshot for cache clearing. The purpose of this post is to document the existing configuration of the production FreshPorts webserver and outline the plan for modifying it toward more zfs-snapshot-based cache clearing. The current configuration
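For context, delegating a dataset to an iocage jail looks roughly like this; the dataset and jail names below are hypothetical, not the production FreshPorts ones:

# Allow the dataset to be managed from inside a jail
zfs set jailed=on tank/data/webcache

# Tell iocage to attach it when the jail starts
iocage set jail_zfs=on webserver
iocage set jail_zfs_dataset=data/webcache webserver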

Changing from one dataset to another within a FreeBSD [iocage] jail Read More »

Renaming and replacing zroot filesystems using mfsBSD

I am replacing one zroot with another because of missing binaries and a suspect layout. The existing zroot is an old system predating current zroot layouts. This work is being performed on a test system (that link is not safe for young eyes). In this post: FreeBSD 12.1, mfsBSD 12.1.

The failed attempt: I tried once, and failed, with the old zroot_old and the new zroot_new mounted. zroot_new became unusable and I started again. This
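The rename itself happens at import time, since ZFS lets you import a pool under a new name. A minimal sketch (the altroot path is illustrative):

# Import without mounting, rooted under /mnt to keep it out of the way
zpool import -fN -R /mnt zroot_new

# Once the copy is good: export, then re-import under the real name
zpool export zroot_new
zpool import -N zroot_new zroot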

Renaming and replacing zroot filesystems using mfsBSD Read More »

ZFS: adding a drive back into the zpool

Today I was updating some servers. One of them was rebooted three times, and on the third reboot one of the drives went missing. This is how I tracked down which drive, and which slot in the server, and fixed it. I’m writing it mostly so I can remember how to flash the light on the drive. There is also an off-by-one issue to avoid. In this post: FreeBSD 12.1, sesutil, ZFS, Supermicro 846.
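The light-flashing part is sesutil, and the slot numbering is where the off-by-one trap lives. A short sketch with hypothetical device and pool names:

# Map enclosure slots to da device names
sesutil map

# Blink the locate LED on the suspect drive, then turn it off
sesutil locate da5 on
sesutil locate da5 off

# With the drive back in place, return it to the pool
zpool online tank da5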

ZFS: adding a drive back into the zpool Read More »

using zfs rollback for cache clearing

I’m in the final stages of the FreshPorts packages project. One of the last tasks is clearing the packages cache from disk when new package information is loaded into the database. See also: Clearing the packages cache; ZFS when you’re not doing ZFS. Several of the configuration items were learned from putting my poudriere instance into a jail. In this post: FreeBSD 12.1, py37-iocage-1.2_5. If I were to do this cache clearing
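The rollback trick itself is small enough to sketch here, with hypothetical dataset and snapshot names:

# Snapshot the cache dataset while it is empty
zfs snapshot tank/freshports/cache@empty

# Whenever fresh package data lands, wind the cache back to empty
zfs rollback tank/freshports/cache@empty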

using zfs rollback for cache clearing Read More »

bectl details

This is just a placeholder for me to find this later. I was cleaning up some old snapshots. I’m not sure I should have removed those snapshots, and I’m saving this here for the next time I use bectl.

[dan@slocum:~] $ grep -v autosnap ~/tmp/snapshots/snapshots
zroot/bootenv/default@2020-01-11-18:16:51
zroot/bootenv/default/tmp@2020-01-11-18:16:51
zroot/bootenv/default/usr@2020-01-11-18:16:51
zroot/bootenv/default/usr/local@2020-01-11-18:16:51
zroot/bootenv/default/usr/obj@2020-01-11-18:16:51
zroot/bootenv/default/usr/src@2020-01-11-18:16:51
zroot/bootenv/default/var@2020-01-11-18:16:51
zroot/bootenv/default/var/audit@2020-01-11-18:16:51
zroot/bootenv/default/var/empty@2020-01-11-18:16:51
zroot/bootenv/default/var/log@2020-01-11-18:16:51
zroot/bootenv/default/var/tmp@2020-01-11-18:16:51
[dan@slocum:~] $ sudo zfs destroy zroot/bootenv/default/var/tmp@2020-01-11-18:16:51
[dan@slocum:~] $ sudo zfs destroy zroot/bootenv/default/var/log@2020-01-11-18:16:51
[dan@slocum:~] $ sudo zfs destroy zroot/bootenv/default/var/empty@2020-01-11-18:16:51
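For next time, a gentler sequence might be to cross-check against bectl and dry-run the destroys first; a sketch, not what I actually ran:

# List the snapshots that belong to boot environments
bectl list -s

# Preview what a destroy would remove, without removing anything
zfs destroy -nv zroot/bootenv/default/var/tmp@2020-01-11-18:16:51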

bectl details Read More »

knew

This post has been replaced by a newer post. For future reference, this is the knew server… oh wait, I think it’s this server, which was mounted in the 4U chassis mentioned in this post. It runs a few jails, including Bacula regression testing services. It is now mounted in a SuperChassis 846E16-R1200B. This is the previous post for this system configuration. Photos of the assembly. File systems, partitions, zpools. Those

knew Read More »

zpool degraded – one drive missing from system

I rebooted knew yesterday for upgrades. When it came back, the main storage zpool was degraded.

Is the drive alive? The drive is not listed at all in /var/run/dmesg.boot. I keep a list of the expected drives in /etc/periodic.conf, for use by a Nagios check:

[dan@knew:~] $ /usr/sbin/sysrc -nf /etc/periodic.conf daily_status_smart_devices
/dev/da22 /dev/da21 /dev/da20 /dev/da19 /dev/da18 /dev/da17 /dev/da16 /dev/da15 /dev/da14 /dev/da13 /dev/da12 /dev/da11 /dev/da10 /dev/da9 /dev/da8 /dev/da7 /dev/da6 /dev/da5 /dev/da4 /dev/da3 /dev/da2 /dev/da1
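Comparing what the system actually sees against that expected list is a quick way to spot the gap. A sketch (the pool name is hypothetical):

# What CAM sees right now
camcontrol devlist

# Which da devices were probed at boot
grep '^da' /var/run/dmesg.boot

# Which device the pool itself reports as missing
zpool status -v tank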

zpool degraded – one drive missing from system Read More »
