As mentioned in Doing a bit of stress work on a new HDD, I have a failing 5TB drive which is going to be replaced by a 4TB drive. Only about 1.45TB is used, so there's plenty of room to grow.
If you take one thing from this post: don't downsize zpools like this. I would have had much less work, and less opportunity for error, if I had returned that 4TB drive and waited for a 5TB drive to arrive. Don't do what I did.
Earlier today, I visited the Bridgewater location of 365DataCenter.com, took out the old drive, and added in the new drive. As mentioned in that last post, I still have some work to do. I have to export the old zpool (to take it offline), then adjust all the mountpoints for the new zpool, then run some tests and see if I got everything right.
This post covers those changes and how they went.
In this post:
- FreeBSD 14.2
Creating a new user
I need to create a new user, one with a home directory outside /home – that’s because the zpool in question contains that directory. If that filesystem is unavailable, I still want to be able to log in and fix it.
[23:17 x8dtu dvl ~] % zfs list -r data/home main_tank/home
NAME             USED  AVAIL  REFER  MOUNTPOINT
data/home       86.0G  2.43T  86.0G  /data/home
main_tank/home   266G  3.15T  86.0G  /usr/home
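As an aside, the same account could be created non-interactively with pw(8). A minimal sketch, assuming these flags cover everything the interactive session below asks for:

# hypothetical non-interactive equivalent of the adduser session below
sudo pw useradd dvl-alt \
    -c "Dan Langille - alt account" \
    -d /usr/dvl-alt -m \
    -G wheel \
    -s /usr/local/bin/zsh \
    -w no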
Here we go:
[23:20 x8dtu dvl ~] % sudo adduser
Username: dvl-alt
Full name: Dan Langille - alt account
Uid (Leave empty for default): 
Login group [dvl-alt]: 
Login group is dvl-alt. Invite dvl-alt into other groups? []: wheel
Login class [default]: 
Shell (sh csh tcsh bash rbash git-shell zsh rzsh nologin) [sh]: zsh
Home directory [/home/dvl-alt]: /usr/dvl-alt
Home directory permissions (Leave empty for default): 
Use password-based authentication? [yes]: no
Lock out the account after creation? [no]: no
Username : dvl-alt
Password : 
Full Name : Dan Langille - alt account
Uid : 1008
Class : 
Groups : dvl-alt wheel
Home : /usr/dvl-alt
Home Mode : 
Shell : /usr/local/bin/zsh
Locked : no
OK? (yes/no) [yes]: ydx
OK? (yes/no) [yes]: yes
adduser: INFO: Successfully added (dvl-alt) to the user database.
Add another user? (yes/no) [no]: no
Goodbye!
[23:22 x8dtu dvl ~] % 
[23:23 x8dtu dvl ~] % ls -dl /usr/dvl-alt
drwxr-xr-x  2 dvl-alt dvl-alt 9 2025.07.12 23:22 /usr/dvl-alt/
Next, I'll copy over my existing .ssh directory and see how that goes. Notice that I fix the ownership after copying.
[23:23 x8dtu dvl ~] % sudo cp -r ~dvl/.ssh ~dvl-alt
[23:24 x8dtu dvl ~] % ls -la ~dvl-alt
total 53
drwxr-xr-x   3 dvl-alt dvl-alt   10 2025.07.12 23:24 ./
drwxr-xr-x  17 root    wheel     17 2025.07.12 23:22 ../
-rw-r--r--   1 dvl-alt dvl-alt  950 2025.07.12 23:22 .cshrc
-rw-r--r--   1 dvl-alt dvl-alt  311 2025.07.12 23:22 .login
-rw-r--r--   1 dvl-alt dvl-alt   79 2025.07.12 23:22 .login_conf
-rw-------   1 dvl-alt dvl-alt  289 2025.07.12 23:22 .mail_aliases
-rw-r--r--   1 dvl-alt dvl-alt  255 2025.07.12 23:22 .mailrc
-rw-r--r--   1 dvl-alt dvl-alt  966 2025.07.12 23:22 .profile
-rw-r--r--   1 dvl-alt dvl-alt 1003 2025.07.12 23:22 .shrc
drwx------   2 root    dvl-alt   11 2025.07.12 23:24 .ssh/
[23:24 x8dtu dvl ~] % sudo chown -R dvl-alt:dvl-alt ~dvl-alt/.ssh
[23:24 x8dtu dvl ~] % ls -la ~dvl-alt
total 53
drwxr-xr-x   3 dvl-alt dvl-alt   10 2025.07.12 23:24 ./
drwxr-xr-x  17 root    wheel     17 2025.07.12 23:22 ../
-rw-r--r--   1 dvl-alt dvl-alt  950 2025.07.12 23:22 .cshrc
-rw-r--r--   1 dvl-alt dvl-alt  311 2025.07.12 23:22 .login
-rw-r--r--   1 dvl-alt dvl-alt   79 2025.07.12 23:22 .login_conf
-rw-------   1 dvl-alt dvl-alt  289 2025.07.12 23:22 .mail_aliases
-rw-r--r--   1 dvl-alt dvl-alt  255 2025.07.12 23:22 .mailrc
-rw-r--r--   1 dvl-alt dvl-alt  966 2025.07.12 23:22 .profile
-rw-r--r--   1 dvl-alt dvl-alt 1003 2025.07.12 23:22 .shrc
drwx------   2 dvl-alt dvl-alt   11 2025.07.12 23:24 .ssh/
[23:24 x8dtu dvl ~] %
First ssh in
This went well.
[19:26 air01 dan ~] % ssh dvl-alt@x8dtu
This is the Z Shell configuration function for new users,
zsh-newuser-install.
You are seeing this message because you have no zsh startup files
(the files .zshenv, .zprofile, .zshrc, .zlogin in the directory
~).  This function can help you with a few settings that should
make your use of the shell easier.

You can:

(q)  Quit and do nothing.  The function will be run again next time.

(0)  Exit, creating the file ~/.zshrc containing just a comment.
     That will prevent this function being run again.

(1)  Continue to the main menu.

--- Type one of the keys in parentheses --- q
x8dtu%
x8dtu%
OK, what I'm going to do next is copy over the entire home directory, just to get all the other stuff I want. I could pick and choose; I'd rather just take everything. It's only 6GB.
[23:27 x8dtu dvl ~] % sudo cp -rp . ~dvl-alt/
cp: ./.bash_profile: No such file or directory
[23:28 x8dtu dvl ~] %
What's that? Oh yeah, I renamed my accounts a long time ago. It seems this symlink was missed.
[23:28 x8dtu dvl ~] % ls -l ~/.bash_profile
lrwxr-xr-x  1 dvl dvl 38 2025.02.21 12:56 /usr/home/dvl/.bash_profile -> /usr/home/dan/src/scripts/bash_profile
Let’s fix that for dvl first.
[23:30 x8dtu dvl ~] % rm .bash_profile
[23:30 x8dtu dvl ~] % ln -s ~/src/scripts/bash_profile .bash_profile
[23:30 x8dtu dvl ~] % 
Connection to x8dtu.unixathome.org closed.
[19:31 air01 dan ~] % x8dtu
Last login: Sat Jul 12 23:27:51 2025 from 108.52.204.170
[23:31 x8dtu dvl ~] % ls -l ~/.bash_profile
lrwxr-xr-x  1 dvl dvl 38 2025.07.12 23:30 /usr/home/dvl/.bash_profile -> /usr/home/dvl/src/scripts/bash_profile
[23:31 x8dtu dvl ~] %
Trying ssh again:
[19:27 air01 dan ~] % ssh dvl-alt@x8dtu
Last login: Sat Jul 12 23:26:13 2025 from 108.52.204.170
[23:33 x8dtu dvl-alt ~] %
That’s better.
And fix this up here too:
[23:33 x8dtu dvl-alt ~] % ln -s ~/src/scripts/bash_profile .bash_profile
[23:34 x8dtu dvl-alt ~] % ls -l ~/.bash_profile
lrwxr-xr-x  1 dvl-alt dvl-alt 37 2025.07.12 23:34 /usr/dvl-alt/.bash_profile -> /usr/dvl-alt/src/scripts/bash_profile
[23:34 x8dtu dvl-alt ~] %
OK, I think this user is good to go. I'm not sure why, or even if, ~/.bash_profile is still relevant. I know I'm using this:
[23:32 x8dtu dvl ~] % ls -l ~/.zshrc
lrwxr-xr-x  1 root dvl 17 2024.07.03 19:07 /usr/home/dvl/.zshrc -> src/scripts/zshrc
[23:36 x8dtu dvl ~] %
And inside there, I have:
if [ -e $HOME/src/scripts/bash_profile_global ]; then
  source $HOME/src/scripts/bash_profile_global
fi

if [ -e $HOME/src/scripts/bash_profile-$HOST ]; then
  source $HOME/src/scripts/bash_profile-$HOST
fi
Oh, that’s how it works.
Looking at symlinks:
[23:38 x8dtu dvl ~] % ls -la | grep -- '->'
lrwxr-xr-x   1 dvl     dvl     38 2025.07.12 23:30 .bash_profile -> /usr/home/dvl/src/scripts/bash_profile
lrwxr-xr-x   1 root    dvl     17 2024.07.03 19:07 .zshrc -> src/scripts/zshrc

[23:37 x8dtu dvl-alt ~] % ls -la | grep -- '->'
lrwxr-xr-x   1 dvl-alt dvl-alt 37 2025.07.12 23:34 .bash_profile -> /usr/dvl-alt/src/scripts/bash_profile
[23:38 x8dtu dvl-alt ~] % ls -l ~/.zshrc
-rw-r--r--  1 dvl-alt dvl-alt 2599 2023.11.06 13:56 /usr/dvl-alt/.zshrc
Ahh, there's no symlink for .zshrc; it's a real file. Reading man cp, perhaps -R would have been better:
[23:41 x8dtu dvl ~] % sudo cp -R ~/.zshrc ~dvl-alt/

[23:40 x8dtu dvl-alt ~] % ls -l ~/.zshrc
lrwxr-xr-x  1 root dvl-alt 17 2025.07.12 23:41 /usr/dvl-alt/.zshrc -> src/scripts/zshrc
[23:42 x8dtu dvl-alt ~] % 
Connection to x8dtu.unixathome.org closed.
[19:42 air01 dan ~] % ssh dvl-alt@x8dtu
Last login: Sat Jul 12 23:33:06 2025 from 108.52.204.170
Yes, that seems to have worked better.
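To confirm which behaviour a copy gave you, readlink(1) is a quick check; a minimal sketch:

# prints the link target if the copy is a symlink; prints nothing if the
# link was dereferenced into a regular file
readlink ~dvl-alt/.zshrc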
Break for Patchwork
I’ve been asked to play a game of Patchwork. Insert break here.
The next day
It’s now 8:02 on Sunday morning – coffee in hand, or rather, on the cafe table.
Shutting down FreshPorts
This is how I shut down anything which might be using the data I'm going to copy:
[12:33 x8dtu dvl ~] % sudo service jail stop
Stopping jails: perl540 svn nginx01 ingress01.freshports ingress01 pg01.
[12:33 x8dtu dvl ~] % sudo service jail disable
jail disabled in /etc/rc.conf
[12:33 x8dtu dvl ~] %
Existing mountpoints
Here are the existing mountpoints:
[12:34 x8dtu dvl-alt ~/tmp] % zfs get -t filesystem -r mountpoint main_tank
NAME  PROPERTY  VALUE  SOURCE
main_tank  mountpoint  none  local
main_tank/backups  mountpoint  none  local
main_tank/backups/rsyncer  mountpoint  none  inherited from main_tank/backups
main_tank/backups/rsyncer/backups  mountpoint  /home/rsyncer/backups  local
main_tank/backups/rsyncer/backups/Bacula  mountpoint  /home/rsyncer/backups/Bacula  inherited from main_tank/backups/rsyncer/backups
main_tank/backups/rsyncer/backups/bacula-database  mountpoint  /home/rsyncer/backups/bacula-database  inherited from main_tank/backups/rsyncer/backups
main_tank/freshports  mountpoint  none  inherited from main_tank
main_tank/freshports/ingress01  mountpoint  none  local
main_tank/freshports/ingress01/var  mountpoint  none  inherited from main_tank/freshports/ingress01
main_tank/freshports/ingress01/var/db  mountpoint  none  inherited from main_tank/freshports/ingress01
main_tank/freshports/ingress01/var/db/freshports  mountpoint  /jails/ingress01/var/db/freshports  local
main_tank/freshports/ingress01/var/db/freshports/cache  mountpoint  /jails/ingress01/var/db/freshports/cache  inherited from main_tank/freshports/ingress01/var/db/freshports
main_tank/freshports/ingress01/var/db/freshports/cache/html  mountpoint  /jails/ingress01/var/db/freshports/cache/html  inherited from main_tank/freshports/ingress01/var/db/freshports
main_tank/freshports/ingress01/var/db/freshports/cache/spooling  mountpoint  /jails/ingress01/var/db/freshports/cache/spooling  inherited from main_tank/freshports/ingress01/var/db/freshports
main_tank/freshports/ingress01/var/db/freshports/message-queues  mountpoint  /jails/ingress01/var/db/freshports/message-queues  inherited from main_tank/freshports/ingress01/var/db/freshports
main_tank/freshports/ingress01/var/db/freshports/repos  mountpoint  /jails/ingress01/var/db/freshports/repos  inherited from main_tank/freshports/ingress01/var/db/freshports
main_tank/freshports/ingress01/var/db/ingress  mountpoint  /jails/ingress01/var/db/ingress  local
main_tank/freshports/ingress01/var/db/ingress/message-queues  mountpoint  /jails/ingress01/var/db/ingress/message-queues  inherited from main_tank/freshports/ingress01/var/db/ingress
main_tank/freshports/ingress01/var/db/ingress/repos  mountpoint  /jails/ingress01/var/db/ingress/repos  inherited from main_tank/freshports/ingress01/var/db/ingress
main_tank/freshports/jailed  mountpoint  none  inherited from main_tank
main_tank/freshports/jailed/ingress01  mountpoint  none  local
main_tank/freshports/jailed/ingress01/jails  mountpoint  /jails  local
main_tank/freshports/jailed/ingress01/jails/freshports  mountpoint  /jails/freshports  inherited from main_tank/freshports/jailed/ingress01/jails
main_tank/freshports/jailed/ingress01/mkjail  mountpoint  /var/db/mkjail  local
main_tank/freshports/jailed/ingress01/mkjail/14.1-RELEASE  mountpoint  /var/db/mkjail/14.1-RELEASE  inherited from main_tank/freshports/jailed/ingress01/mkjail
main_tank/freshports/jailed/nginx01  mountpoint  none  inherited from main_tank
main_tank/freshports/jailed/nginx01/var  mountpoint  none  inherited from main_tank
main_tank/freshports/jailed/nginx01/var/db  mountpoint  none  inherited from main_tank
main_tank/freshports/jailed/nginx01/var/db/freshports  mountpoint  none  inherited from main_tank
main_tank/freshports/jailed/nginx01/var/db/freshports/cache  mountpoint  /var/db/freshports/cache  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/categories  mountpoint  /var/db/freshports/cache/categories  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/commits  mountpoint  /var/db/freshports/cache/commits  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/daily  mountpoint  /var/db/freshports/cache/daily  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/general  mountpoint  /var/db/freshports/cache/general  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/news  mountpoint  /var/db/freshports/cache/news  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/packages  mountpoint  /var/db/freshports/cache/packages  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/pages  mountpoint  /var/db/freshports/cache/pages  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/ports  mountpoint  /var/db/freshports/cache/ports  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/spooling  mountpoint  /var/db/freshports/cache/spooling  inherited from main_tank/freshports/jailed/nginx01/var/db/freshports/cache
main_tank/freshports/nginx01  mountpoint  none  inherited from main_tank
main_tank/freshports/nginx01/var  mountpoint  none  inherited from main_tank
main_tank/freshports/nginx01/var/db  mountpoint  none  inherited from main_tank
main_tank/freshports/nginx01/var/db/freshports  mountpoint  none  inherited from main_tank
main_tank/freshports/nginx01/var/db/freshports/cache  mountpoint  none  local
main_tank/home  mountpoint  /usr/home  local
main_tank/jails  mountpoint  /jails  local
main_tank/jails/ingress01  mountpoint  /jails/ingress01  inherited from main_tank/jails
main_tank/jails/nginx01  mountpoint  /jails/nginx01  inherited from main_tank/jails
main_tank/jails/perl540  mountpoint  /jails/perl540  inherited from main_tank/jails
main_tank/jails/pg01  mountpoint  /jails/pg01  inherited from main_tank/jails
main_tank/jails/svn  mountpoint  /jails/svn  inherited from main_tank/jails
main_tank/mkjail  mountpoint  /mkjail  local
main_tank/mkjail/14.1-RELEASE  mountpoint  /mkjail/14.1-RELEASE  inherited from main_tank/mkjail
main_tank/mkjail/14.2-RELEASE  mountpoint  /mkjail/14.2-RELEASE  inherited from main_tank/mkjail
Mountpoint references
I will also need to update these entries:
[12:35 x8dtu dvl-alt ~/tmp] % grep main_tank /etc/jail.conf
  exec.created+="zfs jail $name main_tank/freshports/jailed/ingress01";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/ingress01";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/categories";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/commits";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/daily";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/general";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/news";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/packages";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/pages";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/ports";
  exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/spooling";
  exec.created+="zfs jail $name main_tank/freshports/jailed/nginx01/var/db/freshports/cache";
That should just be a change from main_tank to data.
And these entries in the webserver:
[12:38 x8dtu dvl-alt ~/tmp] % sudo grep main_tank /jails/nginx01/usr/local/etc/freshports/*
/jails/nginx01/usr/local/etc/freshports/config.sh:fp_zfs_caching_parent="main_tank/freshports/jailed/nginx01/var/db/freshports/cache"
/jails/nginx01/usr/local/etc/freshports/fp-listen.ini:PKG_ZFS_SNAPSHOT = main_tank/freshports/jailed/nginx01/var/db/freshports/cache/packages@empty
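Rather than editing each file by hand, the rename could be scripted; a hedged sketch using BSD sed's in-place syntax, assuming no other strings in these files contain main_tank:

# rewrite the pool name in place, keeping .bak backups (BSD sed wants an
# explicit backup suffix after -i)
sudo sed -i .bak 's/main_tank/data/g' /etc/jail.conf
sudo sed -i .bak 's/main_tank/data/g' /jails/nginx01/usr/local/etc/freshports/config.sh
sudo sed -i .bak 's/main_tank/data/g' /jails/nginx01/usr/local/etc/freshports/fp-listen.ini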
One last sync
This seems repetitive, but one last syncoid run should catch anything that changed since the previous replication.
[12:40 x8dtu dvl-alt ~] % sudo syncoid --no-privilege-elevation -r --compress=lzo --quiet main_tank data
CRITICAL ERROR: Target data exists but has no snapshots matching with main_tank! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports exists but has no snapshots matching with main_tank/freshports! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/ingress01 exists but has no snapshots matching with main_tank/freshports/ingress01! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/ingress01/var exists but has no snapshots matching with main_tank/freshports/ingress01/var! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/ingress01/var/db exists but has no snapshots matching with main_tank/freshports/ingress01/var/db! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/freshports exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/freshports! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/freshports/cache exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/freshports/cache! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/ingress01/var/db/freshports/cache dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/freshports/cache/spooling exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/freshports/cache/spooling! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/ingress01/var/db/freshports/cache/spooling dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/freshports/message-queues exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/freshports/message-queues! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/freshports/repos exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/freshports/repos! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/ingress01/var/db/freshports/repos dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/ingress exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/ingress! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/ingress/message-queues exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/ingress/message-queues! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/ingress01/var/db/ingress/message-queues dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/ingress01/var/db/ingress/repos exists but has no snapshots matching with main_tank/freshports/ingress01/var/db/ingress/repos! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/jailed exists but has no snapshots matching with main_tank/freshports/jailed! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/jailed/nginx01 exists but has no snapshots matching with main_tank/freshports/jailed/nginx01! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/jailed/nginx01/var exists but has no snapshots matching with main_tank/freshports/jailed/nginx01/var! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/jailed/nginx01/var/db exists but has no snapshots matching with main_tank/freshports/jailed/nginx01/var/db! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/freshports/nginx01 exists but has no snapshots matching with main_tank/freshports/nginx01! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/nginx01 dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/nginx01/var exists but has no snapshots matching with main_tank/freshports/nginx01/var! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/nginx01/var dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/nginx01/var/db exists but has no snapshots matching with main_tank/freshports/nginx01/var/db! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/nginx01/var/db dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/nginx01/var/db/freshports exists but has no snapshots matching with main_tank/freshports/nginx01/var/db/freshports! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/nginx01/var/db/freshports dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/freshports/nginx01/var/db/freshports/cache exists but has no snapshots matching with main_tank/freshports/nginx01/var/db/freshports/cache! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
NOTE: Target data/freshports/nginx01/var/db/freshports/cache dataset is < 64MB used - did you mistakenly run `zfs create data` on the target? ZFS initial replication must be to a NON EXISTENT DATASET, which will then be CREATED BY the initial replication process.
CRITICAL ERROR: Target data/home exists but has no snapshots matching with main_tank/home! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/jails exists but has no snapshots matching with main_tank/jails! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/mkjail exists but has no snapshots matching with main_tank/mkjail! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/mkjail/14.1-RELEASE exists but has no snapshots matching with main_tank/mkjail/14.1-RELEASE! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
CRITICAL ERROR: Target data/mkjail/14.2-RELEASE exists but has no snapshots matching with main_tank/mkjail/14.2-RELEASE! Replication to target would require destroying existing target. Cowardly refusing to destroy your existing target.
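Had I wanted to dig into why there was no common snapshot, comparing the newest snapshots on each side would have shown it; a sketch, using data/home as an example:

# list the three most recent snapshots on each side; if the lists share
# no snapshot names, syncoid has no incremental base to work from
zfs list -t snapshot -o name -s creation -r main_tank/home | tail -3
zfs list -t snapshot -o name -s creation -r data/home | tail -3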
canmount
I'll need these later:
[12:49 x8dtu dvl ~] % sudo zfs get -r -t filesystem canmount data
NAME  PROPERTY  VALUE  SOURCE
data  canmount  on  default
data/backups  canmount  on  default
data/backups/rsyncer  canmount  on  default
data/backups/rsyncer/backups  canmount  on  default
data/backups/rsyncer/backups/Bacula  canmount  on  default
data/backups/rsyncer/backups/bacula-database  canmount  on  default
data/freshports  canmount  on  default
data/freshports/ingress01  canmount  on  default
data/freshports/ingress01/var  canmount  on  default
data/freshports/ingress01/var/db  canmount  on  default
data/freshports/ingress01/var/db/freshports  canmount  on  default
data/freshports/ingress01/var/db/freshports/cache  canmount  on  default
data/freshports/ingress01/var/db/freshports/cache/html  canmount  on  default
data/freshports/ingress01/var/db/freshports/cache/spooling  canmount  on  default
data/freshports/ingress01/var/db/freshports/message-queues  canmount  on  default
data/freshports/ingress01/var/db/freshports/repos  canmount  on  default
data/freshports/ingress01/var/db/ingress  canmount  on  default
data/freshports/ingress01/var/db/ingress/message-queues  canmount  on  default
data/freshports/ingress01/var/db/ingress/repos  canmount  on  default
data/freshports/jailed  canmount  on  default
data/freshports/jailed/ingress01  canmount  on  default
data/freshports/jailed/ingress01/jails  canmount  on  default
data/freshports/jailed/ingress01/jails/freshports  canmount  on  default
data/freshports/jailed/ingress01/mkjail  canmount  on  default
data/freshports/jailed/ingress01/mkjail/14.1-RELEASE  canmount  on  default
data/freshports/jailed/nginx01  canmount  on  default
data/freshports/jailed/nginx01/var  canmount  on  default
data/freshports/jailed/nginx01/var/db  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/categories  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/commits  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/daily  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/general  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/news  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/packages  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/pages  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/ports  canmount  on  default
data/freshports/jailed/nginx01/var/db/freshports/cache/spooling  canmount  on  default
data/freshports/nginx01  canmount  on  default
data/freshports/nginx01/var  canmount  on  default
data/freshports/nginx01/var/db  canmount  on  default
data/freshports/nginx01/var/db/freshports  canmount  on  default
data/freshports/nginx01/var/db/freshports/cache  canmount  on  default
data/home  canmount  on  default
data/jails  canmount  on  default
data/jails/ingress01  canmount  on  default
data/jails/nginx01  canmount  on  default
data/jails/perl540  canmount  on  default
data/jails/pg01  canmount  on  default
data/jails/svn  canmount  on  default
data/mkjail  canmount  on  default
data/mkjail/14.1-RELEASE  canmount  on  default
data/mkjail/14.2-RELEASE  canmount  on  default
[12:49 x8dtu dvl ~] % sudo zfs get -r -t filesystem canmount main_tank
NAME  PROPERTY  VALUE  SOURCE
main_tank  canmount  on  default
main_tank/backups  canmount  on  default
main_tank/backups/rsyncer  canmount  on  default
main_tank/backups/rsyncer/backups  canmount  on  default
main_tank/backups/rsyncer/backups/Bacula  canmount  on  default
main_tank/backups/rsyncer/backups/bacula-database  canmount  on  default
main_tank/freshports  canmount  on  default
main_tank/freshports/ingress01  canmount  noauto  local
main_tank/freshports/ingress01/var  canmount  on  default
main_tank/freshports/ingress01/var/db  canmount  on  default
main_tank/freshports/ingress01/var/db/freshports  canmount  off  local
main_tank/freshports/ingress01/var/db/freshports/cache  canmount  on  default
main_tank/freshports/ingress01/var/db/freshports/cache/html  canmount  on  default
main_tank/freshports/ingress01/var/db/freshports/cache/spooling  canmount  on  default
main_tank/freshports/ingress01/var/db/freshports/message-queues  canmount  on  default
main_tank/freshports/ingress01/var/db/freshports/repos  canmount  on  default
main_tank/freshports/ingress01/var/db/ingress  canmount  off  local
main_tank/freshports/ingress01/var/db/ingress/message-queues  canmount  on  default
main_tank/freshports/ingress01/var/db/ingress/repos  canmount  on  default
main_tank/freshports/jailed  canmount  on  default
main_tank/freshports/jailed/ingress01  canmount  noauto  local
main_tank/freshports/jailed/ingress01/jails  canmount  on  default
main_tank/freshports/jailed/ingress01/jails/freshports  canmount  on  default
main_tank/freshports/jailed/ingress01/mkjail  canmount  on  default
main_tank/freshports/jailed/ingress01/mkjail/14.1-RELEASE  canmount  on  default
main_tank/freshports/jailed/nginx01  canmount  on  default
main_tank/freshports/jailed/nginx01/var  canmount  on  default
main_tank/freshports/jailed/nginx01/var/db  canmount  on  default
main_tank/freshports/jailed/nginx01/var/db/freshports  canmount  on  default
main_tank/freshports/jailed/nginx01/var/db/freshports/cache  canmount  off  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/categories  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/commits  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/daily  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/general  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/news  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/packages  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/pages  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/ports  canmount  on  local
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/spooling  canmount  on  local
main_tank/freshports/nginx01  canmount  on  default
main_tank/freshports/nginx01/var  canmount  on  default
main_tank/freshports/nginx01/var/db  canmount  on  default
main_tank/freshports/nginx01/var/db/freshports  canmount  on  default
main_tank/freshports/nginx01/var/db/freshports/cache  canmount  noauto  local
main_tank/home  canmount  on  default
main_tank/jails  canmount  on  default
main_tank/jails/ingress01  canmount  on  default
main_tank/jails/nginx01  canmount  on  default
main_tank/jails/perl540  canmount  on  default
main_tank/jails/pg01  canmount  on  default
main_tank/jails/svn  canmount  on  default
main_tank/mkjail  canmount  on  default
main_tank/mkjail/14.1-RELEASE  canmount  on  default
main_tank/mkjail/14.2-RELEASE  canmount  on  default
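In hindsight, the rows that matter are only the ones with a local source; a filter like this would have produced a much shorter list (a sketch, not what I ran):

# show only canmount values that were set locally - these are the ones
# that must be re-applied on the new pool
zfs get -r -t filesystem -s local -o name,value canmount main_tank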
Exporting the old zpool
I will export the old zpool so I can start using the new zpool.
[13:01 x8dtu dvl ~] % sudo zpool export main_tank
cannot unmount '/usr/home': pool or dataset is busy
[13:01 x8dtu dvl ~] % 
Connection to x8dtu.unixathome.org closed.
[9:01 pro05 dvl ~] % ssh dvl-alt@x8dtu
Last login: Sun Jul 13 12:58:12 2025 from 172.56.221.92
[13:01 x8dtu dvl-alt ~] % sudo zpool export main_tank
cannot unmount '/usr/home': pool or dataset is busy
[13:01 x8dtu dvl-alt ~] % w
 1:01PM  up 8 days, 22:46, 10 users, load averages: 0.13, 0.75, 1.28
USER     TTY      FROM                                     LOGIN@  IDLE WHAT
dvl      pts/0    pool-203.0.113.12.phlapa.fios.verizon.ne Thu07PM  1day -zsh (zsh)
dvl      pts/1    pool-203.0.113.12.phlapa.fios.verizon.ne Thu02PM  1day -zsh (zsh)
dvl-alt  pts/2    203.0.113.123                            12:02PM    53 -zsh (zsh)
dvl      pts/3    203.0.113.123                            12:08PM    53 -zsh (zsh)
dvl      pts/5    203.0.113.123                            12:32PM    27 -zsh (zsh)
dvl-alt  pts/4    203.0.113.123                            12:28PM    20 -zsh (zsh)
dvl-alt  pts/6    203.0.113.123                            12:39PM     8 tmux: client (/tmp/tmux-1008/default) (tmux)
dvl-alt  pts/7    tmux(15984).%0                           12:40PM    20 -zsh (zsh)
dvl-alt  pts/8    203.0.113.123                             1:01PM     - w
dvl-alt  pts/10   203.0.113.123                            12:50PM     - -zsh (zsh)
[13:02 x8dtu dvl-alt ~] % sudo revoke pts/0
pts/0: No such file or directory
[13:04 x8dtu dvl-alt ~] % w
[13:04 x8dtu dvl-alt ~] % sudo revoke /dev/pts/0
[13:04 x8dtu dvl-alt ~] % sudo revoke /dev/pts/1
[13:04 x8dtu dvl-alt ~] % sudo revoke /dev/pts/3
[13:04 x8dtu dvl-alt ~] % sudo revoke /dev/pts/5
[13:04 x8dtu dvl-alt ~] % w
 1:04PM  up 8 days, 22:49, 6 users, load averages: 0.03, 0.45, 1.06
USER     TTY      FROM             LOGIN@  IDLE WHAT
dvl-alt  pts/2    203.0.113.123   12:02PM    56 -zsh (zsh)
dvl-alt  pts/4    203.0.113.123   12:28PM    22 -zsh (zsh)
dvl-alt  pts/6    203.0.113.123   12:39PM    10 tmux: client (/tmp/tmux-1008/default) (tmux)
dvl-alt  pts/7    tmux(15984).%0  12:40PM    23 -zsh (zsh)
dvl-alt  pts/8    203.0.113.123    1:01PM     - w
dvl-alt  pts/10   203.0.113.123   12:50PM     3 -zsh (zsh)
Oh, I can't export a pool whose filesystems are in use. So I logged my other sessions out.
[13:10 x8dtu dvl-alt ~] % sudo zpool export main_tank
cannot unmount '/usr/home': pool or dataset is busy
[13:11 x8dtu dvl-alt ~] % zfs list -r | grep /usr/home
main_tank/home  193G  3.15T  86.0G  /usr/home
[13:11 x8dtu dvl-alt ~] % sudo zfs umount /usr/home
cannot unmount '/usr/home': pool or dataset is busy
[13:11 x8dtu dvl-alt ~] % sudo zfs umount -f /usr/home
[13:11 x8dtu dvl-alt ~] %
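Instead of forcing the unmount, fstat(1) from the base system can identify what is holding a filesystem busy; a sketch:

# report processes with files open on the filesystem mounted at /usr/home
fstat -f /usr/home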
Next, it's a bunch of cases like this, where something is mounted, and that prevents the pool from being exported:
[13:12 x8dtu dvl-alt ~] % zfs list | grep pg01
data/jails/pg01                 23.3G  2.36T  7.49G  none
main_tank/jails/pg01            23.3G  3.15T  7.49G  /jails/pg01
zroot/freshports/pg01           36.3G   146G    96K  none
zroot/freshports/pg01/postgres  36.3G   146G  35.9G  /jails/pg01/var/db/postgres
[13:12 x8dtu dvl-alt ~] % sudo zfs umount zroot/freshports/pg01/postgres
[13:13 x8dtu dvl-alt ~] % sudo zpool export main_tank
cannot unmount '/jails/ingress01/jails/freshports': pool or dataset is busy
Perhaps the jail should umount and mount that when it stops and starts; a possible jail.conf approach is sketched after the transcript below.
[13:13 x8dtu dvl-alt ~] % sudo zpool export main_tank
cannot unmount '/jails/ingress01/jails/freshports': pool or dataset is busy
[13:14 x8dtu dvl-alt ~] % mount | grep /jails/ingress01/jails/freshports
main_tank/freshports/jailed/ingress01/jails/freshports on /jails/ingress01/jails/freshports (zfs, local, noatime, nfsv4acls)
devfs on /jails/ingress01/jails/freshports/dev (devfs)
[13:14 x8dtu dvl-alt ~] % sudo umount /jails/ingress01/jails/freshports/dev
[13:15 x8dtu dvl-alt ~] % sudo zpool export main_tank
[13:15 x8dtu dvl-alt ~] % 
[13:15 x8dtu dvl-alt ~] % zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data   3.62T  1.16T  2.47T        -         -     0%    31%  1.00x  ONLINE  -
zroot   212G  59.7G   152G        -         -    49%    28%  1.00x  ONLINE  -
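Following up on that thought: something like this in the ingress01 entry of jail.conf might detach the nested jail's devfs automatically. A hypothetical, untested sketch using the exec.poststop parameter from jail.conf(5):

ingress01 {
    # hypothetical: unmount the nested jail's devfs when this jail
    # stops, so a later zpool export does not trip over it
    exec.poststop += "umount /jails/ingress01/jails/freshports/dev";
}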
Finally, I got it exported. These are the log entries it created:
Jul 13 13:15:04 x8dtu kernel: vdev_geom_close_locked:352[1]: Closing access to ada2p1.
Jul 13 13:15:04 x8dtu kernel: vdev_geom_detach:315[1]: Detaching from ada2p1.
Jul 13 13:15:04 x8dtu kernel: vdev_geom_detach:326[1]: Destroying consumer for ada2p1.
For my future reference, this confirms that ada2 is the drive to be repartitioned and added to the zpool named data.
Setting canmount
Based on the information pasted above, I made these changes:
[13:18 x8dtu dvl-alt ~] % sudo zfs set canmount=noauto data/freshports/ingress01
[13:18 x8dtu dvl-alt ~] % sudo zfs set canmount=off data/freshports/ingress01/var/db/ingress
[13:18 x8dtu dvl-alt ~] % sudo zfs set canmount=off data/freshports/ingress01/var/db/freshports
[13:19 x8dtu dvl-alt ~] % sudo zfs set canmount=off data/freshports/ingress01/var/db/ingress
[13:19 x8dtu dvl-alt ~] % sudo zfs set canmount=noauto data/freshports/jailed/ingress01
[13:19 x8dtu dvl-alt ~] % sudo zfs set canmount=off data/freshports/jailed/nginx01/var/db/freshports/cache
[13:19 x8dtu dvl-alt ~] % sudo zfs set canmount=noauto freshports/nginx01/var/db/freshports/cache
cannot open 'freshports/nginx01/var/db/freshports/cache': dataset does not exist
[13:20 x8dtu dvl-alt ~] % sudo zfs set canmount=noauto data/freshports/nginx01/var/db/freshports/cache
[13:20 x8dtu dvl-alt ~] %
Setting the mount points
Next, tell the new zpool where to mount stuff, based on what I saved above.
[13:20 x8dtu dvl-alt ~] % sudo zfs set canmount=noauto data/freshports/nginx01/var/db/freshports/cache
[13:20 x8dtu dvl-alt ~] % sudo zfs set mountpoint=/home/rsyncer/backups data/backups/rsyncer/backups
[13:21 x8dtu dvl-alt ~] % sudo zfs set mountpoint=/jails/ingress01/var/db/freshports data/freshports/ingress01/var/db/freshports
[13:23 x8dtu dvl-alt ~] % sudo zfs set mountpoint=/jails/ingress01/var/db/ingress data/freshports/ingress01/var/db/ingress
[13:23 x8dtu dvl-alt ~] % sudo zfs set mountpoint=/usr/home data/home
[13:24 x8dtu dvl-alt ~] % sudo zfs set mountpoint=/jails data/jails
[13:24 x8dtu dvl-alt ~] % sudo zfs set mountpoint=/mkjail data/mkjail
Some of the datasets are jailed, which means the mountpoint is relative to the jail into which they are jailed. I'll have to deal with those later, I think. I am sure that /jails/ingress01 will be taken care of by these jail.conf directives:
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache";
# These may no longer be required
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/categories";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/commits";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/daily";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/general";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/news";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/packages";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/pages";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/ports";
exec.created+="zfs set jailed=on main_tank/freshports/jailed/nginx01/var/db/freshports/cache/spooling";
exec.created+="zfs jail $name main_tank/freshports/jailed/nginx01/var/db/freshports/cache";
NOTE: I have since modified the above to refer to data, not main_tank.
Starting the first jail
Here's the first jail starting. I've logged back in as my regular user, because /home is mounted again.
[9:33 pro05 dvl ~] % x8dtu
Last login: Sun Jul 13 13:01:36 2025 from 172.56.221.92
[13:33 x8dtu dvl ~] % sudo service jail onestart ingress01
Starting jails: ingress01.
and this looks right:
[13:34 x8dtu dvl-alt ~] % zfs get -r -t filesystem mountpoint data/freshports/ingress01
NAME  PROPERTY  VALUE  SOURCE
data/freshports/ingress01  mountpoint  none  inherited from data
data/freshports/ingress01/var  mountpoint  none  inherited from data
data/freshports/ingress01/var/db  mountpoint  none  inherited from data
data/freshports/ingress01/var/db/freshports  mountpoint  /jails/ingress01/var/db/freshports  local
data/freshports/ingress01/var/db/freshports/cache  mountpoint  /jails/ingress01/var/db/freshports/cache  inherited from data/freshports/ingress01/var/db/freshports
data/freshports/ingress01/var/db/freshports/cache/html  mountpoint  /jails/ingress01/var/db/freshports/cache/html  inherited from data/freshports/ingress01/var/db/freshports
data/freshports/ingress01/var/db/freshports/cache/spooling  mountpoint  /jails/ingress01/var/db/freshports/cache/spooling  inherited from data/freshports/ingress01/var/db/freshports
data/freshports/ingress01/var/db/freshports/message-queues  mountpoint  /jails/ingress01/var/db/freshports/message-queues  inherited from data/freshports/ingress01/var/db/freshports
data/freshports/ingress01/var/db/freshports/repos  mountpoint  /jails/ingress01/var/db/freshports/repos  inherited from data/freshports/ingress01/var/db/freshports
data/freshports/ingress01/var/db/ingress  mountpoint  /jails/ingress01/var/db/ingress  local
data/freshports/ingress01/var/db/ingress/message-queues  mountpoint  /jails/ingress01/var/db/ingress/message-queues  inherited from data/freshports/ingress01/var/db/ingress
data/freshports/ingress01/var/db/ingress/repos  mountpoint  /jails/ingress01/var/db/ingress/repos  inherited from data/freshports/ingress01/var/db/ingress
[13:34 x8dtu dvl-alt ~] %
But this does not:
[13:34 x8dtu dvl-alt ~] % zfs list -r data/freshports/jailed/ingress01
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
data/freshports/jailed/ingress01                       208G  2.36T    96K  none
data/freshports/jailed/ingress01/jails                 206G  2.36T   120K  none
data/freshports/jailed/ingress01/jails/freshports      206G  2.36T   203G  none
data/freshports/jailed/ingress01/mkjail               1.73G  2.36T   909M  none
data/freshports/jailed/ingress01/mkjail/14.1-RELEASE   861M  2.36T   861M  none
I stopped the jail. I added these entries to jail.conf:
exec.created+="zfs set jailed=on data/freshports/jailed/ingress01/jails";
exec.created+="zfs jail $name data/freshports/jailed/ingress01/jails";
I started the jail and issued these commands, which I wish I could do automatically. One day. (A hypothetical jail.conf approach is sketched after this transcript.)
[13:40 x8dtu dvl ~] % sudo jexec ingress01 zfs set mountpoint=/jails data/freshports/jailed/ingress01/jails
[13:40 x8dtu dvl ~] % sudo jexec ingress01 zfs set mountpoint=/var/db/mkjail data/freshports/jailed/ingress01/mkjail
[13:40 x8dtu dvl-alt ~] % zfs list -r data/freshports/jailed/ingress01
NAME                                                   USED  AVAIL  REFER  MOUNTPOINT
data/freshports/jailed/ingress01                       208G  2.36T    96K  none
data/freshports/jailed/ingress01/jails                 206G  2.36T   120K  /jails
data/freshports/jailed/ingress01/jails/freshports      206G  2.36T   203G  /jails/freshports
data/freshports/jailed/ingress01/mkjail               1.73G  2.36T   909M  /var/db/mkjail
data/freshports/jailed/ingress01/mkjail/14.1-RELEASE   861M  2.36T   861M  /var/db/mkjail/14.1-RELEASE
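One possible way to automate those two commands: exec.start lines run inside the jail, so a hypothetical (untested) sketch could look like this. Since ZFS remembers mountpoints, it would only matter when the datasets are recreated:

ingress01 {
    # hypothetical: re-apply the jailed mountpoints at every start
    exec.start += "zfs set mountpoint=/jails data/freshports/jailed/ingress01/jails";
    exec.start += "zfs set mountpoint=/var/db/mkjail data/freshports/jailed/ingress01/mkjail";
}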
Got database?
Note that pg01 got started automatically because:
[13:40 x8dtu dvl ~] % jls
   JID  IP Address      Hostname                      Path
     9  127.163.54.32   x8dtu-pg01.vpn.unixathome.org /jails/pg01
    12  127.163.0.10    x8dtu-ingress01.vpn.unixathom /jails/ingress01
[13:43 x8dtu dvl ~] % grep pg01 /etc/jail.conf
        depend = pg01;
        depend = pg01;
pg01 {
That is, two jails depend upon pg01; it gets started before they do.
But that does not mean the jail is working.
[13:44 x8dtu dvl-alt ~] % zfs get canmount,mounted zroot/freshports/pg01/postgres
NAME                            PROPERTY  VALUE  SOURCE
zroot/freshports/pg01/postgres  canmount  on     default
zroot/freshports/pg01/postgres  mounted   no     -
[13:44 x8dtu dvl-alt ~] % sudo zfs mount zroot/freshports/pg01/postgres
That allowed PostgreSQL to start up.
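That dataset was only unmounted because I unmounted it by hand before the export. If I wanted a guard against a repeat, a hypothetical exec.prestart line in pg01's jail.conf entry could mount it before the jail starts; a sketch, untested:

pg01 {
    # hypothetical: make sure the PostgreSQL dataset is mounted before
    # the jail starts; ignore the error if it is already mounted
    exec.prestart += "zfs mount zroot/freshports/pg01/postgres || true";
}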
Starting the webserver
[13:46 x8dtu dvl ~] % sudo service jail onestart nginx01
Starting jails: cannot start jail "nginx01":
mount_nullfs: /jails/ingress01/var/db/freshports/cache/html: No such file or directory
jail: nginx01: /sbin/mount -t nullfs -o ro,nosuid,noexec /jails/ingress01/var/db/freshports/cache/html /jails/nginx01/var/db/freshports/cache/html: failed
.
[13:49 x8dtu dvl ~] % mount | grep /jails/ingress01/var/db/freshports/cache
data/freshports/ingress01/var/db/freshports/cache on /jails/ingress01/var/db/freshports/cache (zfs, local, nfsv4acls)
data/freshports/ingress01/var/db/freshports/cache/html on /jails/ingress01/var/db/freshports/cache/html (zfs, local, nfsv4acls)
data/freshports/ingress01/var/db/freshports/cache/spooling on /jails/ingress01/var/db/freshports/cache/spooling (zfs, local, nfsv4acls)
[13:49 x8dtu dvl ~] % sudo umount data/freshports/ingress01/var/db/freshports/cache/html
[13:50 x8dtu dvl ~] % sudo umount data/freshports/ingress01/var/db/freshports/cache/spooling
[13:50 x8dtu dvl ~] % ls -l /jails/ingress01/var/db/freshports/cache
total 0
[13:50 x8dtu dvl ~] % sudo zfs mount data/freshports/ingress01/var/db/freshports/cache/html
[13:50 x8dtu dvl ~] % ls -l /jails/ingress01/var/db/freshports/cache
total 9
drwxrwxr-x  2 10001 10001 10 2025.07.13 12:33 html/
[13:50 x8dtu dvl ~] % sudo zfs mount data/freshports/ingress01/var/db/freshports/cache/spooling
[13:51 x8dtu dvl ~] % 
[13:51 x8dtu dvl ~] % ls -l /jails/ingress01/var/db/freshports/cache
total 9
drwxrwxr-x  2 10001 10001 10 2025.07.13 12:33 html/
drwxr-xr-x  2 10001 10001  2 2025.07.01 00:00 spooling/
[13:52 x8dtu dvl ~] % sudo service jail onestart nginx01
Starting jails: nginx01.
[13:52 x8dtu dvl ~] %
I don't know how those datasets ended up mounted with nothing visible in them, but they did.
That got the webserver going. Good so far.
Starting the other jails
Let's try starting the other jails.
[13:54 x8dtu dvl ~] % sudo service jail onestart
Starting jails: pg01 ingress01 nginx01 svn perl540.
[13:54 x8dtu dvl ~] % jls
   JID  IP Address      Hostname                      Path
     9  127.163.54.32   x8dtu-pg01.vpn.unixathome.org /jails/pg01
    12  127.163.0.10    x8dtu-ingress01.vpn.unixathom /jails/ingress01
    13  127.163.0.80    x8dtu-nginx01.vpn.unixathome. /jails/nginx01
    14  127.163.0.253   svn.freshports.org            /jails/svn
    15  162.208.116.124 perl540                       /jails/perl540
[13:54 x8dtu dvl ~] % sudo service jail enable
jail enabled in /etc/rc.conf
[13:54 x8dtu dvl ~] %
Starting up the ingress service
After running this in the ingress jail:
[14:05 x8dtu-ingress01 dvl ~] % sudo service ingress start
Starting ingress.
[14:05 x8dtu-ingress01 dvl ~] % sudo service ingress stop
Stopping ingress.
Waiting for PIDS: 51965.
I saw these errors:
*** /var/log/freshports//git.log ***
2025.07.13 14:05:36 git-delta.sh has started. Will check these repos: 'doc ports src'
2025.07.13 14:05:36 git-delta.sh XML dir is /var/db/ingress/message-queues/incoming
2025.07.13 14:05:36 git-delta.sh Now processing repo: doc ---------------
2025.07.13 14:05:36 git-delta.sh FATAL error, REPODIR='/var/db/ingress/repos/doc' is not a directory
2025.07.13 14:05:36 git-delta.sh Now processing repo: ports ---------------
2025.07.13 14:05:36 git-delta.sh FATAL error, REPODIR='/var/db/ingress/repos/ports' is not a directory
2025.07.13 14:05:36 git-delta.sh Now processing repo: src ---------------
2025.07.13 14:05:36 git-delta.sh FATAL error, REPODIR='/var/db/ingress/repos/src' is not a directory
2025.07.13 14:05:36 git-delta.sh Ending
This is another case of mounted but not seen; a umount and mount on the jail host fixes it. (A loop to do that for a whole tree is sketched after this transcript.)
[14:08 x8dtu dvl-alt ~] % zfs get mounted data/freshports/ingress01/var/db/ingress/repos
NAME                                            PROPERTY  VALUE  SOURCE
data/freshports/ingress01/var/db/ingress/repos  mounted   yes    -
[14:08 x8dtu dvl-alt ~] % ls -l /jails/ingress01/var/db/ingress/repos
total 0
[14:09 x8dtu dvl-alt ~] % sudo zfs umount data/freshports/ingress01/var/db/ingress/repos
[14:09 x8dtu dvl-alt ~] % sudo zfs mount data/freshports/ingress01/var/db/ingress/repos
[14:09 x8dtu dvl-alt ~] % ls -l /jails/ingress01/var/db/ingress/repos
total 26
drwxr-xr-x   7 10002 10002 11 2021.09.17 21:47 doc/
drwxr-xr-x  70 10002 10002 81 2021.09.17 21:52 ports/
drwxr-xr-x  27 10002 10002 44 2021.09.17 21:47 src/
[14:09 x8dtu dvl-alt ~] %
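Given how many times this pattern came up, a loop on the jail host could cycle a whole tree in one go. A rough, untested sketch:

# unmount the tree children-first (tail -r reverses zfs list's
# parents-first order), then remount it parents-first; errors from
# datasets that are not mounted are ignored
for fs in $(zfs list -H -o name -r data/freshports/ingress01/var/db | tail -r); do
    sudo zfs umount "$fs" 2>/dev/null
done
for fs in $(zfs list -H -o name -r data/freshports/ingress01/var/db); do
    sudo zfs mount "$fs" 2>/dev/null
done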
This time, the commits-to-process start coming in:
*** /var/log/freshports//git.log ***
2025.07.13 14:11:04 git-delta.sh has started. Will check these repos: 'doc ports src'
2025.07.13 14:11:04 git-delta.sh XML dir is /var/db/ingress/message-queues/incoming
2025.07.13 14:11:04 git-delta.sh Now processing repo: doc ---------------
2025.07.13 14:11:04 git-delta.sh REPODIR='/var/db/ingress/repos/doc' exists
2025.07.13 14:11:04 git-delta.sh Repodir is /var/db/ingress/repos/doc
2025.07.13 14:11:04 git-delta.sh Running: /usr/local/bin/git fetch:
From https://git.FreeBSD.org/doc
   0059394b06..ba183da7db  main       -> origin/main
2025.07.13 14:11:07 git-delta.sh fetch completed.
origin/HEAD skipping
origin/main processing ****
2025.07.13 14:11:07 git-delta.sh working on 'origin/main'
2025.07.13 14:11:07 git-delta.sh Is freshports/origin/main defined on the repo 'doc'?
2025.07.13 14:11:07 git-delta.sh running: /usr/local/bin/git rev-parse -q --verify freshports/origin/main^{}
0059394b06f45ec3ffdf302c3dea0bb8fefa9094
2025.07.13 14:11:07 git-delta.sh the latest commit we have for freshports/origin/main is: 0059394b06f45ec3ffdf302c3dea0bb8fefa9094
2025.07.13 14:11:07 git-delta.sh Running: /usr/local/bin/git rev-list freshports/origin/main..origin/main
2025.07.13 14:11:07 git-delta.sh Done.
2025.07.13 14:11:07 git-delta.sh The commits found are:
2025.07.13 14:11:07 git-delta.sh ba183da7dbc9ed31393e4e0cf9c49ba937716882
2025.07.13 14:11:07 git-delta.sh c557f6f251023465dba4e62a22acafb8a3d022c2
2025.07.13 14:11:07 git-delta.sh adf12d59665b0de71b129cd26a22e7199d70fabf
2025.07.13 14:11:07 git-delta.sh 62bbcdec39ad7a8aab212dbb216fcff662e7eed3
2025.07.13 14:11:07 git-delta.sh c594cfc979f3704d11d846b0570b68a8f5f549ea
2025.07.13 14:11:07 git-delta.sh 0069602e6d511177be4cf9836fae1b06501b07bc
...
Now we have 167 commits to process:
[14:11 x8dtu-ingress01 dvl ~] % ls ~ingress/message-queues/incoming | wc -l
     167
Edit: by the time I'd finished typing the next section, there were a few more waiting:
[14:19 x8dtu-ingress01 dvl ~] % ls ~ingress/message-queues/incoming | wc -l
    1717
Starting up the freshports service
Attempting to learn from past events, I checked this first. Let's see if the required directories are correctly mounted:
[14:17 x8dtu dvl-alt ~] % zfs list -r data/freshports/ingress01/var/db/freshports
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
data/freshports/ingress01/var/db/freshports                 12.5G  2.36T   112K  /jails/ingress01/var/db/freshports
data/freshports/ingress01/var/db/freshports/cache           14.0M  2.36T    96K  /jails/ingress01/var/db/freshports/cache
data/freshports/ingress01/var/db/freshports/cache/html      13.6M  2.36T   192K  /jails/ingress01/var/db/freshports/cache/html
data/freshports/ingress01/var/db/freshports/cache/spooling   160K  2.36T    96K  /jails/ingress01/var/db/freshports/cache/spooling
data/freshports/ingress01/var/db/freshports/message-queues  12.5G  2.36T  12.5G  /jails/ingress01/var/db/freshports/message-queues
data/freshports/ingress01/var/db/freshports/repos            152K  2.36T    96K  /jails/ingress01/var/db/freshports/repos
[14:17 x8dtu dvl-alt ~] % ls -l /jails/ingress01/var/db/freshports/message-queues
total 0
[14:17 x8dtu dvl-alt ~] % ls -l /jails/ingress01/var/db/freshports/repos
total 0
[14:17 x8dtu dvl-alt ~] % 
[14:17 x8dtu dvl-alt ~] % sudo zfs umount data/freshports/ingress01/var/db/freshports/message-queues
[14:17 x8dtu dvl-alt ~] % sudo zfs mount data/freshports/ingress01/var/db/freshports/message-queues
[14:18 x8dtu dvl-alt ~] % ls -l /jails/ingress01/var/db/freshports/message-queues
total 2531
drwxr-xr-x  9 10001 10001  20 2025.06.02 03:01 archive/
drwxrwxr-x  2 10001 10001   2 2022.04.08 16:29 incoming/
drwxrwxr-x  4 10001 10001 714 2025.06.30 23:27 recent/
drwxr-xr-x  2 10001 10001   2 2022.04.08 16:27 retry/
drwxrwxr-x  2 10001 10001   2 2025.06.30 23:27 spooling/
[14:18 x8dtu dvl-alt ~] % sudo zfs umount data/freshports/ingress01/var/db/freshports/repos
[14:18 x8dtu dvl-alt ~] % sudo zfs mount data/freshports/ingress01/var/db/freshports/repos
[14:18 x8dtu dvl-alt ~] % ls -l /jails/ingress01/var/db/freshports/repos
total 0
[14:18 x8dtu dvl-alt ~] %
The message-queues directory is vital. Let me compare the repos directory against another working host.
This is my dev jail:
[14:19 dev-ingress01 dvl /var/db/freshports] % ls -l
total 19
drwxr-xr-x   4 www        freshports  4 2020.06.25 11:57 cache/
drwxrwxr-x   7 freshports freshports  7 2024.02.10 01:09 message-queues/
drwxr-xr-x  41 freshports freshports 41 2024.04.10 11:45 packagesite/
drwxr-xr-x   2 freshports freshports  2 2025.07.13 14:00 signals/
drwxrwxr-x   2 freshports freshports  2 2021.02.28 17:16 tmp/
This is the jail I'm working on:
[14:16 x8dtu-ingress01 dvl ~] % ls -l /var/db/freshports
total 19
drwxr-xr-x   4 root       freshports  4 2025.07.13 13:51 cache/
drwxrwxr-x   7 freshports freshports  7 2022.09.30 16:41 message-queues/
drwxr-xr-x  36 freshports freshports 36 2024.04.10 11:45 packagesite/
drwxr-xr-x   2 root       wheel       2 2021.09.12 14:49 repos/
drwxr-xr-x   2 freshports freshports  2 2025.07.13 14:00 signals/
drwxrwxr-x   2 freshports freshports  2 2021.09.12 15:49 tmp/
I think we can do without the repos directory; I believe it is no longer used.
Let's start things up and watch.
[14:22 x8dtu-ingress01 dvl ~] % sudo service freshports start
Cannot 'start' freshports. Set freshports_enable to YES in /etc/rc.conf or use 'onestart' instead of 'start'.
[14:22 x8dtu-ingress01 dvl ~] % sudo service freshports enable
freshports enabled in /etc/rc.conf
[14:22 x8dtu-ingress01 dvl ~] % sudo service freshports start
Starting freshports.
[14:22 x8dtu-ingress01 dvl ~] % sudo service freshports stop
Stopping freshports.
As you can see, I stopped it fairly soon. Because... errors:
Jul 13 14:22:22 x8dtu-ingress01 freshports[60644]: removing /var/db/ingress/message-queues/incoming/2025.06.28.01.55.59.000002.5d0cf80f4501fb297bc546b624a0e527040a1843.xml
Jul 13 14:22:22 x8dtu-ingress01 freshports[60644]: rm: /var/db/ingress/message-queues/incoming/2025.06.28.01.55.59.000002.5d0cf80f4501fb297bc546b624a0e527040a1843.xml: Permission denied
Jul 13 14:22:22 x8dtu-ingress01 freshports[60644]: removal completed
Checking permissions: the freshports user wasn't permitted to remove stuff. Here's where I fixed that:
[14:23 x8dtu-ingress01 dvl ~] % ls -l /var/db/ingress/message-queues
total 450
drwxrwxr-x  2 ingress ingress    2 2025.07.13 14:05 holding/
drwxrwxr-x  2 ingress ingress 1719 2025.07.13 14:15 incoming/
drwxrwxr-x  2 ingress ingress    2 2025.07.13 14:15 spooling/
[14:23 x8dtu-ingress01 dvl ~] % sudo chgrp -R freshports /var/db/ingress/message-queues/incoming
[14:24 x8dtu-ingress01 dvl ~] % sudo chgrp -R freshports /var/db/ingress/message-queues/spooling
[14:24 x8dtu-ingress01 dvl ~] % ls -l /var/db/ingress/message-queues
total 450
drwxrwxr-x  2 ingress ingress       2 2025.07.13 14:05 holding/
drwxrwxr-x  2 ingress freshports 1719 2025.07.13 14:15 incoming/
drwxrwxr-x  2 ingress freshports    2 2025.07.13 14:15 spooling/
[14:24 x8dtu-ingress01 dvl ~] %
That was based on a working example:
[14:23 dev-ingress01 dvl /var/db/freshports] % ls -l /var/db/ingress/message-queues
total 1542
drwxrwxr-x  2 ingress ingress    2 2021.08.08 18:49 holding/
drwxrwxr-x  2 ingress freshports 2 2025.07.13 13:15 incoming/
drwxrwxr-x  2 ingress freshports 2 2025.07.13 13:15 spooling/
I started again. I saw many of these:
Jul 13 14:29:13 x8dtu-ingress01 FreshPorts[65322]: message f2d1a3d5172e459af42ebbdfc23ca1254d9a94d3 has already been added to the database (/usr/local/libexec/freshports)
Jul 13 14:29:13 x8dtu-ingress01 FreshPorts[65336]: message e5716cebd1f60c16dda6eb3a5bca9eeb1f78b6de has already been added to the database (/usr/local/libexec/freshports)
Jul 13 14:29:13 x8dtu-ingress01 FreshPorts[65349]: message 066925f6e1e7f78bbe3e6f142469be00afb51616 has already been added to the database (/usr/local/libexec/freshports)
Jul 13 14:29:14 x8dtu-ingress01 FreshPorts[65362]: message da81bddb46f32bddaeea82f30f2f327bd8fc83c7 has already been added to the database (/usr/local/libexec/freshports)
I think that's a replication issue. All those 'no snapshots matching' messages? My replication was missing data. I'm OK with that, and I'll explain why. FreshPorts stores all the important data in the database, and that database is not part of this zpool (data); it's all in the zroot zpool. What lives in the data zpool is copies of the FreeBSD repos, caching information, etc. FreshPorts records the last commit it processed in the local copy of the repo, which is why the ingress service pulled in so many already-processed commits.
Commits caught up
The commits are all caught up now. There was a problem with the jail inside ingress01 not starting up (because the jail service was not enabled in there). I had to delete some processed commits and rerun them.
Now things seem OK.
What's next?
I still have to delete the main_tank zpool, redo the partitions on that drive, then add that drive into the data zpool to create a mirror.
NOTE: by 'delete', that's not actually an action I will take. I'll wipe the zpool labels and add the drive in. Roughly, the plan looks like the sketch below.
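A sketch only: the partition label and the existing member device (gpt/data0 here) are assumptions I'll verify on the day.

# wipe the old ZFS label, repartition, then attach as a mirror
sudo zpool labelclear -f /dev/ada2p1
sudo gpart destroy -F ada2
sudo gpart create -s gpt ada2
sudo gpart add -t freebsd-zfs -a 1m -l data1 ada2
sudo zpool attach data gpt/data0 gpt/data1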
2025-07-14: Other items which needed updates
Another item which needed updating: a backup script.
[12:02 x8dtu dvl ~] % grep main_tank /usr/local/sbin/jail-snapshots-for-backup.sh
ZFSJAILROOTS="main_tank/jails"
# the snapshot name is of the form: main_tank/iocage/jails/fedex@snapshot-for-backup
Oh, that iocage reference in the comment brings back memories.
The fix:
[12:03 x8dtu dvl ~] % grep ZFSJAILROOTS /usr/local/sbin/jail-snapshots-for-backup.sh
ZFSJAILROOTS="data/jails"
for ZFSJAILROOT in $ZFSJAILROOTS
The backups should run fine tonight.