I wrote this post late one night while pondering how to solve this problem. In hindsight, I could have solved it with much less effort.
I want to copy a zroot from old drives to new drives. The new drives are in a test box of mine. Once the new drives are configured, I will replace the existing mirror with them.
Not shown here, I have already installed FreeBSD 12.1 on the new drives.
I think the better solution is a handful of zfs snapshot and zfs send commands.
The new plan
This is the new plan; it is what I started on tonight.
This is the existing system (the old drives):
$ zfs list -r zroot
NAME                              USED  AVAIL  REFER  MOUNTPOINT
zroot                            25.8G   180G    23K  /zroot
zroot/bootenv                    20.9G   180G    34K  legacy
zroot/bootenv/12.1                  1K   180G   482M  /
zroot/bootenv/12S                   1K   180G   600M  /
zroot/bootenv/default            20.9G   180G   622M  /
zroot/bootenv/default/tmp        72.5M   180G   944K  /tmp
zroot/bootenv/default/usr        9.91G   180G  2.76G  /usr
zroot/bootenv/default/usr/local  3.88G   180G  1.06G  /usr/local
zroot/bootenv/default/var        9.44G   180G  6.24G  /var
zroot/bootenv/default/var/audit    75K   180G    23K  /var/audit
zroot/bootenv/default/var/empty    23K   180G    23K  /var/empty
zroot/bootenv/default/var/log     214M   180G  17.6M  /var/log
zroot/bootenv/default/var/tmp    1.84M   180G  1.23M  /var/tmp
This is how I copied that over, with renaming to be done later:
[root@slocum:~] # zfs snapshot zroot/bootenv/default/var@ns1
[root@slocum:~] # zfs send zroot/bootenv/default/var@ns1 | ssh -T root@10.55.0.247 'zfs recv zroot/var1'
Password for root@newslocum.int.unixathome.org:
[root@slocum:~] #
[root@slocum:~] # zfs send zroot/bootenv/default/usr@ns1 | ssh -T root@10.55.0.247 'zfs recv zroot/usr1'
Password for root@newslocum.int.unixathome.org:
[root@slocum:~] # zfs send zroot/bootenv/default/usr/local@ns1 | ssh -T root@10.55.0.247 'zfs recv zroot/usr/local'
Password for root@newslocum.int.unixathome.org:
[root@slocum:~] # zfs send zroot/bootenv/default@ns1 | ssh -T root@10.55.0.247 'zfs recv zroot/ROOT/default1'
Password for root@newslocum.int.unixathome.org:
[root@slocum:~] #
[root@slocum:~] # zfs snapshot zroot/bootenv/default/var/tmp@ns1
[root@slocum:~] # zfs send zroot/bootenv/default/var/tmp@ns1 | ssh -T root@10.55.0.247 'zfs recv zroot/var/log1@ns1'
Password for root@newslocum.int.unixathome.org:
[root@slocum:~] #
These are the new drives after the above copy:
root@newslocum:~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               8.62G   197G    88K  /zroot
zroot/ROOT           755M   197G    88K  none
zroot/ROOT/default   477M   197G   477M  /
zroot/ROOT/default1  278M   197G   275M  /zroot/default1
zroot/tmp             88K   197G    88K  /tmp
zroot/usr            625M   197G    88K  /usr
zroot/usr/home       124K   197G   124K  /usr/home
zroot/usr/local      625M   197G   625M  /usr/local
zroot/usr1          1.26G   197G  1.26G  /zroot/usr1
zroot/var           3.04M   197G    88K  /var
zroot/var/audit       88K   197G    88K  /var/audit
zroot/var/crash       88K   197G    88K  /var/crash
zroot/var/log        140K   197G   140K  /var/log
zroot/var/log1      2.48M   197G  2.48M  /var/log1
zroot/var/mail        88K   197G    88K  /var/mail
zroot/var/tmp         88K   197G    88K  /var/tmp
zroot/var1          6.01G   197G  6.01G  /zroot/var1
root@newslocum:~ #
I think that’s it.
Copy again!
I’ll do another set of snapshots and incremental sends before the replacement occurs.
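Sketched out, the incremental pass might look like this, assuming a second snapshot named @ns2 (the other datasets follow the same pattern):

[root@slocum:~] # zfs snapshot zroot/bootenv/default/var@ns2
[root@slocum:~] # zfs send -i ns1 zroot/bootenv/default/var@ns2 | ssh -T root@10.55.0.247 'zfs recv zroot/var1'

If the destination dataset has been modified since @ns1, zfs recv -F will roll it back before applying the increment.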
When the time comes…
I’ll boot the test system from mfsBSD and rename all those new datasets (e.g. zfs rename zroot/ROOT/default zroot/ROOT/defaultOLD). Then I’ll do a test reboot to make sure it still boots.
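A sketch of those renames, using the dataset names from the listings above (the final list will be confirmed at the time, and the mountpoint tweak may not be needed):

zfs rename zroot/ROOT/default zroot/ROOT/defaultOLD
zfs rename zroot/ROOT/default1 zroot/ROOT/default
zfs rename zroot/var/log zroot/var/logOLD
zfs rename zroot/var/log1 zroot/var/log
# ... and likewise for usr1, var1, and the rest
zfs set mountpoint=/ zroot/ROOT/default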
There’s also the matter of a zpool export on the old and new drives, followed by a zpool import. I thought I might have to run sudo zpool set bootfs=zroot/ROOT/default zroot, but I think that was already set during the install of FreeBSD onto the new drives.
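From mfsBSD, that step might look roughly like this (the numeric pool id below is made up; zpool import with no arguments reveals the real one):

zpool import                        # list importable pools and their numeric ids
zpool import -f -N 1234567890123    # both pools are named zroot, so pick the new one by id
zpool get bootfs zroot              # confirm what the installer set
zpool set bootfs=zroot/ROOT/default zroot   # only if it is not already set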
I’m going to post this now and review it later. There is no rush for this.
The original plan
WARNING: I wrote most of this late at night and I know I rambled on. I think it helped me work through the idea in my head.
I’m posting for entertainment purposes only.
In this post I will describe how I replace a zroot (the bootable base-OS of my FreeBSD server) with a new zroot.
Why replace the zroot?
The existing zroot pool’s layout seems wrong. Compare this zroot:
[dan@slocum:~] $ zfs list -r zroot
NAME                              USED  AVAIL  REFER  MOUNTPOINT
zroot                            24.2G   181G  4.88G  /zroot
zroot/bootenv                    19.3G   181G    34K  legacy
zroot/bootenv/12.1                  1K   181G   482M  /
zroot/bootenv/default            19.3G   181G   604M  /
zroot/bootenv/default/tmp        79.0M   181G   955K  /tmp
zroot/bootenv/default/usr        9.27G   181G  2.38G  /usr
zroot/bootenv/default/usr/local  3.57G   181G  1.06G  /usr/local
zroot/bootenv/default/usr/obj    2.34M   181G  2.23M  /usr/obj
zroot/bootenv/default/usr/src    1.01G   181G  1.01G  /usr/src
zroot/bootenv/default/var        8.94G   181G  5.83G  /var
zroot/bootenv/default/var/audit    36K   181G    23K  /var/audit
zroot/bootenv/default/var/empty    23K   181G    23K  /var/empty
zroot/bootenv/default/var/log     201M   181G  19.5M  /var/log
zroot/bootenv/default/var/tmp    1.58M   181G  1.24M  /var/tmp
[dan@slocum:~] $
With this one:
[dan@r720-01:~] $ zfs list -r zroot
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               13.5G  56.7G    88K  /zroot
zroot/ROOT          5.24G  56.7G    88K  none
zroot/ROOT/default  5.24G  56.7G  2.90G  /
zroot/tmp           9.32M  56.7G   788K  /tmp
zroot/usr           8.17G  56.7G    88K  /usr
zroot/usr/home      3.67M  56.7G  2.40M  /usr/home
zroot/usr/obj       6.81G  56.7G  4.87G  /usr/obj
zroot/usr/ports       88K  56.7G    88K  /usr/ports
zroot/usr/src       1.36G  56.7G  1.36G  /usr/src
zroot/var           18.8M  56.7G    88K  /var
zroot/var/audit       88K  56.7G    88K  /var/audit
zroot/var/crash      128K  56.7G    88K  /var/crash
zroot/var/log       17.5M  56.7G  1.39M  /var/log
zroot/var/mail       416K  56.7G   112K  /var/mail
zroot/var/tmp        672K  56.7G   296K  /var/tmp
[dan@r720-01:~] $
slocum has been around since early 2013, long before the now-standard layout arose. It wasn’t until 2018 that a zroot was added to this system. I think that is where this layout can be traced back to.
However, this is not about layout. It is about function.
I’ve been having a lingering issue with pf and vnet: if I invoke pfctl -f /etc/pf.conf, I must restart all the vnet jails. If I do not, simple things like contacting the mail server from a vnet jail stop working. I am sure this is related to epair groups.
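In the meantime, the workaround looks like this (the jail names here are placeholders for my vnet jails):

pfctl -f /etc/pf.conf
service jail restart mailjail webjail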
Kristof Provost had a work-in-progress patch which I was eager to try. This required building a new kernel.
After I started the buildworld, error after error appeared.
The existing zroot, I discovered today, is missing some essential items. Case in point: there is no /usr/bin/cc on this system. Intriguing.
Next, ldd was missing.
A bunch of include files were missing as well.
It got to the point where I decided to build on another host, then install over NFS.
That’s why.
Creating the new zroot
I have ordered two Samsung SSD 860 EVO 250GB 2.5-inch SATA III internal SSDs. They will be used to create a ZFS mirror. I would rather create a new zroot and leave the existing one untouched.
I will use another system to partition the new SSDs and install FreeBSD.
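If I were to partition by hand instead of letting the installer do it, it would go roughly like this (assuming the SSDs show up as ada0 and ada1 and the box boots via BIOS):

gpart create -s gpt ada0
gpart add -a 4k -s 512k -t freebsd-boot ada0
gpart add -a 4k -s 2g -t freebsd-swap -l swap0 ada0
gpart add -a 4k -t freebsd-zfs -l zfs0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# repeat for ada1, then create the pool as a mirror of the two freebsd-zfs partitions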
Which datasets do I need?
Which datasets from the old zroot will I need to copy somewhere else?
[dan@slocum:~] $ zfs list -r zroot
NAME                              USED  AVAIL  REFER  MOUNTPOINT
zroot                            24.2G   181G  4.88G  /zroot
zroot/bootenv                    19.3G   181G    34K  legacy
zroot/bootenv/12.1                  1K   181G   482M  /
zroot/bootenv/default            19.3G   181G   604M  /
zroot/bootenv/default/tmp        79.0M   181G   955K  /tmp
zroot/bootenv/default/usr        9.27G   181G  2.38G  /usr
zroot/bootenv/default/usr/local  3.57G   181G  1.06G  /usr/local
zroot/bootenv/default/usr/obj    2.34M   181G  2.23M  /usr/obj
zroot/bootenv/default/usr/src    1.01G   181G  1.01G  /usr/src
zroot/bootenv/default/var        8.94G   181G  5.83G  /var
zroot/bootenv/default/var/audit    36K   181G    23K  /var/audit
zroot/bootenv/default/var/empty    23K   181G    23K  /var/empty
zroot/bootenv/default/var/log     201M   181G  19.5M  /var/log
zroot/bootenv/default/var/tmp    1.58M   181G  1.24M  /var/tmp
[dan@slocum:~] $
I do not need /tmp because that directory is cleared upon every boot:
$ grep tmp /etc/rc.conf
clear_tmp_enable="YES"
What about /usr?
The zroot/bootenv/default/usr and zroot/bootenv/default/usr/local datasets in that listing refer to /usr and /usr/local, so let’s look in there:
[dan@slocum:/usr] $ ls
bin      home     include  lib      lib32    libdata  libexec
local    obj      ports    sbin     share    src      tests
[dan@slocum:/usr] $
I will talk about home below, but I don’t need to move that.
Ports install to /usr/local, so I’m going to need to copy that over.
The new zroot layout will probably look like this:
zroot/usr        8.17G  56.7G    88K  /usr
zroot/usr/home   3.67M  56.7G  2.40M  /usr/home
zroot/usr/obj    6.81G  56.7G  4.87G  /usr/obj
zroot/usr/ports    88K  56.7G    88K  /usr/ports
zroot/usr/src    1.36G  56.7G  1.36G  /usr/src
home, obj, ports, and src will not be required. I will zfs destroy them on the new zroot after creation.
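On the new zroot, that cleanup would be something like:

zfs destroy zroot/usr/home
zfs destroy zroot/usr/obj
zfs destroy zroot/usr/ports
zfs destroy zroot/usr/src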
/var/audit is empty on the existing host. Nothing to do here.
[dan@slocum:/usr] $ sudo ls -l /var/audit
total 2
drwxrwx---  2 auditdistd  audit  2 Nov  8  2013 dist
drwx------  2 auditdistd  wheel  2 Nov  8  2013 remote
[dan@slocum:/usr] $ sudo ls -l /var/audit/dist
total 0
[dan@slocum:/usr] $ sudo ls -l /var/audit/remote
total 0
[dan@slocum:/usr] $
I don’t need /var/empty because it’s… empty.
I want:
/var/backups /var/cache /var/cron /var/db /var/lib /var/mail
… I think I should just copy over all of /var.
I need to copy over /var/log.
Looking in /var/tmp, I don’t need any of that either.
Configuration files
I need all of /etc/ and /usr/local/etc/.
There must be a better way
This is getting complex.
Why don’t I go back to a simple tar | tar solution? I used this back in 1999.
Why not just copy all of zroot over using tar? That is all completely independent of ZFS datasets and mountpoints.
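Something like this, assuming the new box is still at 10.55.0.247 (-p preserves permissions and modes on extract):

tar -cf - -C / etc usr/local var | ssh root@10.55.0.247 'tar -xpf - -C /'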