ZFS has the ability to share a dataset within a jail. That gives the jail some autonomy, and I like that.
I’ve written briefly about that, specifically for iocage. More recently, I started using a zfs snapshot for cache clearing.
The purpose of this post is to document the existing configuration of the production FreshPorts webserver and outline the plan for modifying it to do more zfs-snapshot-based cache clearing.
The current configuration
From within the jail, we see:
[dan@x8dtu-nginx01:~] $ zfs list
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
main_tank                                          972G  3.44T    96K  /main_tank
main_tank/data                                     797G  3.44T    88K  /main_tank/data
main_tank/data/freshports                         42.4G  3.44T    88K  /main_tank/data/freshports
main_tank/data/freshports/backend                 5.55G  3.44T    88K  /main_tank/data/freshports/backend
main_tank/data/freshports/backend/cache           14.3M  3.44T    88K  /iocage/jails/x8dtu-ingress01/root/var/db/freshports/cache
main_tank/data/freshports/backend/cache/packages  3.65M  3.44T  3.59M  /var/db/freshports/cache/packages
[dan@x8dtu-nginx01:~] $
Those mountpoints confused me at first. Looking closer, I see:
[dan@x8dtu-nginx01:~] $ zfs get -r -t filesystem mounted main_tank
NAME                                              PROPERTY  VALUE  SOURCE
main_tank                                         mounted   no     -
main_tank/data                                    mounted   no     -
main_tank/data/freshports                         mounted   no     -
main_tank/data/freshports/backend                 mounted   no     -
main_tank/data/freshports/backend/cache           mounted   no     -
main_tank/data/freshports/backend/cache/packages  mounted   yes    -
[dan@x8dtu-nginx01:~] $
Only main_tank/data/freshports/backend/cache/packages is mounted. That puts me at ease.
The /iocage/jails mountpoint is a leftover from before this dataset was jailed.
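One way to confirm which datasets have been delegated to a jail is the jailed property. This is a sketch, not captured output; a dataset with jailed=on is administered from inside the jail, which is why its stale mountpoint on the host is harmless:

```shell
# On the host: show the jailed property for the backend cache tree.
zfs get -r jailed main_tank/data/freshports/backend/cache
```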
The host
On the host, let’s look at the filesystem, from just one level up:
[dan@x8dtu:~] $ zfs list -r main_tank/data/freshports/backend/cache
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
main_tank/data/freshports/backend/cache           22.7M  3.44T    88K  /iocage/jails/x8dtu-ingress01/root/var/db/freshports/cache
main_tank/data/freshports/backend/cache/html      7.30M  3.44T   436K  /iocage/jails/x8dtu-ingress01/root/var/db/freshports/cache/html
main_tank/data/freshports/backend/cache/packages  11.4M  3.44T  11.3M  /var/db/freshports/cache/packages
main_tank/data/freshports/backend/cache/ports       88K  3.44T    88K  /iocage/jails/x8dtu-ingress01/root/var/db/freshports/cache/ports
main_tank/data/freshports/backend/cache/spooling  3.86M  3.44T    88K  /iocage/jails/x8dtu-ingress01/root/var/db/freshports/cache/spooling
[dan@x8dtu:~] $
This is shared between the backend (x8dtu-ingress01) and the frontend (x8dtu-nginx01).
I need to split this apart: one set of datasets for the frontend and another for the backend.
What I do elsewhere
For the dev, test, and stage jails (all on one host), I have it split by environment:
[dan@slocum:~] $ zfs list -r -d 1 nvd/freshports
NAME                     USED  AVAIL  REFER  MOUNTPOINT
nvd/freshports           136G  79.2G    23K  none
nvd/freshports/dev      39.2G  79.2G    23K  /iocage/jails/dev-ingress01/root/var/db/freshports
nvd/freshports/devgit   23.9G  79.2G    23K  /iocage/jails/devgit-ingress01/root/var/db/freshports
nvd/freshports/stage    34.6G  79.2G    23K  /iocage/jails/stage-ingress01/root/var/db/freshports
nvd/freshports/test     38.0G  79.2G    23K  /iocage/jails/test-ingress01/root/var/db/freshports
[dan@slocum:~] $
From there, I split it into the ingress jail and the website jail.
For production
For production I want something similar to what I have in staging:
[dan@slocum:~] $ zfs list -r system/data/freshports-cache/stage-nginx01
NAME                                                 USED  AVAIL  REFER  MOUNTPOINT
system/data/freshports-cache/stage-nginx01           527K  13.4T   176K  none
system/data/freshports-cache/stage-nginx01/packages  176K  13.4T   176K  /var/db/freshports/cache/packages
system/data/freshports-cache/stage-nginx01/ports     176K  13.4T   176K  /var/db/freshports/cache/ports
[dan@slocum:~] $
The packages filesystem was discussed within zfs snapshot for cache clearing. Since then, I have extended that approach to include ports.
The ports cache is occasionally cleared out completely, but rarely; a typical reason is a recent change in caching approach.
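The snapshot-based clearing boils down to rolling back to an @empty snapshot taken when the dataset was freshly created. A sketch, using the staging dataset name from the listing above:

```shell
# Once, at setup time: snapshot the freshly created (empty) cache dataset.
zfs snapshot system/data/freshports-cache/stage-nginx01/packages@empty

# Whenever the cache must be cleared: roll back to that snapshot.
# This discards everything written since, near-instantly, no matter
# how many files have accumulated.
zfs rollback system/data/freshports-cache/stage-nginx01/packages@empty
```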
When I went back to the production host, I discovered more:
main_tank/data/freshports/website                 3.36G  3.44T    88K  none
main_tank/data/freshports/website/cache           3.36G  3.44T   270M  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache
main_tank/data/freshports/website/cache/daily     7.99M  3.44T  7.84M  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/daily
main_tank/data/freshports/website/cache/news       131M  3.44T   128K  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/news
main_tank/data/freshports/website/cache/pages       88K  3.44T    88K  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/pages
main_tank/data/freshports/website/cache/ports     2.89G  3.44T   111M  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/ports
main_tank/data/freshports/website/cache/spooling  50.3M  3.44T  10.0M  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/spooling
[dan@x8dtu:~] $
Ahh, none of that is jailed. Perhaps it is time to jail that whole dataset instead of mounting those filesystems from outside.
What is involved in that?
It will be much the same process as for using zfs rollback for cache clearing.
In short…
I think these are the required steps:
- in x8dtu-nginx01: sudo sysrc nginx_enable="NO"
- take down the website jail: sudo iocage stop x8dtu-nginx01
- adjust the dataset (the zpool name is always omitted): sudo iocage set jail_zfs_dataset=data/freshports/website x8dtu-nginx01
- start the jail: sudo iocage start x8dtu-nginx01
- in the jail, adjust the mount point: sudo zfs set mountpoint=/var/db/freshports/cache main_tank/data/freshports/website/cache
- move the old cache away: cd /var/db/freshports/cache && mv packages packages.old
- create the new dataset: zfs create main_tank/data/freshports/website/cache/packages
- snapshot it: zfs snapshot main_tank/data/freshports/website/cache/packages@empty
- Change this line in /usr/local/etc/freshports/fp-listen.ini:
PKG_ZFS_SNAPSHOT = main_tank/data/freshports/backend/cache/packages@empty
to
PKG_ZFS_SNAPSHOT = main_tank/data/freshports/website/cache/packages@empty
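Strung together, the steps above might look like this. This is a sketch only, untested; the iocage exec wrapping and the sed edit are my shorthand, not commands I have run:

```shell
#!/bin/sh
# Sketch of the whole procedure; run on the host.

# 1. In the jail: stop nginx from starting, then stop the jail.
sudo iocage exec x8dtu-nginx01 sysrc nginx_enable="NO"
sudo iocage stop x8dtu-nginx01

# 2. Delegate the dataset to the jail (pool name omitted, per iocage convention).
#    jail_zfs=on is most likely already set, since cache/packages is jailed.
sudo iocage set jail_zfs=on x8dtu-nginx01
sudo iocage set jail_zfs_dataset=data/freshports/website x8dtu-nginx01
sudo iocage start x8dtu-nginx01

# 3. Inside the jail: re-root the cache, move the old cache aside,
#    create the new packages dataset, and snapshot it while empty.
sudo iocage exec x8dtu-nginx01 zfs set mountpoint=/var/db/freshports/cache \
    main_tank/data/freshports/website/cache
sudo iocage exec x8dtu-nginx01 mv /var/db/freshports/cache/packages \
    /var/db/freshports/cache/packages.old
sudo iocage exec x8dtu-nginx01 zfs create main_tank/data/freshports/website/cache/packages
sudo iocage exec x8dtu-nginx01 zfs snapshot main_tank/data/freshports/website/cache/packages@empty

# 4. Point fp-listen at the new snapshot (FreeBSD sed in-place syntax).
sudo iocage exec x8dtu-nginx01 sed -i '' \
    -e 's|backend/cache/packages@empty|website/cache/packages@empty|' \
    /usr/local/etc/freshports/fp-listen.ini
```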
Let’s check the existing mountpoints:
$ zfs get -t filesystem -r mountpoint main_tank/data/freshports/website
NAME                                              PROPERTY    VALUE                                                              SOURCE
main_tank/data/freshports/website                 mountpoint  none                                                               local
main_tank/data/freshports/website/cache           mountpoint  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache           local
main_tank/data/freshports/website/cache/daily     mountpoint  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/daily     inherited from main_tank/data/freshports/website/cache
main_tank/data/freshports/website/cache/news      mountpoint  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/news      inherited from main_tank/data/freshports/website/cache
main_tank/data/freshports/website/cache/pages     mountpoint  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/pages     inherited from main_tank/data/freshports/website/cache
main_tank/data/freshports/website/cache/ports     mountpoint  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/ports     inherited from main_tank/data/freshports/website/cache
main_tank/data/freshports/website/cache/spooling  mountpoint  /iocage/jails/x8dtu-nginx01/root/var/db/freshports/cache/spooling  inherited from main_tank/data/freshports/website/cache
Good: the children's mountpoints are inherited.
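Because only the cache dataset's mountpoint is local, a single property change inside the jail should re-root the entire tree. A sketch of what I expect, not captured output:

```shell
# Inside the jail, once the dataset is delegated:
zfs set mountpoint=/var/db/freshports/cache main_tank/data/freshports/website/cache

# The children should follow automatically: .../cache/daily, .../cache/news,
# and so on, with SOURCE still reading
# "inherited from main_tank/data/freshports/website/cache".
zfs get -r mountpoint main_tank/data/freshports/website/cache
```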
I think this should work. I will leave the actual work for another day.
To destroy later
I can destroy main_tank/data/freshports/backend/cache/packages later. I often rename files or datasets to something like _DELETE_ME_after-2020.08.30 and then set myself a reminder. There is usually no rush to delete, and I feel better about not deleting immediately. Just in case.
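The deferred-delete rename is a one-liner (a sketch; the dated suffix is just my convention):

```shell
# Rename rather than destroy; the reminder date makes the intent obvious.
zfs rename main_tank/data/freshports/backend/cache/packages \
    main_tank/data/freshports/backend/cache/packages_DELETE_ME_after-2020.08.30

# Later, once I'm sure nothing misses it:
# zfs destroy main_tank/data/freshports/backend/cache/packages_DELETE_ME_after-2020.08.30
```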