Not so long ago, a disk space issue arose. I obtained some new drives (thank you to those who donated) and I consolidated some of the space. However, I had not yet taken care of the problem.
For my first example, I’ll copy my vm data over. In my next post, I’ll copy dev.freshports.org to a new dataset.
In this post:
- FreeBSD 15.0
What happened?
Yesterday, the chickens came home to roost.
The first indication was Nagios failing to complete a manual service check I had requested. That’s when I noticed the drive was nearly full.
The following is the current situation:
[12:24 r730-01 dvl ~] % zpool list
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data01  5.81T  6.37G  5.81T        -         -     2%     0%  1.00x  ONLINE  -
data02   928G   819G   109G        -         -    75%    88%  1.00x  ONLINE  -
data03  7.25T  1.63T  5.62T        -         -    49%    22%  1.00x  ONLINE  -
data04  29.1T  7.37T  21.7T        -         -     0%    25%  1.00x  ONLINE  -
zroot    107G  70.9G  36.1G        -         -    61%    66%  1.00x  ONLINE  -
I have a filesystem set aside for emergencies like this. It allows me to quickly free up space when a pool fills, and buys me some time before action is required. This has come up before – see 452 4.3.1 Insufficient system storage – and it came up again this morning.
[12:29 r730-01 dvl ~] % zfs list data02/reserved
NAME              USED  AVAIL  REFER  MOUNTPOINT
data02/reserved    20G  80.5G    96K  none
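If you want a similar safety net, setting one up is quick. A minimal sketch, assuming a pool named data02 (the reservation size is whatever you can afford to set aside):

# an empty dataset whose only job is to hold a reservation
zfs create -o canmount=off -o mountpoint=none data02/reserved
zfs set refreservation=20G data02/reserved
# when the pool fills, shrink (or drop) the reservation to free space:
# zfs set refreservation=none data02/reserved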
On Saturday (yesterday) when this arose, it took me a while to remember this filesystem. I reduced its refreservation from 200G to 80G, freeing up 120G just like that. The following is an extract from /var/log/auth.log:
Apr 18 22:16:21 r730-01 sudo[6092]: dvl : TTY=pts/1 ; PWD=/usr/home/dvl ; USER=root ; COMMAND=/sbin/zfs set refreservation=80G data02/reserved
Overnight, 177 emails hit my inbox, each looking like this:
From: Mail Delivery System <MAILER-DAEMON@cliff2.int.unixathome.org>
To: Postmaster <postmaster@cliff2.int.unixathome.org>
Subject: Postfix SMTP server: errors from webserver.int.unixathome.org[10.55.0.3]
Message-Id: <20260419050142.939E22D1D5@cliff2.int.unixathome.org>

Transcript of session follows.

 Out: 220 cliff2.int.unixathome.org ESMTP Postfix
 In:  EHLO webserver.int.unixathome.org
 Out: 250-cliff2.int.unixathome.org
 Out: 250-PIPELINING
 Out: 250-SIZE 10485760000
 Out: 250-ETRN
 Out: 250-STARTTLS
 Out: 250-ENHANCEDSTATUSCODES
 Out: 250-8BITMIME
 Out: 250-DSN
 Out: 250-SMTPUTF8
 Out: 250 CHUNKING
 In:  STARTTLS
 Out: 220 2.0.0 Ready to start TLS
 In:  EHLO webserver.int.unixathome.org
 Out: 250-cliff2.int.unixathome.org
 Out: 250-PIPELINING
 Out: 250-SIZE 10485760000
 Out: 250-ETRN
 Out: 250-ENHANCEDSTATUSCODES
 Out: 250-8BITMIME
 Out: 250-DSN
 Out: 250-SMTPUTF8
 Out: 250-CHUNKING
 Out: 250 REQUIRETLS
 In:  MAIL FROM:<nagios@webserver.int.unixathome.org>
 Out: 452 4.3.1 Insufficient system storage

Session aborted, reason: lost connection

For other details, see the local mail logfile
Looking in /var/log/maillog on cliff2, I found:
Apr 19 05:01:42 cliff2 postfix/smtpd[23614]: NOQUEUE: reject: MAIL from webserver.int.unixathome.org[10.55.0.3]: 452 4.3.1 Insufficient system storage; proto=ESMTP helo=<webserver.int.unixathome.org>
About 20 minutes ago, I ran the following command (as recorded in /var/log/auth.log):
Apr 19 12:20:37 r730-01 sudo[78444]: dvl : TTY=pts/0 ; PWD=/usr/home/dvl ; USER=root ; COMMAND=/sbin/zfs set refreservation=20G data02/reserved
The source
This is the nearly-full zpool:
[12:44 r730-01 dvl ~] % zfs list -r data02
NAME  USED  AVAIL  REFER  MOUNTPOINT
data02  839G  60.5G  96K  none
data02/freshports  293G  60.5G  88K  none
data02/freshports/dev-ingress01  229G  60.5G  88K  none
data02/freshports/dev-ingress01/dvl-src  197G  60.5G  197G  /jails/dev-ingress01/usr/home/dvl/src
data02/freshports/dev-ingress01/freshports  22.9G  60.5G  2.09G  /jails/dev-ingress01/var/db/freshports
data02/freshports/dev-ingress01/freshports/cache  2.30M  60.5G  132K  /jails/dev-ingress01/var/db/freshports/cache
data02/freshports/dev-ingress01/freshports/cache/html  2.07M  60.5G  2.07M  /jails/dev-ingress01/var/db/freshports/cache/html
data02/freshports/dev-ingress01/freshports/cache/spooling  104K  60.5G  104K  /jails/dev-ingress01/var/db/freshports/cache/spooling
data02/freshports/dev-ingress01/freshports/message-queues  20.8G  60.5G  14.4M  /jails/dev-ingress01/var/db/freshports/message-queues
data02/freshports/dev-ingress01/freshports/message-queues/archive  20.8G  60.5G  11.7G  /jails/dev-ingress01/var/db/freshports/message-queues/archive
data02/freshports/dev-ingress01/ingress  5.39G  60.5G  132K  /jails/dev-ingress01/var/db/ingress
data02/freshports/dev-ingress01/ingress/latest_commits  528K  60.5G  108K  /jails/dev-ingress01/var/db/ingress/latest_commits
data02/freshports/dev-ingress01/ingress/message-queues  1.43M  60.5G  628K  /jails/dev-ingress01/var/db/ingress/message-queues
data02/freshports/dev-ingress01/ingress/repos  5.39G  60.5G  120K  /jails/dev-ingress01/var/db/ingress/repos
data02/freshports/dev-ingress01/ingress/repos/doc  549M  60.5G  546M  /jails/dev-ingress01/var/db/ingress/repos/doc
data02/freshports/dev-ingress01/ingress/repos/ports  2.23G  60.5G  2.22G  /jails/dev-ingress01/var/db/ingress/repos/ports
data02/freshports/dev-ingress01/ingress/repos/src  2.63G  60.5G  2.62G  /jails/dev-ingress01/var/db/ingress/repos/src
data02/freshports/dev-ingress01/jails  3.05G  60.5G  104K  /jails/dev-ingress01/jails
data02/freshports/dev-ingress01/jails/freshports  3.05G  60.5G  405M  /jails/dev-ingress01/jails/freshports
data02/freshports/dev-ingress01/jails/freshports/ports  2.65G  60.5G  2.65G  /jails/dev-ingress01/jails/freshports/usr/ports
data02/freshports/dev-ingress01/modules  4.38M  60.5G  4.38M  /jails/dev-ingress01/usr/local/lib/perl5/site_perl/FreshPorts
data02/freshports/dev-ingress01/scripts  3.30M  60.5G  3.30M  /jails/dev-ingress01/usr/local/libexec/freshports
data02/freshports/dev-nginx01  54.7M  60.5G  96K  none
data02/freshports/dev-nginx01/www  54.6M  60.5G  96K  /jails/dev-nginx01/usr/local/www
data02/freshports/dev-nginx01/www/freshports  51.8M  60.5G  51.8M  /jails/dev-nginx01/usr/local/www/freshports
data02/freshports/dev-nginx01/www/freshsource  2.71M  60.5G  2.71M  /jails/dev-nginx01/usr/local/www/freshsource
data02/freshports/dvl-ingress01  16.3G  60.5G  96K  none
data02/freshports/dvl-ingress01/dvl-src  80.3M  60.5G  80.3M  /jails/dvl-ingress01/usr/home/dvl/src
data02/freshports/dvl-ingress01/freshports  4.92G  60.5G  96K  /jails/dvl-ingress01/var/db/freshports
data02/freshports/dvl-ingress01/freshports/cache  2.45M  60.5G  96K  /jails/dvl-ingress01/var/db/freshports/cache
data02/freshports/dvl-ingress01/freshports/cache/html  2.15M  60.5G  2.07M  /jails/dvl-ingress01/var/db/freshports/cache/html
data02/freshports/dvl-ingress01/freshports/cache/spooling  208K  60.5G  208K  /jails/dvl-ingress01/var/db/freshports/cache/spooling
data02/freshports/dvl-ingress01/freshports/message-queues  4.92G  60.5G  19.6M  /jails/dvl-ingress01/var/db/freshports/message-queues
data02/freshports/dvl-ingress01/freshports/message-queues/archive  4.90G  60.5G  4.90G  /jails/dvl-ingress01/var/db/freshports/message-queues/archive
data02/freshports/dvl-ingress01/ingress  5.45G  60.5G  140K  /jails/dvl-ingress01/var/db/ingress
data02/freshports/dvl-ingress01/ingress/latest_commits  100K  60.5G  100K  /jails/dvl-ingress01/var/db/ingress/latest_commits
data02/freshports/dvl-ingress01/ingress/message-queues  160K  60.5G  160K  /jails/dvl-ingress01/var/db/ingress/message-queues
data02/freshports/dvl-ingress01/ingress/repos  5.45G  60.5G  112K  /jails/dvl-ingress01/var/db/ingress/repos
data02/freshports/dvl-ingress01/ingress/repos/doc  543M  60.5G  543M  /jails/dvl-ingress01/var/db/ingress/repos/doc
data02/freshports/dvl-ingress01/ingress/repos/ports  2.23G  60.5G  2.23G  /jails/dvl-ingress01/var/db/ingress/repos/ports
data02/freshports/dvl-ingress01/ingress/repos/src  2.69G  60.5G  2.69G  /jails/dvl-ingress01/var/db/ingress/repos/src
data02/freshports/dvl-ingress01/jails  5.83G  60.5G  104K  /jails/dvl-ingress01/jails
data02/freshports/dvl-ingress01/jails/freshports  5.83G  60.5G  404M  /jails/dvl-ingress01/jails/freshports
data02/freshports/dvl-ingress01/jails/freshports/ports  5.43G  60.5G  2.64G  /jails/dvl-ingress01/jails/freshports/usr/ports
data02/freshports/dvl-ingress01/modules  2.67M  60.5G  2.67M  /jails/dvl-ingress01/usr/local/lib/perl5/site_perl/FreshPorts
data02/freshports/dvl-ingress01/scripts  2.34M  60.5G  2.34M  /jails/dvl-ingress01/usr/local/libexec/freshports
data02/freshports/dvl-nginx01  22.3M  60.5G  96K  none
data02/freshports/dvl-nginx01/www  22.2M  60.5G  96K  none
data02/freshports/dvl-nginx01/www/freshports  20.4M  60.5G  20.4M  /jails/dvl-nginx01/usr/local/www/freshports
data02/freshports/dvl-nginx01/www/freshsource  1.78M  60.5G  1.78M  /jails/dvl-nginx01/usr/local/www/freshsource
data02/freshports/jailed  3.93G  60.5G  96K  none
data02/freshports/jailed/dev-ingress01  96K  60.5G  96K  none
data02/freshports/jailed/dev-nginx01  1.39G  60.5G  96K  none
data02/freshports/jailed/dev-nginx01/cache  1.39G  60.5G  96K  /var/db/freshports/cache
data02/freshports/jailed/dev-nginx01/cache/categories  1.39M  60.5G  1.31M  /var/db/freshports/cache/categories
data02/freshports/jailed/dev-nginx01/cache/commits  96K  60.5G  96K  /var/db/freshports/cache/commits
data02/freshports/jailed/dev-nginx01/cache/daily  5.65M  60.5G  5.56M  /var/db/freshports/cache/daily
data02/freshports/jailed/dev-nginx01/cache/general  7.19M  60.5G  7.11M  /var/db/freshports/cache/general
data02/freshports/jailed/dev-nginx01/cache/news  228K  60.5G  140K  /var/db/freshports/cache/news
data02/freshports/jailed/dev-nginx01/cache/packages  5.76M  60.5G  5.68M  /var/db/freshports/cache/packages
data02/freshports/jailed/dev-nginx01/cache/pages  96K  60.5G  96K  /var/db/freshports/cache/pages
data02/freshports/jailed/dev-nginx01/cache/ports  1.37G  60.5G  1.37G  /var/db/freshports/cache/ports
data02/freshports/jailed/dev-nginx01/cache/spooling  224K  60.5G  120K  /var/db/freshports/cache/spooling
data02/freshports/jailed/dvl-ingress01  192K  60.5G  96K  none
data02/freshports/jailed/dvl-ingress01/distfiles  96K  60.5G  96K  none
data02/freshports/jailed/dvl-nginx01  1.84M  60.5G  96K  none
data02/freshports/jailed/dvl-nginx01/cache  1.65M  60.5G  148K  /var/db/freshports/cache
data02/freshports/jailed/dvl-nginx01/cache/categories  96K  60.5G  96K  /var/db/freshports/cache/categories
data02/freshports/jailed/dvl-nginx01/cache/commits  96K  60.5G  96K  /var/db/freshports/cache/commits
data02/freshports/jailed/dvl-nginx01/cache/daily  96K  60.5G  96K  /var/db/freshports/cache/daily
data02/freshports/jailed/dvl-nginx01/cache/general  96K  60.5G  96K  /var/db/freshports/cache/general
data02/freshports/jailed/dvl-nginx01/cache/news  220K  60.5G  140K  /var/db/freshports/cache/news
data02/freshports/jailed/dvl-nginx01/cache/packages  96K  60.5G  96K  /var/db/freshports/cache/packages
data02/freshports/jailed/dvl-nginx01/cache/pages  96K  60.5G  96K  /var/db/freshports/cache/pages
data02/freshports/jailed/dvl-nginx01/cache/ports  448K  60.5G  368K  /var/db/freshports/cache/ports
data02/freshports/jailed/dvl-nginx01/cache/spooling  200K  60.5G  120K  /var/db/freshports/cache/spooling
data02/freshports/jailed/dvl-nginx01/freshports  96K  60.5G  96K  none
data02/freshports/jailed/stage-ingress01  192K  60.5G  96K  none
data02/freshports/jailed/stage-ingress01/data  96K  60.5G  96K  none
data02/freshports/jailed/stage-nginx01  1.43G  60.5G  96K  none
data02/freshports/jailed/stage-nginx01/cache  1.43G  60.5G  248K  /var/db/freshports/cache
data02/freshports/jailed/stage-nginx01/cache/categories  1.38M  60.5G  1.30M  /var/db/freshports/cache/categories
data02/freshports/jailed/stage-nginx01/cache/commits  96K  60.5G  96K  /var/db/freshports/cache/commits
data02/freshports/jailed/stage-nginx01/cache/daily  16.4M  60.5G  16.3M  /var/db/freshports/cache/daily
data02/freshports/jailed/stage-nginx01/cache/general  7.44M  60.5G  7.36M  /var/db/freshports/cache/general
data02/freshports/jailed/stage-nginx01/cache/news  228K  60.5G  140K  /var/db/freshports/cache/news
data02/freshports/jailed/stage-nginx01/cache/packages  3.96M  60.5G  3.88M  /var/db/freshports/cache/packages
data02/freshports/jailed/stage-nginx01/cache/pages  96K  60.5G  96K  /var/db/freshports/cache/pages
data02/freshports/jailed/stage-nginx01/cache/ports  1.40G  60.5G  1.40G  /var/db/freshports/cache/ports
data02/freshports/jailed/stage-nginx01/cache/spooling  232K  60.5G  120K  /var/db/freshports/cache/spooling
data02/freshports/jailed/test-ingress01  192K  60.5G  96K  none
data02/freshports/jailed/test-ingress01/data  96K  60.5G  96K  none
data02/freshports/jailed/test-nginx01  1.10G  60.5G  96K  none
data02/freshports/jailed/test-nginx01/cache  1.10G  60.5G  236K  /var/db/freshports/cache
data02/freshports/jailed/test-nginx01/cache/categories  896K  60.5G  808K  /var/db/freshports/cache/categories
data02/freshports/jailed/test-nginx01/cache/commits  96K  60.5G  96K  /var/db/freshports/cache/commits
data02/freshports/jailed/test-nginx01/cache/daily  23.5M  60.5G  23.4M  /var/db/freshports/cache/daily
data02/freshports/jailed/test-nginx01/cache/general  5.65M  60.5G  5.54M  /var/db/freshports/cache/general
data02/freshports/jailed/test-nginx01/cache/news  228K  60.5G  140K  /var/db/freshports/cache/news
data02/freshports/jailed/test-nginx01/cache/packages  1.65M  60.5G  1.57M  /var/db/freshports/cache/packages
data02/freshports/jailed/test-nginx01/cache/pages  96K  60.5G  96K  /var/db/freshports/cache/pages
data02/freshports/jailed/test-nginx01/cache/ports  1.07G  60.5G  1.07G  /var/db/freshports/cache/ports
data02/freshports/jailed/test-nginx01/cache/spooling  232K  60.5G  120K  /var/db/freshports/cache/spooling
data02/freshports/stage-ingress01  19.6G  60.5G  96K  none
data02/freshports/stage-ingress01/cache  2.30M  60.5G  96K  /jails/stage-ingress01/var/db/freshports/cache
data02/freshports/stage-ingress01/cache/html  2.10M  60.5G  2.02M  /jails/stage-ingress01/var/db/freshports/cache/html
data02/freshports/stage-ingress01/cache/spooling  104K  60.5G  104K  /jails/stage-ingress01/var/db/freshports/cache/spooling
data02/freshports/stage-ingress01/freshports  11.1G  60.5G  96K  none
data02/freshports/stage-ingress01/freshports/archive  11.0G  60.5G  11.0G  /jails/stage-ingress01/var/db/freshports/message-queues/archive
data02/freshports/stage-ingress01/freshports/message-queues  63.8M  60.5G  61.8M  /jails/stage-ingress01/var/db/freshports/message-queues
data02/freshports/stage-ingress01/ingress  5.41G  60.5G  96K  /jails/stage-ingress01/var/db/ingress
data02/freshports/stage-ingress01/ingress/latest_commits  404K  60.5G  100K  /jails/stage-ingress01/var/db/ingress/latest_commits
data02/freshports/stage-ingress01/ingress/message-queues  1.12M  60.5G  180K  /jails/stage-ingress01/var/db/ingress/message-queues
data02/freshports/stage-ingress01/ingress/repos  5.41G  60.5G  5.39G  /jails/stage-ingress01/var/db/ingress/repos
data02/freshports/stage-ingress01/jails  405M  60.5G  104K  /jails/stage-ingress01/jails
data02/freshports/stage-ingress01/jails/freshports  405M  60.5G  404M  /jails/stage-ingress01/jails/freshports
data02/freshports/stage-ingress01/ports  2.68G  60.5G  2.68G  /jails/stage-ingress01/jails/freshports/usr/ports
data02/freshports/test-ingress01  24.5G  60.5G  96K  none
data02/freshports/test-ingress01/freshports  13.2G  60.5G  2.05G  /jails/test-ingress01/var/db/freshports
data02/freshports/test-ingress01/freshports/cache  2.21M  60.5G  96K  /jails/test-ingress01/var/db/freshports/cache
data02/freshports/test-ingress01/freshports/cache/html  2.01M  60.5G  2.01M  /jails/test-ingress01/var/db/freshports/cache/html
data02/freshports/test-ingress01/freshports/cache/spooling  104K  60.5G  104K  /jails/test-ingress01/var/db/freshports/cache/spooling
data02/freshports/test-ingress01/freshports/message-queues  11.1G  60.5G  62.9M  /jails/test-ingress01/var/db/freshports/message-queues
data02/freshports/test-ingress01/freshports/message-queues/archive  11.1G  60.5G  11.1G  /jails/test-ingress01/var/db/freshports/message-queues/archive
data02/freshports/test-ingress01/ingress  8.23G  60.5G  128K  /jails/test-ingress01/var/db/ingress
data02/freshports/test-ingress01/ingress/latest_commits  344K  60.5G  100K  /jails/test-ingress01/var/db/ingress/latest_commits
data02/freshports/test-ingress01/ingress/message-queues  1.04M  60.5G  164K  /jails/test-ingress01/var/db/ingress/message-queues
data02/freshports/test-ingress01/ingress/repos  8.23G  60.5G  5.14G  /jails/test-ingress01/var/db/ingress/repos
data02/freshports/test-ingress01/jails  3.05G  60.5G  96K  /jails/test-ingress01/jails
data02/freshports/test-ingress01/jails/freshports  3.05G  60.5G  405M  /jails/test-ingress01/jails/freshports
data02/freshports/test-ingress01/jails/freshports/ports  2.65G  60.5G  2.65G  /jails/test-ingress01/jails/freshports/usr/ports
data02/jails  459G  60.5G  9.54G  /jails
data02/jails/bacula  16.5G  60.5G  16.4G  /jails/bacula
data02/jails/bacula-sd-02  3.14G  60.5G  3.05G  /jails/bacula-sd-02
data02/jails/bacula-sd-03  4.33G  60.5G  4.25G  /jails/bacula-sd-03
data02/jails/besser  7.02G  60.5G  5.54G  /jails/besser
data02/jails/certs  2.61G  60.5G  2.55G  /jails/certs
data02/jails/certs-rsync  2.61G  60.5G  2.55G  /jails/certs-rsync
data02/jails/cliff2  2.66G  60.5G  2.60G  /jails/cliff2
data02/jails/dev-ingress01  4.21G  60.5G  4.13G  /jails/dev-ingress01
data02/jails/dev-nginx01  3.58G  60.5G  3.50G  /jails/dev-nginx01
data02/jails/dns-hidden-master  2.81G  60.5G  2.76G  /jails/dns-hidden-master
data02/jails/dns1  5.07G  60.5G  5.01G  /jails/dns1
data02/jails/dvl-ingress01  7.51G  60.5G  6.43G  /jails/dvl-ingress01
data02/jails/dvl-nginx01  1.53G  60.5G  1.43G  /jails/dvl-nginx01
data02/jails/git  4.54G  60.5G  4.48G  /jails/git
data02/jails/jail_within_jail  598M  60.5G  585M  /jails/jail_within_jail
data02/jails/mqtt01  3.28G  60.5G  3.18G  /jails/mqtt01
data02/jails/mydev  20.6G  60.5G  20.4G  /jails/mydev
data02/jails/mysql01  9.40G  60.5G  5.71G  /jails/mysql01
data02/jails/mysql02  8.45G  60.5G  8.73G  /jails/mysql02
data02/jails/nsnotify  2.83G  60.5G  2.75G  /jails/nsnotify
data02/jails/pg01  239G  60.5G  126G  /jails/pg01
data02/jails/pg02  11.8G  60.5G  11.7G  /jails/pg02
data02/jails/pg03  11.0G  60.5G  10.0G  /jails/pg03
data02/jails/pkg01  15.5G  60.5G  15.3G  /jails/pkg01
data02/jails/samdrucker  4.42G  60.5G  4.36G  /jails/samdrucker
data02/jails/serpico  2.71G  60.5G  2.64G  /jails/serpico
data02/jails/stage-ingress01  4.86G  60.5G  3.77G  /jails/stage-ingress01
data02/jails/stage-nginx01  1.72G  60.5G  1.65G  /jails/stage-nginx01
data02/jails/svn  10.2G  60.5G  9.97G  /jails/svn
data02/jails/talos  2.83G  60.5G  2.60G  /jails/talos
data02/jails/test-ingress01  2.13G  60.5G  2.03G  /jails/test-ingress01
data02/jails/test-nginx01  1.66G  60.5G  1.58G  /jails/test-nginx01
data02/jails/unifi01  16.7G  60.5G  13.3G  /jails/unifi01
data02/jails/webserver  11.6G  60.5G  11.4G  /jails/webserver
data02/reserved  20G  80.5G  96K  none
data02/vm  65.9G  60.5G  7.35G  /usr/local/vm
data02/vm/FreeBSD-16.0-CURRENT  3.06G  60.5G  3.03G  /usr/local/vm/FreeBSD-16.0-CURRENT
data02/vm/freebsd-test  701M  60.5G  112K  /usr/local/vm/freebsd-test
data02/vm/freebsd-test/disk0  700M  60.5G  700M  -
data02/vm/hass  51.9G  60.5G  17.4G  /usr/local/vm/hass
data02/vm/home-assistant  351M  60.5G  351M  /usr/local/vm/home-assistant
data02/vm/myguest  2.55G  60.5G  2.55G  /usr/local/vm/myguest
Let’s get stuff moving
Usually, my approach to moving data which is in use is as follows: I start copying some datasets to the new zpool without shutting down the applications using them. Then I do an incremental send (only the changes since the previous send). For the last send, I shut down the applications so that the final copy is complete. NOTE: for some data, I don’t follow this approach, and instead shut down the app first. For large datasets which take a long time to transfer, multiple sends (for me) decrease the actual downtime of the service.
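As a rough sketch of that workflow (the pool, dataset, and snapshot names here are placeholders, not the exact commands I ran):

# initial full copy, application still running:
zfs snapshot -r source/dataset@copy.1
zfs send -R source/dataset@copy.1 | zfs receive destination/dataset
# later: send only what changed since copy.1, application still running:
zfs snapshot -r source/dataset@copy.2
zfs send -R -i @copy.1 source/dataset@copy.2 | zfs receive destination/dataset
# finally: stop the application, snapshot once more, send the last increment:
zfs snapshot -r source/dataset@copy.3
zfs send -R -i @copy.2 source/dataset@copy.3 | zfs receive destination/dataset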
I am sending from zpool data02 to zpool data04.
I will start with data02/vm, which hosts my bhyve instances.
I will do this as root from within a tmux session.
[12:47 r730-01 dvl ~] % su
Password:
root@r730-01:/home/dvl # tmux
This is what I want to relocate:
root@r730-01:/home/dvl # zfs list -r data02/vm
NAME                            USED  AVAIL  REFER  MOUNTPOINT
data02/vm                      65.9G  60.5G  7.35G  /usr/local/vm
data02/vm/FreeBSD-16.0-CURRENT  3.06G  60.5G  3.03G  /usr/local/vm/FreeBSD-16.0-CURRENT
data02/vm/freebsd-test          701M  60.5G   112K  /usr/local/vm/freebsd-test
data02/vm/freebsd-test/disk0    700M  60.5G   700M  -
data02/vm/hass                 51.9G  60.5G  17.4G  /usr/local/vm/hass
data02/vm/home-assistant        351M  60.5G   351M  /usr/local/vm/home-assistant
data02/vm/myguest              2.55G  60.5G  2.55G  /usr/local/vm/myguest
This is the destination:
[12:52 r730-01 dvl ~] % zfs list -r data04
NAME                               USED  AVAIL  REFER  MOUNTPOINT
data04                            5.24T  15.3T   222K  /data04
data04/bacula                     5.23T  15.3T   205K  /jails/bacula-sd-03/usr/local/bacula
data04/bacula/volumes             5.23T  15.3T   205K  /jails/bacula-sd-03/usr/local/bacula/volumes
data04/bacula/volumes/DiffFile-03  571G  15.3T   143G  /jails/bacula-sd-03/usr/local/bacula/volumes/DiffFile-03
data04/bacula/volumes/FullFile-03 4.15T  15.3T  2.58T  /jails/bacula-sd-03/usr/local/bacula/volumes/FullFile-03
data04/bacula/volumes/IncrFile-03  536G  15.3T  81.2G  /jails/bacula-sd-03/usr/local/bacula/volumes/IncrFile-03
data04/bacula/working              478K  15.3T   264K  /jails/bacula-sd-03/usr/local/bacula/working
data04/images                      974M  15.3T   205K  /data04/images
data04/images/FreeBSD              974M  15.3T   974M  /data04/images/FreeBSD
data04/r720-02                    1.42G  15.3T   188K  /data04/r720-02
data04/r720-02/freebsd_releases    468M  15.3T   468M  /var/db/mkjail
data04/r720-02/jails               985M  15.3T   717K  /jails
data04/r720-02/jails/svn           984M  15.3T   984M  /jails/svn
data04/vm                          410K  15.3T   205K  /data04/vm
data04/vm/FreeBSD-current          205K  15.3T   205K  /usr/local/vm/FreeBSD-current
I notice I already have some vm-related data over there. I recognize it from when I created a FreeBSD 16 virtual machine to test a FreeBSD ports issue which arose only on FreeBSD 16 (all I had here at the time was FreeBSD 15).
I’m sure there is a way to copy this over without doing it in several steps, as I’m about to. I have only six filesystems to migrate; if there were more, I’d look for a better way. For now, it’s like this:
root@r730-01:/home/dvl # zfs send -vR data02/vm/FreeBSD-16.0-CURRENT@send.to.data04.1 | zfs receive data04/vm/FreeBSD-16.0-CURRENT
full send of data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-03-30_00:00:14_daily estimated size is 1005M
send from @autosnap_2026-03-30_00:00:14_daily to data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-03-31_00:00:19_daily estimated size is 4.43G
send from @autosnap_2026-03-31_00:00:19_daily to data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-01_00:00:12_monthly estimated size is 624B
send from @autosnap_2026-04-01_00:00:12_monthly to data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-01_00:00:12_daily estimated size is 624B
send from @autosnap_2026-04-01_00:00:12_daily to data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-02_00:00:16_daily estimated size is 624B
...
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_10:45:12_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_11:00:06_hourly
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_11:00:06_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_11:15:10_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_11:30:03_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_11:45:10_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_12:00:02_hourly
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_12:00:02_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_12:15:12_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_12:30:02_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@autosnap_2026-04-19_12:45:10_frequently
TIME        SENT   SNAPSHOT data02/vm/FreeBSD-16.0-CURRENT@send.to.data04.1
Repeat:
root@r730-01:/home/dvl # zfs send -vR data02/vm/freebsd-test@send.to.data04.1 | zfs receive data04/vm/freebsd-test
full send of data02/vm/freebsd-test@autosnap_2026-01-01_00:00:18_monthly estimated size is 51.6K
...
TIME        SENT   SNAPSHOT data02/vm/freebsd-test/disk0@send.to.data04.1
root@r730-01:/home/dvl #
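Note that -R sends the dataset and all of its descendants, which is why data02/vm/freebsd-test/disk0 (the zvol backing that VM’s disk) came along without being named on the command line.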
Stopping the app
There is only one VM running here:
[13:00 r730-01 dvl ~] % sudo vm list
NAME                  DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
FreeBSD-16.0-CURRENT  default    bhyveload  4    32GB    -    No       Stopped
freebsd-test          default    bhyveload  1    256M    -    No       Stopped
hass                  default    uefi       4    8GB     -    Yes [1]  Running (78344)
home-assistant        default    uefi       1    1GB     -    No       Stopped
myguest               default    bhyveload  1    768M    -    No       Stopped
So I stopped it:
[13:00 r730-01 dvl ~] % sudo vm stop hass
Sending ACPI shutdown to hass
After waiting for it to stop:
[13:00 r730-01 dvl ~] % sudo vm list
NAME                  DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
FreeBSD-16.0-CURRENT  default    bhyveload  4    32GB    -    No       Stopped
freebsd-test          default    bhyveload  1    256M    -    No       Stopped
hass                  default    uefi       4    8GB     -    Yes [1]  Stopped
home-assistant        default    uefi       1    1GB     -    No       Stopped
myguest               default    bhyveload  1    768M    -    No       Stopped
[13:02 r730-01 dvl ~] %
Now back to the transfers.
Sending hass
I had taken the first snapshot while the VM was still running. I never sent it, so I’ll take a fresh snapshot now that the VM is stopped, and send that instead.
root@r730-01:/home/dvl # zfs snapshot -r data02/vm/hass@send.to.data04.2
root@r730-01:/home/dvl # zfs send data02/vm/hass@send.to.data04.2 | zfs receive data04/vm/hass
root@r730-01:/home/dvl #
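Had I already received send.to.data04.1 on the destination, an incremental send would have moved only the changes instead of the whole thing. A sketch:

zfs send -i @send.to.data04.1 data02/vm/hass@send.to.data04.2 | zfs receive data04/vm/hass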
And the rest of the data:
root@r730-01:/home/dvl # zfs send data02/vm/home-assistant@send.to.data04.1 | zfs receive data04/vm/home-assistant
root@r730-01:/home/dvl # zfs send data02/vm/myguest@send.to.data04.1 | zfs receive data04/vm/myguest
root@r730-01:/home/dvl #
What do we have?
This is the new location.
[13:08 r730-01 dvl ~] % zfs list -r data04/vm
NAME                            USED  AVAIL  REFER  MOUNTPOINT
data04/vm                      25.2G  15.3T   205K  /data04/vm
data04/vm/FreeBSD-16.0-CURRENT  3.14G  15.3T  3.11G  /data04/vm/FreeBSD-16.0-CURRENT
data04/vm/FreeBSD-current       205K  15.3T   205K  /usr/local/vm/FreeBSD-current
data04/vm/freebsd-test          979M  15.3T   239K  /data04/vm/freebsd-test
data04/vm/freebsd-test/disk0    979M  15.3T   979M  -
data04/vm/hass                 17.8G  15.3T  17.8G  /data04/vm/hass
data04/vm/home-assistant        351M  15.3T   351M  /data04/vm/home-assistant
data04/vm/myguest              2.94G  15.3T  2.94G  /data04/vm/myguest
Taking the old data offline
In this section, I take the old data offline so it is not mounted.
[13:09 r730-01 dvl ~] % sudo zfs umount data02/vm/myguest
[13:10 r730-01 dvl ~] % sudo zfs umount data02/vm/home-assistant
[13:10 r730-01 dvl ~] % sudo zfs umount data02/vm/hass
[13:10 r730-01 dvl ~] % sudo zfs umount data02/vm/freebsd-test
[13:10 r730-01 dvl ~] % sudo zfs umount data02/vm/freebsd-test
cannot unmount 'data02/vm/freebsd-test': not currently mounted
[13:10 r730-01 dvl ~] % sudo zfs umount data02/vm/FreeBSD-16.0-CURRENT
[13:10 r730-01 dvl ~] % sudo zfs umount data02/vm
cannot unmount '/usr/local/vm': pool or dataset is busy
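The order matters here: a dataset cannot be unmounted while anything else is still mounted beneath its mountpoint, which is why data02/vm comes last, and why it is still reporting busy.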
Let’s fix that busy bit:
[13:11 r730-01 dvl ~] % zfs list | grep /usr/local/vm
data02/vm                       65.8G  60.6G  7.35G  /usr/local/vm
data02/vm/FreeBSD-16.0-CURRENT  3.06G  60.6G  3.03G  /usr/local/vm/FreeBSD-16.0-CURRENT
data02/vm/freebsd-test           701M  60.6G   112K  /usr/local/vm/freebsd-test
data02/vm/hass                  51.8G  60.6G  17.4G  /usr/local/vm/hass
data02/vm/home-assistant         351M  60.6G   351M  /usr/local/vm/home-assistant
data02/vm/myguest               2.55G  60.6G  2.55G  /usr/local/vm/myguest
data04/vm/FreeBSD-current        205K  15.3T   205K  /usr/local/vm/FreeBSD-current
[13:11 r730-01 dvl ~] % sudo zfs umount data04/vm/FreeBSD-current
[13:12 r730-01 dvl ~] % sudo zfs umount data02/vm
[13:12 r730-01 dvl ~] %
Setting canmount=off
The following ensures the old data does not get mounted again:
[13:12 r730-01 dvl ~] % sudo zfs set canmount=off data02/vm
[13:13 r730-01 dvl ~] % zfs get -r -t filesystem canmount data02/vm
NAME                            PROPERTY  VALUE     SOURCE
data02/vm                       canmount  off       local
data02/vm/FreeBSD-16.0-CURRENT  canmount  on        default
data02/vm/freebsd-test          canmount  on        default
data02/vm/hass                  canmount  on        default
data02/vm/home-assistant        canmount  on        default
data02/vm/myguest               canmount  on        default
Oh, I need to set them all:
[13:13 r730-01 dvl ~] % sudo zfs set canmount=off data02/vm/FreeBSD-16.0-CURRENT
[13:13 r730-01 dvl ~] % sudo zfs set canmount=off data02/vm/freebsd-test
[13:13 r730-01 dvl ~] % sudo zfs set canmount=off data02/vm/hass
[13:14 r730-01 dvl ~] % sudo zfs set canmount=off data02/vm/home-assistant
[13:14 r730-01 dvl ~] % sudo zfs set canmount=off data02/vm/myguest
[13:14 r730-01 dvl ~] % zfs get -r -t filesystem canmount data02/vm
NAME                            PROPERTY  VALUE     SOURCE
data02/vm                       canmount  off       local
data02/vm/FreeBSD-16.0-CURRENT  canmount  off       local
data02/vm/freebsd-test          canmount  off       local
data02/vm/hass                  canmount  off       local
data02/vm/home-assistant        canmount  off       local
data02/vm/myguest               canmount  off       local
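zfs set does not recurse, so each child needed its own command. With more datasets, a small loop would do it; a sketch, in Bourne-shell syntax:

for fs in $(zfs list -H -r -o name data02/vm); do
  zfs set canmount=off "$fs"
done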
Setting up the new data
Right now, there is nothing here:
[13:15 r730-01 dvl ~] % ls -l /usr/local/vm
total 0
That means I can proceed with the following. If there were data there, I’d have to move or copy it somewhere first.
Through the magic of inherited mountpoints, I have:
[13:15 r730-01 dvl ~] % sudo zfs set mountpoint=/usr/local/vm data04/vm
[13:16 r730-01 dvl ~] % zfs list -r data04/vm
NAME                            USED  AVAIL  REFER  MOUNTPOINT
data04/vm                      25.2G  15.3T   205K  /usr/local/vm
data04/vm/FreeBSD-16.0-CURRENT  3.14G  15.3T  3.11G  /usr/local/vm/FreeBSD-16.0-CURRENT
data04/vm/FreeBSD-current       205K  15.3T   205K  /usr/local/vm/FreeBSD-current
data04/vm/freebsd-test          979M  15.3T   239K  /usr/local/vm/freebsd-test
data04/vm/freebsd-test/disk0    979M  15.3T   979M  -
data04/vm/hass                 17.8G  15.3T  17.8G  /usr/local/vm/hass
data04/vm/home-assistant        351M  15.3T   351M  /usr/local/vm/home-assistant
data04/vm/myguest              2.94G  15.3T  2.94G  /usr/local/vm/myguest
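Only data04/vm has a locally-set mountpoint; the children pick theirs up automatically. To see where each value comes from:

zfs get -r -o name,value,source mountpoint data04/vm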
vm configuration
I tried this, expecting it to fail:
[15:46 r730-01 dvl ~] % sudo vm start hass
/usr/local/sbin/vm: ERROR: unable to locate mountpoint for ZFS dataset data02/vm
[15:46 r730-01 dvl ~] %
So where is that specified?
It took me a while to find. I searched my blog for the oldest reference to data02/vm and found it in Getting Home Assistant running in a FreeBSD 13.1 jail.
[15:48 r730-01 dvl /usr/local/vm] % grep vm /etc/rc.conf
vm_enable="YES"
vm_dir="zfs:data02/vm"
vm_list="hass"
A quick change:
[15:49 r730-01 dvl /usr/local/vm] % sudo sysrc vm_dir="zfs:data04/vm"
vm_dir: zfs:data02/vm -> zfs:data04/vm
And a new error:
[15:49 r730-01 dvl /usr/local/vm] % sudo vm start hass
/usr/local/sbin/vm: ERROR: unable to locate mountpoint for ZFS dataset data04/vm
But it’s there.
[15:49 r730-01 dvl /usr/local/vm] % zfs get mountpoint data04/vm
NAME       PROPERTY    VALUE          SOURCE
data04/vm  mountpoint  /usr/local/vm  local
But not mounted:
[15:58 r730-01 dvl /usr/local/vm] % zfs get mounted data04/vm
NAME       PROPERTY  VALUE    SOURCE
data04/vm  mounted   no       -
And I can’t mount it yet:
[16:02 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm
cannot mount 'data04/vm': 'canmount' property is set to 'off'
I thought: was that property a holdover from how I had configured the old dataset? And would allowing it to mount overwrite or overlay the other mountpoints under there?
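For reference: canmount=off means the dataset can never be mounted, while canmount=noauto means it can be mounted explicitly with zfs mount but will not be mounted automatically at boot or by zfs mount -a. That middle setting is what I reach for next.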
Trying this:
[16:02 r730-01 dvl /usr/local/vm] % ls -l /usr/local/vm
total 54
drwxr-xr-x  2 root  wheel  5 2026.03.30 17:20 FreeBSD-16.0-CURRENT/
drwxr-xr-x  2 root  wheel  4 2025.05.26 21:37 freebsd-test/
drwxr-xr-x  2 root  wheel  6 2026.04.19 13:00 hass/
drwxr-xr-x  2 root  wheel  7 2023.02.12 23:00 home-assistant/
drwxr-xr-x  2 root  wheel  5 2023.11.08 14:15 myguest/
[16:02 r730-01 dvl /usr/local/vm] % sudo zfs set canmount=noauto data04/vm
[16:03 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm
[16:03 r730-01 dvl /usr/local/vm] % ls -l /usr/local/vm
total 1
drwxr-xr-x  2 root  wheel  2 2026.03.29 23:23 FreeBSD-current/
That’s not right yet. Some mountpoints are missing.
Let’s back off:
[16:04 r730-01 dvl /usr/local/vm] % cd ..
[16:04 r730-01 dvl /usr/local] % sudo zfs umount data04/vm
[16:04 r730-01 dvl /usr/local] % ls -l vm
total 3
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 FreeBSD-16.0-CURRENT/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 freebsd-test/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 hass/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 home-assistant/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 myguest/
Let’s look in there.
[16:04 r730-01 dvl /usr/local/vm] % ls -l
total 3
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 FreeBSD-16.0-CURRENT/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 freebsd-test/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 hass/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 home-assistant/
drwxr-xr-x  2 root  wheel  2 2026.04.19 13:16 myguest/
[16:04 r730-01 dvl /usr/local/vm] % find . | less
.
./FreeBSD-16.0-CURRENT
./freebsd-test
./home-assistant
./myguest
./hass
Nothing. All just mountpoints waiting to be filled.
Let’s try again:
[16:04 r730-01 dvl /usr/local/vm] % sudo rmdir *
[16:04 r730-01 dvl /usr/local/vm] % cd ..
[16:04 r730-01 dvl /usr/local] % sudo zfs mount data04/vm
[16:04 r730-01 dvl /usr/local] % ls -l vm
total 3
drwxr-xr-x  2 root  wheel  2 2026.04.19 16:04 FreeBSD-16.0-CURRENT/
drwxr-xr-x  2 root  wheel  2 2026.03.29 23:23 FreeBSD-current/
drwxr-xr-x  2 root  wheel  2 2026.04.19 16:04 freebsd-test/
drwxr-xr-x  2 root  wheel  2 2026.04.19 16:04 hass/
drwxr-xr-x  2 root  wheel  2 2026.04.19 16:04 home-assistant/
drwxr-xr-x  2 root  wheel  2 2026.04.19 16:04 myguest/
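Using rmdir rather than rm -rf there is deliberate: rmdir refuses to remove a non-empty directory, so if those directories had held real data after all, the command would have failed rather than deleting it.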
Yeah, OK, now I know those are all part of data04/vm.
But empty still.
[16:05 r730-01 dvl /usr/local/vm] % cd hass
[16:05 r730-01 dvl /usr/local/vm/hass] % ls -l
total 0
Mount time:
[16:05 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm/FreeBSD-16.0-CURRENT
[16:05 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm/FreeBSD-current
[16:05 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm/freebsd-test
[16:05 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm/hass
[16:06 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm/home-assistant
[16:06 r730-01 dvl /usr/local/vm] % sudo zfs mount data04/vm/myguest
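A loop would have saved some typing here too; a sketch (datasets that are already mounted just report a harmless error):

for fs in $(zfs list -H -r -o name data04/vm); do
  zfs mount "$fs"
done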
And start:
[16:06 r730-01 dvl /usr/local/vm] % sudo vm start hass
Starting hass
  * found guest in /usr/local/vm/hass
  * booting...
[16:06 r730-01 dvl /usr/local/vm] %
Success.
[16:06 r730-01 dvl /usr/local/vm] % sudo vm list
NAME                  DATASTORE  LOADER     CPU  MEMORY  VNC  AUTO     STATE
FreeBSD-16.0-CURRENT  default    bhyveload  4    32GB    -    No       Stopped
freebsd-test          default    bhyveload  1    256M    -    No       Stopped
hass                  default    uefi       4    8GB     -    Yes [1]  Running (51383)
home-assistant        default    uefi       1    1GB     -    No       Stopped
myguest               default    bhyveload  1    768M    -    No       Stopped
[16:11 r730-01 dvl /usr/local/vm] %
I confirmed Home Assistant was up and running. All good.
The hard part
I’m always hesitant to delete original data. It just irks me.
However, I need the space.
So I wimp out and delete the less important stuff first: the things I haven’t used lately or that are easy to recreate.
[16:18 r730-01 dvl ~] % sudo zfs destroy -r data02/vm/FreeBSD-16.0-CURRENT
[16:19 r730-01 dvl ~] % sudo zfs destroy -r data02/vm/freebsd-test
[16:19 r730-01 dvl ~] % sudo zfs destroy -r data02/vm/freebsd-test/disk0
cannot open 'data02/vm/freebsd-test/disk0': dataset does not exist
[16:20 r730-01 dvl ~] % sudo zfs destroy -r data02/vm/home-assistant
[16:20 r730-01 dvl ~] % sudo zfs destroy -r data02/vm/myguest
[16:20 r730-01 dvl ~] %
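That “dataset does not exist” error is harmless: the -r on the previous command had already destroyed disk0 along with its parent. A dry run would have shown that in advance; a sketch:

# -n makes it a no-op, -v lists what would be destroyed
zfs destroy -rnv data02/vm/freebsd-test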
[16:21 r730-01 dvl ~] % zpool list data02
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data02   928G   812G   116G        -         -    72%    87%  1.00x  ONLINE  -
That didn’t free up much. The pool is down to 87%, which is still not great.
Next, dev.freshports.org