Over the past few days, I’ve posted a lot about consolidating jails from two hosts onto a single host. One of the steps involves using zfs send | zfs recv to move the filesystems from one host to the other. I’m using syncoid for that transfer.
A new filesystem’s recordsize defaults to 128K. In my case, that is usually OK, except for some specific kinds of datasets (how I adjust them is sketched just after this list). For example:
* video
* backups
* distfiles (tarballs)
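Datasets like these mostly hold large files that are written and read sequentially, so they get a larger recordsize. A minimal sketch, with illustrative dataset names (not the actual datasets on my hosts):

# illustrative names; the real datasets vary per host
zfs set recordsize=1M tank/video
zfs set recordsize=1M tank/backups
zfs set recordsize=1M tank/poudriere/distfiles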
Here is a script I created to find the filesystems/datasets whose recordsize will need to be adjusted (anything not at the 128K default).
for zpool in $(zpool list -H -o name)
do
  zfs get -t filesystem -r recordsize $zpool | grep -v 128K
done
Let’s see the output for one host:
[r720-01 dan ~] % for zpool in $(zpool list -H -o name)
do
  zfs get -t filesystem -r recordsize $zpool | grep -v 128K
done
NAME                         PROPERTY    VALUE    SOURCE
data01                       recordsize  1M       local
data01/pg02                  recordsize  8K       local
data01/pg02/dan              recordsize  1M       local
data01/pg02/postgres         recordsize  8K       inherited from data01/pg02
data01/pg02/rsyncer          recordsize  8K       inherited from data01/pg02
data01/pg03                  recordsize  1M       inherited from data01
data01/pg03/postgres         recordsize  1M       inherited from data01
data01/pg03/rsyncer          recordsize  1M       inherited from data01
data01/test-snap             recordsize  1M       inherited from data01
data01/testing               recordsize  1M       inherited from data01
data01/testing/12.0-RELEASE  recordsize  1M       inherited from data01
data01/testing/dev-pgeu      recordsize  1M       inherited from data01
NAME                           PROPERTY    VALUE    SOURCE
tank_fast/poudriere/distfiles  recordsize  1M       local
NAME           PROPERTY    VALUE    SOURCE
zroot/var/log  recordsize  1M       local
Changing recordsize is not sufficient; the data must be rewritten. The recordsize property affects only new writes, not data already in the filesystem.
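As mentioned above, syncoid is a wrapper around zfs send | zfs recv. A rough manual equivalent of the copy shown below, with an illustrative snapshot name:

# @rewrite is an illustrative snapshot name
zfs snapshot data01/poudriere/distfiles@rewrite
zfs send data01/poudriere/distfiles@rewrite | zfs recv data01/poudriere/distfiles.new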
Therefore, I’m going to take this 68G filesystem, adjust the recordsize, and copy it.
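Adjusting the recordsize itself is a one-line property change; a minimal sketch, assuming 1M is the value I want for this dataset:

zfs set recordsize=1M data01/poudriere/distfiles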
root@r730-01:~ # syncoid --no-sync-snap --preserve-recordsize data01/poudriere/distfiles data01/poudriere/distfiles.new
NEWEST SNAPSHOT: syncoid_r730-01.int.unixathome.org_2023-02-18:18:29:26-GMT00:00
INFO: Sending oldest full snapshot data01/poudriere/distfiles@autosnap_2021-04-21_00:00:00_daily (~ 31.9 GB) to new target filesystem:
32.0GiB 0:00:33 [ 963MiB/s] [=================================================================================================>] 100%
INFO: Updating new target filesystem with incremental data01/poudriere/distfiles@autosnap_2021-04-21_00:00:00_daily ... syncoid_r730-01.int.unixathome.org_2023-02-18:18:29:26-GMT00:00 (~ 36.4 GB):
36.4GiB 0:00:39 [ 949MiB/s] [=================================================================================================>] 100%
root@r730-01:~ #
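Once the copy checks out, the remaining step would be swapping the new dataset into place. A sketch, with the destroy deferred until I’m satisfied:

zfs rename data01/poudriere/distfiles data01/poudriere/distfiles.old
zfs rename data01/poudriere/distfiles.new data01/poudriere/distfiles
# later, once everything looks good:
zfs destroy -r data01/poudriere/distfiles.old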
On a small dataset like this, don’t expect huge savings. The Bacula volumes dataset is a much bigger example.
Before the recordsize fix, the copy on the destination host looked like this:
[r730-01 dvl ~] % zfs list | grep bacula
data01/bacula                      1.92T  1.00T   188K  /jails/bacula-sd-03/usr/local/bacula
data01/bacula/volumes              1.92T  1.00T   205K  /jails/bacula-sd-03/usr/local/bacula/volumes
data01/bacula/volumes/DiffFile-03   404G  1.00T   404G  /jails/bacula-sd-03/usr/local/bacula/volumes/DiffFile-03
data01/bacula/volumes/FullFile-03  1.04T  1.00T  1.04T  /jails/bacula-sd-03/usr/local/bacula/volumes/FullFile-03
data01/bacula/volumes/IncrFile-03   492G  1.00T   492G  /jails/bacula-sd-03/usr/local/bacula/volumes/IncrFile-03
data01/bacula/working               819K  1.00T   264K  /jails/bacula-sd-03/usr/local/bacula/working
The source:
[slocum dan ~] % zfs list -r system/bacula
NAME                                USED  AVAIL  REFER  MOUNTPOINT
system/bacula                      1.81T  15.0T   176K  /jails/bacula-sd-03/usr/local/bacula
system/bacula/volumes              1.81T  15.0T   192K  /jails/bacula-sd-03/usr/local/bacula/volumes
system/bacula/volumes/DiffFile-03   370G  15.0T   370G  /jails/bacula-sd-03/usr/local/bacula/volumes/DiffFile-03
system/bacula/volumes/FullFile-03  1.02T  15.0T   997G  /jails/bacula-sd-03/usr/local/bacula/volumes/FullFile-03
system/bacula/volumes/IncrFile-03   444G  15.0T   444G  /jails/bacula-sd-03/usr/local/bacula/volumes/IncrFile-03
system/bacula/working               583K  15.0T   232K  /jails/bacula-sd-03/usr/local/bacula/working
After recopying from the source:
[r730-01 dvl ~] % zfs list | grep bacula
data01/bacula                      1.97T   978G   188K  none
data01/bacula/volumes              1.97T   978G   205K  none
data01/bacula/volumes/DiffFile-03   404G   978G   404G  none
data01/bacula/volumes/FullFile-03  1.09T   978G  1.04T  none
data01/bacula/volumes/IncrFile-03   492G   978G   492G  none
data01/bacula/working               623K   978G   247K  none
[r730-01 dvl ~] %
Eh? Now it’s using another 0.05 TB. What?
[r730-01 dvl ~] % zfs list -r -t snapshot data01/bacula/volumes/FullFile-03
NAME   USED  AVAIL  REFER  MOUNTPOINT
data01/bacula/volumes/FullFile-03@autosnap_2023-01-30_00:00:40_daily   0B  -  808G  -
data01/bacula/volumes/FullFile-03@autosnap_2023-01-31_00:01:45_daily   0B  -  808G  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-01_00:02:42_daily   0B  -  808G  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-02_00:02:44_daily   0B  -  808G  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-03_00:01:04_daily   0B  -  808G  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-04_00:00:52_daily   0B  -  808G  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-05_00:01:06_daily   0B  -  808G  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-06_00:02:12_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-07_00:01:15_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-08_00:03:03_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-09_00:01:09_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-10_00:03:20_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-11_00:02:50_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-12_00:01:33_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-13_00:02:45_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-14_00:02:38_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-15_00:02:24_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-16_00:01:24_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-17_00:02:37_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-18_00:01:25_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-19_00:02:16_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-19:02:31:41-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-20_00:00:39_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-20:21:40:32-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-20:21:43:53-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-21_00:02:19_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-21:01:59:45-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-21:14:39:34-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@autosnap_2023-02-22_00:01:04_daily   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:15:20:22-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:15:21:08-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:15:23:40-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:15:26:22-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:15:27:00-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:15:29:52-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:15:31:31-GMT00:00   0B  -  1.04T  -
data01/bacula/volumes/FullFile-03@syncoid_r730-01.int.unixathome.org_2023-02-22:16:55:08-GMT00:00   0B  -  1.04T  -
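Each snapshot shows 0B in the USED column because that column counts only blocks unique to that single snapshot; together, though, the snapshots still hold the data as it looked before the recopy. A quick way to see the total is the per-dataset space properties, for example:

zfs list -o name,used,usedbysnapshots,usedbydataset data01/bacula/volumes/FullFile-03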
I think I’m going to have to get another zpool.