I’ve updated one of my PostgreSQL instances to PostgreSQL 18; now it’s time to update the others. This time, I’m going to try pg_upgrade. My usual approach is pg_dump and pg_restore.
As this is my first attempt at doing this, I’m posting this mostly for future reference for when I try it again, which should be soon. There will be another blog post then; this paragraph will link to it when it is available.
In this post:
- FreeBSD 15.0
- PostgreSQL 16.12 (pg03)
- PostgreSQL 18.2 (pg02)
The names in parentheses are the names of the jails in question.
If you’re upgrading in place, and not copying data around like me, skip down until you see Saving the old binaries.
I’m reading http://www.unibia.com/unibianet/freebsd/upgrading-between-major-versions-postgresql-freebsd and thinking this might work well for me.
The overview of upgrade-in-place
The PostgreSQL upgrade-in-place needs these main parts:
- The old binaries (e.g. postgresql16-server-16.12.pkg)
- The new binaries (postgresql18-server-18.2.pkg)
- The old data (/var/db/postgres/data16)
Keep that in mind as I go through this. We can’t install both packages at once, so we’ll untar the old package into a safe location.
How you get that package is up to you. Try /var/cache/pkg, or the FreeBSD package servers, or (while you still have the old package installed) run pkg create postgresql16-server (for example).
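For example, here is a quick sketch of capturing the old packages with pkg create while they are still installed (the package names match my setup; adjust for yours):

% cd /var/cache/pkg
% sudo pkg create postgresql16-server postgresql16-contrib
% ls postgresql16-*.pkg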
My data
Ignore this section if you already have the data in place. I’m testing this process, so I’m documenting this part here.
This is how the data is laid out. My idea: snapshot data03/pg03/postgres and clone that snapshot into place as data03/pg02/postgres.
[18:23 r730-01 dvl ~] % zfs list | grep pg
data02/jails/pg01                      34.9G   175G  10.8G  /jails/pg01
data02/jails/pg02                      12.7G   175G  11.6G  /jails/pg02
data02/jails/pg03                      11.5G   175G  10.8G  /jails/pg03
data03/pg01                            75.7G  5.47T    96K  none
data03/pg01/freshports.dvl             37.1G  5.47T  27.6G  /jails/pg01/var/db/postgres.freshports.dvl
data03/pg01/postgres                   38.7G  5.47T  28.1G  /jails/pg01/var/db/postgres
data03/pg02                            78.5G  5.47T    88K  none
data03/pg02/postgres                   78.5G  5.47T  51.8G  /jails/pg02/var/db/postgres
data03/pg02/rsyncer                    1.02M  5.47T   144K  /jails/pg02/usr/home/rsyncer/backups
data03/pg03                             769G  5.47T    88K  none
data03/pg03/postgres                    570G  5.47T   448G  /jails/pg03/var/db/postgres
data03/pg03/rsyncer                     199G  5.47T  33.2G  /jails/pg03/usr/home/rsyncer/backups
data03/poudriere/ports/pgeu_system     1.06G  5.47T  1.06G  /usr/local/poudriere/ports/pgeu_system
The database is on a separate filesystem from the jail. Why? For situations just like this.
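For reference, a dedicated dataset like that can be created along these lines (a sketch using my naming convention; in my case the dataset already existed):

% sudo zfs create -o mountpoint=/jails/pg02/var/db/postgres data03/pg02/postgres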
Note: I’m snapshotting a live-in-use database. That’s not always ideal. However, for this trial proof-of-concept, I’m content to accept that.
Clone, copy, and disable
As with the previous section, you can skip this one if you’re not mucking around copying data from one instance to another.
[18:23 r730-01 dvl ~] % sudo zfs snapshot data03/pg03/postgres@for.copy.1
[18:34 r730-01 dvl ~] % sudo service jail stop pg02
Stopping jails: pg02.
[18:36 r730-01 dvl ~] % sudo zfs rename data03/pg02/postgres data03/pg02/postgres.original
[18:36 r730-01 dvl ~] % sudo zfs set canmount=off data03/pg02/postgres.original
[18:36 r730-01 dvl ~] % sudo zfs clone data03/pg03/postgres@for.copy.1 data03/pg02/postgres
[18:43 r730-01 dvl ~] % sudo zfs set mountpoint=/jails/pg02/var/db/postgres data03/pg02/postgres
[18:37 r730-01 dvl ~] % sudoedit /jails/pg02/etc/rc.conf
That sudoedit is me setting postgresql_enable="NO" in the jail's /etc/rc.conf so PostgreSQL doesn't start up with the new data just yet.
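From the host, sysrc can make the same change without opening the file (a sketch; the jail is stopped at this point, so I point sysrc at its root directory):

% sudo sysrc -R /jails/pg02 postgresql_enable="NO"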
Then I started the jail back up:
[18:44 r730-01 dvl ~] % sudo service jail start pg02
Starting jails: pg02.
And logging in, it looks right:
[18:44 pg02 dvl ~] % ls -l /var/db/postgres
total 9
drwx------  19 postgres postgres  26 2026.02.13 18:23 data16/
Work not shown here
I’m not showing how to obtain the packages for the old binaries.
The host contains the old and new packages (not necessarily installed; I’m referring to the .pkg files).
The host has already been updated to PostgreSQL 18 (the destination) from PostgreSQL 16. The initdb has not been done yet.
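For completeness, the binary swap on the host looks roughly like this (a sketch, not a transcript of what I ran; your package set may differ):

# pkg delete postgresql16-server postgresql16-contrib postgresql16-client
# pkg install postgresql18-server postgresql18-contrib postgresql18-client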
Saving the old binaries
My goal is to make this process data-driven: just update the vars and go.
In this section, I extract the old packages into the OLDBIN directory.
[20:26 pg02 dvl ~/tmp] % OLDBIN=~/tmp/pg-upgrade
[20:26 pg02 dvl ~/tmp] % mkdir $OLDBIN
[20:26 pg02 dvl ~/tmp] % OLD_POSTGRES_VERSION=16
[20:26 pg02 dvl ~/tmp] % NEW_POSTGRES_VERSION=18
[20:26 pg02 dvl ~/tmp] % OLDPKG_S=postgresql16-server-16.12.pkg
[20:26 pg02 dvl ~/tmp] % OLDPKG_C=postgresql16-contrib-16.12_1.pkg
[20:27 pg02 dvl /var/db/pkg] % cd /var/cache/pkg
[20:27 pg02 dvl /var/cache/pkg] % tar xf $OLDPKG_S -C $OLDBIN
tar: Removing leading '/' from member names
[20:27 pg02 dvl /var/cache/pkg] % tar xf $OLDPKG_C -C $OLDBIN
tar: Removing leading '/' from member names
[20:27 pg02 dvl /var/cache/pkg] % cd $OLDBIN
[20:27 pg02 dvl ~/tmp/pg-upgrade] % usr/local/bin/pg_upgrade -V
pg_upgrade (PostgreSQL) 16.12
[20:27 pg02 dvl ~/tmp/pg-upgrade] %
initdb
This section does the initdb, creating the PostgreSQL 18 cluster.
[20:15 pg02 dvl ~] % ls -l /var/db/postgres
total 9
drwx------ 19 postgres postgres 26 2026.02.13 18:23 data16/
[20:15 pg02 dvl ~] % sudo service postgresql oneinitdb
initdb postgresql
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with this locale configuration:
locale provider: libc
LC_COLLATE: C
LC_CTYPE: C.UTF-8
LC_MESSAGES: C.UTF-8
LC_MONETARY: C.UTF-8
LC_NUMERIC: C.UTF-8
LC_TIME: C.UTF-8
The default text search configuration will be set to "english".
Data page checksums are enabled.
creating directory /var/db/postgres/data18 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
/usr/local/bin/pg_ctl -D /var/db/postgres/data18 -l logfile start
[20:15 pg02 dvl ~] % ls -l /var/db/postgres
total 17
drwx------ 19 postgres postgres 26 2026.02.13 18:23 data16/
drwx------ 19 postgres postgres 24 2026.02.21 20:15 data18/
Shown above are the old and new data directories.
Not shown here
What’s not shown here is making sure the new configuration is what you want (postgresql.conf, for example).
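A quick way to see what differs, a sketch assuming both clusters use the stock file locations:

$ diff -u /var/db/postgres/data16/postgresql.conf /var/db/postgres/data18/postgresql.conf
$ diff -u /var/db/postgres/data16/pg_hba.conf /var/db/postgres/data18/pg_hba.conf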
The Upgrade
With everything now in place, I become root in the pg02 jail.
[root@pg02 ~]# su -l postgres
This part is formatted for easy copy/paste:
OLDBIN=/usr/home/dvl/tmp/pg-upgrade
OLD_POSTGRES_VERSION=16
NEW_POSTGRES_VERSION=18
pg_upgrade -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ \
-B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres
The first time I ran this, I got:
$ pg_upgrade -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ -B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
old cluster does not use data checksums but the new one does
Failure, exiting
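In hindsight, pg_upgrade has a --check option that runs only these consistency checks and touches neither cluster; a dry run like this (same variables as above) would have flagged the mismatch up front:

$ pg_upgrade --check -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ \
  -B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres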
This next step is a fun trial. However, since this host is running ZFS, I’m convinced checksums at the application level make no sense when the filesystem is already doing that work. Tangent: I think adding postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C --no-data-checksums" to /etc/rc.conf will suffice (based on the rc.d script and initdb). That will be tested in my next post.
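Untested, but inside the jail I expect that idea to look something like this (--no-data-checksums is an initdb option in PostgreSQL 18; the existing data18 directory would have to go before re-running initdb):

% sudo sysrc postgresql_initdb_flags="--encoding=utf-8 --lc-collate=C --no-data-checksums"
% sudo rm -rf /var/db/postgres/data18
% sudo service postgresql oneinitdb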
At this point, I should have done another initdb, disabling checksums.
But. I. Did. Not.
OK, let’s try pg_checksums instead (enabling checksums on the old cluster), because I’ve never used it before.
$ ${OLDBIN}/usr/local/bin/pg_checksums -e -D /var/db/postgres/data${OLD_POSTGRES_VERSION}/
Checksum operation completed
Files scanned: 8344
Blocks scanned: 69123852
Files written: 7092
Blocks written: 69122772
pg_checksums: syncing data directory
pg_checksums: updating control file
Checksums enabled in cluster
Good. Now on to the main show. Notice the warning partway through the output: if pg_upgrade fails after that point, you must re-initdb the new cluster before continuing.
$ time pg_upgrade -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ \
-B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database connection settings ok
Checking database user is the install user ok
Checking for prepared transactions ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking data type usage ok
Checking for not-null constraint inconsistencies ok
Creating dump of global objects ok
Creating dump of database schemas
ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok
Checking for new cluster tablespace directories ok
If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.
Performing Upgrade
------------------
Setting locale and encoding for new cluster ok
Analyzing all rows in the new cluster ok
Freezing all rows in the new cluster ok
Deleting files from new pg_xact ok
Copying old pg_xact to new server ok
Setting oldest XID for new cluster ok
Setting next transaction ID and epoch for new cluster ok
Deleting files from new pg_multixact/offsets ok
Copying old pg_multixact/offsets to new server ok
Deleting files from new pg_multixact/members ok
Copying old pg_multixact/members to new server ok
Setting next multixact ID and offset for new cluster ok
Resetting WAL archives ok
Setting frozenxid and minmxid counters in new cluster ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster
ok
Copying user relation files
ok
Setting next OID for new cluster ok
Sync data directory to disk ok
Creating script to delete old cluster ok
Checking for extension updates notice
Your installation contains extensions that should be updated
with the ALTER EXTENSION command. The file
update_extensions.sql
when executed by psql by the database superuser will update
these extensions.
Upgrade Complete
----------------
Some statistics are not transferred by pg_upgrade.
Once you start the new server, consider running these two commands:
/usr/local/bin/vacuumdb -U postgres --all --analyze-in-stages --missing-stats-only
/usr/local/bin/vacuumdb -U postgres --all --analyze-only
Running this script will delete the old cluster's data files:
./delete_old_cluster.sh
6199.21 real 2.83 user 1289.06 sys
$
That’s about 103 minutes. Not bad for the amount of data involved (the cloned data16 refers roughly 448G, per the snapshots below) and given the host is reading from and writing to the same ZFS dataset.
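For the real run, pg_upgrade’s --link mode (hard links instead of copying relation files) should be much faster, with the trade-off that once the new server is started, the old cluster can no longer be safely started. Something like this, not what I ran here:

$ pg_upgrade --link -b ${OLDBIN}/usr/local/bin/ -d /var/db/postgres/data${OLD_POSTGRES_VERSION}/ \
  -B /usr/local/bin/ -D /var/db/postgres/data${NEW_POSTGRES_VERSION}/ -U postgres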
I’ll do those above recommended actions two sections down.
Dataset size
Here’s a list of snapshots taken on that new dataset. It’s not surprising that the size increases as the new data arrives.
[0:42 r730-01 dvl ~] % zfs list -r -t snapshot data03/pg02/postgres
NAME                                                            USED  AVAIL  REFER  MOUNTPOINT
data03/pg02/postgres@autosnap_2026-02-21_18:45:08_daily           0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_18:45:08_hourly          0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_19:00:06_daily           0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_19:00:06_hourly          0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_20:00:05_hourly          0B      -   448G  -
data03/pg02/postgres@autosnap_2026-02-21_21:01:53_hourly        944K      -   419G  -
data03/pg02/postgres@autosnap_2026-02-21_22:01:40_hourly        936K      -   367G  -
data03/pg02/postgres@autosnap_2026-02-21_23:00:01_hourly       1.20M      -   355G  -
data03/pg02/postgres@autosnap_2026-02-22_00:01:05_daily         184K      -   389G  -
data03/pg02/postgres@autosnap_2026-02-22_00:01:05_hourly        176K      -   389G  -
data03/pg02/postgres@autosnap_2026-02-22_01:03:33_hourly        856K      -   602G  -
data03/pg02/postgres@autosnap_2026-02-22_02:00:08_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_03:00:00_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_04:00:01_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_05:00:05_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_06:00:02_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_07:00:05_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_08:00:07_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_09:00:04_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_10:00:05_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_11:00:03_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_12:00:04_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_13:00:01_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_13:30:00_frequently      0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_13:45:08_frequently      0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:00:06_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:00:06_frequently      0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:15:09_frequently      0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:30:01_frequently      0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_14:45:11_frequently      0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:00:06_hourly          0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:00:06_frequently      0B      -   709G  -
Recommended actions
In this section, I run the commands suggested by the pg_upgrade output.
[root@pg02 ~]# service postgresql start
start postgresql
[root@pg02 ~]# /usr/local/bin/vacuumdb -U postgres --all --analyze-in-stages --missing-stats-only
vacuumdb: processing database "bacula": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "empty": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "fpphorum": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freebsddiary.org": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.dev": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.dvl": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.stage": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "freshports.test": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "gitea": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "nagiostest": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "samdrucker": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "bacula": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "empty": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "fpphorum": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freebsddiary.org": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.dev": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.dvl": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.stage": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "freshports.test": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "gitea": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "nagiostest": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "samdrucker": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "bacula": Generating default (full) optimizer statistics
vacuumdb: processing database "empty": Generating default (full) optimizer statistics
vacuumdb: processing database "fpphorum": Generating default (full) optimizer statistics
vacuumdb: processing database "freebsddiary.org": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.dev": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.dvl": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.stage": Generating default (full) optimizer statistics
vacuumdb: processing database "freshports.test": Generating default (full) optimizer statistics
vacuumdb: processing database "gitea": Generating default (full) optimizer statistics
vacuumdb: processing database "nagiostest": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "samdrucker": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
[root@pg02 ~]# /usr/local/bin/vacuumdb -U postgres --all --analyze-only
vacuumdb: vacuuming database "bacula"
vacuumdb: vacuuming database "empty"
vacuumdb: vacuuming database "fpphorum"
vacuumdb: vacuuming database "freebsddiary.org"
vacuumdb: vacuuming database "freshports.dev"
vacuumdb: vacuuming database "freshports.dvl"
vacuumdb: vacuuming database "freshports.stage"
vacuumdb: vacuuming database "freshports.test"
vacuumdb: vacuuming database "gitea"
vacuumdb: vacuuming database "nagiostest"
vacuumdb: vacuuming database "postgres"
vacuumdb: vacuuming database "samdrucker"
vacuumdb: vacuuming database "template1"
[root@pg02 ~]#
[root@pg02 ~]# sudo su -l postgres
$ ls -l
total 18
drwx------  19 postgres postgres  25 Feb 21 23:51 data16
drwx------  20 postgres postgres  27 Feb 22 15:28 data18
-rwx------   1 postgres postgres  44 Feb 22 01:34 delete_old_cluster.sh
-rw-------   1 postgres postgres 247 Feb 22 01:35 update_extensions.sql
$ ./delete_old_cluster.sh
$ ls -l
total 10
drwx------  20 postgres postgres  27 Feb 22 15:28 data18
-rwx------   1 postgres postgres  44 Feb 22 01:34 delete_old_cluster.sh
-rw-------   1 postgres postgres 247 Feb 22 01:35 update_extensions.sql
$
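Not shown above: pg_upgrade also pointed at update_extensions.sql. Applying it would look something like this (a sketch; the file sits in the postgres user's home directory, as the ls above shows):

$ psql -U postgres -f update_extensions.sql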
And more snapshots
After the above processing, the newest snapshots look like this.
data03/pg02/postgres@autosnap_2026-02-22_15:15:09_frequently      0B      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:30:01_frequently   82.4M      -   709G  -
data03/pg02/postgres@autosnap_2026-02-22_15:45:11_frequently    761M      -   355G  -
Which makes sense: those numbers reflect the old cluster’s files being deleted by delete_old_cluster.sh.
What’s next
I declare a decent first attempt. I’m going to try this approach one more time, and if all goes well, target the main server directly instead of taking a copy.











