I was up at 5:30 AM today. I packed the car and headed out, arriving at the datacenter at about 8:15. By 8:50, I was on IRC and the photos of the FreeBSD racks were uploading. Since I was going there anyway, I did some inventory and disposal work (a decommissioned server, about 25 old HDDs, and various bits and pieces).
I must say though, I’m not liking this option. Right now, I have two copies of my data, one in each of the zpools you’ll see listed later. Soon, I’ll destroy one of them, partition the larger drive to match the smaller drive, and add that larger drive to the smaller zpool. Destroying data gives me the heebie-jeebies.
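For the record, here is roughly what that future step will look like. This is a sketch only: the device name (ada2), the partition size, and the new GPT label are placeholders, not my actual values.

sudo gpart create -s gpt ada2
sudo gpart add -t freebsd-zfs -a 1m -s 3726G -l SLOT_2_NEW_DRIVE ada2   # size a placeholder; match the smaller drive's partition
sudo zpool attach data gpt/SLOT_3_TO_382AK6KIFJKA gpt/SLOT_2_NEW_DRIVE

Attaching a second provider to a single-drive vdev turns it into a mirror, which is the whole point of the exercise.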
Out with the old
Removing dead drive:
Jul 12 13:11:06 x8dtu kernel: ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
Jul 12 13:11:06 x8dtu kernel: ada3: s/n 44E1K00IFK7A detached
Jul 12 13:11:06 x8dtu kernel: (ada3:ahcich3:0:0:0): Periph destroyed
In with the new
Adding in the smaller replacement drive:
Jul 12 13:18:56 x8dtu kernel: ada3 at ahcich3 bus 0 scbus3 target 0 lun 0
Jul 12 13:18:56 x8dtu kernel: ada3: ATA8-ACS SATA 3.x device
Jul 12 13:18:56 x8dtu kernel: ada3: Serial Number 382AK6KIFJKA
Jul 12 13:18:56 x8dtu kernel: ada3: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
Jul 12 13:18:56 x8dtu kernel: ada3: Command Queueing enabled
Jul 12 13:18:56 x8dtu kernel: ada3: 3815447MB (7814037168 512 byte sectors)
Jul 12 13:18:56 x8dtu kernel: ses0: ada3,pass3 in 'Slot 03', SATA Slot: scbus3 target 0
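The replacement drive arrived with a zpool already on it (that second copy of my data). As a sanity check, running zpool import with no arguments lists any pools available for import; it needs root to read the labels:

sudo zpool import

That is a read-only listing; nothing gets imported until you name a pool.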
Import
The import:
[13:21 x8dtu dvl ~] % zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
main_tank  4.53T  1.26T  3.27T        -         -    26%    27%  1.00x  DEGRADED  -
zroot       212G  56.1G   156G        -         -    50%    26%  1.00x    ONLINE  -
[13:22 x8dtu dvl ~] % zpool import data
cannot import 'data': no such pool available
[13:22 x8dtu dvl ~] % sudo zpool import data
[13:23 x8dtu dvl ~] % zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data       3.62T  1.01T  2.62T        -         -     0%    27%  1.00x    ONLINE  -
main_tank  4.53T  1.26T  3.27T        -         -    26%    27%  1.00x  DEGRADED  -
zroot       212G  56.1G   156G        -         -    50%    26%  1.00x    ONLINE  -
[13:24 x8dtu dvl ~] %
The first zpool import failed because an unprivileged user cannot read the pool labels on the devices, hence "no such pool available". The successful import produced these lines in /var/log/messages:
Jul 12 13:23:47 x8dtu kernel: vdev_geom_open_by_path:799[1]: Found provider by name /dev/gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_attach:219[1]: Attaching to gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_attach:288[1]: Created consumer for gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_read_config:457[1]: Reading config from gpt/SLOT_3_TO_382AK6KIFJKA...
Jul 12 13:23:47 x8dtu kernel: vdev_geom_detach:315[1]: Detaching from gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_detach:326[1]: Destroying consumer for gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_attach_ok:696[1]: guids match for provider gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_attach:219[1]: Attaching to gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_attach:288[1]: Created consumer for gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_close_locked:352[1]: Closing access to gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_detach:315[1]: Detaching from gpt/SLOT_3_TO_382AK6KIFJKA.
Jul 12 13:23:47 x8dtu kernel: vdev_geom_detach:326[1]: Destroying consumer for gpt/SLOT_3_TO_382AK6KIFJKA.
[the same open/attach/read-config/detach cycle repeats several more times, all at 13:23:47]
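As an aside, I believe these chatty vdev_geom messages only appear because the ZFS debug sysctl on this host is non-zero; the [1] in each line is the debug level of the message. If they bother you, something like this should quiet them:

sysctl vfs.zfs.debug        # see the current level
sudo sysctl vfs.zfs.debug=0

That sysctl name is from memory; check sysctl -d vfs.zfs.debug on your own box before flipping it.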
More sync
I did this:
[13:52 x8dtu dvl ~] % sudo syncoid --no-privilege-elevation -r --compress=lzo --quiet main_tank data
CRITICAL ERROR: Target data exists but has no snapshots matching with main_tank!
                Replication to target would require destroying existing target.
                Cowardly refusing to destroy your existing target.
I got a lot of that…
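For what it’s worth, syncoid has a --force-delete option that lets it recursively destroy target datasets which have no matching snapshots, then replicate fresh. A sketch of that, not something I ran:

sudo syncoid --no-privilege-elevation -r --compress=lzo --force-delete main_tank data

Given the heebie-jeebies mentioned earlier, that is not a flag to reach for casually.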
The existing
This is the old zpool:
[19:54 x8dtu dvl ~] % zpool list main_tank
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
main_tank  4.53T  1.26T  3.27T        -         -    26%    27%  1.00x  DEGRADED  -
Its filesystems (the listing is truncated at the top):
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/pages       96K  3.15T    96K  /var/db/freshports/cache/pages
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/ports      994M  3.15T   971M  /var/db/freshports/cache/ports
main_tank/freshports/jailed/nginx01/var/db/freshports/cache/spooling   352K  3.15T   120K  /var/db/freshports/cache/spooling
main_tank/freshports/nginx01                                           480K  3.15T    96K  none
main_tank/freshports/nginx01/var                                       384K  3.15T    96K  none
main_tank/freshports/nginx01/var/db                                    288K  3.15T    96K  none
main_tank/freshports/nginx01/var/db/freshports                         192K  3.15T    96K  none
main_tank/freshports/nginx01/var/db/freshports/cache                    96K  3.15T    96K  none
main_tank/home                                                          266G  3.15T  86.0G  /usr/home
main_tank/jails                                                        59.1G  3.15T   112K  /jails
main_tank/jails/ingress01                                              14.2G  3.15T  3.62G  /jails/ingress01
main_tank/jails/nginx01                                                9.17G  3.15T  2.12G  /jails/nginx01
main_tank/jails/perl540                                                6.81G  3.15T  4.49G  /jails/perl540
main_tank/jails/pg01                                                   23.3G  3.15T  7.49G  /jails/pg01
main_tank/jails/svn                                                    5.61G  3.15T  4.03G  /jails/svn
main_tank/mkjail                                                       1.68G  3.15T    96K  /mkjail
main_tank/mkjail/14.1-RELEASE                                           862M  3.15T   862M  /mkjail/14.1-RELEASE
main_tank/mkjail/14.2-RELEASE                                           862M  3.15T   862M  /mkjail/14.2-RELEASE
The replacement zpool
This is what will replace it:
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data  3.62T  1.08T  2.54T        -         -     0%    29%  1.00x  ONLINE  -
And the replicated filesystems (again truncated at the top):
data/freshports/jailed/nginx01/var/db/freshports/cache/ports       983M  2.43T   964M  /data/freshports/jailed/nginx01/var/db/freshports/cache/ports
data/freshports/jailed/nginx01/var/db/freshports/cache/spooling    272K  2.43T   120K  /data/freshports/jailed/nginx01/var/db/freshports/cache/spooling
data/freshports/nginx01                                            736K  2.43T    96K  /data/freshports/nginx01
data/freshports/nginx01/var                                        576K  2.43T    96K  /data/freshports/nginx01/var
data/freshports/nginx01/var/db                                     416K  2.43T    96K  /data/freshports/nginx01/var/db
data/freshports/nginx01/var/db/freshports                          256K  2.43T    96K  /data/freshports/nginx01/var/db/freshports
data/freshports/nginx01/var/db/freshports/cache                     96K  2.43T    96K  /data/freshports/nginx01/var/db/freshports/cache
data/home                                                          86.0G  2.43T  86.0G  /data/home
data/jails                                                         59.3G  2.43T   112K  /data/jails
data/jails/ingress01                                               14.3G  2.43T  3.62G  /data/jails/ingress01
data/jails/nginx01                                                 9.19G  2.43T  2.12G  /data/jails/nginx01
data/jails/perl540                                                 6.81G  2.43T  4.49G  /data/jails/perl540
data/jails/pg01                                                    23.3G  2.43T  7.48G  /data/jails/pg01
data/jails/svn                                                     5.61G  2.43T  4.03G  /data/jails/svn
data/mkjail                                                        1.68G  2.43T    96K  /data/mkjail
data/mkjail/14.1-RELEASE                                            862M  2.43T   862M  /data/mkjail/14.1-RELEASE
data/mkjail/14.2-RELEASE                                            862M  2.43T   862M  /data/mkjail/14.2-RELEASE
What’s next? I have to export the old zpool (to take it offline), adjust all the mountpoints on the new zpool, and then run some tests to see if I got everything right.
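A rough sketch of those steps, using data/home as the example dataset (each dataset that should live at its old path needs the same treatment):

sudo zpool export main_tank
sudo zfs set mountpoint=/usr/home data/home   # repoint one dataset at the path main_tank used to serve
zfs list -r data                              # confirm the mountpoints came out right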
I should do that as soon as I can. Now is not that time.