ZFS

booting without mrsas driver – lots of errors

Today I noticed that all the drives in the R730 host were reported at 150MB/s in the boot messages. They are all SSDs on a SAS bus and should be running at 600MB/s. The HBA in question is an AVAGO MegaRAID SAS controller (aka PERC H730P Mini). It can use either the mfi driver (the default) or the mrsas driver (by setting hw.mfi.mrsas_enable="1" in /boot/loader.conf). This is an SSD in a PCIe slot: This is […]
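To make that switch persistent, the tunable goes in loader.conf. A minimal sketch (the tunable itself comes straight from the excerpt above; the dmesg check is my assumption about how to verify the negotiated speed after a reboot):

    # /boot/loader.conf - prefer mrsas over the default mfi driver
    hw.mfi.mrsas_enable="1"

    # after rebooting, confirm the negotiated transfer speeds
    dmesg | grep -i 'MB/s'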


Wrong recordsize on zfs send | zfs recv filesystems

Over the past few days, I’ve posted a lot about transferring jails from two hosts into one host. One of the steps involves using zfs send | zfs recv to send the files from one host to another. I’m using syncoid for that transfer. A new filesystem’s recordsize defaults to 128K. In my case, that is usually OK, except for some specific datasets. For example:

* video
* backups
* distfiles (tarballs)

Here
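A larger recordsize suits those big, sequentially-read files. As a sketch (the dataset names are hypothetical, and recordsize only affects blocks written after the change):

    zfs set recordsize=1M tank/video
    zfs get recordsize tank/video

On the receive side, zfs recv -o recordsize=1M can set the property before any data lands in the new dataset.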


Transferring a jail from one host to another

I’ve started merging these two hosts:

* r710-01
* slocum

into this one host: r740-01. The goal is to then retire those first two hosts. This post covers the initial bulk copy of all the data from the two hosts into the one (see the syncoid sketch below).

In this post:

* FreeBSD 13.1
* Each jail runs on its own zfs filesystem

Not covered by this post:

* Creating jails
* Creating snapshots
* Detailed usage of syncoid
* hosts not using zfs

The overall process
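As a sketch of that bulk copy, run from the destination host (the pool and dataset names here are my assumptions, not the post's):

    # pull one jail's filesystem, with all its snapshots, from the old host
    syncoid -r r710-01:zroot/jails/pkg01 zroot/jails/pkg01

syncoid handles the zfs send | zfs recv plumbing, including incremental runs on subsequent invocations.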


error: cannot open '.git/FETCH_HEAD': No space left on device

After moving the poudriere jail (pkg01) to the new host (r730-01), I noticed this message from Nagios: That "email found in /var/mail/dan" message is significant. In general, mail on my hosts/jails is not delivered locally; it all goes off-host. That’s why I have this Nagios check: when mail like that is found, it’s either a configuration error or something local has gone wrong.

In this post:

* FreeBSD 13.1
* poudriere-3.3.7_1

NOTE: This is
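When a poudriere host fills up, the first thing I’d check is which dataset ate the space. A minimal sketch (the commands are standard, but the dataset layout is my assumption):

    # largest datasets listed last
    zfs list -o name,used,avail -s used
    # how full is the poudriere work area?
    df -h /usr/local/poudriere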


using syncoid to backup ZFS snapshots – home assistant

Copies of data are good, especially if they are on a different host. zfs snapshots combined with zfs send | zfs recv make replication of zfs filesystems especially easy. I am already using sanoid to manage zfs snapshots. Now I’m going to use syncoid to transfer those snapshots to another host. Both sanoid and syncoid are provided by the sanoid package on FreeBSD; I’m sure other packaging systems do something similar. In this post:
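For context, sanoid’s snapshot policy lives in a config file. A sketch of what that can look like (the path is where the FreeBSD package puts it, as far as I know; the dataset name and retention numbers are made up):

    # /usr/local/etc/sanoid/sanoid.conf
    [tank/homeassistant]
            use_template = production

    [template_production]
            hourly = 36
            daily = 30
            monthly = 3
            autosnap = yes
            autoprune = yes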


Upgrading an NVMe zpool from 222G to 1TB drives

The FreshPorts dev, test, and stage websites are hosted on a server in my basement. Each instance consists of two jails:

* an ingress node – for pulling new commits (and other data) into the database.
* a webserver node – for displaying the web pages.

The new drives:

Sometimes the zpool gets too close to full. I tweeted about one incident in March 2021. I held out hope that some spare SSDs might be
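The usual approach for growing a pool in place is to replace the drives one at a time and let the pool expand. A sketch, assuming a mirror and made-up pool/device names:

    # let the pool grow once every device in the vdev is bigger
    zpool set autoexpand=on data01
    # swap the first 222G drive for a 1TB one, then wait for the resilver
    zpool replace data01 nvd0 nvd2
    zpool status data01
    # repeat for the other side of the mirror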


Where’s my drives?

These are just rough notes for myself: list the gpart output for each drive in the system.

    for drive in $(sysctl -n kern.disks)
    do
      gpart show $drive
    done

    gpart: No such geom: da11.
    =>          40  9767541088  da10  GPT  (4.5T)
                40     8392664        - free -  (4.0G)
           8392704  9758048256     3  freebsd-zfs  (4.5T)
        9766440960     1100168        - free -  (537M)

    =>         40  976773088  da9  GPT  (466G)
               40       2008       - free -  (1.0M)
             2048  838860800    1  freebsd-zfs  (400G)


da12 in my FreeBSD zfs array disappeared at :02 and came back at :04

This post is based on a tweet and was written after a follow-up incident occurred today. It consolidates the information into a blog post so I can easily find it later. Details about this host (disks, zpool, gpart, etc.) are in this post. On March 15 2022, I noticed these messages in /var/log/messages: The zpool array was degraded. I don’t have a record of that message. The tweet also
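When a drive drops out and returns like that, the evidence is usually in the system log and in the pool’s view of the device. These are the standard places to look (the exact messages from the incident are in the original post):

    grep da12 /var/log/messages
    zpool status -v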


mps0: IOC Fault 0x40007e23, Resetting

Here I am, sitting on a beach, writing a blog post, sipping a cool adult beverage, and reading email. I see this: I quickly ssh into the host to check zpool status: Lines 15-17 are relevant. There was a resilver event, which completed at 08:49:48. The vdev state changed event occurred at 08:49:34. That all seems to tie in, time-wise. More info than you want: This displays vdev guids: Line 11 shows the
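The command itself is elided in this excerpt; on a recent OpenZFS (such as FreeBSD 13’s), one way to display vdev GUIDs is the -g flag, though I’m assuming this is what the post used:

    # print numeric vdev GUIDs instead of device names
    zpool status -g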


I forgot to enable compression on ZFS

I forgot to enable compression on this FreshPorts node. I have no idea why/how. After doing this, I went to a directory taking up a lot of space and copied stuff around.

    [aws-1 dan /jails/ingress01/var/db/freshports/message-queues/archive] % sudo mv 2021_03.tar.gz 2021_03.tar.gz.tmp
    21:01:28
    [aws-1 dan /jails/ingress01/var/db/freshports/message-queues/archive] % sudo mv 2021_03.tar.gz.tmp 2021_03.tar.gz
    21:01:38

That will free up *some* space. But really, most of this zpool needs to have zfs send | zfs recv done on each
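For reference, enabling compression is a single property set, and it only affects blocks written afterwards; existing data stays uncompressed until rewritten. A sketch with a made-up dataset name:

    zfs set compression=lz4 zroot/jails/ingress01
    zfs get compression,compressratio zroot/jails/ingress01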

