ZFS

Clear the existing good drive and add it to the new zpool

In recent posts:

- Doing a bit of stress work on a new HDD
- x8dtu: adding in the smaller drive
- Swapping zpools – moving from using main_tank to using data

If you have a look over those posts, you’ll see why you never want to downsize a zpool.

In this post:

- FreeBSD 14.2
- removing the ZFS labels from a drive which was part of a zpool which is no longer in use
- destroying the […]
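
Clearing stale ZFS labels from a retired drive is normally a one-liner with zpool labelclear; a minimal sketch, assuming a hypothetical device name (da3):

    # The drive must no longer be part of any active pool.
    # da3 is a placeholder; substitute the actual device.
    sudo zpool labelclear -f /dev/da3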

Doing a bit of stress work on a new HDD

As foreshadowed in x8dtu – drive problems, I will be visiting a data center soon to replace a 4TB HDD. The replacement HDD arrived last night. It was unceremoniously tossed onto the front porch by the courier. However, it was properly packaged. I’m sure it’s fine. The original idea: put this drive into a host and write some data to it, to exercise it a bit. As you will soon learn, that is …
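
A common way to exercise a new drive before trusting it is a SMART extended self-test plus a full-surface write; a sketch, assuming a hypothetical device name (da3):

    # Start the drive's built-in extended self-test
    # (several hours on a 4TB HDD); check progress with smartctl -a.
    sudo smartctl -t long /dev/da3

    # Write the entire surface. WARNING: destroys all data on da3.
    sudo dd if=/dev/zero of=/dev/da3 bs=1M status=progress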

x8dtu – main ssds: how worn are they?

Let’s look at these two SSDs (full smartctl output appears at the end).

      pool: zroot
     state: ONLINE
    status: Some supported and requested features are not enabled on the pool.
            The pool can still be used, but some features are unavailable.
    action: Enable all features using ‘zpool upgrade’. Once this is done,
            the pool may no longer be accessible by software that does not
            support the features. See zpool-features(7) for details.
      scan: scrub repaired …
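
The wear indicators to look for in smartctl output differ by vendor and interface; a sketch, assuming hypothetical device names:

    # SATA SSDs often expose attributes such as Wear_Leveling_Count
    # or Percent_Lifetime_Remain.
    sudo smartctl -A /dev/ada0 | grep -i -e wear -e lifetime

    # NVMe drives report "Percentage Used" in the SMART/health log.
    sudo smartctl -a /dev/nvme0 | grep -i 'percentage used'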

Adding in a stand-by spare drive for ZFS on FreeBSD

This is a follow-up to Adding in a hot-spare for zfs on FreeBSD from two months ago. The replacement for the returned drive has arrived and, after sitting for many weeks on my coffee table, is installed in r730-03. Here it is, as found in /var/log/messages – this host is a Dell R730 which has drive cages allowing me to insert the drive without powering off the host.

    Jun 30 16:17:15 …
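
Adding a drive to a pool as a spare takes one command; a minimal sketch, using the data01 pool from elsewhere in these posts and a hypothetical device name (da5):

    # Attach da5 to the pool as a spare.
    sudo zpool add data01 spare da5

    # It should now appear under the "spares" section.
    zpool status data01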

What drive did I just remove from the system?

So there I was… ready to remove the drive from the system. This was the drive which was giving errors and which had already been replaced. In this post:

- FreeBSD 14.2

Let’s look at the drive I just wiped… I was running this command:

    [20:11 r730-03 dvl ~] % sudo dd if=/dev/zero of=/dev/gpt/HGST_8CJVT8YE bs=4M

Let’s run it again and see which drive LED lights up. Yep, there it is. Ctrl-C, and the LED goes …
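
If the drive still holds data you care about, the same LED trick works non-destructively with a read instead of a write; a sketch, reusing the label from the post:

    # Reading keeps the activity LED lit without modifying the drive.
    sudo dd if=/dev/gpt/HGST_8CJVT8YE of=/dev/null bs=4M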

Adding in a hot-spare for zfs on FreeBSD

But first, there’s more. Right after asking “Anyone running zfsd? Did you do anything in particular to configure it? I just added my first hot-spare to a zpool.”, ivy told me “noooooooo don’t use hot spares!! Keep a cold spare or at least an online device not attached to a pool. otherwise your zpool will randomly decide to attach its hot spare due to a temporary cabling issue or something like that. the …”
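
For context: on FreeBSD, automatic replacement from a hot spare is handled by zfsd(8), which only needs to be enabled; a minimal sketch:

    # Enable the ZFS fault-management daemon so a spare is attached
    # automatically when a vdev faults.
    sudo sysrc zfsd_enable="YES"
    sudo service zfsd start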

Is deleting empty snapshots faster?

During the 2025-01-22 OpenZFS Production User Call, ‘atomic operations’ was mentioned with respect to deleting snapshots.

    zfs list -r -t snapshot data01/snapshots/deleting > ~/tmp/deleting
    2.54s user 48.47s system 99% cpu 51.042 total

50 seconds. That’s OK.

60,000 deletes, starting on the 23rd

I started the delete. Actually, it’s not 60,000 deletes. It’s one destroy, of 60,000 snapshots.

    [12:52 r730-03 dvl ~] % time sudo zfs destroy data01/snapshots/deleting@1%60000

After starting the above command, I started …
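
The @1%60000 form is the snapshot range syntax: one zfs destroy covering every snapshot from @1 through @60000. A sketch of previewing such a destroy before committing, using -n -v:

    # Dry run: report what would be destroyed, without doing it.
    sudo zfs destroy -nv data01/snapshots/deleting@1%60000

    # Then run it for real, timed.
    time sudo zfs destroy data01/snapshots/deleting@1%60000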

Way too many snapshots

In this post, we have:

- FreeBSD 14.1-RELEASE-p5
- r730-03
- Lots of boring repetitive sections, so skip over those to find what you need

This article was written over a couple of days. The zpool in question is 3 pairs of 12TB HDD:

    [13:44 r730-03 dvl ~/tmp] % zpool status data01
      pool: data01
     state: ONLINE
      scan: scrub in progress since Fri Jan 17 05:29:21 2025
            22.1T / 26.1T scanned at 200M/s, 19.2T …
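
Counting the snapshots first gives a sense of the scale; a minimal sketch:

    # Count every snapshot under the data01 pool (-H drops the header).
    zfs list -H -r -t snapshot data01 | wc -l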

Why does the same command appear on two different ports with different times?

While I was destroying some snapshots, I found this interesting / disturbing. Why is the same destroy appearing twice, on different TTYs?

    [16:11 r730-03 dvl ~/tmp] % w | grep air
    dvl  pts/2  air01.startpoint.vpn.unixathome.org  1:04PM   2:18  sudo zfs destroy data01/snapshots/homeassistant-r730-01@autosnap_2024-10-19_22:15:08_frequently
    dvl  pts/1  air01.startpoint.vpn.unixathome.org  Fri11PM     -  w
    dvl  pts/4  air01.startpoint.vpn.unixathome.org  4:11PM      -  sudo zfs destroy data01/snapshots/homeassistant-r730-01@autosnap_2024-10-19_22:15:08_frequently
    dvl  pts/5  air01.startpoint.vpn.unixathome.org  1:07PM     33  -zsh (zsh)

What’s up with that? And why different times? It is not …
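
One way to investigate is to compare what w reports against the actual processes and their true start times; a sketch:

    # List dvl's processes with controlling TTY and real start time.
    ps -U dvl -o pid,tt,lstart,command | grep 'zfs destroy'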
