hardware

Moving a zpool to smaller drives #ZFS #FreeBSD – fails

This post is part of a series of storage-related posts. The goal: move the zroot of r730-01 to smaller drives. NOTE: This approach failed. Investigations continue; however, this is not something you should try. This other, successful post might be useful, although I still don’t know why the attempt in this post fails. Earlier today, as the first step in my proof-of-concept, I moved the zroot of r730-04 from 2x SATADOM device […]
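For context: zpool replace refuses a replacement device smaller than the one it replaces, so moving a pool to smaller drives generally means creating a new pool and copying the data over. Below is a minimal sketch of that send/receive approach; the pool and partition names (newpool, da2p3, da3p3, @migrate) are placeholders, and this is not necessarily the procedure this post attempted.

# all names below are placeholders, not taken from the post
zpool create newpool mirror da2p3 da3p3
zfs snapshot -r zroot@migrate
# -R sends the whole dataset tree; -u avoids mounting the copies
zfs send -R zroot@migrate | zfs receive -Fu newpool

For a bootable pool, the new drives also need their own EFI and boot partitions; that part is outside this sketch.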


Creating a backup boot drive from a zroot mirror

Today, using FreeBSD 14.3, I’m going to test booting from a drive which is a mirror of the boot drives. This is what I’m booting from:

root@r730-04:~ # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot   923G  1.72G   921G        -         -     0%     0%  1.00x    ONLINE  -
root@r730-04:~ # zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 3.74M in 00:00:01 with 0 errors on Thu Nov 20
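One way to get such a drive is to attach an extra disk to the existing mirror, let it resilver, then split it off as its own pool. A minimal sketch, assuming a spare freebsd-zfs partition da4p3 and an existing mirror member da2p3 (both names are placeholders, not necessarily what the post uses):

# attach a third member to the mirror (da2p3/da4p3 are assumed names)
zpool attach zroot da2p3 da4p3
# once the resilver completes, split the new member off as its own pool
zpool split zroot zbackup da4p3

The split-off drive still needs its own boot code and ESP before it can boot on its own.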


gpart backup | gpart restore does not always give you exact replication

Today I was setting up some drives for testing. I did this manually:

root@r730-04:~ # gpart create -s gpt da2
da2 created
root@r730-04:~ # gpart add -t efi -s 260M -a 4k da2
da2p1 added
root@r730-04:~ # gpart add -t freebsd-swap -s 8G -a 4k da2
da2p2 added
root@r730-04:~ # gpart add -t freebsd-zfs -a 4k da2
da2p3 added
root@r730-04:~ # gpart show da2
=>      40  62533216  da2  GPT  (30G)
        40    532480    1
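The combination under test looks something like this; da3 as the target drive is an assumption on my part:

# copy da2's partition table to da3; -F destroys any existing scheme on da3
gpart backup da2 | gpart restore -F da3
# compare the two; starts, sizes, and free gaps may not match exactly
gpart show da2 da3

gpart backup records each partition's type, start, and size, but as the title says, the restored table does not always come out identical.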


Moving a zpool to larger drives #ZFS #FreeBSD

It is a rainy Wednesday morning and I’m drinking a cappuccino in my favorite cafe. A five minute walk from here, in the basement of an 1890 Victorian twin house, sits r730-04. Last weekend, I configured it to boot from two SATADOM drives. Today, I’ll move that zpool to a pair of larger drives. I’ve done that move before and it is a common procedure for increasing zpool capacity. Later today, I’ll move
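The usual shape of that move, sketched here with placeholder partition names (da4p3/da5p3 for the new drives), is to replace each mirror member in turn and let the pool expand; these are not necessarily the exact commands the post runs:

# let the pool grow once every member has been replaced
zpool set autoexpand=on zroot
# replace one member, wait for the resilver to finish, then do the other
zpool replace zroot da2p3 da4p3
zpool replace zroot da3p3 da5p3

As with any boot pool, the new drives also need boot code and an ESP, which zpool replace does not copy.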


Duplicating partitions to a larger drive #ZFS #FreeBSD

As I was writing a different blog post, I came across this interesting situation. I couldn’t create the same partition layout on the new drive. This is the existing drive:

root@r730-04:~ # gpart show ada0
=>        40  242255584  ada0  GPT  (116G)
          40     532480     1  efi  (260M)
      532520       2008        - free -  (1.0M)
      534528   16777216     2  freebsd-swap  (8.0G)
    17311744  224942080     3  freebsd-zfs  (107G)
   242253824       1800        - free -  (900K)
root@r730-04:~ #

I started to
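When gpart add with only an alignment hint won't reproduce the layout, one workaround (not necessarily the one this post lands on) is to pin each partition's start block explicitly with -b, using the numbers from gpart show above; da3 as the new drive is a placeholder:

# recreate ada0's layout on da3 with explicit start blocks and sizes
gpart create -s gpt da3
gpart add -t efi          -b 40       -s 532480    da3
gpart add -t freebsd-swap -b 534528   -s 16777216  da3
gpart add -t freebsd-zfs  -b 17311744 -s 224942080 da3

On a larger drive, the freebsd-zfs size could instead be increased, or -s omitted to take the rest of the disk.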


Using SATADOM drives in a Dell R730

A recent goal is to increase the available storage space in my main development server in the basement. To that end, two recent posts:

r730-01: the drive replacement plan
r730-01: storage plan

The SATADOM devices I purchased are meant to be plugged directly into the M/B of the server. This approach will free up two PCIe slots which were hosting 2x 2.5″ SSDs in a hot-swap adaptor (the post mentions a Dell R720;


r730-01: storage plan

Following on from What will I do with those 2 x 1TB drives?, I received a fantastic suggestion from Mike Gerdts: do more bifurcation. Buy an ASUS HYPER M.2 X16 GEN 4 CARD and put 4x NVMe devices on each one. I could add 8 more storage devices to the chassis. I see posts from people using this card in a Dell R730xd; the host in question is a Dell R730.


What will I do with those 2 x 1TB drives?

During zfs: setting compression and adding new vdevs, I replaced 2x 1TB drives with 2x 4TB drives. This afternoon, as I was taking laundry down to the basement, I realized: I have two unused drives in the host. Either I remove them or I repurpose them. I know I have 4x 1TB drives (two SSDs and two NVMe). I could make a 2TB mirror out of them. Where can I use them? r730-03


zfs: setting compression and adding new vdevs

If you read my recent posts, I’m replacing an existing zpool with new devices. At first, I went to copy the old zpool to a new zpool. I then decided, instead of copying, to replace. I’m also going to see about compression. I’m sure it won’t take effect, because replace is a block-by-block copy, or so I think. We’ll confirm.

Before

Here is the zpool, really just the top-level filesystem. Notice how compression
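For reference, the compression setting itself is a one-liner, and it only affects blocks written after it is set, which is why a block-for-block replace/resilver shouldn't change anything already on disk. A sketch, assuming zstd as the chosen algorithm (the post may use a different one):

# enable compression on the pool's top-level dataset; children inherit it
zfs set compression=zstd zroot
# confirm the property and see the on-disk ratio
zfs get compression,compressratio zroot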

