Search Results for: bacula

Mount your ZFS datasets anywhere you want

ZFS is very flexible about mountpoints, with many features for controlling where each dataset appears. When you create your second zpool, this is what it might look like:

$ zfs list -r main_tank
NAME                                USED  AVAIL  REFER  MOUNTPOINT
main_tank                           893G  3.52T    96K  /main_tank
main_tank/data                      786G  3.52T    88K  /main_tank/data
main_tank/data/dvl                  755G  3.52T   755G  /main_tank/data/dvl
main_tank/data/freshports          31.4G  3.52T    88K  /main_tank/data/freshports
main_tank/data/freshports/backend  3.11G  3.52T    88K  /main_tank/data/freshports/backend

This is a pool I created long ago, but […]
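The mountpoint flexibility described above can be sketched like this; the target path and the `canmount` example are illustrative assumptions, not commands taken from the post:

```shell
# Move a dataset's mountpoint (target path here is a hypothetical example).
# ZFS remounts the dataset at the new location, and child datasets follow.
zfs set mountpoint=/usr/home/dvl main_tank/data/dvl

# Confirm where it landed:
zfs list -o name,mountpoint -r main_tank/data/dvl

# A parent dataset can act purely as a container, never mounted itself:
zfs set canmount=off main_tank/data
```

Setting `canmount=off` keeps the dataset hierarchy for property inheritance while leaving only the leaf datasets mounted.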


slocum – the new

This post has been replaced by a new one. For reference, the previous post on this server is still available. This server was upgraded on Feb 2 2019. Only the storage persisted; everything else was upgraded.

The hardware:
M/B – Supermicro X9DRE-TF+
RAM – 128 GB, composed of 8x 16 GB DDR3 1600 MHz 21300 ECC/REG
CPU – 2x E5-2620v2 – Intel six-core 2.10 GHz Xeon, 15 MB cache, 7.2 GT/s QPI (80 W)
chassis – SC846E16-R1200B
SAS9300-8i


slocum

This post has been replaced by a newer one. For reference, the previous post on this server is still available. The most recent change was extracting the beadm boot environment to a new zpool, zroot. For future reference, this is the slocum server, which I use for various jails and services. It is mounted in the 4U chassis mentioned in this post. The filesystems, well, some of them: And dmesg:


What runs on the servers at home?

I have added comments to the output of this command. This is a brief description of what runs in each jail. This post has been replaced by a newer one.

slocum
slocum – named for the first person to sail solo around the world. This host also runs DNS and DHCP.

[dan@slocum:~] $ jls -v | grep ACTIVE | cut -f 2 -w | sort
bacula – runs bacula-dir, main component for
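The pipeline in the excerpt pulls jail hostnames out of the verbose jls(8) output; a simpler equivalent, sketched using jls's own parameter selection rather than grep/cut, is:

```shell
# Print the hostname of every running jail, one per line, sorted.
# jls prints the requested parameter for each jail when given its name.
jls host.hostname | sort
```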


knew

This post has been replaced by a newer post. This is the previous post for this system configuration. For future reference, this is the knew server … oh wait, I think it’s this server, which was mounted in the 3U chassis mentioned in this post (previously erroneously referred to as a 4U). It runs a few jails, including Bacula regression testing services. It is now mounted in a SuperChassis 846E16-R1200B. This recent


R710

I’ve been given a Dell PowerEdge R710. I’ve installed two 30 GB SSDs, installed FreeBSD 11 on them, and added four 500 GB SSDs. It will become a tape library server, replacing the R610, which will no longer be used. Some of the hardware came from the R610. The zpools: The filesystems: And dmesg:


R710 – getting the drives ready

I recently installed 6 SSDs into an R710. In this post, I will describe what I did to the empty SSDs before I started using them.

short test

I ran a short test on each of the drives. I already had smartctl installed.

$ sudo smartctl -t short /dev/da0
smartctl 6.6 2017-11-05 r4594 [FreeBSD 11.1-RELEASE-p1 amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF OFFLINE IMMEDIATE AND SELF-TEST
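A sketch of the full short-test cycle follows; the device name is an example, and the follow-up commands are standard smartctl usage rather than necessarily what the post goes on to do:

```shell
# Start a short SMART self-test; it runs in the background on the drive.
sudo smartctl -t short /dev/da0

# Once the test completes (usually within a few minutes), review results:
sudo smartctl -l selftest /dev/da0   # self-test log, most recent first
sudo smartctl -H /dev/da0            # overall SMART health assessment
```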


Moving boot from one zpool to another

I’ve been having boot issues with a server containing 20 HDDs, all behind HBAs. I have decided to start booting off SSDs. In that previous blog post, there was much speculation about where to mount the new drives, etc. In the meantime, I came up with a solution which did not involve magnets and used a pair of Intel 730 SSDs I had in another chassis. Let’s start with how I configured those
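One common shape for this kind of migration, sketched under assumed pool, dataset, and partition names (the post's actual procedure may differ):

```shell
# Create the new boot pool on the SSD pair (partition names are assumptions).
zpool create zroot mirror ada0p3 ada1p3

# Replicate the boot-environment datasets to the new pool.
zfs snapshot -r oldpool/ROOT@migrate
zfs send -R oldpool/ROOT@migrate | zfs receive -d zroot

# Tell the loader which dataset to boot from.
# (Boot blocks must also be written to the SSDs' boot partitions.)
zpool set bootfs=zroot/ROOT/default zroot
```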

