Oct 18, 2019

Today the drive caddies arrived for the R720.

To be clear, I am moving the services provided by the R710, not the server itself. I will list those services below and outline how I plan to move them.

I could do all this over this coming weekend but I have already allocated that time to some errands I have to catch up on.

Physical things

This section discusses the physical things which must move.

Drives

There are 6 drives in the R710. They make up two zpools.

[dan@r710-01:~] $ zpool list
NAME        SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tank_fast   928G   441G   487G        -         -    46%    47%  1.00x  ONLINE  -
zroot      27.8G  12.6G  15.2G        -         -    71%    45%  1.00x  ONLINE  -

zpools can be moved from one system to another, even between different operating systems, using the zpool export and zpool import commands.
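The export/import dance is roughly this (a sketch; the pool name matches the listing above, but everything using the pool, iocage jails and poudriere included, must be stopped first):

```shell
# On the R710: export the pool once nothing is using it.
zpool export tank_fast

# Physically move the drives to the R720. Then, on the R720:
zpool import              # scans for pools available to import
zpool import tank_fast    # imports the pool and mounts its datasets

# If the old host died before a clean export, -f forces the import:
# zpool import -f tank_fast
```

A cleanly exported pool imports without fuss; the -f escape hatch is only for pools the old system never released.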

tank_fast

tank_fast is the zpool I am moving into the R720. The significant data in that zpool are the jails I want to move.

[dan@r710-01:~] $ zfs list -r tank_fast/iocage
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
tank_fast/iocage                                              388G   458G   148K  /iocage
tank_fast/iocage/download                                     750M   458G    88K  /iocage/download
tank_fast/iocage/download/11.1-RELEASE                        119M   458G   119M  /iocage/download/11.1-RELEASE
tank_fast/iocage/download/11.2-RELEASE                        272M   458G   272M  /iocage/download/11.2-RELEASE
tank_fast/iocage/download/12.0-RELEASE                        359M   458G   359M  /iocage/download/12.0-RELEASE
tank_fast/iocage/images                                       693M   458G   693M  /iocage/images
tank_fast/iocage/jails                                        385G   458G    96K  /iocage/jails
tank_fast/iocage/jails/bacula-sd-02.int.unixathome.org       3.19G   458G    92K  /iocage/jails/bacula-sd-02.int.unixathome.org
tank_fast/iocage/jails/bacula-sd-02.int.unixathome.org/root  3.19G   458G  2.39G  /iocage/jails/bacula-sd-02.int.unixathome.org/root
tank_fast/iocage/jails/dev-pgeu                               798M   458G    92K  /iocage/jails/dev-pgeu
tank_fast/iocage/jails/dev-pgeu/root                          797M   458G  1.16G  /iocage/jails/dev-pgeu/root
tank_fast/iocage/jails/mqtt01                                3.04G   458G    92K  /iocage/jails/mqtt01
tank_fast/iocage/jails/mqtt01/root                           3.04G   458G  2.29G  /iocage/jails/mqtt01/root
tank_fast/iocage/jails/pg02.int.unixathome.org               19.4G   458G    92K  /iocage/jails/pg02.int.unixathome.org
tank_fast/iocage/jails/pg02.int.unixathome.org/root          19.4G   458G  18.8G  /iocage/jails/pg02.int.unixathome.org/root
tank_fast/iocage/jails/pg03.int.unixathome.org                359G   458G    92K  /iocage/jails/pg03.int.unixathome.org
tank_fast/iocage/jails/pg03.int.unixathome.org/root           359G   458G   132G  /iocage/jails/pg03.int.unixathome.org/root
tank_fast/iocage/log                                          132K   458G   132K  /iocage/log
tank_fast/iocage/releases                                    1.79G   458G    88K  /iocage/releases
tank_fast/iocage/releases/11.1-RELEASE                        325M   458G    88K  /iocage/releases/11.1-RELEASE
tank_fast/iocage/releases/11.1-RELEASE/root                   325M   458G   325M  /iocage/releases/11.1-RELEASE/root
tank_fast/iocage/releases/11.2-RELEASE                        968M   458G    88K  /iocage/releases/11.2-RELEASE
tank_fast/iocage/releases/11.2-RELEASE/root                   968M   458G   968M  /iocage/releases/11.2-RELEASE/root
tank_fast/iocage/releases/12.0-RELEASE                        536M   458G    88K  /iocage/releases/12.0-RELEASE
tank_fast/iocage/releases/12.0-RELEASE/root                   536M   458G   536M  /iocage/releases/12.0-RELEASE/root
tank_fast/iocage/templates                                     88K   458G    88K  /iocage/templates
[dan@r710-01:~] $ 

Also in this zpool is my poudriere server.

[dan@r710-01:~] $ zfs list -r tank_fast/poudriere
NAME                                     USED  AVAIL  REFER  MOUNTPOINT
tank_fast/poudriere                     42.6G   458G    88K  /usr/local/poudriere
tank_fast/poudriere/cache               3.86G   458G    88K  /var/cache
tank_fast/poudriere/cache/ccache        1.39G   458G  1.39G  /var/cache/ccache
tank_fast/poudriere/cache/ccache.amd64  2.19G   458G  2.19G  /var/cache/ccache.amd64
tank_fast/poudriere/cache/ccache.i386    291M   458G   291M  /var/cache/ccache.i386
tank_fast/poudriere/data                20.6G   458G   665M  /usr/local/poudriere/data
tank_fast/poudriere/data/cache          66.2M   458G  66.2M  /usr/local/poudriere/data/cache
tank_fast/poudriere/data/cronjob-logs   1.05M   458G  1.05M  /usr/local/poudriere/data/cronjob-logs
tank_fast/poudriere/data/packages       19.9G   458G  9.88G  /usr/local/poudriere/data/packages
tank_fast/poudriere/jails               2.54G   458G    88K  /usr/local/poudriere/jails
tank_fast/poudriere/jails/120amd64      1.38G   458G  1.38G  /usr/local/poudriere/jails/120amd64
tank_fast/poudriere/jails/120i386       1.16G   458G  1.16G  /usr/local/poudriere/jails/120i386
tank_fast/poudriere/ports               15.6G   458G    88K  /usr/local/poudriere/ports
tank_fast/poudriere/ports/clean          865M   458G   865M  /usr/local/poudriere/ports/clean
tank_fast/poudriere/ports/default       4.99G   458G  1.56G  /usr/local/poudriere/ports/default
tank_fast/poudriere/ports/main          1.18G   458G  1.18G  /usr/local/poudriere/ports/main
tank_fast/poudriere/ports/testing       8.57G   458G  5.11G  /usr/local/poudriere/ports/testing
[dan@r710-01:~] $ 

There is some other stuff here:

[dan@r710-01:~] $ zfs list -r tank_fast | egrep -v 'tank_fast/iocage|tank_fast/poudriere'
NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
tank_fast                                                     441G   458G    88K  /tank_fast
tank_fast/dan                                                3.00G   458G    88K  none
tank_fast/dan/ports                                          3.00G   458G    88K  /usr/home/dan/ports
tank_fast/dan/ports/2019Q3                                   1.29G   458G  1.29G  /usr/home/dan/ports/2019Q3
tank_fast/dan/ports/head                                     1.71G   458G  1.71G  /usr/home/dan/ports/head
tank_fast/distfiles                                          5.98G   458G  5.98G  /usr/ports/distfiles

tank_fast/dan/ports is where I do my port maintenance work.

The distfiles are important for the poudriere server.

zroot

zroot is the standard FreeBSD install.

[dan@r710-01:~] $ zfs list -r zroot
NAME                      USED  AVAIL  REFER  MOUNTPOINT
zroot                    12.6G  14.3G    96K  /zroot
zroot/ROOT               6.76G  14.3G    96K  none
zroot/ROOT/11.1-RELEASE     8K  14.3G  1.94G  /
zroot/ROOT/11.2-RELEASE     8K  14.3G  2.38G  /
zroot/ROOT/default       6.76G  14.3G  4.22G  /
zroot/tmp                 291M  14.3G   291M  /tmp
zroot/usr                5.41G  14.3G    96K  /usr
zroot/usr/home           4.73G  14.3G  1.09G  /usr/home
zroot/usr/ports            96K  14.3G    96K  /usr/ports
zroot/usr/src             689M  14.3G   689M  /usr/src
zroot/var                8.04M  14.3G    96K  /var
zroot/var/audit            96K  14.3G    96K  /var/audit
zroot/var/crash            96K  14.3G    96K  /var/crash
zroot/var/log            6.57M  14.3G  6.57M  /var/log
zroot/var/mail            120K  14.3G   120K  /var/mail
zroot/var/tmp            1.07M  14.3G  1.07M  /var/tmp
[dan@r710-01:~] $ 

I contemplated moving this zpool into the R720 as well, but then I realized I can just keep the R710 running. I had thought moving it would make it easier to copy over configuration files etc. Instead, I’ll just use scp.
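With both boxes up at the same time, copying config over is a one-liner per file. A sketch (the r720-01 hostname and destination paths are illustrative, not from the original post):

```shell
# Pull configuration from the still-running R710 onto the new R720.
scp /etc/rc.conf dan@r720-01:/tmp/r710-rc.conf
scp -rp /usr/local/etc dan@r720-01:/tmp/r710-local-etc
```

Staging into /tmp first means nothing on the R720 gets clobbered before I have compared the files.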

SAS card

There is a SAS card in the R710:

mps1: <Avago Technologies (LSI) SAS2008> port 0xdc00-0xdcff mem 0xdf5b0000-0xdf5bffff,0xdf5c0000-0xdf5fffff irq 40 at device 0.0 on pci7
mps1: Firmware: 07.15.08.00, Driver: 21.02.00.00-fbsd
mps1: IOCCapabilities: 185c<ScsiTaskFull,DiagTrace,SnapBuf,EEDP,TransRetry,IR>

This card connects the bacula-sd-02 jail (and the bacula-sd daemon running in it) to the Dell TL4000 tape library.
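After moving the card, confirming the R720 sees the library would look something like this (a sketch; device node numbering such as sa0/ch0 will depend on what else is attached):

```shell
# Confirm FreeBSD sees the tape drive and the media changer.
camcontrol devlist          # the TL4000 drive (sa*) and changer (ch*) should appear
mt -f /dev/nsa0 status      # query the tape drive
mtx -f /dev/ch0 status      # query the changer (requires misc/mtx)
```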

Services

These are the services enabled on the R710:

[dan@r710-01:~] $ grep enable /etc/rc.conf
sshd_enable="YES"
ntpd_enable="YES"
# Set dumpdev to "AUTO" to enable crash dumps, "NO" to disable
zfs_enable="YES"
daily_clean_hoststat_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
postfix_enable="YES"
smartd_enable="YES"
nrpe3_enable="YES"
daily_ntpd_leapfile_enable="YES"
bacula_fd_enable="YES"
snmpd_enable="YES"
nginx_enable="YES"
iocage_enable="YES"
apcupsd_enable="YES"
powerdxx_enable="YES"

Packages

These are the top level packages installed on the R710:

[dan@r710-01:~] $ pkg query -e '%#r = 0' %o | sort
benchmarks/iperf
converters/p5-JSON
devel/arcanist
devel/ccache-static
devel/php-composer
devel/subversion
editors/joe
lang/python
mail/pflogsumm
mail/postfix
misc/mtx
misc/xtail
net-mgmt/nagios-check_smartmon
net-mgmt/net-snmp
net-mgmt/nrpe3
net/mosquitto
ports-mgmt/pkg
ports-mgmt/portlint
ports-mgmt/poudriere
security/base-audit
security/logcheck
security/pam_ssh_agent_auth
security/sudo
sysutils/anvil
sysutils/apcupsd
sysutils/bacula9-client
sysutils/beadm
sysutils/hare
sysutils/iocage-devel
sysutils/ncdu
sysutils/neofetch
sysutils/powerdxx
sysutils/tmux
sysutils/zfs-stats
sysutils/zfstools
www/nginx
[dan@r710-01:~] $ 

These are the packages I don’t think I need:

  1. converters/p5-JSON – I have no idea why that’s on there – EDIT 2019-10-20 : It is there because of net-snmpd – it gets installed via Ansible and is used by one of the scripts it installs for use with LibreNMS.
  2. devel/php-composer – I think that’s there as part of my LibreNMS package maintenance. I now do that work in a jail, not on this host.
  3. misc/mtx – That is only required in the bacula-sd-02 jail. It was probably installed while testing the tape library.
  5. net/mosquitto – mosquitto runs in a jail now.
  5. sysutils/beadm – Let’s use bectl, now in base, on this host.
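Clearing those out could be sketched like this (p5-JSON stays, per the 2019-10-20 edit above; package names are from the list):

```shell
# Remove the packages no longer needed on this host.
# converters/p5-JSON is kept: net-snmpd / LibreNMS still uses it.
pkg delete devel/php-composer misc/mtx net/mosquitto sysutils/beadm

# Then drop any dependencies that are now orphaned.
pkg autoremove
```

Both commands prompt for confirmation before touching anything, so this is safe to run and review.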

Scheduling downtime

Edit: this section was added on 2019-10-19.

This morning I thought about the services provided by the R710 and how taking it offline would affect other hosts. Here are the primary services:

  1. pg03 – my primary database server. Taking it offline will affect these services:
    1. bacula
    2. dev.freshports.org
    3. test.freshports.org
    4. stage.freshports.org
    5. git.freshports.org

    I will need to take those jails down (it’s easier than disabling various services within each jail).

    This will create warnings on the mail servers which are trying to send mail to those jails.

  2. mqtt01 – My mosquitto server. Various notifications will be disabled. No big deal.
  3. bacula-sd-02 – my secondary Bacula storage server, used mainly for tape backups. This jail being offline will not affect anything.
  4. pg02 – another PostgreSQL database server. Not in use at present. Why do I have an unused database server? For upgrades. When I need to dump and upgrade, I run pg_dump on this server and grab the databases from the server being upgraded.
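For the jails living on the R710 itself, the shutdown before the pool export could be sketched as (jail names taken from the zfs listing above):

```shell
# Stop the R710's jails ahead of exporting tank_fast.
iocage stop pg03.int.unixathome.org
iocage stop pg02.int.unixathome.org
iocage stop mqtt01
iocage stop bacula-sd-02.int.unixathome.org

# Verify nothing is left running.
iocage list
```

The dependent FreshPorts jails live elsewhere and get stopped on their own hosts.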

What else?

I should go down and look at the back of the server to see if there is anything else, hardware-wise, which needs to be moved. EDIT 2019-10-23 I checked: nothing.

I should create an Ansible script for installing those packages, but I suspect I won’t. All of that configuration should be saved in Ansible. EDIT 2019-10-23 Done.

Edit: think about running poudriere in a jail. The last time I looked at this, something stopped me from pursuing it. I do not know what that was. The poudriere wiki has a page on it, last updated in 2015. This point was raised on Twitter and a how-to was supplied. EDIT 2019-10-23 That’s done too.

For now, it’s time for food and drink. Good night.
