The accomplished reader will first ask, how is this post any different from Clearing-out multiple drives concurrently – while Watching Fargo Season 5?
Well, first, it’s a different server. This one is r720-01.
Second, I’m watching Band of Brothers, which is completely different.
However, the approach is the same:
- Boot the host using an mfsBSD thumb drive
- ssh in as root
- wipe the drives
- watch Band of Brothers
- profit
What’s not to like? I will miss r720-01, but it is going to a good home.
A slightly new approach
This time, I thought I’d be clever and run all the commands at once. I also detached each new-session via tmux’s -d flag (avoiding the nested tmux sessions from before).
root@mfsbsd:~ # cat clear-drives.sh
#!/bin/sh

DRIVES="da14 da13 da12 da11 da10 da9 da8 da7 da6 da5 da4 da3 da2 da1 da0 ada1 ada0"

for drive in ${DRIVES}
do
  zpool labelclear -f /dev/${drive}p1
  zpool labelclear -f /dev/${drive}p3
  gpart destroy -F ${drive}
  tmux new-session -d dd bs=1M if=/dev/zero of=/dev/$drive
done
On my first attempt, I copy/pasted from the linked post and it failed with unrecognized command ‘label’. That’s because the paste came through as zpool label labelclear.
I also added the -d parameter to my tmux command.
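For anyone who hasn’t used it: -d tells tmux to create the session detached rather than attaching to it, and attaching from inside an existing session is exactly what produces the nesting. A harmless sketch, with sleep standing in for dd:

```shell
# -d creates the session in the background instead of attaching to it.
tmux new-session -d -s demo 'sleep 30'

# The session shows up in the list, but we stay in our current shell.
tmux ls

# Clean up the demo session.
tmux kill-session -t demo
```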
Now I wait and watch Episode 3 of Band of Brothers. I think this might be the 4th or 5th time through.
root@mfsbsd:~ # tmux ls
0: 1 windows (created Thu Jan 25 22:10:49 2024) [80x24]
1: 1 windows (created Thu Jan 25 22:10:49 2024) [80x24]
2: 1 windows (created Thu Jan 25 22:10:49 2024) [80x24]
3: 1 windows (created Thu Jan 25 22:10:49 2024) [80x24]
4: 1 windows (created Thu Jan 25 22:10:49 2024) [80x24]
5: 1 windows (created Thu Jan 25 22:10:49 2024) [80x24]
6: 1 windows (created Thu Jan 25 22:10:50 2024) [80x24]
7: 1 windows (created Thu Jan 25 22:10:50 2024) [80x24]
8: 1 windows (created Thu Jan 25 22:10:51 2024) [80x24]
root@mfsbsd:~ # ps auwwx | grep dd
root 1893  2.8  0.0 12848 3440  8  DLCs+ 22:10  0:10.34 dd bs=1M if=/dev/zero of=/dev/da7
root 1887  2.7  0.0 12848 3440  7  DLCs+ 22:10  0:10.26 dd bs=1M if=/dev/zero of=/dev/da8
root 1899  2.5  0.0 12848 3440  9  DLCs+ 22:10  0:10.27 dd bs=1M if=/dev/zero of=/dev/da6
root 1875  0.6  0.0 12848 3440  5  DLCs+ 22:10  0:04.59 dd bs=1M if=/dev/zero of=/dev/da10
root 1881  0.5  0.0 12848 3440  6  DLCs+ 22:10  0:05.67 dd bs=1M if=/dev/zero of=/dev/da9
root 1857  0.4  0.0 12848 3440  2  DLCs+ 22:10  0:04.00 dd bs=1M if=/dev/zero of=/dev/da13
root 1869  0.4  0.0 12848 3440  4  DLCs+ 22:10  0:04.36 dd bs=1M if=/dev/zero of=/dev/da11
root 1863  0.2  0.0 12848 3440  3  DLCs+ 22:10  0:03.92 dd bs=1M if=/dev/zero of=/dev/da12
root 1963  0.0  0.0 11260 2732  0  S+    22:16  0:00.00 grep dd
root 1851  0.0  0.0 12848 3440  1  DLCs+ 22:10  0:01.63 dd bs=1M if=/dev/zero of=/dev/da14
root@mfsbsd:~ #
It looks like some drives have finished already. To be safe, I adjusted the script and reran it with the drives not seen in the output.
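Rather than eyeballing the ps output, the leftover list can be computed in plain sh. A sketch, with the still-running set hardcoded for illustration (in practice it would be scraped from ps):

```shell
#!/bin/sh
# All drives the original script targeted.
ALL="da14 da13 da12 da11 da10 da9 da8 da7 da6 da5 da4 da3 da2 da1 da0 ada1 ada0"

# Drives with a dd still running -- hardcoded here; normally taken
# from something like: ps auwwx | grep '[d]d bs'
RUNNING="da14 da13 da12 da11 da10 da9 da8 da7 da6"

# Keep only the drives NOT in the running set.
REMAINING=""
for d in ${ALL}; do
  case " ${RUNNING} " in
    *" ${d} "*) ;;                          # still being wiped, skip it
    *) REMAINING="${REMAINING} ${d}" ;;     # no dd seen, rerun this one
  esac
done
echo ${REMAINING}
```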
These are the drive sizes:
- ada0 80G
- ada1 80G
- da0 90G
- da1..da8 381G
- da9 121G
- da10..da13 476G
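For the record, a sketch of how such sizes can be read under FreeBSD; sysctl and diskinfo both ship in mfsBSD, and the awk field assumes diskinfo’s default output, where the third column is the media size in bytes:

```shell
# Enumerate the kernel's disk devices, then print each one's media size.
# diskinfo's default one-line output is: name, sectorsize, mediasize, ...
for d in $(sysctl -n kern.disks); do
    echo "${d}: $(diskinfo ${d} | awk '{print $3}') bytes"
done
```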
Doing random stuff
The next day, after all the dd processes had run their course, I came back and decided, for no particular reason, to do it all again with /dev/random in place of /dev/zero.
As expected, this takes much more CPU, because the CPU has to generate the random values instead of just supplying zeroes.
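A harmless way to see that difference for yourself is to send both sources into /dev/null, so no disk is touched; the counts here are arbitrary, and dd reports its own throughput on stderr:

```shell
# Compare source throughput without writing to any disk.
# /dev/zero costs almost nothing; /dev/random makes the CPU do real work.
dd if=/dev/zero   of=/dev/null bs=1M count=256
dd if=/dev/random of=/dev/null bs=1M count=256
```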
last pid:  2162;  load averages: 17.32, 17.26, 16.62    up 0+17:17:57  15:04:18
29 processes:  17 running, 12 sleeping
CPU:  0.0% user,  0.0% nice, 43.2% system,  0.0% interrupt, 56.8% idle
Mem: 25M Active, 309M Inact, 2114M Wired, 53M Buf, 309G Free
Swap:

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
 2105 root          1 103    0    13M  3440K CPU35   35  47:54 104.50% dd bs=1M if=/dev/random of=/dev/da5
 2142 root          1 103    0    13M  3440K CPU24   24  47:13 102.46% dd bs=1M if=/dev/random of=/dev/da11
 2136 root          1 103    0    13M  3440K CPU0     0  47:13 101.93% dd bs=1M if=/dev/random of=/dev/da13
 2154 root          1 103    0    13M  3440K CPU32   32  47:12 101.12% dd bs=1M if=/dev/random of=/dev/da7
 2139 root          1 103    0    13M  3440K CPU37   37  47:13  99.92% dd bs=1M if=/dev/random of=/dev/da12
 2111 root          1 103    0    13M  3440K CPU39   39  47:54  98.95% dd bs=1M if=/dev/random of=/dev/da3
 2157 root          1 103    0    13M  3440K CPU23   23  47:12  98.31% dd bs=1M if=/dev/random of=/dev/da6
 2145 root          1 103    0    13M  3440K CPU30   30  47:13  97.27% dd bs=1M if=/dev/random of=/dev/da10
 2151 root          1 103    0    13M  3440K CPU28   28  47:12  96.51% dd bs=1M if=/dev/random of=/dev/da8
 2148 root          1 103    0    13M  3440K CPU27   27  47:08  95.50% dd bs=1M if=/dev/random of=/dev/da9
 2117 root          1 103    0    13M  3440K CPU10   10  47:54  95.20% dd bs=1M if=/dev/random of=/dev/da1
 2114 root          1 103    0    13M  3440K CPU17   17  47:54  95.18% dd bs=1M if=/dev/random of=/dev/da2
 2120 root          1 102    0    13M  3440K CPU21   21  47:01  94.10% dd bs=1M if=/dev/random of=/dev/da0
 2108 root          1 103    0    13M  3440K CPU7     7  47:54  93.52% dd bs=1M if=/dev/random of=/dev/da4
 2123 root          1 101    0    13M  3440K CPU19   19  45:20  91.86% dd bs=1M if=/dev/random of=/dev/ada1
 2126 root          1 101    0    13M  3440K CPU15   15  45:20  91.83% dd bs=1M if=/dev/random of=/dev/ada0
 2133 root          1  52    0    13M  3440K physwr   5  39:49  78.14% dd bs=1M if=/dev/random of=/dev/da14
 2162 root          1  20    0    13M  3724K CPU8     8   0:01   0.02% top
 1808 root          1  20    0    20M    11M select   0   0:00   0.00% sshd: root@pts/0 (sshd)
As you can see above, it’s pretty busy:
root@mfsbsd:~ # ps auwwx | grep dd
root 2154 100.0  0.0 12848 3440 16  RCs+  14:16 47:36.03 dd bs=1M if=/dev/random of=/dev/da7
root 2136 100.0  0.0 12848 3440 10  RCs+  14:16 47:36.61 dd bs=1M if=/dev/random of=/dev/da13
root 2139 100.0  0.0 12848 3440 11  RCs+  14:16 47:36.78 dd bs=1M if=/dev/random of=/dev/da12
root 2145 100.0  0.0 12848 3440 13  RCs+  14:16 47:36.99 dd bs=1M if=/dev/random of=/dev/da10
root 2105 100.0  0.0 12848 3440  1  RCs+  14:15 48:17.57 dd bs=1M if=/dev/random of=/dev/da5
root 2151 100.0  0.0 12848 3440 15  RCs+  14:16 47:35.47 dd bs=1M if=/dev/random of=/dev/da8
root 2142 100.0  0.0 12848 3440 12  RCs+  14:16 47:36.76 dd bs=1M if=/dev/random of=/dev/da11
root 2148 100.0  0.0 12848 3440 14  RCs+  14:16 47:31.63 dd bs=1M if=/dev/random of=/dev/da9
root 2157 100.0  0.0 12848 3440 17  RCs+  14:16 47:35.23 dd bs=1M if=/dev/random of=/dev/da6
root 2108 100.0  0.0 12848 3440  2  RCs+  14:15 48:18.32 dd bs=1M if=/dev/random of=/dev/da4
root 2111 100.0  0.0 12848 3440  3  RCs+  14:15 48:17.40 dd bs=1M if=/dev/random of=/dev/da3
root 2114 100.0  0.0 12848 3440  4  RCs+  14:15 48:17.45 dd bs=1M if=/dev/random of=/dev/da2
root 2117 100.0  0.0 12848 3440  5  RCs+  14:15 48:18.10 dd bs=1M if=/dev/random of=/dev/da1
root 2120 100.0  0.0 12848 3440  6  RCs+  14:15 47:24.28 dd bs=1M if=/dev/random of=/dev/da0
root 2126  96.0  0.0 12848 3440  8 DLCs+  14:15 45:42.51 dd bs=1M if=/dev/random of=/dev/ada0
root 2123  93.8  0.0 12848 3440  7  RCs+  14:15 45:42.00 dd bs=1M if=/dev/random of=/dev/ada1
root 2133  85.3  0.0 12848 3440  9  RCs+  14:16 40:09.09 dd bs=1M if=/dev/random of=/dev/da14
root 2164   0.0  0.0 11260 2732  0  S+    15:04  0:00.00 grep dd
root@mfsbsd:~ #
I’ll just let this run, and then be done with it. That’s enough erasing for my needs.
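If you’d rather not keep running ps, a small poll loop can notice when the last dd exits. (On FreeBSD you can also do pkill -INFO dd, and each dd will print a progress line, courtesy of SIGINFO.) A sketch:

```shell
# Poll until no dd processes remain, then report.
while pgrep -x dd > /dev/null; do
    sleep 60
done
echo "all dd processes finished"
```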