Open Source

Bacula – certificates expired

The new year started off poorly for my backups, and it is all my fault. Each of my remote servers is backed up using Bacula’s TLS (Transport Layer Security) features. This requires an X.509 certificate, which I obtain via CAcert. These certificates have expired:

01-Jan 02:00 bacula-dir JobId 44464: Start Backup JobId 44464, Job=nyi_maildir.2011-01-01_02.00.00_07
01-Jan 02:00 bacula-dir JobId 44464: Using Device "MegaFile"
01-Jan 02:00 bacula-dir JobId 44464: Error: tls.c:92 Error with certificate at […]
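A simple way to catch this before the 02:00 job does is to check the expiry date with openssl(1). This is a hedged sketch, not from the original post: the /tmp file names are throwaway illustrations standing in for the real bacula-dir certificate paths.

```shell
# Create a throwaway self-signed certificate to demonstrate the checks;
# /tmp/demo-*.pem stand in for the real Bacula certificate files.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -days 30 -subj "/CN=bacula-demo" 2>/dev/null

# Show when the certificate expires
openssl x509 -enddate -noout -in /tmp/demo-cert.pem

# Exit 0 only if the certificate is still valid 7 days (604800 s) from now;
# handy in a cron job that nags before Bacula starts failing
openssl x509 -checkend 604800 -noout -in /tmp/demo-cert.pem \
  && echo "certificate still valid" \
  || echo "certificate expires within a week"
```

Run weekly from cron, the `-checkend` test gives warning well before the backups start erroring out.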


zfs destroy

Here I go, removing an old ZFS dataset that is no longer required. But first, some stats (which have been edited to remove non-ZFS output):

[dan@kraken:~] $ df -h
Filesystem                  Size  Used  Avail Capacity  Mounted on
storage                     1.4T   15G  1.4T     1%    /storage
storage/bacula.DO.NOT.USE   5.8T  4.4T  1.4T    76%    /storage/bacula.DO.NOT.USE
storage/compressed          1.4T   49K  1.4T     0%    /storage/compressed
storage/compressed/bacula   4.4T  3.0T  1.4T    68%    /storage/compressed/bacula
storage/compressed/home     1.4T  2.7G  1.4T     0%    /storage/compressed/home
storage/pgsql               1.4T  5.5G  1.4T     0%    /storage/pgsql

And


How much are those volumes in the library? (redone)

I have an improvement on the original query. This one outputs ‘nil’ if the volume is empty.

bacula=# SELECT mediaid, volumename, volstatus, slot, inchanger, pool.name,
bacula-#        CASE WHEN volbytes <= 64512 THEN 'nil' ELSE pg_size_pretty(volbytes) END
bacula-# FROM media, pool
bacula-# WHERE media.poolid = pool.poolid
bacula-#   AND volumename LIKE 'ETU0%'
bacula-#   AND inchanger != 0
bacula-# ORDER BY slot;
 mediaid | volumename | volstatus | slot | inchanger |     name      | pg_size_pretty
---------+------------+-----------+------+-----------+---------------+----------------
    1328 | ETU021     | Append


How much are those volumes in the library?

I pretty much ignore my tape library. It asks for tapes when it runs out. However, I think I should be looking ahead and anticipating requests. Thus, I came up with this SQL query:

[dan@ngaio:~] $ psql bacula
psql (8.4.5)
Type "help" for help.

bacula=# SELECT mediaid, volumename, volstatus, slot, inchanger, pool.name, volbytes
bacula-# FROM media, pool
bacula-# WHERE media.poolid = pool.poolid
bacula-#   AND volumename LIKE 'ETU0%'
bacula-#   AND inchanger != 0
bacula-# ORDER BY slot;
 mediaid |


Bacula Copy Job

With Bacula, a great piece of backup software [disclosure: I wrote the PostgreSQL module for Bacula], you can copy a Job after it has run. This is often used to back up first to disk, and then Copy that Job to tape. That is exactly what I’ve been doing recently. NOTE: You can also migrate a Job, which is very similar to a Job Copy. With a Job Copy, the original Job remains and
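For reference, a disk-to-tape Copy job in bacula-dir.conf looks roughly like the sketch below. All resource names are illustrative, not taken from the post; the key pieces are Type = Copy, a Selection Type to pick which jobs get copied, and a Next Pool directive on the read pool naming the destination.

```
# Hypothetical sketch only -- resource names are made up for illustration.
Job {
  Name = "CopyDiskToTape"
  Type = Copy
  Level = Full
  Client = some-fd              # required by the parser, but unused by Copy jobs
  FileSet = "Some FileSet"      # likewise unused
  Selection Type = PoolUncopiedJobs   # copy every job not yet copied from the pool
  Pool = DiskPool               # the read (source) pool
  Messages = Standard
}

Pool {
  Name = DiskPool
  Pool Type = Backup
  Next Pool = TapePool          # Copy and Migration jobs write here
}
```

With `Selection Type = PoolUncopiedJobs`, each run sweeps up whatever has accumulated on disk since the last copy.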


Why not use the tape that’s there?

*run
A job name must be specified.
The defined Job resources are:
     1: BackupCatalog
     2: RestoreFiles
     3: MigrateDiskToTape
     4: CopyMegaFileToTape
     5: laptop
     6: laptop-vpn
     7: laptop-wifi-5.3
     8: laptop-verify
     9: laptop-websites
    10: laptop-xp
    11: wocker
    12: bast basic
    13: bast home
    14: dbclone basic
    15: dbclone home
    16: polo basic
    17: polo
    18: kraken basic
    19: kraken
    20: nyi maildir
    21: nyi maildir tarball
    22: nyi basic
    23: nyi
    24: supernews basic
    25: supernews


Running a Bacula Copy Job

Over the past few months, I’ve been getting ready to copy Bacula backups from disk to tape, via a Copy Job. Tonight was my first attempt. I failed. And I think I know why. Concurrency. The output so far:

*run job=CopyMegaFileToTape
Using Catalog "MyCatalog"
Run Copy job
JobName:       CopyMegaFileToTape
Bootstrap:     *None*
Client:        wocker-fd
FileSet:       wocker files
Pool:          MegaFile-wocker (From Job resource)
Read Storage:  MegaFile-wocker (From Pool resource)
Write Storage: DigitalTapeLibrary (From Storage from
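If concurrency is indeed the culprit, the usual knobs are the Maximum Concurrent Jobs directives, since the Director and the read and write devices each need a free job slot at the same time. A minimal sketch of the relevant bacula-dir.conf lines; only the concurrency settings are shown, and the values are illustrative:

```
# Sketch: concurrency-related directives only; other required directives
# (Address, Password, Device, Media Type, ...) are omitted for brevity.
Director {
  Name = bacula-dir
  Maximum Concurrent Jobs = 20
}

Storage {
  Name = DigitalTapeLibrary
  Maximum Concurrent Jobs = 2   # let the copy run alongside the read side
}
```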


pf – cannot define table: Cannot allocate memory

I use pf on FreeBSD 8.1-stable as my firewall of choice. Recently I’ve started using the University of Alberta list of trapped addresses. As of the time of writing this, the list contains about 56,000 IP addresses. When combined with my own list of 27,000 SPAMTRAP addresses, I quickly ran into a problem when reloading my rule set:

$ sudo pfctl -f /etc/pf.rules
No ALTQ support in kernel
ALTQ related functions disabled
/etc/pf.rules:69:
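The common fix for this error is raising pf’s table-entries limit, which caps how many addresses all tables may hold in total. A minimal pf.conf sketch; the limit value and the table file name are illustrative, not from the post:

```
# pf.conf sketch: the default table-entries limit can be too small once
# several large tables are loaded.  Size the limit comfortably above the
# combined entry count of all tables; 400000 here is illustrative.
set limit table-entries 400000
table <spamtrap> persist file "/etc/spamtrap.list"   # hypothetical file name
```

The `set limit` line must appear in the options section of pf.conf, before the tables it is meant to accommodate are loaded.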


zfs send | zfs receive

For the record, I was doing a copy within a system recently:

# time zfs send storage/bacula@transfer | mbuffer | zfs receive storage/compressed/bacula-buffer
in @  0.0 kB/s, out @  0.0 kB/s, 4394 GB total, buffer 100% full
summary: 4394 GByte in 35 h 11 min  35.5 MB/s

real    2111m18.302s
user    11m44.831s
sys     258m59.151s

I was copying my Bacula Volumes from non-compressed ZFS to compressed ZFS.

# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT

