Running a Bacula Copy Job

Over the past few months, I’ve been getting ready to copy Bacula backups from disk to tape, via a Copy Job. Tonight was my first attempt. I failed. And I think I know why. Concurrency.
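
For anyone following along, the moving parts are a Job of Type = Copy plus a Next Pool directive on the disk pool, which tells Bacula where the copies should land. Here is a rough sketch of what that looks like in bacula-dir.conf; the resource names match the output below, but the Selection Type, the write-side pool name, and the other values are illustrative guesses rather than my actual config:

# Hypothetical sketch only -- resource names taken from the job output below,
# everything else is assumed.
Job {
  Name = "CopyMegaFileToTape"
  Type = Copy                          # copy (not migrate) the selected jobs
  Selection Type = PoolUncopiedJobs    # copy every job in the pool not yet copied
  Pool = MegaFile-wocker               # the read side: the disk pool
  Client = wocker-fd
  FileSet = "wocker files"
  Messages = Standard
  Priority = 10
}

Pool {
  Name = MegaFile-wocker
  Pool Type = Backup
  Storage = MegaFile-wocker            # read storage (disk)
  Next Pool = TapePool                 # hypothetical name; its Storage is DigitalTapeLibrary
}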

The output so far:

*run job=CopyMegaFileToTape
Using Catalog "MyCatalog"
Run Copy job
JobName:       CopyMegaFileToTape
Bootstrap:     *None*
Client:        wocker-fd
FileSet:       wocker files
Pool:          MegaFile-wocker (From Job resource)
Read Storage:  MegaFile-wocker (From Pool resource)
Write Storage: DigitalTapeLibrary (From Storage from Pool's NextPool resource)
JobId:         *None*
When:          2010-11-23 03:25:03
Catalog:       MyCatalog
Priority:      10
OK to run? (yes/mod/no): yes
Job queued. JobId=40091
You have messages.
*m
23-Nov 03:25 bacula-dir JobId 40091: The following 1 JobId was chosen to be copied: 39676
23-Nov 03:25 bacula-dir JobId 40091: Copying using JobId=39676 Job=wocker.2010-11-07_05.55.00_25
23-Nov 03:25 bacula-dir JobId 40091: Bootstrap records written to /home/bacula/working/bacula-dir.restore.3.bsr
*m
23-Nov 03:25 bacula-dir JobId 40091: Start Copying JobId 40091, Job=CopyMegaFileToTape.2010-11-23_03.25.08_07

So… why is it not running? This is why:

$ bconsole
Connecting to Director bacula.example.org:9101
1000 OK: bacula-dir Version: 5.0.3 (04 August 2010)
Enter a period to cancel a command.
*status dir
bacula-dir Version: 5.0.3 (04 August 2010) i386-portbld-freebsd8.1 freebsd 8.1-STABLE
Daemon started 18-Oct-10 20:56, 914 Jobs run since started.
 Heap: heap=0 smbytes=321,921 max_bytes=10,142,514 bufs=2,643 max_bufs=2,780

Scheduled Jobs:
Level          Type     Pri  Scheduled          Name               Volume
===================================================================================
Incremental    Backup    20  23-Nov-10 04:00    nyi maildir        FileAuto-0483
Incremental    Backup    10  23-Nov-10 05:55    wocker             FileAuto-1176
Incremental    Backup    10  23-Nov-10 05:55    kraken             FileAuto-1174
Incremental    Backup    10  23-Nov-10 05:55    kraken basic       FileAuto-1174
Incremental    Backup    10  23-Nov-10 05:55    polo               FileAuto-1149
Incremental    Backup    10  23-Nov-10 05:55    polo basic         FileAuto-1149
Incremental    Backup    10  23-Nov-10 05:55    ngaio basic        FileAuto-1327
Incremental    Backup    10  23-Nov-10 05:55    dbclone home       FileAuto-0285
Incremental    Backup    10  23-Nov-10 05:55    dbclone basic      FileAuto-0285
Incremental    Backup    10  23-Nov-10 05:55    bast home          FileAuto-1305
Incremental    Backup    10  23-Nov-10 05:55    bast basic         FileAuto-1305
Incremental    Backup    10  23-Nov-10 05:55    ngaio databases    FileAuto-1327
Incremental    Backup    10  23-Nov-10 05:55    ngaio              FileAuto-1327
Incremental    Backup    20  23-Nov-10 05:55    nyi basic          FileAuto-0483
Incremental    Backup    20  23-Nov-10 05:55    nyi                FileAuto-0483
Incremental    Backup    20  23-Nov-10 08:00    nyi maildir        FileAuto-0483
Incremental    Backup    20  23-Nov-10 08:15    supernews basic    FileAuto-1275
Incremental    Backup    20  23-Nov-10 08:15    supernews          FileAuto-1275
Incremental    Backup    20  23-Nov-10 08:15    latens basic       FileAuto-1179
Incremental    Backup    20  23-Nov-10 08:15    latens home        FileAuto-1179
Full           Backup   100  23-Nov-10 08:15    BackupCatalog      FileAuto-0341
Differential   Backup    20  23-Nov-10 12:01    nyi maildir        FileAuto-0483
Full           Backup    20  23-Nov-10 12:01    nyi maildir tarball FileAuto-0483
Incremental    Backup    20  23-Nov-10 16:00    nyi maildir        FileAuto-0483
Incremental    Backup    20  23-Nov-10 20:00    nyi maildir        FileAuto-0483
Incremental    Backup    20  24-Nov-10 00:00    nyi maildir        FileAuto-0483
====

Running Jobs:
Console connected at 23-Nov-10 03:19
Console connected at 23-Nov-10 03:29
 JobId Level   Name                       Status
======================================================================
 40091 Full    CopyMegaFileToTape.2010-11-23_03.25.08_07 is waiting on Storage DigitalTapeLibrary
 40092 Increme  wocker.2010-11-23_03.25.09_08 is waiting on Storage MegaFile-wocker
====

Terminated Jobs:
 JobId  Level    Files      Bytes   Status   Finished        Name
====================================================================
 40081  Incr        111    259.4 M  OK       22-Nov-10 08:19 supernews_basic
 40082  Incr         51    152.6 M  OK       22-Nov-10 08:21 supernews
 40083  Incr         76    96.41 M  OK       22-Nov-10 08:23 latens_basic
 40084  Incr          0         0   OK       22-Nov-10 08:23 latens_home
 40085  Full          1    727.4 M  OK       22-Nov-10 08:25 BackupCatalog
 40086  Diff     18,024    436.6 M  OK       22-Nov-10 12:11 nyi_maildir
 40087  Full          1    1.425 G  OK       22-Nov-10 12:39 nyi_maildir_tarball
 40088  Incr        151    3.167 M  OK       22-Nov-10 16:00 nyi_maildir
 40089  Incr        182    3.897 M  OK       22-Nov-10 20:00 nyi_maildir
 40090  Incr        292    5.557 M  OK       23-Nov-10 00:00 nyi_maildir

====
*

Two jobs, both waiting on the same SD (Storage Daemon). That is not obvious from the output, but I assure you: both DigitalTapeLibrary and MegaFile-wocker live on the same SD.
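
By "the same SD" I mean that both Storage resources in bacula-dir.conf point at the same storage daemon; only the Device and Media Type differ. Something along these lines, where the address, password, device and media-type values are placeholders rather than my real ones:

# Both Director Storage resources point at one storage daemon.
# Address, Password, Device and Media Type are placeholders.
Storage {
  Name = MegaFile-wocker
  Address = bacula-sd.example.org      # same SD host...
  SD Port = 9103
  Password = "notmyrealpassword"
  Device = MegaFileStorage             # ...but a disk device
  Media Type = MegaFile
}

Storage {
  Name = DigitalTapeLibrary
  Address = bacula-sd.example.org      # same SD host again
  SD Port = 9103
  Password = "notmyrealpassword"
  Device = TapeLibrary                 # ...and the tape library device
  Media Type = LTO
}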

If you read http://www.bacula.org/manuals/en/concepts/concepts/Migration_Copy.html, you'll see that concurrency is indeed the issue. I'll fix that another night. It's bedtime…
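
For the record, the usual knob is Maximum Concurrent Jobs: the Director, its Storage resources, and the SD all have to be willing to run the read side and the write side of the copy at the same time. Roughly the sort of change involved (the numbers and placement are illustrative; I have not actually applied this yet):

# bacula-dir.conf, inside the existing Director {} resource:
  Maximum Concurrent Jobs = 20

# bacula-dir.conf, inside each Storage {} resource involved in the copy:
  Maximum Concurrent Jobs = 4

# bacula-sd.conf, inside the Storage {} resource:
  Maximum Concurrent Jobs = 20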


2 thoughts on “Running a Bacula Copy Job”

  1. Hi, does this (migration) only work if you have two SDs running?
    Then, if the Devices are on the same SD (Tape and HardDisk, for example), is it not possible to migrate Jobs from one Device to another?

    Thanks.
    Javier

  2. You are correct. Migration (and Copy) work within a single SD.

    It is *not* possible to migrate/copy a job from one SD to another.

    EDIT: It is now possible with more recent versions of Bacula.
