avoiding duplicate cronjobs

Sometimes you want to run a cronjob frequently, but you don't want a new instance to start before the previous one has finished.

There is an easy tool for this in the FreeBSD base system called lockf(1). It is similar to the flock(1) tool found on Linux.

Here is an example I am using:

04  *  *  *  *  /usr/bin/lockf -t 0 /tmp/.rsyncer.rsync.papers ${HOME}/bin/rsync-backup-from-papers.sh

Based on the man page, the lockf utility acquires an exclusive lock on /tmp/.rsyncer.rsync.papers, creating the file if necessary and removing it on exit unless explicitly told not to. While holding the lock, it executes the given command with optional arguments. The -t 0 part matters here: a timeout of zero means lockf will not wait at all, so an overlapping cron run simply exits instead of queueing up behind the previous one.
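You can see the fail-fast behavior for yourself. FreeBSD's lockf -t 0 refuses to wait for a held lock; on Linux, flock -n behaves the same way, so this sketch uses flock. The lock path is arbitrary.

```shell
flock /tmp/.demo.lock sleep 3 &     # first "cron run" holds the lock for a bit
sleep 1
# second "cron run" gives up immediately instead of queueing:
msg=$(flock -n /tmp/.demo.lock true 2>/dev/null || echo "lock held, skipping")
echo "$msg"
wait
```

The second invocation exits right away with "lock held, skipping", which is exactly what you want from an overlapping cron run.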


I have used this at $WORK before but I could not recall what it was called. I went searching in my blog and could not find it. Queries on IRC led me along the right path.

lockf can be used for much more than cronjobs. Imagine.
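For instance, you can guard any script this way, not just a cron entry, so that manual runs and scheduled runs never overlap. Here is a sketch using flock's file-descriptor mode as the Linux analogue of lockf; the lock path and messages are illustrative, not from the crontab above.

```shell
#!/bin/sh
# Serialize all runs of this script against each other.
LOCK=/tmp/.myjob.lock
exec 9>"$LOCK"          # open (and create) the lock file on fd 9
if flock -n 9; then     # non-blocking: fail fast if another run holds it
    status="got lock, doing work"
else
    status="already running, exiting"
fi
echo "$status"
```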


2 thoughts on “avoiding duplicate cronjobs”

  1. I have always solved this problem with daemontools. Create a service with a down flag so it doesn’t start automatically; then in cron you can run it once like this…

    */5 * * * * root /usr/local/bin/svc -o /var/service/send_alerts

    The svc flag -o (once) differs from -u (up) in that it will start the service, but when it exits, it will not be restarted. If the service is already running, it won’t start again. I can also start jobs very easily from the command line with svc -o. As a bonus, I get logs.
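The service directory the commenter describes might look roughly like this. A real daemontools install would live under /var/service and be picked up by svscan; the /tmp/service path and send_alerts.sh name below are illustrative.

```shell
# Create a daemontools-style service directory with a "down" file so
# supervise would not auto-start it; only svc -o (or -u) would run it.
SVC=/tmp/service/send_alerts
mkdir -p "$SVC"
printf '#!/bin/sh\nexec /usr/local/bin/send_alerts.sh 2>&1\n' > "$SVC/run"
chmod +x "$SVC/run"
touch "$SVC/down"       # the "down" file keeps supervise from auto-starting it
```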
