Author name: Dan Langille

sudo: effective uid is not 0, is sudo installed setuid root?

Tonight I created a new jail, and later installed sudo into that jail. Then I tried to run sudo:

$ sudo make install clean
sudo: effective uid is not 0, is sudo installed setuid root?

What? It is setuid:

$ ls -l `which sudo`
-rwsr-xr-x 1 root wheel 117112 Jul 26 17:08 /usr/local/bin/sudo

I had no idea. wxs had the answer: mount points. Here is the problem, as revealed by this command in […]
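For anyone hitting the same message: a filesystem mounted with the nosuid option produces exactly this symptom, because the kernel ignores the setuid bit even though ls shows it. A quick way to look for such mounts (the mount point names below are only an example, not the output from this jail):

$ mount | grep nosuid
/storage/jails/testjail on /jail/testjail (nullfs, local, nosuid)

If the partition holding /usr/local/bin is mounted nosuid, sudo cannot become root no matter what its permissions say.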

sudo: effective uid is not 0, is sudo installed setuid root? Read More »

nagios check_smartmon fails with SATA presented as SCSI devices

I’ve been using Nagios for a while. I use it to monitor many things, ranging from disk space to disk temperature. One of the packages I use for this is net-mgmt/nagios-check_smartmon. This code seems to be getting out of date. According to the timestamp at the top of the file, the last time it was updated was 2006-03-24 10:30:20. So it’s not surprising that it’s failing to work properly in a few cases.
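The underlying issue is likely that smartctl needs to be told the device type when a SATA drive sits behind a SCSI/SAS controller and shows up as a da device. A hedged sketch of the kind of call a fixed check would have to make (the device name is just an example):

$ smartctl -a -d sat /dev/da0

The -d sat option makes smartctl talk to the drive through the SCSI-to-ATA Translation layer instead of treating it as a plain SCSI disk.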

nagios check_smartmon fails with SATA presented as SCSI devices Read More »

Moving zfs datasets / filesystems around

After the zfs send, I had system/foo/bar, but I really just wanted system/bar. How do I fix that? mv(1) was my first guess. I was wrong. What I wanted was zfs(8) rename. Let me explain. After doing my:

zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

I, quite correctly, wound up with system/usr/local/bacula/FromKraken, not the system/usr/local/bacula which I wanted. Here’s how I fixed that, but first, I took snapshots,
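In generic terms, using the placeholder names from the opening sentence, the fix looks like this (a sketch, not the exact commands from the post):

# zfs snapshot system/foo/bar@before-rename
# zfs rename system/foo/bar system/bar

zfs rename moves the dataset, along with its snapshots, to the new location within the same pool; mv(1) would only shuffle files around inside the mounted filesystem.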

Moving zfs datasets / filesystems around Read More »

zfs send | zfs receive; finally finished

Today it finished. That ZFS send I started on Monday finally finished.

Recap

The original command was:

# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

That was the initial send of the filesystem. But more data has been added to it since. So I take another snapshot, and send that:

Send what’s accumulated since previous send

# zfs snapshot storage/compressed/bacula@2013.07.25:08:20

And now I send the incremental stream:

# zfs
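The incremental stream is sent with zfs send -i, naming the older snapshot first (a sketch based on the two snapshots above; the exact options in the original run may differ):

# zfs send -i storage/compressed/bacula@2013.07.22:06:33 storage/compressed/bacula@2013.07.25:08:20 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

Only the blocks changed between the two snapshots cross the wire, so this run is far smaller than the initial send.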

zfs send | zfs receive; finally finished Read More »

newsyslog: can’t notify daemon, pid 2090: No such process

I keep getting this message in my email, daily, at midnight UTC:

newsyslog: can’t notify daemon, pid 2090: No such process

This is generated by this entry:

$ grep http /etc/newsyslog.conf
/var/log/httpd-*.log 640 28 * $D5 BG /var/run/httpd.pid 30

This rolls over the log files for me. The message has been the same every day: pid 2090. However, there is no process with that ID:

$ ps auwx | grep 2090
$
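newsyslog takes the PID to signal from the pidfile named in that entry, /var/run/httpd.pid, so a stale pidfile would explain a PID that no longer exists. Two quick checks (a sketch; adjust the paths to taste):

$ cat /var/run/httpd.pid
$ ps -p 2090

If the pidfile still says 2090 but ps shows nothing, the web server was restarted (or is writing its PID elsewhere) and the pidfile was never updated.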

newsyslog: can’t notify daemon, pid 2090: No such process Read More »

zfs send taking a while…

That ZFS send from yesterday is taking more than 24 hours. Current status:

in @ 53.7 MB/s, out @ 53.7 MB/s, 3892 GB total, buffer 100% full

That’s about 4TB out of nearly 13TB:

$ zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
storage 12.7T 12.0T 739G 94% 1.00x ONLINE -

Based on that progress, my prediction for completion is Thursday morning. On a related note, my pool needs to be upgraded:
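Back-of-the-envelope: 12.0T allocated minus the roughly 3.8T already sent leaves on the order of 8T to go, and 8T at 53.7 MB/s works out to about 45 hours, which is consistent with a Thursday-morning finish. As for the pool upgrade, the usual sequence is (a sketch; storage is the pool shown above):

# zpool upgrade -v
# zpool upgrade storage

The first command lists what the installed ZFS version supports; the second actually upgrades the pool, after which older systems may no longer be able to import it.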

zfs send taking a while… Read More »

zfs send from old file server

Earlier today, I started a zfs send | zfs receive between the old file server and the new file server. It won’t finish until early morning tomorrow.

[root@kraken ~]# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'
Pseudo-terminal will not be allocated because stdin is not a terminal.
in @ 54.9 MB/s, out @ 54.9 MB/s, 47.6 GB total, buffer 100% full

Why mbuffer? Because it shows me some
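Without mbuffer in the pipeline there is no progress indication at all; mbuffer buffers the stream between zfs send and ssh and prints the throughput lines shown above. A sketch of the same pipeline with explicit buffer settings (the sizes here are examples, not values from this run):

# zfs send storage/compressed/bacula@2013.07.22:06:33 | mbuffer -s 128k -m 1G | ssh -t dan@10.5.0.20 'zfs receive system/usr/local/bacula/FromKraken'

-s sets the block size mbuffer reads and writes, and -m sets the total amount of memory it uses for buffering.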

zfs send from old file server Read More »

raidz2: 10 disks – half and half

Following up on last week’s comparison of 10 disks vs 8 disks on raidz2, it was suggested by Marie Helene that I try more HDDs on the m/b. I’m here to report just that. I’ve put five HDDs on the SUPERMICRO MBD-X8SIL-O Micro ATX Server m/b and five on the LSI SATA/SAS 9211-8i 6Gb/s. Here are the pg_restore times:

18m28.511s
18m32.359s
18m30.077s

An average of 18m 30s. This is the slowest of all
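Those three runs do average out to roughly 18m30.3s. For reference, a timed restore of this sort might look like the following (the database and dump file names are placeholders, not the actual test setup):

$ createdb bench
$ time pg_restore -d bench bacula.dump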

raidz2: 10 disks – half and half Read More »

raidz2: 10 disks vs 8 disks

Yesterday, I mentioned I wanted to compare raidz2 when using 10 disks versus using 8 disks. Along the way, I noticed that my ZFS-setup script needed a change to cope with mixed devices. Once I got past that hurdle, I started adding packages and setting up the system for some very simple tests.

The test

My test of preference is loading a database, and then dumping it. Each pg_restore was done into a
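For context, the two layouts being compared would be created along these lines (a sketch with made-up device names, not the script from the post):

# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7
# zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9

The first gives an 8-disk raidz2 (6 disks of data, 2 of parity); the second a 10-disk raidz2 (8 data, 2 parity).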

raidz2: 10 disks vs 8 disks Read More »

Problem with disk numbering in my ZFS creation script

The last system I set up with ZFS, I used a script. Today, when trying a system with different device types, I encountered a limitation of that script. I need to fix it. In the above script, NUM takes on the value of whatever digits appear in the string $I. Given that DISKS contains both ada0 and da0, two disks will get the same value for $NUM. That’s a problem. I need
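A minimal sketch of the bug and one possible fix, reconstructed rather than taken from the actual script: extracting only the digits from the device name collides once ada and da devices are mixed, so number the disks by position instead.

#!/bin/sh
DISKS="ada0 da0 da1"
# buggy: strip non-digits from the device name; ada0 and da0 both become 0
for I in $DISKS; do
  NUM=$(echo $I | tr -cd '0-9')
  echo "$I -> $NUM"
done
# one fix: assign NUM from a counter instead of the device name
NUM=0
for I in $DISKS; do
  echo "$I -> disk$NUM"
  NUM=$((NUM + 1))
done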

Problem with disk numbering in my ZFS creation script Read More »
