FreeBSD with new xorg 1.12.X

A while ago I switched my FreeBSD desktop to the new xorg 1.12.X. In ports this requires adding WITH_NEW_XORG=yes to /etc/make.conf.
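
For reference, that is just a single line in /etc/make.conf:

WITH_NEW_XORG=yes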

I only found out today that for packages you need to add a file called /etc/pkg/FreeBSD-new-xorg.conf containing:

FreeBSD_new_xorg: {
  url: "pkg+http://pkg.FreeBSD.org/${ABI}/new_xorg",
  mirror_type: "srv",
  signature_type: "fingerprints",
  fingerprints: "/usr/share/keys/pkg",
  enabled: yes
}

Re-installing everything to see if it is more stable.
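
For the record, with the new repository enabled, the forced re-install went something like this (my understanding is that pkg upgrade -f re-installs every package from the configured repositories; check pkg-upgrade(8) to be sure):

sudo pkg update -f
sudo pkg upgrade -f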

Patch to APC PowerChute Network Shutdown for Ubuntu

The following patch fixes the install script for Ubuntu 13.04:

--- install.sh~orig 2013-09-25 16:07:54.772107691 +0000
+++ install.sh 2013-09-26 12:05:46.214115186 +0000
@@ -1,4 +1,4 @@
-#!/bin/sh
+#!/bin/bash
 ######################################################################
 #
 # PowerChute Network Shutdown v.3.0.1
@@ -22,6 +22,7 @@
 JRE_REQUIRED_MINI=0
 JRE_REQUIRED_MICRO=0
 LINUX="Linux"
+LINUX_UBUNTU="Linux_Ubuntu"
 SOLARIS="Solaris"
 HPUX="HP-UX"
 AIX="AIX"
@@ -288,6 +289,11 @@
 # We're a Linux derivative, decide which one.
 if [ -f /etc/vima-release ] || [ -f /etc/vma-release ]; then
 OS=$VIMA
+ elif [ -x /usr/bin/lsb_release ]; then
+ /usr/bin/lsb_release -i | grep Ubuntu > /dev/null
+ if [ $? -eq 0 ]; then
+ OS=$LINUX_UBUNTU
+ fi
 else
 grep XenServer /etc/redhat-release > /dev/null 2>&1
 if [ $? -eq 0 ]; then
@@ -368,6 +374,11 @@
 PCBE_STARTUP=/etc/rc.d/init.d/PBEAgent
 PCS_STARTUP=/etc/rc.d/init.d/pcs
 ;;
+ $LINUX_UBUNTU)
+ STARTUP=/etc/init.d/PowerChute
+ PCBE_STARTUP=/etc/init.d/PBEAgent
+ PCS_STARTUP=/etc/init.d/pcs
+ ;;
 $SOLARIS)
 STARTUP=/etc/rc2.d/S99PowerChute
 PCBE_STARTUP=/etc/rc2.d/S99PBEAgent
@@ -563,6 +574,9 @@
 /etc/rc.d/init.d/PowerChute stop
 fi
 ;;
+ $LINUX_UBUNTU)
+ /etc/init.d/PowerChute stop
+ ;;
 $SOLARIS)
 /etc/rc2.d/S99PowerChute stop
 ;;
@@ -977,6 +991,10 @@
 cp Linux/notifier.sh notifier
 cp Linux/shutdown.sh shutdown
 ;;
+ $LINUX_UBUNTU)
+ cp Linux/notifier.sh notifier
+ cp Linux/shutdown.sh shutdown
+ ;;
 $SOLARIS)
 cp Solaris/notifier.sh notifier
 cp Solaris/shutdown.sh shutdown
@@ -1057,6 +1075,14 @@
 rm -f /etc/rc.d/rc4.d/*99PowerChute
 rm -f /etc/rc.d/rc5.d/*99PowerChute
 rm -f /etc/rc.d/rc6.d/*99PowerChute
+ elif [ $OS = "$LINUX_UBUNTU" ] ; then
+ rm -f /etc/rc0.d/*99PowerChute
+ rm -f /etc/rc1.d/*99PowerChute
+ rm -f /etc/rc2.d/*99PowerChute
+ rm -f /etc/rc3.d/*99PowerChute
+ rm -f /etc/rc4.d/*99PowerChute
+ rm -f /etc/rc5.d/*99PowerChute
+ rm -f /etc/rc6.d/*99PowerChute
 fi
 fi
 }
@@ -1069,6 +1095,14 @@
 elif [ -f /sbin/chkconfig ]; then
 chkconfig --add PowerChute
 chkconfig PowerChute on
+ elif [ $OS = "$LINUX_UBUNTU" ] ; then
+ ln -s /etc/init.d/PowerChute /etc/rc0.d/K99PowerChute
+ ln -s /etc/init.d/PowerChute /etc/rc1.d/K99PowerChute
+ ln -s /etc/init.d/PowerChute /etc/rc2.d/S99PowerChute
+ ln -s /etc/init.d/PowerChute /etc/rc3.d/S99PowerChute
+ ln -s /etc/init.d/PowerChute /etc/rc4.d/S99PowerChute
+ ln -s /etc/init.d/PowerChute /etc/rc5.d/S99PowerChute
+ ln -s /etc/init.d/PowerChute /etc/rc6.d/K99PowerChute
 else
 ln -s /etc/rc.d/init.d/PowerChute /etc/rc.d/rc0.d/K99PowerChute
 ln -s /etc/rc.d/init.d/PowerChute /etc/rc.d/rc1.d/K99PowerChute
@@ -1120,6 +1154,14 @@
 if [ -f /sbin/chkconfig ]; then
 chkconfig PowerChute off
 chkconfig --del PowerChute
+ elif [ $OS = "$LINUX_UBUNTU" ] ; then
+ rm -f /etc/rc0.d/*99PowerChute
+ rm -f /etc/rc1.d/*99PowerChute
+ rm -f /etc/rc2.d/*99PowerChute
+ rm -f /etc/rc3.d/*99PowerChute
+ rm -f /etc/rc4.d/*99PowerChute
+ rm -f /etc/rc5.d/*99PowerChute
+ rm -f /etc/rc6.d/*99PowerChute
 else
 rm -f /etc/rc.d/rc1.d/K99PowerChute
 rm -f /etc/rc.d/rc2.d/S99PowerChute
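
To apply it, save the diff next to install.sh (the patch file name here is just my choice) and run something like:

cd /path/to/powerchute            # the directory containing install.sh; path is illustrative
patch install.sh < install-ubuntu.patch
sudo ./install.sh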

Juniper SRX mucking with DNS

I was getting some strange DNS answers on the servers in a trust zone on my SRX. All the servers are statically NAT’d to external IPs and run their own caching resolvers, but when I queried for a server’s A RR I kept getting the internal IP address back. No name server, internal or external, was serving that A RR. Eventually I realised that the SRX was rewriting the answer section of the DNS response. I don’t know if it is on by default or if I switched it on by mistake, but it was the DNS Application Layer Gateway (ALG) trying to help by making use of what it knew about the static NATs. Switching the DNS ALG off solved the issue. For a detailed description of what the ALG does, see the Junos OS Security Configuration Guide.
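
In case it helps anyone else, switching the ALG off amounts to something like the following in Junos configuration mode (verify against the documentation for your release):

set security alg dns disable
commit

Something like show security alg status should then confirm that the DNS ALG is disabled.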

FreeBSD svn and passwords

I have been getting the message about storing unencrypted svn passwords for ages on FreeBSD and Ubuntu. I finally thought I would look into how to store them encrypted.

-----------------------------------------------------------------------
ATTENTION! Your password for authentication realm:
<https://dev.sinodun.com:443> Sinodun Projects Subversion Repository
can only be stored to disk unencrypted! You are advised to configure
your system so that Subversion can store passwords encrypted, if
possible. See the documentation for details.
You can avoid future appearances of this warning by setting the value
of the 'store-plaintext-passwords' option to either 'yes' or 'no' in
'/home/jad/.subversion/servers'.
-----------------------------------------------------------------------

Lots of googling later, the best I could find was an article about gnome-keyring-daemon and subversion which mentions the need for libsvn_auth_gnome_keyring-1.so. A quick search found this was missing on my system. A moment of dumbness later, I remembered make config and found the port config option needed.

(Screenshot: the subversion port options shown by make config.)
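
For anyone following along, the rebuild went roughly like this; I am assuming the port lives at devel/subversion and am using plain make targets, although portupgrade would do just as well:

cd /usr/ports/devel/subversion
sudo make config                     # turn on the keyring-related option
sudo make deinstall reinstall clean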

Re-installing the port unfortunately broke svn with the following message:

svn: E175002: OPTIONS of 'https://dev.sinodun.com/svn/projects': SSL handshake failed: SSL disabled due to library version mismatch (https://dev.sinodun.com)

Thanks to this, it was quickly fixed by rebuilding neon:

sudo portupgrade -f neon*
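
If Subversion still refuses to use the keyring after that, it may also be worth checking that the keyring is listed as a password store in ~/.subversion/config; I did not need to change this myself, so treat it as a suggestion rather than part of the fix:

[auth]
password-stores = gnome-keyring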

Flush DNS cache on Ubuntu

This weekend I came across several pages on the web suggesting ways to flush the DNS cache on Ubuntu. I will not point to the offending pages here, but they are easy to find with Google. Most of them are wrong in several ways.

  1. Ubuntu doesn’t install a DNS cache by default
  2. Most of the pages say something along the lines of “to flush the DNS cache you need to restart the nscd daemon”.
  3. See 1
  4. As a result of 1, they tell you to install nscd so that you can then restart nscd!
  5. I redirect your attention to issue 1.

If you really want a local cache, then I would suggest installing unbound, which will work better and give you DNSSEC validation as well.
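
If you do go down that route, a rough sketch of the setup on a stock Ubuntu install (the flush commands need the unbound remote control interface enabled, which is the optional part in the middle):

sudo apt-get install unbound
# Optional: enable remote control so the cache can be flushed on demand.
sudo unbound-control-setup
# Then set "control-enable: yes" in the remote-control: section of
# /etc/unbound/unbound.conf and restart unbound.
sudo service unbound restart
sudo unbound-control flush www.example.com     # flush a single name
sudo unbound-control flush_zone example.com    # or everything under a zone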

Ubuntu, efi booting and Nvidia

I have just set up an Ubuntu 12.10 workstation on a new motherboard (ASUS P8Z77 WS), which has efi instead of the traditional BIOS, and installed an NVidia Quadro 600 graphics card.

You can ignore the efi and install your OS as standard if you want. The efi provides BIOS compatibility. However, that would be too easy!

efi booting

Getting the Ubuntu DVD to install in efi boot mode is easy. With the DVD in the drive, the efi sees both an efi and a regular version of the DVD. Just set the boot order to start with the efi version of the DVD and you get a text display with several options, including one to install Ubuntu. At the end of the install, Grub2 is installed and configured to support efi booting. You may need to update the efi firmware settings to get the newly installed Ubuntu to boot.
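
If the firmware does not pick up the new install on its own, the boot entries can also be inspected and re-ordered from the installed system with efibootmgr (the entry numbers below are invented):

sudo efibootmgr -v                # list the firmware boot entries
sudo efibootmgr -o 0003,0000      # put the ubuntu entry (here 0003) first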

Nvidia drivers

There is some mention on the web that the nvidia drivers may not work with efi booting. If you rely on Ubuntu to do the correct thing when you apt-get install nvidia-current, then you might think that is correct. However, after a bit of investigation, they work fine for me. The major problem is that the Nouveau display driver gets control of the card before the nvidia one does. You can see which driver is loaded by using:

lspci -v

You will get output like this (this example is from after fixing things; by default the “Kernel driver in use” line shows nouveau):

04:00.0 VGA compatible controller: NVIDIA Corporation GF108GL [Quadro 600] (rev a1) (prog-if 00 [VGA controller])
 Subsystem: NVIDIA Corporation Device 0835
 Flags: bus master, fast devsel, latency 0, IRQ 16
 Memory at f6000000 (32-bit, non-prefetchable) [size=16M]
 Memory at e8000000 (64-bit, prefetchable) [size=128M]
 Memory at f0000000 (64-bit, prefetchable) [size=32M]
 I/O ports at e000 [size=128]
 [virtual] Expansion ROM at f7000000 [disabled] [size=512K]
 Capabilities: <access denied>
 Kernel driver in use: nvidia
 Kernel modules: nvidia, nouveau, nvidiafb

You can remove the offending Xorg driver like this:

sudo apt-get --purge remove xserver-xorg-video-nouveau

However, that doesn’t actually remove the nouveau kernel module. A quick fix is to:

sudo rm -rf /lib/modules/3.5.0-18-generic/kernel/drivers/gpu/drm/nouveau

and then rebuild the initramfs:

sudo update-initramfs -u

This final command is important as it removes the offending driver from the boot ram disk.
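
A less drastic alternative, which I did not use above but which is worth knowing about, is to blacklist nouveau with modprobe instead of deleting the module (the file name is just a convention):

# /etc/modprobe.d/blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0

followed by the same sudo update-initramfs -u.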

Library Versioning

I have been discovering all about library versioning. What I found came as a surprise to me and others, especially those from a sysadmin background.

Basically, it all comes down to the fact that the numbers on the end of a library name may not be what you expect, and it is even more complex on OS X.

Libtool on Linux

For example consider the following libraries on a recent CentOS 6.2 install:

  • /lib64/libparted-2.1.so.0.0.0
  • /lib64/libproc-3.2.8.so
  • /lib64/libgcrypt.so.11.5.3

These show the three different ways in which a library may be versioned. Chapter 7 of the libtool manual discusses this subject. Libtool library versions are purely concerned with the interface (API) that the library supports; they determine what the linker will allow to be linked against.

libtool library versions are described by three integers:

  • current
    • The most recent interface number that this library implements.
  • revision
    • The implementation number of the current interface.
  • age
    • The difference between the newest and oldest interfaces that this library implements. In other words, the library implements all the interface numbers in the range from number current − age to current.

If two libraries have identical current and age numbers, then the dynamic linker chooses the library with the greater revision number.

This is (or should be) what you see for libgcrypt.so.11.5.3. Running gpg --version shows that the version of libgcrypt is in fact:

# gpg --version
gpg (GnuPG) 2.0.14
libgcrypt 1.4.5

The version of libgcrypt is 1.4.5, but its library interface version is 11.5.3.

In order to set the libtool library version you pass the -version-info current:revision:age flag using libmylib_la_LDFLAGS in your Makefile.am (see the automake manual). Note that the three fields are not copied straight into the file name: on Linux the library is named lib<name>.so.(current-age).(age).(revision), so libgcrypt.so.11.5.3 corresponds to -version-info 16:3:5 rather than 11:5:3.

By contrast, /lib64/libproc-3.2.8.so has no numbers after the .so; they all appear before it. This reflects the fact that the developers, for whatever reason, decided to use the version number of the application in the library name. Running free gives:

# free -V
procps version 3.2.8

This is achieved by using the -release flag in place of -version-info in Makefile.am.

libparted-2.1.so.0.0.0 shows both. Running parted gives

# parted -v
parted (GNU parted) 2.1
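
For reference, here is roughly how the three naming styles above are produced in a Makefile.am; the library names are made up for illustration and the *_la_SOURCES lines are omitted:

lib_LTLIBRARIES = libiface.la libapp.la libboth.la
# Interface versioning only (current:revision:age); on Linux, 16:3:5
# produces libiface.so.11.5.3
libiface_la_LDFLAGS = -version-info 16:3:5
# Release versioning only; produces libapp-3.2.8.so
libapp_la_LDFLAGS = -release 3.2.8
# Both combined; produces libboth-2.1.so.0.0.0
libboth_la_LDFLAGS = -release 2.1 -version-info 0:0:0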

OS X

According to Fink, the situation on OS X is slightly different. There is a “strict distinction between shared libraries and dynamically loadable modules”. On Linux, any “shared code can be used as a library and for dynamic loading”. On OS X there are two types of library: MH_DYLIB (a shared library) and MH_BUNDLE (a module). MH_DYLIB libraries carry the extension .dylib and can be linked against with the familiar -lmylib at compile/link time, but they cannot be loaded as modules using dlopen(). MH_BUNDLE libraries, on the other hand, can be dlopened. Apple suggests naming these .bundle; however, much ported software uses .so instead. It is not directly possible to link against bundles as if they were shared libraries.
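
To make the distinction concrete, this is roughly how the two kinds are built with the standard compiler driver on OS X (file names invented for the example):

cc -dynamiclib -o libmylib.dylib mylib.c      # MH_DYLIB: link against it with -lmylib
cc -bundle -o mymodule.bundle mymodule.c      # MH_BUNDLE: load it at run time with dlopen()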

Selecting an interval from an integer column in PostgreSQL

I needed to create a SELECT that combined a timestamp in one column with an interval specified in minutes in another column. After much manual reading and searching, I found the answer on stackoverflow. My SQL looked something like:

SELECT d.time AT TIME ZONE 'UTC' + interval '1 minute' * d.minute AS time FROM my_data d ...

Much faster than using Perl’s DateTime::Format::Pg and DateTime::Duration modules to do the addition.
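
A self-contained way to try the same trick (table and column names are invented):

SELECT ts AT TIME ZONE 'UTC' + interval '1 minute' * offset_minutes AS when_utc
FROM (VALUES (timestamptz '2013-01-01 12:00+00', 90)) AS t (ts, offset_minutes);
-- returns 2013-01-01 13:30:00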

Upgrading the Subversion plugin on Jenkins

Since upgrading our subversion server to 1.7, I have tried several times to upgrade the subversion plugin on Jenkins (v1.457) from 1.34 to 1.39. Using the plugin manager, it downloads the plugin and asks you to restart. However, when you restart, the jenkins.war expands and overwrites the newly downloaded plugin with the 1.34 version that, I guess, is shipped in the war.

The solution is easy. Use the plugin manager to download the plugin, then change the ownership of the subversion.jpi file so that the jenkins user cannot overwrite it, and restart. (This will make future upgrades more difficult, as you will have to revert the ownership first.)
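
In shell terms the workaround is something like this, assuming the default plugin directory /var/lib/jenkins/plugins and a Jenkins service running as the jenkins user:

cd /var/lib/jenkins/plugins
sudo chown root:root subversion.jpi    # the jenkins user can no longer overwrite it
sudo service jenkins restart

To upgrade the plugin again later, chown it back to jenkins first.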