yggdrasil 300::/7 network with openwrt

What is yggdrasil

Recently yggdrasil caught my interest, after playing around with wireguard and then tailscale.

I was in the right mood when I read a blog post by John Goerzen about yggdrasil.

In short: yggdrasil is an overlay network. You generate a private key, and from it you get your own stable and persistent ipv6 address (which is available within the network you join or create).
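
A few commands get you going (a sketch; these are the flags the yggdrasil binary documents, check yggdrasil -help on your version):

yggdrasil -genconf > /etc/yggdrasil.conf              # generate a config with a fresh private key
yggdrasil -useconffile /etc/yggdrasil.conf -address   # print the single 200::/7 address
yggdrasil -useconffile /etc/yggdrasil.conf -subnet    # print the routed 300::/7 ::/64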

yggdrasil networks

There are two kinds of addresses you can get. One is a single ipv6 address from 200::/7. The other is an optional network associated with it from 300::/7: instead of a single address it is a whole ::/64, derived from the same key (it is the 200::/7 address with its first byte changed from 0x02 to 0x03).

If you have printers or other devices on that network that you cannot or do not want to run the yggdrasil daemon on, you can advertise this network with radvd:

interface eth0
{
     AdvSendAdvert on;
     AdvDefaultLifetime 0;
     prefix 300:1111:2222:3333::/64 {
         AdvOnLink on;
         AdvAutonomous on;
     };
     route 200::/7 {};
};

However, openwrt deprecated radvd (in favor of odhcpd) some time ago, which means it is no longer available for install. How to do this on openwrt then? Let's find out!

openwrt

Recent versions of openwrt include yggdrasil, so installation is not an issue. There is a lua module as well, so it can be poked from the luci webui.
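
Installation is just (a sketch; the luci package name is from memory, check what your release actually ships):

opkg update
opkg install yggdrasil luci-proto-yggdrasil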

A new interface shows up with the name yggdrasil. As I accept no external peers other than the ones I control, I put it into the lan zone in /etc/config/firewall:

config zone
    option name 'lan'
    ...
    list network 'lan'
    list network 'yggdrasil'

Then I created ygglan as an alias interface in /etc/config/network:

config interface 'ygglan'
    option proto 'static'
    list ip6addr '300:1111:2222:3333::1/64'
    option ip6prefix '300:1111:2222:3333::/64'
    option device 'br-lan'

No, this is not the same as in the radvd example, as I was unable to figure out how to announce a custom route. Instead I take advantage of the fact that openwrt already advertises a default route for ipv6. (If there is an issue with ipv6 and openwrt stops advertising a default route, then yggdrasil breaks as well as a result.)
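
To verify from a LAN client (using the example prefix from above and an assumed eth0 interface):

ip -6 addr show dev eth0 | grep 'inet6 300:'   # SLAAC picked up the prefix
ping6 -c 3 300:1111:2222:3333::1               # the address we assigned to br-lan answers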

Posted
practical dynamic network setup for laptops

I have really wanted to describe the network setup I currently use on my laptop, so here is this post.

My objectives: keep the configuration simple and easy to use.

What I have

My laptop automatically connects to wifi (for configured SSIDs) and obtains an IP address. If I connect the ethernet cable it gets an IP there as well, and the same goes for USB (tethering). Each interface gets a default route, inserted with a different metric; the wired connection wins. If the preferred interface gets disconnected (e.g. cable pulled out), traffic automatically moves to the other available interface.

# ip route show

default via 192.168.44.129 dev usb0  metric 267 
default via 141.70.74.1 dev wlan0  metric 303 
141.70.74.0/21 dev wlan0  proto kernel  scope link  src 141.70.74.145  metric 303 
141.70.74.145 via 127.0.0.1 dev lo  metric 303 
192.168.44.0/24 dev usb0  proto kernel  scope link  src 192.168.44.18  metric 267 
192.168.44.18 via 127.0.0.1 dev lo  metric 267 

Components

  • dhcpcd5
  • wpa_supplicant and /etc/network/interfaces

Setup

dhcpcd5

This is the dhcp client daemon that tries to get IP addresses on interfaces whose operational state is up. A link comes up usually after cable insertion for wired ethernet, or after successful AP association for wireless.

This tool also takes care of adding the default routes with the mentioned metrics attached.
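
dhcpcd chooses per-interface metrics on its own, but they can also be pinned in /etc/dhcpcd.conf (a sketch; the interface names and values here are assumptions, not my actual config):

# prefer wired over wireless by giving it a lower metric
interface eth0
metric 100

interface wlan0
metric 300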

To get rid of the legacy isc dhclient I did this (there may be cleaner ways):

dpkg-divert --divert /sbin/dhclient.orig --rename /sbin/dhclient
ln -s /bin/true /sbin/dhclient

I created a systemd unit file (the default init script is also fine): /etc/systemd/system/dhcpcd.service

[Unit]
Description=dhcpcd5 - IPv4 DHCP client with IPv4LL support
Before=runlevel2.target runlevel3.target runlevel4.target runlevel5.target shutdown.target
After=local-fs.target
Conflicts=shutdown.target

[Service]
ExecStart=/sbin/dhcpcd5 -B -L -d

[Install]
WantedBy=multi-user.target

wpa_supplicant and /etc/network/interfaces

iface wlan0 inet dhcp
  wpa-conf /etc/network/wpa_supplicant.conf

wpa_supplicant.conf looks like this for me:

ctrl_interface=/var/run/wpa_supplicant
update_config=1

network={
    ssid="eduroam"
    scan_ssid=1
    key_mgmt=WPA-EAP
    group=CCMP TKIP
    eap=PEAP
    identity="someuser@login.ppke.hu"
    password="arealpasswordgoeshere"
    ca_path="/etc/ssl/certs"
    subject_match="/C=HU/ST=Budapest/L=Budapest/O=Pazmany Peter Katolikus Egyetem/CN=tutela.itk.ppke.hu"
    phase1="peapver=0"
    phase2="MSCHAPV2"
}

network={
    ssid="Butterfly13"
    scan_ssid=1
    key_mgmt=NONE
    priority=4
    disabled=1
}

network={
    ssid="homewifi"
    scan_ssid=1
    psk="thepskgoeshere"
    key_mgmt=WPA-PSK
    priority=4
}

I use wpa_cli to control the wifi configuration or to make ad-hoc changes (e.g. for debconf). It is also easy to select which networks to prefer, or just to disable all except one.
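
For example (network ids are the ones list_networks prints; save_config works because update_config=1 is set above):

wpa_cli list_networks        # show configured networks and their ids
wpa_cli select_network 2     # use network 2 only, disabling the others
wpa_cli enable_network 0     # re-enable another network
wpa_cli save_config          # write the changes back to wpa_supplicant.conf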

The interfaces need to be up:

ifup eth0
ifup wlan0
Posted
rsnapshot with zfs snapshotting

First of all I feel bad, because I promised John Goerzen that I would share my experiences, but as with other things I am lagging behind.
Still better late than never.

A few words on rsnapshot

rsnapshot is a great tool for making efficient backups. It basically creates directories like

daily.0
daily.1
daily.2
daily.3
...
daily.6

The directory contents are hard linked to each other; a file only takes up new space when its content has changed.

The way this is achieved is by cp -al daily.0 daily.1, then rsync-ing the fresh content onto the pre-populated directory. Periodically rm -rf daily.N (where N is the number of days you want to keep) is run to remove old content. (For simplicity I do not talk about the weekly and monthly features of rsnapshot.) Roughly, one daily run boils down to the sketch below.
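
A simplified sketch with N=7 (the /data source path is made up; real rsnapshot also handles locking, options and errors):

rm -rf daily.6                       # drop the oldest copy
for i in 5 4 3 2 1; do               # shift the remaining copies
  [ -d daily.$i ] && mv daily.$i daily.$((i+1))
done
cp -al daily.0 daily.1               # hard-link yesterday's tree
rsync -a --delete /data/ daily.0/    # only changed files take new space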

There is also a rollback mechanism in place: if a backup fails, the last consistent state is restored as the base for the next run.

The problem is that cp -al, rm -rf and the rollback mechanism itself are expensive operations and do not use the possibilities of ZFS.

ZFS snapshots

ZFS has a convenient way to create snapshots of datasets, which are consistent point-in-time states of the filesystem. Snapshots are read-only, mountable and almost free with zfs. As ZFS is a copy-on-write filesystem, modified blocks are written to new locations, and a snapshot simply keeps the original blocks around. Only the changed content takes space.
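
To get a feel for it (pool/data is a made-up dataset name; the .zfs directory is reachable even when hidden, see the snapdir property):

zfs snapshot pool/data@monday        # create a snapshot, nearly instant
zfs list -t snapshot                 # list snapshots
ls /pool/data/.zfs/snapshot/monday/  # browse its read-only contents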

ZFS and rsnapshot

I chose an easy and convenient way to integrate ZFS snapshotting into rsnapshot; some may call it a hack.

rsnapshot has a cmd_cp configuration parameter, and I wrote a script that does the job for me:

#!/usr/local/bin/bash

# called by rsnapshot in place of "cp -al $1 $2";
# $2 is the destination directory inside the dataset

# name of the dataset: the destination path without
# the leading / and the trailing directory component
zname=$(dirname "$(echo "$2" | cut -b 2-)")

# name of the future snapshot
bsname="$zname@rsnap-$(date +%F)"

sname=$bsname
for i in 1 2 3 4 5
do
  # exit on success
  if zfs snapshot "$sname"; then
    echo "backup started at $(date)" > "$2/info"
    exit 0
  fi
  # if the name is already taken, retry with a -vN suffix
  sname=$bsname-v$i
done

echo "ERROR zfs_cp: something is really broken" >&2
exit 1
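
Hooking the script up is a single line in rsnapshot.conf (an excerpt sketch; the install path of the script is an assumption, and rsnapshot insists on tabs between the fields):

cmd_cp	/usr/local/bin/zfs_cp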

So every rsnapshot directory is a separate dataset, and when rsnapshot starts, the tedious work of running cp is eliminated. I will always have only daily.0 there; there never will be a daily.1, as my script never creates it. This way the rm -rf and the rollback parts are also eliminated, as there is nothing to delete and nothing to roll back to. If a backup does not complete for one reason or another, the next one is supposed to correct it.

One thing is missing: expiring and removing old snapshots. Keeping the 7 most recent daily backups, then 4 weekly and 6 monthly ones, would be fine for me. There are tools available to do that; I need to look at them. A minimal sketch of the daily part follows.
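
A minimal pruning sketch (assuming the dataset name from the listing below, and that the rsnap-* snapshots are the only ones to expire):

#!/bin/sh
# keep the 7 newest rsnap-* snapshots of the dataset, destroy the rest
DATASET=backup14q1/rsnapshot_1
zfs list -H -t snapshot -o name -S creation |
  grep "^$DATASET@rsnap-" |
  sed -e '1,7d' |
  while read snap; do
    zfs destroy "$snap"
  done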

A zfs list -t all snippet:

NAME                                                  USED  AVAIL  REFER  MOUNTPOINT
...
backup14q1/rsnapshot_1                               1.85T  15.6T  1.45T  /backup14q1/rsnapshot_1
backup14q1/rsnapshot_1@rsnap-2014-01-08              5.38G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-09              3.82G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-10              3.82G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-11              4.06G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-12              3.86G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-13              5.72G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-14              3.89G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-15              3.90G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-15-v1           3.66G      -  1.44T  -
backup14q1/rsnapshot_1@rsnap-2014-01-17              3.89G      -  1.45T  -
...
backup14q1/rsnapshot_1@rsnap-2014-02-12              4.21G      -  1.45T  -
backup14q1/rsnapshot_1@rsnap-2014-02-13              4.19G      -  1.45T  -

Summary

Some features of rsnapshot like:

  • weekly, monthly snapshots
  • rollback in case of failures

are lost in the process, but overall I am satisfied with the end result. Further improvements are possible.

Posted
pvmove makes your XFS filesystem corrupt (or unavailable at least)

Yes, this is sad, but true.

It just happened to me as I migrated between two 2TB PVs: one on an old and slow FC storage, the other on a fast new one.

Steps (as usual):

# sdl1 is the new PV (let's create it)
pvcreate /dev/sdl1 

# extend the VG with this new PV
vgextend somevg /dev/sdl1

# now move off from the old one
pvmove /dev/sdd1 # with -n one can tell which LV should be moved

# the old PV can be removed with
vgreduce somevg /dev/sdd1

# and this is a useful command to find which LV is on which PV
# lvs --segments -o +pe_ranges

And that is it! At least this should work, and it has worked for me before, but not this time.

Instead what I got is:

I/O error in filesystem ("dm-1") meta-data dev dm-1 block 0xe0e280       ("xfs_trans_read_buf") error 11 buf count 4096
xfs_force_shutdown(dm-2,0x1) called from line 395 of file /build/buildd/linux-lts-backport-natty-2.6.38/fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffffa0111943
xfs_imap_to_bp: xfs_trans_read_buf()returned an error 5 on dm-2.  Returning error.
Filesystem dm-2: I/O Error Detected.  Shutting down filesystem: dm-2
Please umount the filesystem, and rectify the problem(s)
Filesystem dm-2: xfs_log_force: error 5 returned.
xfs_force_shutdown(dm-2,0x1) called from line 1111 of file /build/buildd/linux-lts-backport-natty-2.6.38/fs/xfs/linux-2.6/xfs_buf.c.  Return address = 0xffffffffa011cc03

(I suppressed the repeated messages.)

I had to shut down the application (which was a pain for the users) and let pvmove continue its work. Here is the explanation:

[dm-devel] [BUG] pvmove corrupting XFS filesystems (was Re: [BUG] Internal error xfs_dir2_data_reada_verify)

Thankfully, after it finished and I rebooted the system, it came up OK. 7 hours of downtime. At least I had started it late in the day.

Posted
Zimbra preauth and Shibboleth

How Shibboleth is used to authenticate PPKE users

Zimbra has a mechanism called preauth which makes it possible to interface with other authentication systems.

The big picture

A Shibboleth SP with apache is needed. It authenticates the user with shibboleth and relays the identity to Zimbra.

In our case the shibboleth attribute eduPersonPrincipalName is the same as the email address in Zimbra. The identity provider releases this attribute to the service provider, where a Perl CGI script accesses it as an environment variable. After a valid shibboleth session is established with the SP, the user is redirected to a Zimbra URL whose GET variables encode a signed token which identifies the user to Zimbra. This token is validated and user access is granted if it is found valid.

Shibboleth SP setup

From the apache side this is the main part of the configuration, needed for the directory containing our custom cgi script (shibboleth2.xml is needed as usual):

<Directory /where-your-cgi-is>
  AuthType shibboleth
  require shibboleth
  ShibRequireSession On
</Directory>

The cgi script

First the shared secret must be obtained with zmprov:

prov> gdpak example.org

This value is required for the script.

#!/usr/bin/perl

use strict;
use warnings;

use Digest::HMAC_SHA1 qw(hmac_sha1 hmac_sha1_hex);
use Data::Dump qw(pp);
use CGI;

my $cgi = CGI->new;

my %k;

#prov> gdpak example.org
#preAuthKey: aaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbccccccccccccccccc

$k{"example.org"}="aaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbccccccccccccccccc";

my $zmda = $ENV{"eduPersonPrincipalName"} || "";
my $urlbase = "mail.example.org";

# the domain part of the principal name selects the preauth key
my $domain = (split /\@/, $zmda)[1] || "";

unless($k{$domain}) {
  print $cgi->header(-status=>'503 Service Unavailable');
  print "unknown domain, are you authorized for zimbra?";
  die;
}

my $redir = buildZimbraPreAuthUrl($k{$domain},
            "https://$urlbase/service/preauth",
            $zmda,
            "name");

print $cgi->redirect( -uri => $redir ); 

## @method private string buildZimbraPreAuthUrl(string key, string url, string account, string by)
# Build Zimbra PreAuth URL
# @param key PreAuthKey
# @param url URL
# @param account User account
# @param by Account type
# @return Zimbra PreAuth URL
sub buildZimbraPreAuthUrl {
  my ( $key, $url, $account, $by ) = splice @_;

  # Expiration time
  my $expires = 0;

  # Timestamp
  my $timestamp = time() * 1000;

  # Compute preauth value
  my $computed_value =
    hmac_sha1_hex( "$account|$by|$expires|$timestamp", $key );

  # Build PreAuth URL
  my $zimbra_url;
  $zimbra_url .= $url;
  $zimbra_url .= '?account=' . $account;
  $zimbra_url .= '&by=' . $by;
  $zimbra_url .= '&timestamp=' . $timestamp;
  $zimbra_url .= '&expires=' . $expires;
  $zimbra_url .= '&preauth=' . $computed_value;

  return $zimbra_url;
}

Zimbra can redirect unauthenticated users to our cgi

We can achieve that by changing the domain attribute zimbraWebClientLoginURL.
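
Something like this should do it (a sketch; the URL of the cgi is an assumption):

zmprov modifyDomain example.org zimbraWebClientLoginURL "https://sso.example.org/cgi-bin/zimbra-preauth"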

Posted
Ubuntu grub setting when upgrading zimbra

zimbra on ubuntu upgrade woes

So I (and my co-worker) at my previous place wasted 4 hours because someone thought that skipping the boot prompt (in Ubuntu 10.04.4) was a good idea.

Try to do that in a virtual machine when you have to boot the server to init 1, because otherwise the very service you actually want to maintain would start. If it starts and mail clients connect, your backup is not up to date anymore...

The update process failed because of a stale pid file. We actually pulled the 4-hour-old backup, cleaned up that stale pid file, and now everything is smooth.

... UPDATE: and now a kernel bug. Love you Ubuntu, really :-(

Posted
Ideas for the Perl BoF

This used to be a draft. Now it is part of the gobby document for the Perl BoF session.

my agenda regarding debian-perl

  • knowing the perl tools (used in the debian infrastructure) better
  • keeping Mojolicious in debian up to date
  • promoting the use of it in debian
  • making enhancements to the perl tools
    • first I thought about collecting best practices and recommendations
      • dam encouraged me to just do it :-) (aka. shut up and hack)
    • it seems (I hope) that I will be able to fix some problems related to packages.d.o
Posted
Debconf13 with some pictures

I am at debconf13

First, here are some pictures of debconf13 at Vaumarcus, Switzerland. The landscape with the nearby lake (Neuchâtel) is beautiful.

[photos: overview; bar building, talk room 2, front desk; lake and some buildings; main talk room, dining room; after sunset]

I will write some posts about my experiences and some of the things I achieved.

Update: Joey also has some pictures from the cheese and wine party: http://joeyh.name/blog/entry/swiss_cheese/

Posted
The idea of open access

Open vs. authenticated only access

I really liked the idea of providing open comment access to pages here. But today some spammer found the site and I had to disable it. :-(

Comments are still welcome, but one has to authenticate first.

Anonymous comments are moderated first; anonymous edits are not allowed. It should be easy to authenticate with openid.

Posted