I really wanted to describe the network setup I use on my laptop, so here is this post.

My objective was to keep the configuration simple and easy to use.

What I have

My laptop automatically connects to wifi (for configured SSIDs) and obtains an IP address. If I plug in the ethernet cable, it gets an IP address there as well, and the same goes for USB (tethering). Each interface gets a default route, inserted with a different metric; the wired connection wins. If the preferred interface gets disconnected (e.g. the cable is pulled out), traffic automatically fails over to the next available interface.

# ip route show

default via 192.168.44.129 dev usb0  metric 267 
default via 141.70.74.1 dev wlan0  metric 303 
141.70.74.0/21 dev wlan0  proto kernel  scope link  src 141.70.74.145  metric 303 
141.70.74.145 via 127.0.0.1 dev lo  metric 303 
192.168.44.0/24 dev usb0  proto kernel  scope link  src 192.168.44.18  metric 267 
192.168.44.18 via 127.0.0.1 dev lo  metric 267 

Components

Setup

dhcpcd5

This is the DHCP client daemon. It tries to obtain IP addresses on interfaces whose operational state is up, i.e. whenever a link comes up: usually after cable insertion for wired ethernet, or after successful AP association for wireless.

This tool also takes care of adding the default routes with the metrics mentioned above.
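If the defaults do not give the ordering you want, dhcpcd lets you pin a metric per interface in its configuration file. A sketch (the interface names and metric values here are assumptions, not my actual config):

```
# /etc/dhcpcd.conf -- the lowest metric wins
interface eth0
    metric 100
interface usb0
    metric 200
interface wlan0
    metric 300
```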

To get rid of the legacy ISC dhclient I did the following (there may be cleaner ways):

dpkg-divert --divert /sbin/dhclient.orig --rename /sbin/dhclient
ln -s /bin/true /sbin/dhclient

I created a systemd unit file (the default init script is also fine): /etc/systemd/system/dhcpcd.service

[Unit]
Description=dhcpcd5 - IPv4 DHCP client with IPv4LL support
Before=runlevel2.target runlevel3.target runlevel4.target runlevel5.target shutdown.target
After=local-fs.target
Conflicts=shutdown.target

[Service]
ExecStart=/sbin/dhcpcd5 -B -L -d

[Install]
WantedBy=multi-user.target

wpa_supplicant and /etc/network/interfaces

iface wlan0 inet dhcp
  wpa-conf /etc/network/wpa_supplicant.conf

wpa_supplicant.conf looks like this for me:

ctrl_interface=/var/run/wpa_supplicant
update_config=1

network={
    ssid="eduroam"
    scan_ssid=1
    key_mgmt=WPA-EAP
    group=CCMP TKIP
    eap=PEAP
    identity="someuser@login.ppke.hu"
    password="arealpasswordgoeshere"
    ca_path="/etc/ssl/certs"
    subject_match="/C=HU/ST=Budapest/L=Budapest/O=Pazmany Peter Katolikus Egyetem/CN=tutela.itk.ppke.hu"
    phase1="peapver=0"
    phase2="MSCHAPV2"
}

network={
    ssid="Butterfly13"
    scan_ssid=1
    key_mgmt=NONE
    priority=4
    disabled=1
}

network={
    ssid="homewifi"
    scan_ssid=1
    psk="thepskgoeshere"
    key_mgmt=WPA-PSK
    priority=4
}

I use wpa_cli to control the wifi configuration or to make ad-hoc changes (e.g. for DebConf). It also makes it easy to select preferred networks, or to disable all but one.
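For example (a sketch; the interface name and network id are assumptions, the ids come from list_networks):

```
wpa_cli -i wlan0 list_networks        # show configured networks and their ids
wpa_cli -i wlan0 select_network 2     # use only network 2, disables the others
wpa_cli -i wlan0 enable_network all   # re-enable all networks
wpa_cli -i wlan0 save_config          # persist changes (needs update_config=1)
```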

The interfaces need to be up:

ifup eth0
ifup wlan0
Posted Sat 22 Aug 2015 11:23:06 AM CEST Tags:

Yes, this is sad, but true.

It just happened to me while migrating between two 2TB PVs: one on an old and slow FC storage, the other on a new and fast one.

Steps (as usual):

# sdl1 is the new PV (let's create it)
pvcreate /dev/sdl1 

# extend the VG with this new PV
vgextend somevg /dev/sdl1

# now move off from the old one
pvmove /dev/sdd1 # with -n one can tell which LV should be moved

# the old PV can be removed with
vgreduce somevg /dev/sdd1

# and this is a useful command to find which LV is on which PV
lvs --segments -o +pe_ranges

And that is it! At least that is how it should work, and it has worked for me before, but not this time.

Instead, this is what I got:

I/O error in filesystem ("dm-1") meta-data dev dm-1 block 0xe0e280       ("xfs_trans_read_buf") error 11 buf count 4096
xfs_force_shutdown(dm-2,0x1) called from line 395 of file /build/buildd/linux-lts-backport-natty-2.6.38/fs/xfs/xfs_trans_buf.c.  Return address = 0xffffffffa0111943
xfs_imap_to_bp: xfs_trans_read_buf()returned an error 5 on dm-2.  Returning error.
Filesystem dm-2: I/O Error Detected.  Shutting down filesystem: dm-2
Please umount the filesystem, and rectify the problem(s)
Filesystem dm-2: xfs_log_force: error 5 returned.
xfs_force_shutdown(dm-2,0x1) called from line 1111 of file /build/buildd/linux-lts-backport-natty-2.6.38/fs/xfs/linux-2.6/xfs_buf.c.  Return address = 0xffffffffa011cc03

(I suppressed the repeated messages.)

I had to shut down the application (which was a pain for the users) and let pvmove continue its work. Here is the explanation:

[dm-devel] [BUG] pvmove corrupting XFS filesystems (was Re: [BUG] Internal error xfs_dir2_data_reada_verify)

Fortunately, after it finished and I rebooted the system, it came up OK. 7 hours of downtime. At least I had started it late.

Posted Tue 21 Jan 2014 09:11:43 PM CET Tags:

How Shibboleth is used to authenticate PPKE users

Zimbra has a mechanism called preauth which makes it possible to interface with other authentication systems.

The big picture

A Shibboleth SP with apache is needed. It authenticates the user with Shibboleth and relays the identity to Zimbra.

In our case the Shibboleth attribute eduPersonPrincipalName is the same as the email address in Zimbra. The identity provider releases this attribute to the service provider, where a Perl CGI script accesses it as an environment variable. After a valid Shibboleth session is established with the SP, the user is redirected to a Zimbra URL with a GET variable encoding a signed value that identifies the user to Zimbra. This value is validated, and user access is granted if it is found valid.

Shibboleth SP setup

From the apache side, this is the main part of the configuration, needed for the directory containing our custom CGI script (shibboleth2.xml is needed as usual):

<Directory /where-your-cgi-is>
  AuthType shibboleth
  require shibboleth
  ShibRequireSession On
</Directory>

The cgi script

First the shared secret must be obtained with zmprov:

prov> gdpak example.org

This value is required for the script.

#!/usr/bin/perl

use strict;
use warnings;

use Digest::HMAC_SHA1 qw(hmac_sha1 hmac_sha1_hex);
use Data::Dump qw(pp);
use CGI;

my $cgi = CGI->new;

my %k;

#prov> gdpak example.org
#preAuthKey: aaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbccccccccccccccccc

$k{"example.org"}="aaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbccccccccccccccccc";

my $zmda = $ENV{"eduPersonPrincipalName"} // '';
my ($domain) = $zmda =~ /@(.+)\z/;    # the domain part selects the preauth key
my $urlbase = "mail.example.org";

unless($domain && $k{$domain}) {
  print $cgi->header(-status=>'503 Service Unavailable');
  print "unknown domain, are you authorized for zimbra?";
  die;
}

my $redir = buildZimbraPreAuthUrl($k{$domain},
            "https://$urlbase/service/preauth",
            $zmda,
            "name");

print $cgi->redirect( -uri => $redir ); 

## @method private string buildZimbraPreAuthUrl(string key, string url, string account, string by)
# Build Zimbra PreAuth URL
# @param key PreAuthKey
# @param url URL
# @param account User account
# @param by Account type
# @return Zimbra PreAuth URL
sub buildZimbraPreAuthUrl {
  my ( $key, $url, $account, $by ) = @_;

  # Expiration time
  my $expires = 0;

  # Timestamp
  my $timestamp = time() * 1000;

  # Compute preauth value
  my $computed_value =
    hmac_sha1_hex( "$account|$by|$expires|$timestamp", $key );

  # Build PreAuth URL
  my $zimbra_url;
  $zimbra_url .= $url;
  $zimbra_url .= '?account=' . $account;
  $zimbra_url .= '&by=' . $by;
  $zimbra_url .= '&timestamp=' . $timestamp;
  $zimbra_url .= '&expires=' . $expires;
  $zimbra_url .= '&preauth=' . $computed_value;

  return $zimbra_url;
}
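The preauth computation itself is just HMAC-SHA1 over "account|by|expires|timestamp", so it can be sanity-checked from the shell with openssl (a sketch; the key and account below are made up):

```shell
KEY="aaaaaaaaaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbbbbbbbbccccccccccccccccc"
ACCOUNT="someone@example.org"
EXPIRES=0
TS=$(( $(date +%s) * 1000 ))   # Zimbra expects milliseconds

# HMAC-SHA1 over "account|by|expires|timestamp", hex encoded
PREAUTH=$(printf '%s|name|%s|%s' "$ACCOUNT" "$EXPIRES" "$TS" \
  | openssl dgst -sha1 -hmac "$KEY" | awk '{print $NF}')

echo "https://mail.example.org/service/preauth?account=$ACCOUNT&by=name&timestamp=$TS&expires=$EXPIRES&preauth=$PREAUTH"
```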

Zimbra can redirect unauthenticated users to our CGI

We can achieve that by changing the domain attribute zimbraWebClientLoginURL.
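Assuming the CGI lives on a separate SSO host (the URL here is made up for illustration), the change with zmprov could look like:

```
zmprov md example.org zimbraWebClientLoginURL "https://sso.example.org/cgi-bin/zimbra-preauth.cgi"
```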

Posted Fri 20 Dec 2013 12:03:59 AM CET Tags:

zimbra on ubuntu upgrade woes

So I (and my co-worker) at my previous place wasted 4 hours because someone thought that skipping the boot prompt (in Ubuntu 10.04.4) was a good idea.

Try doing that in a virtual machine when you have to boot the server into init 1, because otherwise a service would start that you actually want to maintain. If it starts and mail clients connect, your backup is not up to date anymore...

The update process failed because of a stale pid file. We actually pulled in the 4-hour-old backup, cleaned up the stale pid file, and then everything went smoothly.

... UPDATE: and now a kernel bug. Love you Ubuntu, really :-(

Posted Sat 30 Nov 2013 02:22:27 PM CET Tags:

This used to be a draft. Now it is part of the gobby document for the Perl BoF session.

my agenda regarding debian-perl

  • knowing the Perl tools (used in the Debian infrastructure) better
  • keeping Mojolicious in Debian up to date
  • promoting its use in Debian
  • making enhancements to the Perl tools
    • first I thought about collecting best practices and recommendations
      • dam encouraged me to just do it :-) (aka. shut up and hack)
    • it seems (I hope) that I will be able to fix some problems related to packages.d.o
Posted Thu 15 Aug 2013 09:17:45 AM CEST Tags:

I am at debconf13

First, here are some pictures of debconf13 at Vaumarcus, Switzerland. The location, with the nearby lake (Neuchâtel), is beautiful.

overview
bar building, talk room2, front desk
lake and some buildings
main talk room, dining room
after sunset

I will write some posts about my experiences and some of the things I achieved.

Update: Joey also has some pictures from the cheese and wine party: http://joeyh.name/blog/entry/swiss_cheese/

Posted Tue 13 Aug 2013 10:42:43 AM CEST Tags:

I did two presentations at a Hungarian networking (research and academic community) conference.

The slides are online (both in English):

Posted Wed 27 Mar 2013 11:45:25 AM CET Tags:

Open vs. authenticated only access

I really liked the idea of providing open comment access to the pages here, but today some spammer found the site and I had to disable it :-(

Comments are still welcome, but one has to authenticate first.

Anonymous comments are moderated; anonymous edits are not allowed. It should be easy to authenticate with OpenID.

Posted Sun 24 Feb 2013 07:08:02 PM CET Tags:

Non-Puppet best practice

A friend of mine asked me how I use Puppet and what I consider best practice. He assumed I still use Puppet, since I gave a talk about it 6 years ago (in Hungarian).
(Oh, that was quite a while ago :-) )

At that time I really thought Puppet was the way to go, but a few years ago I no longer had time to keep it up and running, as I was reassigned at work to another organizational unit and many things changed at once.

About a year ago I revisited the idea of reviving/redeploying Puppet, but as my thinking has changed recently, the more I tried to use it again, the more distaste I felt.

There are usability problems, upgrade incompatibilities, recipes requiring a newer Puppet, etc. I will not go into the details, as Martin summarized it quite well and I agree with him.

Slaughter instead

Instead of puppet I use slaughter now. Only for some basic things like:

  • deploying configuration files
  • installing packages
  • adding all admins with their ssh keys to all machines

and I have not yet deployed it to all machines.
Still, I feel confident because of Perl. There is no hidden magic involved and no DSL to learn. It has less power, but it is much easier at the same time.

General ideas, may work with puppet...

My idea (which requires some work in Slaughter) is to use an external database/data structure, and use that instead of defining the same things multiple times.

What do I mean? I do not want a fancy way to configure DNS, Nagios, Graphite, and PF, defining the same thing over and over again.
Instead, define it once ("here is my public HTTP service running on host myhost.domain, port 80") and the configuration management should then generate:

  • a DNS record for the host (if one does not exist already)
  • firewall rules to let port 80 through
  • an apache installation on the host
  • monitoring for the service

Puppet has a mechanism (module?) called Hiera that lets you do this. I think with Chef this is built in. (Chef is so hard to get started with that I did not dare to try it at all.)


A recent related link: http://blog.steve.org.uk/more_competition_for_server_management_and_automation_is_good.html

Posted Mon 04 Feb 2013 04:43:56 PM CET Tags:

Icinga told me that HTTPS was failing on one of my servers.

I tried restarting apache, but it did not help. There were some SSH attempts in the logs (obviously automated bots), but nothing serious.

Oh, wait. Only HTTPS was failing; HTTP was fine. OK, I just filtered SSH from the outside network and now everything works as expected :).

HTTPS actually worked, but very slowly, because the SSH attempts had depleted the available entropy.
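On Linux, the kernel's entropy estimate can be checked directly; watching it while the SSH scans were hitting the box would have made the problem obvious. A minimal check:

```shell
# The kernel's estimate of available entropy, in bits;
# low values stall consumers of blocking randomness (e.g. TLS key operations on old setups)
ENTROPY=$(cat /proc/sys/kernel/random/entropy_avail)
echo "available entropy: $ENTROPY bits"
```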

Posted Wed 30 Jan 2013 10:39:03 PM CET Tags: