vcswatch and git --filter
Debian is running a “vcswatch” service that keeps track of the status of all packaging repositories that have a Vcs-Git (and other VCSes) header set and shows which repos might need a package upload to push pending changes out.
Naturally, this is a lot of data, and the scratch partition on qa.debian.org had to be expanded several times, up to 300 GB in the last iteration. Attempts to reduce the size using shallow clones (git clone --depth=50) did not save more than a few percent of space. Running git gc on all repos helps a bit, but is tedious, and as Debian is growing, the repos keep growing in both size and number. I ended up blocking all repos with checkouts larger than a gigabyte, and still the only cure was expanding the disk, or lowering the blocking threshold.
Since we only need a tiny bit of info from the repositories, namely the content of debian/changelog and a few other files from debian/, plus the number of commits since the last tag on the packaging branch, it made sense to try to get that info without fetching a full repo clone. The question of whether we could grab it solely using the GitLab API at salsa.debian.org was never really answered. But then, in #1032623, Gábor Németh suggested using git clone --filter=blob:none. As things go, the suggestion sat unattended in the bug report for almost a year, until the next “disk full” event made me give it a try.
The blob:none filter makes git clone omit all file contents, fetching only commit and tree information. Any blob (file content) needed at git run time is transparently fetched from the upstream repository and stored locally. It turned out to be a game-changer: the (largish) repositories I tried it on shrank to 1/100 of their original size.
Poking around, I figured we could do even better by using tree:0 as the filter. This additionally omits all trees from the clone, again fetching them from upstream only when needed at run time. Some of the larger repos I tried it on shrank to 1/1000 of their original size.
I deployed the new option on qa.debian.org and scheduled all repositories to fetch a new clone on the next scan:
The initial dip from 100% to 95% is my first “what happens if we block repos > 500 MB” attempt. Over the following week, the git filter clones reduced the overall disk consumption from almost 300 GB to 15 GB, a 20-fold saving. Some repos shrank from gigabytes to below a megabyte.
Perhaps I should make all my git clones use one of the filters.
PostgreSQL Popularity Contest
Back in 2015, when PostgreSQL 9.5 alpha 1 was released, I had posted the PostgreSQL data from Debian’s popularity contest.
8 years and 8 PostgreSQL releases later, the graph now looks like this:
Currently, the most popular PostgreSQL on Debian systems is still PostgreSQL 13 (shipped in Bullseye), followed by PostgreSQL 11 (Buster). At the time of writing, PostgreSQL 9.6 (Stretch) and PostgreSQL 15 (Bookworm) share the third place, with 15 rising quickly.
PostgreSQL and Undelete
pg_dirtyread
Earlier this week, I updated pg_dirtyread to work with PostgreSQL 14. pg_dirtyread is a PostgreSQL extension that allows reading “dead” rows from tables, i.e. rows that have already been deleted or updated. Of course, that works only if the table has not yet been cleaned up by a VACUUM command or autovacuum, which is PostgreSQL’s garbage collection machinery.
Here’s an example of pg_dirtyread in action:
# create table foo (id int, t text);
CREATE TABLE
# insert into foo values (1, 'Doc1');
INSERT 0 1
# insert into foo values (2, 'Doc2');
INSERT 0 1
# insert into foo values (3, 'Doc3');
INSERT 0 1
# select * from foo;
id │ t
────┼──────
1 │ Doc1
2 │ Doc2
3 │ Doc3
(3 rows)
# delete from foo where id < 3;
DELETE 2
# select * from foo;
id │ t
────┼──────
3 │ Doc3
(1 row)
Oops! The first two documents have disappeared.
Now let’s use pg_dirtyread to look at the table:
# create extension pg_dirtyread;
CREATE EXTENSION
# select * from pg_dirtyread('foo') t(id int, t text);
id │ t
────┼──────
1 │ Doc1
2 │ Doc2
3 │ Doc3
All three documents are still there, but only one of them is visible.
pg_dirtyread can also show PostgreSQL’s system columns with the row location and visibility information. For the first two documents, xmax is set, which means the row has been deleted:
# select * from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text);
ctid │ xmin │ xmax │ id │ t
───────┼──────┼──────┼────┼──────
(0,1) │ 1577 │ 1580 │ 1 │ Doc1
(0,2) │ 1578 │ 1580 │ 2 │ Doc2
(0,3) │ 1579 │ 0 │ 3 │ Doc3
(3 rows)
Undelete
Caveat: I’m not promising any of the ideas below will actually work in practice. There are a few pitfalls, and a good deal of intricate knowledge about PostgreSQL internals may be required to succeed. Consider consulting your favorite PostgreSQL support channel for advice if you need to recover data on any production system. Don’t try this at work.
I always had plans to extend pg_dirtyread to include some “undelete” command to make deleted rows reappear, but never got around to trying that. But rows can already be restored by using the output of pg_dirtyread itself:
# insert into foo select * from pg_dirtyread('foo') t(id int, t text) where id = 1;
This is not a true “undelete”, though: it just inserts new rows using the data read from the table.
pg_surgery
Enter pg_surgery, a new PostgreSQL extension shipped with PostgreSQL 14. It contains two functions to “perform surgery on a damaged relation”. As a side effect, they can also make deleted tuples reappear.
As I have now discovered, one of the functions, heap_force_freeze(), works nicely with pg_dirtyread. It takes a list of ctids (row locations) that it marks as “frozen” and, at the same time, as “not deleted”.
Let’s apply it to our test table, using the ctids that pg_dirtyread can read:
# create extension pg_surgery;
CREATE EXTENSION
# select heap_force_freeze('foo', array_agg(ctid))
from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text) where id = 1;
heap_force_freeze
───────────────────

(1 row)
Et voilà, our deleted document is back:
arm64 on apt.postgresql.org
The apt.postgresql.org repository has been extended to cover the arm64 architecture.
We had occasionally received user requests to add “arm” in the past, but it was never really clear which kind of “arm” made sense to target for PostgreSQL. In terms of Debian architectures, there’s (at least) armel, armhf, and arm64. Furthermore, Raspberry Pis are very popular (and indeed what most users seemed to be asking about), but the Raspbian “armhf” port is incompatible with the Debian “armhf” port.
Now that most hardware has moved to 64-bit, it was becoming clear that “arm64” was the way to go. Amit Khandekar made it happen that HUAWEI Cloud Services donated an arm64 build host with enough resources to build the arm64 packages at the same speed as the existing amd64, i386, and ppc64el architectures. A few days later, all the build jobs were done, including passing all test suites. Very few arm-specific issues were encountered, which makes me confident that arm64 is a solid architecture to run PostgreSQL on.
We are targeting Debian buster (stable), bullseye (testing), and sid (unstable), and Ubuntu bionic (18.04) and focal (20.04). To use the arm64 archive, just add the normal sources.list entry:
deb https://apt.postgresql.org/pub/repos/apt buster-pgdg main
Ubuntu focal
At the same time, I’ve added the next Ubuntu LTS release to apt.postgresql.org: focal (20.04). It ships amd64, arm64, and ppc64el binaries.
deb https://apt.postgresql.org/pub/repos/apt focal-pgdg main
Old PostgreSQL versions
Many PostgreSQL extensions still support older server versions that are EOL. For testing these extensions, server packages need to be available. I’ve built packages for PostgreSQL 9.2+ on all Debian distributions, and all Ubuntu LTS distributions. 9.1 will follow shortly.
This means people can move to newer base distributions in their .travis.yml, .gitlab-ci.yml, and other CI files.
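For illustration, here is a hedged sketch of what such a CI configuration might look like; the job layout and the PGVERSION variable are my invention, only the “DIST-pgdg main X.Y” sources.list pattern comes from the repository:

```yaml
# hypothetical .travis.yml fragment: test an extension against two server majors
language: c
env:
  - PGVERSION=9.2
  - PGVERSION=12
before_install:
  - echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main $PGVERSION" | sudo tee /etc/apt/sources.list.d/pgdg.list
  - sudo apt-get update
install:
  - sudo apt-get install -y postgresql-$PGVERSION postgresql-server-dev-$PGVERSION
script:
  - make && sudo make install && make installcheck
```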
Announcing apt-archive.postgresql.org
Users had often asked where they could find older versions of packages from apt.postgresql.org. I had been collecting these since about April 2013, and in July 2016, I made the packages available via an ad-hoc URL on the repository master host, called “the morgue”. There was little repository structure, all files belonging to a source package were stuffed into a single directory, no matter what distribution they belonged to. Besides this not being particularly accessible for users, the main problem was the ever-increasing need for more disk space on the repository host. We are now at 175 GB for the archive, of which 152 GB is for the morgue.
Our friends from yum.postgresql.org have had a proper archive host (yum-archive.postgresql.org) for some time already, so it was about time to follow suit and implement a proper archive for apt.postgresql.org as well, usable from apt.
So here it is: apt-archive.postgresql.org
The archive covers all past and current Debian and Ubuntu distributions. The apt sources.lists entries are similar to the main repository, just with “-archive” appended to the host name and the distribution:
deb https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
deb-src https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
The oldest PostgreSQL server versions covered there are 8.2.23, 8.3.23, 8.4.17, 9.0.13, 9.1.9, 9.2.4, 9.3beta1, and everything newer.
An example:
$ apt-cache policy postgresql-12
postgresql-12:
  Installed: 12.2-2.pgdg+1+b1
  Candidate: 12.2-2.pgdg+1+b1
  Version table:
 *** 12.2-2.pgdg+1+b1 900
        500 http://apt.postgresql.org/pub/repos/apt sid-pgdg/main amd64 Packages
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
        100 /var/lib/dpkg/status
     12.2-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.2-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.1-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.0-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.0-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~rc1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta4-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta3-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta2-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
Because this is hosted on S3, browsing directories is only supported indirectly by static index.html files, so if you want to look at some specific URL, append “/index.html” to see it.
The archive is powered by a PostgreSQL database and a bunch of python/shell scripts, from which the apt index files are built.
Archiving old distributions
I’m also using the opportunity to remove some long-retired distributions from the main repository host. The following distributions have been moved over:
- Debian etch (4.0)
- Debian lenny (5.0)
- Debian squeeze (6.0)
- Ubuntu lucid (10.04)
- Ubuntu saucy (13.10)
- Ubuntu utopic (14.10)
- Ubuntu wily (15.10)
- Ubuntu zesty (17.04)
- Ubuntu cosmic (18.10)
They are available as “DIST-pgdg” from the archive, e.g. squeeze:
deb https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main deb-src https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main
Cool Unix Features: paste
paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
One application for this came up today: some tool was called for several files at once and would output one line per file, but unfortunately without including the filename.
$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)
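Since rpm isn’t installed everywhere, here is a self-contained toy example (file names invented) of what paste does:

```shell
#!/bin/sh
# paste joins the two files line by line, separated by tabs
set -eu
tmp=$(mktemp -d)
cd "$tmp"
printf 'one\ntwo\n' > left
printf 'uno\ndos\n' > right
paste left right
```

Each output line is a line of left, a tab, and the matching line of right.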
[1] See “J” in The ABCs of Unix
[PS: I meant to blog this in 2011, but apparently never committed the file…]
Stepping down as DAM
After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I’m stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.
Thanks for the fish!
Salsa batch import
Now that Salsa is in beta, it’s time to import projects (= GitLab speak for “repository”). This is probably best done in an automated fashion. Head to Access Tokens and generate a token with “api” scope, which you can then use with curl:
$ cat salsa-import
#!/bin/sh
set -eux
PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL="https://anonscm.debian.org/git"
ALIOTH_GROUP="collab-maint"
SALSA_URL="https://salsa.debian.org/api/v4"
SALSA_GROUP="debian" # "debian" has id 2
SALSA_TOKEN="yourcryptictokenhere"
# map group name to namespace id (this is slow on large groups, see https://gitlab.com/gitlab-org/gitlab-ce/issues/42415)
SALSA_NAMESPACE=$(curl -s https://salsa.debian.org/api/v4/groups/$SALSA_GROUP | jq '.id')
# trigger import
curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
--data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"
This will create the GitLab project in the chosen namespace, and import the repository from Alioth.
Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:
for f in *.git; do sh salsa-import $f; done
(Update 2018-02-04: Query namespace ID via the API)
vcswatch is now looking for tags
About a week ago, I extended vcswatch to also look at tags in git repositories.
Previously, it was solely paying attention to the version number in the top paragraph in debian/changelog, and would alert if that version didn’t match the package version in Debian unstable or experimental. The idea is that “UNRELEASED” versions will keep nagging the maintainer (via DDPO) not to forget that some day this package needs an upload. This works for git, svn, bzr, hg, cvs, mtn, and darcs repositories (in decreasing order of actual usage numbers in Debian. I had actually tried to add arch support as well, but that VCS is so weird that it wasn’t worth the trouble).
There are several shortcomings in that simple approach:
- Some packages update debian/changelog only at release time, e.g. auto-generated from the git changelog using git-dch
- Missing or misplaced release tags are not detected
The new mechanism fixes this for git repositories by also looking at the output of git describe --tags. If there are any commits since the last tag, and the vcswatch status according to debian/changelog would otherwise be “OK”, a new status “COMMITS” is set instead. DDPO will report e.g. “1.4-1+2”, to be read as “2 commits since the tag [debian/]1.4-1”.
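The counting can be sketched like this on a toy repository (the tag name follows the debian/ convention; the parsing is my illustration, not necessarily vcswatch’s actual code):

```shell
#!/bin/sh
# Build a toy repo with a release tag and two commits on top, then
# extract the commit count from git describe --tags --long.
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q pkg
cd pkg
commit() { git -c user.name=demo -c user.email=demo@example.com commit -qm "$1"; }
echo 'pkg (1.4-1) unstable' > changelog
git add changelog
commit 'release 1.4-1'
git tag debian/1.4-1
echo fix1 >> changelog; git add changelog; commit 'fix 1'
echo fix2 >> changelog; git add changelog; commit 'fix 2'
desc=$(git describe --tags --long)     # e.g. debian/1.4-1-2-g1234abc
commits=${desc%-g*}                    # strip the trailing -g<hash>
commits=${commits##*-}                 # keep only the commit count
echo "$commits commits since $(git describe --tags --abbrev=0)"
```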
Of the 16644 packages using git in Debian, currently 7327 are “OK”, 2649 are in the new “COMMITS” state, and 4227 are “NEW”. 723 are “OLD” and 79 are “UNREL” which indicates that the package in Debian is ahead of the git repository. 1639 are in an ERROR state.
So far the new mechanism works for git only, but other VCSes could be added as well.
10 Years Debian Developer
I knew it was about this time of the year 10 years ago when my Debian account was created, but I couldn’t remember the exact date until I looked it up earlier this evening: today :). Rene Engelhard had been my advocate, and Marc Brockschmidt my AM. Thanks guys!
A lot of time has passed since then, and I’ve worked in various parts of the project. I became an application manager almost immediately, and quickly got into the NM front desk as well, revamping parts of the NM process which had become pretty bureaucratic (I think we are now, 10 years later, back where we should be, with almost all of the paperwork automated, thanks Enrico!). I’ve processed 37 NMs, most of them between 2005 and 2008; later I was only active as front desk and eventually Debian account manager. I’ve recently picked up AMing again, which I still find quite refreshing, as the AM always learns new things as well.
Quality Assurance was and is the other big field. Starting by doing QA uploads of orphaned packages, I attended some QA meetings around Germany, and picked up maintenance of the DDPO pages, which I still maintain. The link between QA and NM is the MIA team where I was active for some years until they kindly kicked me out because I was MIA there myself. I’m glad they are still using some of the scripts I was writing to automate some things.
My favorite MUA is mutt, of which I became co-maintainer in 2007, and later maintainer. I’m still listed in the uploaders field, but admittedly I haven’t really done anything there lately.
Also in 2007 I started working at credativ, after having been a research assistant at the university, which meant making my Debian work professional. Of course, it also meant more real work and less time for the hobby part, but I was still very active around that time. Then in 2010 I got married, and we had two kids, at which point family was of course much more important, so my Debian involvement dropped to a minimum. (Mostly lurking on IRC ;)
Being a PostgreSQL consultant at work, it was natural to start looking into the packaging, so I started submitting patches to postgresql-common in 2011, and became a co-maintainer in 2012. Since then, I’ve mostly been working on PostgreSQL-related packages, of which far too many have my (co-)maintainer stamp on them. To link the Debian and PostgreSQL worlds together, we started an external repository (apt.postgresql.org) that contains packages for the PostgreSQL major releases that Debian doesn’t ship. Most of my open source time at the moment is spent on getting all PostgreSQL packages in shape for Debian and this repository.
According to minechangelogs, currently 844 changelog entries in Debian mention my name, or were authored by me. Scrolling back yields memories of packages that are long gone again from unstable, or I passed on to other maintainers. There are way too many people in Debian that I enjoy(ed) working with to list them here, and many of them are my friends. Debian is really the extended family on the internet. My last DebConf before this year had been in Mar del Plata - I had met some people at other conferences like FOSDEM, but meeting (almost) everyone again in Heidelberg was very nice. I even remembered all basic Mao rules :D.
PostgreSQL 9.5 in Debian
Today saw the release of PostgreSQL 9.5 Alpha 1. Packages for all supported Debian and Ubuntu releases are available on apt.postgresql.org:
deb http://apt.postgresql.org/pub/repos/apt/ YOUR_RELEASE_HERE-pgdg main 9.5
The package is also waiting in NEW to be accepted for Debian experimental.
Being curious which PostgreSQL releases have been in use over time, I pulled some graphics from Debian’s popularity contest data:
Before we included the PostgreSQL major version in the package name, “postgresql” contained the server, so that line represents the installation count of the pre-7.4 releases at the left end of the graph.
Interestingly, 7.4 reached its installation peak well past 8.1’s. Does anyone have an idea why that happened?
apt.postgresql.org statistics
At this year’s FOSDEM I gave a talk in the PostgreSQL devroom about Large Scale Quality Assurance in the PostgreSQL Ecosystem. The talk included a graph about the growth of the apt.postgresql.org repository that I want to share here as well:
The yellow line at the very bottom is the number of different source package names, currently 71. From that, we build a somewhat larger number of actual source packages (blue) that include the “pgdgXX” version suffixes targeting the various distributions. The number of different binary package names (green) is in about the same range. The dimension explosion then happens for the actual number of binary packages (black, almost 8000) targeting all distributions and architectures.
The red line is the total size of the pool/ directory, currently a bit less than 6 GB.
(The graphs sometimes decrease when packages in the -testing distributions are promoted to the live distributions and the old live packages get removed.)
New urxvt in current directory
Following Enrico’s terminal-emulators comparison, I wanted to implement “start a new terminal tab in my current working directory” for rxvt-unicode aka urxvt. As Enrico notes, this functionality is something between “rather fragile” and non-existing, so I went to implement it myself. Martin Pohlack had the right hint, so here’s the patch:
--- /usr/lib/urxvt/perl/tabbed 2014-05-03 21:37:37.000000000 +0200
+++ ./tabbed 2014-07-09 18:50:26.000000000 +0200
@@ -97,6 +97,16 @@
$term->resource (perl_ext_2 => $term->resource ("perl_ext_2") . ",-tabbed");
};
+ if (@{ $self->{tabs} }) {
+ # Get the working directory of the current tab and append a -cd to the command line
+ my $pid = $self->{cur}{pid};
+ my $pwd = readlink "/proc/$pid/cwd";
+ #print "pid $pid pwd $pwd\n";
+ if ($pwd) {
+ push @argv, "-cd", $pwd;
+ }
+ }
+
push @urxvt::TERM_EXT, urxvt::ext::tabbed::tab::;
my $term = new urxvt::term
@@ -312,6 +322,12 @@
1
}
+sub tab_child_start {
+ my ($self, $term, $pid) = @_;
+ $term->{pid} = $pid;
+ 1;
+}
+
sub tab_start {
my ($self, $tab) = @_;
@@ -402,7 +418,7 @@
# simply proxies all interesting calls back to the tabbed class.
{
- for my $hook (qw(start destroy key_press property_notify)) {
+ for my $hook (qw(start destroy key_press property_notify child_start)) {
eval qq{
sub on_$hook {
my \$parent = \$_[0]{term}{parent}
Comparing Version Numbers in Shell
On RedHat/CentOS/rpm systems, there’s no dpkg --compare-versions available, but sort -V can help to compare version numbers:
version_lt () {
newest=$( ( echo "$1"; echo "$2" ) | sort -V | tail -n1)
[ "$1" != "$newest" ]
}
$ version_lt 1.5 1.1 && echo yes
$ version_lt 1.5 1.10 && echo yes
yes
PostgreSQL 9.4 on Debian
Yesterday saw the first beta release of the new PostgreSQL version 9.4. Along with the sources, we uploaded binary packages to Debian experimental and apt.postgresql.org, so there’s now packages ready to be tested on Debian wheezy, squeeze, testing/unstable, and Ubuntu trusty, saucy, precise, and lucid.
If you are using one of the release distributions of Debian or Ubuntu, add this to your /etc/apt/sources.list.d/pgdg.list to have 9.4 available:
deb http://apt.postgresql.org/pub/repos/apt/ codename-pgdg main 9.4
On Debian jessie and sid, install the packages from experimental.
Happy testing!
Trusty and Saucy on apt.postgresql.org
Over the past few weeks, new distributions have been added on apt.postgresql.org: Ubuntu 13.10 codenamed “saucy” and the upcoming Ubuntu LTS release 14.04 codenamed “trusty”.
Adding non-LTS releases for the benefit of developers using PostgreSQL on their notebooks and desktop machines has been a frequently requested item since we created the repository. I had some qualms about targeting a new Ubuntu release every 6 months, but with having automated more and more parts of the repository infrastructure, and the bootstrapping process now being painless, the distributions are now available for use. Technically, trusty started as empty, so it hasn’t all packages yet, but of course all the PostgreSQL server packages are there, along with pgAdmin. Saucy started as a copy of precise (12.04) so it has all packages. Not all packages have been rebuilt for saucy, but the precise packages included (you can tell by the version number ending in .pgdg12.4+12 or .pgdg13.10+1) will work, unless apt complains about dependency problems. I have rebuilt the packages needing it I was aware about (most notably the postgresql-plperl packages) - if you spot problems, please let us know on the mailing list.
Needless to say, last week’s PostgreSQL server updates are already included in the repository.
ci.debian.net on DDPO
More and more packages are getting autopkgtest aka DEP-8 test suites these days. Thanks to Antonio Terceiro, there is ci.debian.net running the tests.
Last weekend, I’ve added a “CI” column on DDPO that shows the current test results for your packages. Enjoy, and add tests to your packages!
TF101 flickering and a loose cable
My ASUS Transformer TF101 had suddenly started flickering in all sorts of funny colors some weeks ago. As tapping it gently on the table in the right angle made the problem go away temporarily, it was clear the problem was about a loose cable, or some other hardware connection issue.
As I needed to go on a business trip the other day, I didn’t look up the warranty expiration date until later that week. Then, Murphy struck: the tablet was now 2 years + 1 day old! When I called ASUS, some friendly guy suggested I still try to get ASUS to accept it for warranty, because the tablet had been with them for 5 days last year; if they added that, it would still be within the warranty period. I filled out the RMA form, but one hour later the reply was that they rejected it because it was out of warranty. Another guy on the phone then said they would probably only do the adding if it had been with them for maybe 10 days, or actually really 30 days, or whatever.
Some googling suggested that the loose cable theory was indeed worth a try, so I took it apart. Thanks to a forum post I could then locate the display connector and fix it.
Putting the case back together was actually harder than disassembling it because some plastic bits got stuck, but now everything is back to normal.
Jessie, the HP 6715b, and Wifi
If you are upgrading your HP/Compaq 6715b to Debian Jessie, and suddenly Wifi stops working because the PCI device is gone, install the “rfkill” package:
# lspci | tail -2
02:04.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 02)
10:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit Ethernet PCI Express (rev 02)
# rfkill list 1
1: hp-wifi: Wireless LAN
	Soft blocked: yes
	Hard blocked: no
# rfkill unblock wifi
# rfkill list 1
1: hp-wifi: Wireless LAN
	Soft blocked: no
	Hard blocked: no
# lspci | tail -2
10:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5787M Gigabit Ethernet PCI Express (rev 02)
30:00.0 Network controller: Broadcom Corporation BCM4311 802.11a/b/g (rev 02)
Reports on the internet say that the same could be achieved by going into the BIOS and selecting “Reset to default” - this makes the Wifi LED active until about the time udev is started on the next boot.
To be done: figure out how to automate this.
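One way to automate it would be a systemd oneshot unit along these lines; the unit name and ordering are my guesses, untested on this hardware:

```ini
# /etc/systemd/system/unblock-wifi.service (hypothetical)
[Unit]
Description=Unblock soft-blocked wifi
After=systemd-udevd.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/rfkill unblock wifi

[Install]
WantedBy=multi-user.target
```

Enabled with systemctl enable unblock-wifi.service, it would run the rfkill command once per boot.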
Version Numbers
Following an idea by Ansgar Burchardt, I’ve done some digging on version numbers in Debian:
Most common version numbers:
projectb=> select version::text, count(*) from source group by 1 order by 2 desc;
  version   | count
------------+-------
 4:4.10.5-1 |   131
 1.0-1      |   120
 1.0.0-1    |    95
 1.1-1      |    95
 1.0.1-1    |    93
 1.2-1      |    88
 1.0-2      |    82
 0.2-1      |    80
 0.3-1      |    79
 0.5-1      |    77
 0.04-1     |    76
 1.1.1-1    |    76
 0.10-1     |    74
 1.4-1      |    72
 1.1-2      |    71
 0.1-1      |    70
 0.11-1     |    70
Version number with the most spellings: (considered equal by the dpkg definition, implemented in the "debversion" type)
projectb=> select version::text, count(*) from source where version = '1.02-1' group by 1 order by 2 desc;
  version   | count
------------+-------
 1.2-1      |    88
 1.02-1     |    46
 1.002-1    |     4
 1.000002-1 |     1
 001.002-1  |     1
 1.00002-1  |     1
If we look at equivalent version numbers, the first table above looks entirely different:
projectb=> select version, count(*) from source group by 1 order by 2 desc limit 30;
  version   | count
------------+-------
 0.3-1      |   162
 1.0-1      |   160
 0.05-1     |   156
 0.04-1     |   154
 0.02-1     |   151
 1.02-1     |   141
 0.006-1    |   133
 1.001-1    |   131
 4:4.10.5-1 |   131
 0.7-1      |   127
(I’m also participating in the “longest version number” contest, I’ve just uploaded bind9 version 1:9.8.4.dfsg.P1-6+nmu2+deb7u1~bpo60+1 to backports.)
How not to monitor a boolean
We were lazy and wrote a simple PostgreSQL monitoring check in shell instead of using some proper language. The code looked about this:
out=$(psql -tAc "SELECT some_stuff, t > now() - '1 day'::interval FROM some_table" some_db 2>&1)
case $out in
    *t) echo "OK: $out" ;;
    *)  echo "NOT OK: $out" ;;
esac
If the string ends with ‘t’, all is well; if it ends with ‘f’ or something else, something is wrong.
Unfortunately, this didn’t go that well:
OK: psql: FATAL: database “some_db” does not exist
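A more robust variant would check psql’s exit status separately and anchor the match on the column separator, so an error message that merely happens to end in “t” (like the FATAL one above, ending in “exist”) cannot count as OK. A sketch; the classify helper is my invention:

```shell
#!/bin/sh
# classify STATUS OUTPUT - decide OK/NOT OK from psql's exit status and output.
# The real check would call: out=$(psql -tAc "..." some_db 2>&1); classify $? "$out"
classify() {
    status=$1
    out=$2
    if [ "$status" -ne 0 ]; then
        echo "NOT OK: $out"               # psql itself failed
    else
        case $out in
            *"|t") echo "OK: $out" ;;     # last column is boolean true
            *)     echo "NOT OK: $out" ;;
        esac
    fi
}
classify 0 'some_stuff|t'
classify 2 'psql: FATAL:  database "some_db" does not exist'
```

The first call prints an OK line, the second a NOT OK line, despite its output ending in “t”.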
PostgreSQL minor releases
We’ve just put the new PostgreSQL minor releases live on apt.postgresql.org. Building 5 major versions for 10 distributions produces quite a lot of stuff:
- 25 .dsc files (source packages)
- 745 .deb files (360 *_amd64.deb + 360 *_i386.deb + 25 *_all.deb)
- 497 MB in *_amd64.deb files
- 488 MB in *_i386.deb files
- 58 MB in *_all.deb files
- 73 MB in *.orig.tar.bz2 files
- in total 1118 MB
Compiling took a bit more than 10 hours on a 2-cpu VM. Of course that includes running regression tests and the postgresql-common testsuite.
Note: This will be the last update published on pgapt.debian.net. Please update your sources.list entries to point to apt.postgresql.org!
apt.postgresql.org
So we finally made it, and sent out an official announcement for apt.postgresql.org.
This new repository hosts packages for all PostgreSQL server versions (at the moment 8.3, 8.4, 9.0, 9.1, 9.2) for several Debian/Ubuntu distributions (squeeze, wheezy, sid, precise) on two architectures (amd64, i386). Now add packages for extension modules on top of all these, and you get a really large number of binaries from a small number of sources. Right now there are 1670 .deb files and 148 .dsc files, but the .dsc count includes variants that only differ in the version number per distribution (we attach .pgdg60+1 for squeeze packages, .pgdg70+1 for wheezy, and so on), so the real number of different sources is more like 81, with 38 distinct source package names.
Dimitri Fontaine, Magnus Hagander, and I have been working on this since I first presented the idea at PGconf.EU 2011 in Amsterdam. We now have a Jenkins server building all the packages, an archive server with the master repository, and a feed that syncs the repository to the postgresql.org FTP (well, mostly http) server.
If you were previously using pgapt.debian.net, that’s the same archive as on apt.postgresql.org (one rsync away). Please update your sources.list to point to apt.postgresql.org, I’ll shut down the archive at that location at the end of January.
Here’s the Quickstart instructions from the Wiki page:
Import the repository key from http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc:
wget -O - http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc | sudo apt-key add -
Edit /etc/apt/sources.list.d/pgdg.list. The distributions are called codename-pgdg. In the example, replace squeeze with the actual distribution you are using:
deb http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main
Configure apt’s package pinning to prefer the PGDG packages over the Debian ones in /etc/apt/preferences.d/pgdg.pref:
Package: *
Pin: release o=apt.postgresql.org
Pin-Priority: 500
Note: this will replace all your Debian/Ubuntu packages with available packages from the PGDG repository. If you do not want this, skip this step.
Update the package lists, and install the pgdg-keyring package to automatically get repository key updates:
apt-get update
apt-get install pgdg-keyring
pgbouncer running on the same hardware
We have a PostgreSQL server with 16 cores that was apparently running well below its capacity: load somewhere between 3.0 and 4.0, around 200 active database connections, almost all of them usually <IDLE>. However, when the tps count reached 7k transactions per second, things started to throttle, and pgbouncer (running on the database server) started listing up to half of the client connections in cl_waiting state. Load was still low, but application performance was bad.
The culprit turned out to be the kernel scheduler, fairly distributing CPU time among all running processes. There’s one single pgbouncer process, but hundreds of postgres processes.
A simple renice of the pgbouncer process did the trick and gave us another extra 2k tps.
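For reference, a minimal sketch of that fix (the pgrep lookup and the niceness value -10 are my assumptions about the setup, not details from the post):

```shell
# Give the single pgbouncer process a scheduling edge over the many
# postgres backends. Needs root; the priority value is just an example.
pid=$(pgrep -x pgbouncer || true)
if [ -n "$pid" ]; then
    renice -n -10 -p $pid
else
    echo "pgbouncer not running"
fi
```

On a live system you would want to make this persistent (e.g. in the init script) so the priority survives pgbouncer restarts.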
Shared Memory and Swapping
We have this PostgreSQL server with plenty of RAM that is still using some of its swap over the day (up to 600MB). Then suddenly everything is swapped in again.
It turned out the reason is there are two clusters running, and the second one isn’t used as heavily as the first one. Disk I/O activity of the first cluster slowly evicts pages from the second shared buffers cache to swap, and then the daily pg_dump run reads them back every evening.
I was not aware of an easy way to get numbers for “amount of SysV shared memory swapped to disk”, but some googling led to shmctl(2):
#define _GNU_SOURCE 1
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <unistd.h>

int main ()
{
    struct shm_info info;
    int max;
    long PAGE_SIZE = sysconf(_SC_PAGESIZE);

    max = shmctl(0, SHM_INFO, (struct shmid_ds *) &info);
    printf ("max: %d\nshm_tot: %ld\nshm_rss: %ld\nshm_swp: %ld\n",
            max,
            info.shm_tot * PAGE_SIZE,
            info.shm_rss * PAGE_SIZE,
            info.shm_swp * PAGE_SIZE);
    return 0;
}
The output looks like this:
max: 13
shm_tot: 13232308224
shm_rss: 12626661376
shm_swp: 601616384
Update: Mark points out that ipcs -mu shows the same information. Thanks for the hint!
# ipcs -mu
------ Shared Memory Status --------
segments allocated 2
pages allocated 3230544
pages resident  3177975
pages swapped   51585
Swap performance: 0 attempts     0 successes
grep -r foobar
In Wheezy’s grep version [1], you can omit the “.” in
$ grep -r foobar .
and just write
$ grep -r foobar
[1] actually since 2.11
mkfs.ext3 -b 1024
If you think you are smart and create an ext filesystem with 1024 bytes blocksize because there will be zillions of very small files, and then run into ENOSPC errors while there’s both space and inodes left, you will probably see ext3_dx_add_entry: Directory index full! in the kernel log.
Turns out that there’s a limit of approximately 300,000 files per directory with 1k blocks, after which some hash tables are full. Recreate the filesystem with 2k blocks and the limit will be MUCH higher.
PostgreSQL in Debian Hackathon
Almost a year has passed since my talk at pgconf.eu 2011 in Amsterdam on Connecting the Debian and PostgreSQL worlds, and unfortunately little has happened on that front, mostly due to my limited spare time between family and job. pgapt.debian.net is up and running, but got few updates and is lagging behind on PostgreSQL releases.
Luckily, we got the project moving. Dimitri Fontaine and Magnus Hagander suggested to do a face-to-face meeting, so we got together at my house for two days last week and discussed ideas, repository layouts, build scripts, and whatnot to get all of us aligned for pushing the project ahead. My employer sponsored my time off work for that. We almost finished moving the repository to postgresql.org infrastructure, barring some questions of how to hook the repository into the existing mirror infrastructure; this should get resolved this week.
The build server running Jenkins is still located on my laptop, but moving this to a proper host will also happen really soon now. We are using Mika Prokop’s jenkins-debian-glue scripts for driving the package build from Jenkins. The big plus point about Jenkins is that it makes executing jobs on different distributions and architectures in parallel much easier than a bunch of homemade shell scripts could get us with reasonable effort.
Here’s a list of random points we discussed:
- We decided to go for “pgdg” in version numbers and distribution names, i.e. packages will have version numbers like 9.1.5-1.pgdg+1, with distributions wheezy-pgdg, squeeze-pgdg, and so on.
- There will be Debian-testing-style distributions named wheezy-pgdg-testing and so on, which packages go into for some time before they get promoted to the “live” distributions.
- PostgreSQL versions out of support (8.2 and below) will not be removed from the repository, but will be moved to distributions named wheezy-pgdg-deprecated and so on. People will still be able to use them, but the naming should make it clear that they should really be upgrading.
- We have a slightly modified (compared to Debian unstable) postgresql-common package that sets the “supported-versions” to all versions supported by the PostgreSQL project. That will make the postgresql-server-dev-all package pull in build-dependencies for all server versions, and make extension module packages compile for all of them automatically. (Provided they are using pg_buildext.)
- There’s no Ubuntu support in there yet, but that’s mostly only a matter of adding more cowbuilder chroots to the build jobs. TBD soon.
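To illustrate the supported-versions mechanism mentioned above: an extension package built with pg_buildext carries its list of supported server versions in debian/pgversions. A sketch for the era described here (the exact version list is an assumption, not taken from the post) might be:

```
8.3
8.4
9.0
9.1
```

pg_buildext then builds the module once for each listed version that has a matching postgresql-server-dev-* package installed.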
We really aim at using unmodified packages from Debian as much as possible; this project is not meant to replace Debian’s PostgreSQL packaging work, but to extend it to more server versions (and more Debian and Ubuntu releases) than Debian itself covers. The people behind the Debian and Ubuntu packages and this repository are mostly the same, so we claim that “our” packages will have the same quality as the “original” ones. Big thanks go to Martin Pitt for maintaining the postgresql-common testsuite that really covers every aspect of running PostgreSQL servers on Debian/Ubuntu systems.
Stay tuned for updates! :)
Machine Check Exception
Today’s wtf:
Wed Jul 18 20:25:01 CEST 2012 MCA: Generic CACHE Generic Generic Error
Cool Unix Features: /usr/bin/time
Ever wondered how much memory a program needed? Install the “time” package:
$ /usr/bin/time ls
[...]
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 4000maxresident)k
0inputs+0outputs (0major+311minor)pagefaults 0swaps
Unfortunately the “time” bash built-in makes it necessary to use the full path.
Thanks to youam for the tip.
Update: aba notes that calling \time works as well. Thanks!
How not to edit files
When packaging new backport versions, I diff the old Debian package with the old backport package to extract the changes I did there, and then apply this patch to the new version. There is always a reject in debian/changelog because the topmost bpo entry won’t apply cleanly to the new changelog. To fix this, I invoke “vi debian/changelog*” and manually copy the rejected hunk.
Unfortunately, I regularly end up copying it from debian/changelog.rej (buffer 3) into debian/changelog.orig (buffer 2) instead of debian/changelog (buffer 1). Here’s the fix in my .vimrc:
" Prevent accidental editing of patch .orig files autocmd BufRead *.orig set readonly
Generating symbols files from a series of .deb files
PostgreSQL’s libpq5 package got its symbols file somewhere around version 8.4, so the symbols in there were all marked with version “8.4~” or greater. As I’m working on pgapt.debian.net aiming at providing packages for all upstream-supported PG versions (currently 8.3 and up), this made packages incompatible with 8.3’s libpq5.
There didn’t seem to be a ready program to feed a series of .deb files which would then run dpkg-gensymbols and build a symbols file, so I wrote this shell script:
#!/bin/sh
set -eu
[ -d tmp ] || mkdir tmp
i=1
for pkg in "$@" ; do
echo "$pkg"
test -e "$pkg"
name=$(dpkg-deb -I "$pkg" | perl -lne 'print $1 if /^ Package: (.+)/')
version=$(dpkg-deb -I "$pkg" | perl -lne 'print $1 if /^ Version: (.+)/')
out=$(printf "tmp/%03d_%s" $i "$version")
dpkg-deb -x "$pkg" "$out"
dpkg-gensymbols -P"$out" -p"$name" -v"$version" \
${oldsymbols:+-I"$oldsymbols"} -O"$out.symbols" | \
tee "$out.symbols.diff"
test -s "$out.symbols.diff" || rm "$out.symbols.diff"
oldsymbols="$out.symbols"
rm -rf "$out"
i=$(expr $i + 1)
done
To use it, do the following:
- debsnap -a i386 libpq5
- ls binary-libpq5/*.deb > files
- edit "files" to have proper ordering (~rc versions before releases, remove bpo versions, etc.)
- ./walk-symbols $(cat files)
The highest-numbered *.symbols file in tmp/ will then have symbol information
for all packages. I then did some manual post-processing like s/rc1-1// to
get nice (and backportable) version numbers.
Another nice trick (pointed out by jcristau) is to replace the lowest version number of that package (8.2~ here, where libpq changed SONAME) by 0 which will make dpkg-shlibdeps omit the >= version part. (Most packages depending on libpq5 will profit from that.)
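For illustration, a (hypothetical, heavily abbreviated) libpq5 symbols file after applying that trick — the oldest symbols carry version 0 instead of 8.2~, so packages using only those get an unversioned dependency; the version tags shown are illustrative, not the real file:

```
libpq.so.5 libpq5 #MINVER#
 PQclear@Base 0
 PQconnectdb@Base 0
 PQconnectdbParams@Base 9.0~
```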
I’m still pondering whether this script is non-trivial enough to add to devscripts. (The people I asked so far only made comments about the mkdir call…)
Tab completion in vim
Vim’s habit of completing the full filename of the first match on :e has always annoyed me. Rhonda pointed me to wildmode - thanks!
set wildmode=longest,list:longest,list:full
Kudos to dh_python2
Transitioning Python modules to dh_python2 is straightforward. I removed LOADS of magic from python-radix. I especially like the complexity reduction in debian/rules, but debian/control also got some fields removed.
DLV removes all *.de records
Because of some problem with Denic’s DNSSEC testbed and bind resolvers, dlv.isc.org has removed all DLV records for *.de domains. WTF.
PostgreSQL in Debian
At work, I’m dealing with lots of different database setups, luckily mostly PostgreSQL running on Debian. At the same time, a fair amount of the tools in the PostgreSQL ecosystem (not the PostgreSQL server packages itself) are not in the best shape in the Debian archive.
I’m trying to change that by adopting some of the packages. So far, I have fixed a few RC bugs where packages were suddenly trying to build against PostgreSQL 9.0 while expecting 8.4. To my surprise, there are no packages yet in the archive that support multiple PostgreSQL versions in parallel. There is even a package ready to help with this - postgresql-server-dev-all - but apparently nobody has used it yet. It turned out that after working around a few trivial problems and adding just a few lines of sh code, it was pretty straightforward to port skytools and postgresql-pllua to 9.0 while keeping 8.4 support. The latter has no version-specific code left in debian/ except for a list of supported versions in debian/pgversions, so a future port to 9.1 will be trivial. (Fun fact: the old postgresql-pllua version 0.8.1 was actually a typoed 0.3.1 version.)
Most PostgreSQL tools use a common subversion repository on Alioth, but there is no common mailing list address that is put into the Uploaders fields, so it is hard to get an overview over the state of all packages. I’ve compiled a list of all packages in svn, git, bzr (the server packages), and a few others in DDPO to fix that for now.
Other packages I’ve updated so far are pgtune, pgbouncer, and pgpool2.
Heartbeat and bind9
Using a virtual IP managed by heartbeat is a nice way to work around the slow failover behavior of multiple nameserver entries in /etc/resolv.conf. But unlike ntpd, bind9 doesn’t automatically listen on new IPs appearing on the system.
Here’s a cute hack to fix that:
# cat /etc/ha.d/haresources
server01 bind9release IPaddr::10.0.0.3 bind9takeover
# cat /etc/ha.d/resource.d/bind9release
#!/bin/sh
# when giving up resources, reload bind9
case $1 in
stop) /etc/init.d/bind9 reload ;;
esac
exit 0
# cat /etc/ha.d/resource.d/bind9takeover
#!/bin/sh
# on takeover, reload bind9
case $1 in
start) /etc/init.d/bind9 reload ;;
esac
exit 0
HP 2605dn not recognizing new cartridge
When my wife bought her HP 2605dn color laser, she had also ordered a set of replacement cartridges because the originals would supposedly not last very long. It turned out that they would last over two years, though.
Now, the black cartridge was empty. The replacements are original HP components, but recycled/refilled second hand from a different source. I put in the black one, and the trouble started with the new drum having a big scratch that would appear in every print. After two years, returning the cartridge was not an option. As the old drum was still good, the idea was to put the new toner into the old cartridge. A bit of googling (and visual inspection) quickly revealed that you are not supposed to do this, the device doesn’t have any facilities to access anything.
Some post did indicate though that the cartridge actually consists of two parts that are only held together by two pivot pins and two small springs. If you pull the two parts apart a bit, you can feel the axis the parts are moving around. So, I would try to use the old drum part with the new toner container part.
Removing the springs is easy (as is removing the flap covering the drum, along with a tiny third spring). The two pivot pins are pretty inaccessible, but I managed to pull out both using pointed pliers. One pin is made of metal; if you need more space, you can drill a small hole right next to it. The other pin is made of plastic with a small hole in it; in one case I succeeded by screwing in a little screw and pulling at it.
Reassembling the parts is not so hard. I put the “new” cartridge into the printer, waited for the initialization to finish, only to find out the printer would still think the cartridge was empty and should be replaced. Duh.
I then discovered a small chip attached to one corner of the drum unit, probably carrying product ID and some serial number. Luckily, it was easy to be removed and exchanged with the other drum unit. Still, the printer thought the cartridge was empty. Some (windows) driver re-installations later it was clear that there is no “ignore fill level and print anyway” option. (Cups didn’t print either.)
The HP Support Forum has a long “Ink cartridge empty but it’s not” thread with lots of complaints but few answers. (Their problem is mostly with new HP cartridges becoming “empty” after a few days.) Someone suggests trying to cold-reset the printer. For the 2605dn, that’s holding the left and right buttons while powering on. I removed the cartridges before doing so. When I had put them back in, the printer asked me to confirm that non-HP material was installed, and subsequently didn’t display any fill level for the black cartridge anymore, just “|?|” in the display.
Printing works now. Yay.
(Lesson learned: I still hate dealing with printers.)
I need to look this up every time
I need to look this up every time I need a backport (mostly PostgreSQL) at a customer site with limited networking:
$ lftp -c 'mget http://backports.debian.org/debian-backports/pool/main/p/postgresql-8.4/*_8.4.5-1~bpo50+1_amd64.deb'
Hopefully I can remember this in the future.
belier
Sometimes remote accounts are only reachable with a series of ssh/su/sudo commands. Using ProxyCommands in .ssh/config works for simple cases, but not with several hops, or if passwords have to be entered.
The belier tool takes as input a series of user@hostname strings and will produce an expect script that does the actual login work.
$ cat input
user@host1 root pw1
user2@host2 pw2
$ bel -e input
$ cat host2.sh
#!/usr/bin/expect -f
set timeout 10
spawn ssh -o NoHostAuthenticationForLocalhost=yes -o StrictHostKeyChecking=no user@host1
expect -re "(%|#|\\$) $"
send -- "su - root\r"
expect ":"
send -- "pw1\r"
expect -re "(%|#|\\$) $"
send -- "ssh -o NoHostAuthenticationForLocalhost=yes -o StrictHostKeyChecking=no user2@host2\r"
expect -re {@[^\n]*:}
send -- "pw2\r"
expect -re "(%|#|\\$) $"
interact +++ return
The generated host2.sh script uses ugly ssh options, but is easily edited.
Now I need the same thing for scp…
Update: Carl Chenet, belier’s author, kindly pointed me to the documentation which has examples how belier can set up ssh tunnels to copy files.
Clamav segfaults
If you are still using clamav on etch, you might want to upgrade now:
# /etc/init.d/clamav-daemon start
Starting ClamAV daemon: clamd
LibClamAV Warning: ***********************************************************
LibClamAV Warning: ***  This version of the ClamAV engine is outdated.     ***
LibClamAV Warning: *** DON'T PANIC! Read http://www.clamav.net/support/faq ***
LibClamAV Warning: ***********************************************************
/etc/init.d/clamav-daemon: line 240:  5221 Segmentation fault      start-stop-daemon --start -o -c $User --exec $DAEMON
 failed!
Rolling back to yesterday’s daily.cld fixes the issue, at least for the segfault.
Cool Unix Features: /dev/full
I’ve always thought about collecting random bits of useful/interesting/cool Unix features and the like. Before I let that rot indefinitely in a text file in my $HOME, I’ll post it in a series of blog posts. So here’s bit #1:
Everyone knows /dev/null, and most will know /dev/zero. But /dev/full was unknown to me until some time ago. This device will respond to any write request with ENOSPC, No space left on device. Handy if you want to test if your program catches “disk full” - just let it write there:
$ echo foo > /dev/full
bash: echo: write error: No space left on device
Cool Unix Features: column -t
/etc/fstab files tend to be an unreadable mess of unaligned fields.
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/mapper/benz-root / ext3 errors=remount-ro 0 1
/dev/sda1 /boot ext3 defaults 0 2
/dev/mapper/benz-home /home ext3 defaults 0 2
/dev/mapper/benz-swap_1 none swap sw 0 0
newton:/home /nfs nfs defaults,soft,intr,users 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto 0 0
Let’s remove some whitespace in the third line:
#<filesystem> <mountpoint> <type> <options> <dump> <pass>
And then pipe everything from line 3 to the end through column -t:
# /etc/fstab: static file system information.
#
#<filesystem>            <mountpoint>   <type>       <options>                 <dump>  <pass>
proc                     /proc          proc         defaults                  0       0
/dev/mapper/benz-root    /              ext3         errors=remount-ro         0       1
/dev/sda1                /boot          ext3         defaults                  0       2
/dev/mapper/benz-home    /home          ext3         defaults                  0       2
/dev/mapper/benz-swap_1  none           swap         sw                        0       0
newton:/home             /nfs           nfs          defaults,soft,intr,users  0       0
/dev/scd0                /media/cdrom0  udf,iso9660  user,noauto               0       0
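The pipeline itself was not spelled out above; a sketch (with sample data built inline, so the real /etc/fstab stays untouched) of keeping the first two lines verbatim and aligning everything from line 3 on:

```shell
# Build a small sample fstab, then pass the two header lines through
# unchanged and align the rest with column -t.
cat > fstab.sample <<'EOF'
# /etc/fstab: static file system information.
#
#<filesystem> <mountpoint> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/sda1 /boot ext3 defaults 0 2
EOF
{ head -n 2 fstab.sample; tail -n +3 fstab.sample | column -t; }
```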
Thanks to SP8472 for bringing this to my attention.
Cool Unix Features: date -d @
I’ve always been annoyed about how hard it is to convert seconds-since-epoch to strings. I’ve always been using “date -d ‘1970-01-01 + 1234 sec’”, but as it turned out, that’s wrong because it uses the wrong timezone. Luckily, there’s a slick replacement:
$ date -d '@1234'
Do 1. Jan 01:20:34 CET 1970
The correct version of the “long” form is:
$ date -d '1970-01-01 UTC + 1234 sec'
Do 1. Jan 01:20:34 CET 1970
Cool Unix Features: df -T
Just discovered (thanks to XTaran): df -T – show file system type.
$ df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/benz-root
ext3 6.6G 4.9G 1.4G 79% /
tmpfs tmpfs 1.6G 0 1.6G 0% /lib/init/rw
udev tmpfs 1.6G 228K 1.6G 1% /dev
tmpfs tmpfs 1.6G 0 1.6G 0% /dev/shm
/dev/sda1 ext3 236M 21M 203M 10% /boot
/dev/mapper/benz-home
ext3 135G 82G 53G 61% /home
newton:/home nfs 459G 145G 291G 34% /nfs
Cool Unix Features: flock
Lockfiles are usually hard to get right, especially in sh scripts. The best bet so far was “lockfile” included with procmail, but its semantics are pretty weird. (Try to understand what exit code you want, with or without “-!”.) Not to mention that failing to clean up the lockfile will stop the next cron run, etc.
Since Lenny, util-linux ships “flock”. Now you simply say
$ flock /path/to/lockfile /path/to/command
and are done. If you want it non-blocking, add “-n”:
$ flock -n /path/to/lockfile /path/to/command
I should probably migrate all my cronjobs to use this.
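The non-blocking behavior is easy to verify; a quick sketch (the lock path is arbitrary):

```shell
# Hold the lock in the background for 2 seconds, then show that a
# second non-blocking flock on the same file fails while it is held.
lock=/tmp/flock-demo.lock
flock -n "$lock" sleep 2 &
sleep 0.2
if flock -n "$lock" true; then
    echo "got lock"
else
    echo "lock busy"
fi
wait
```

For a cronjob, migrating is just prefixing the command with flock -n /path/to/lockfile, so overlapping runs are skipped instead of piling up.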
Cool Unix Features: nproc
Since coreutils 8.1 (in Squeeze, not Lenny), there is a command that simply prints out how many processors (cores, processing units) are available:
$ nproc
2
The use case is obvious:
$ make -j $(nproc)
On a side note, this is gnulib’s nproc module wrapped into a C program. If you didn’t know gnulib before (it had slipped my attention until recently), it is a library of portability functions and other useful things. Adding it to a project is simply done by calling gnulib-tool and tweaking a few lines in the automake/whatever build scripts.
PS: do not use nproc unconditionally in debian/rules. Parse DEB_BUILD_OPTIONS for “parallel” instead.
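That parsing is only a few lines of sh; a sketch of the usual pattern (the variable’s word-list format follows Debian Policy, the fallback of one job is my choice):

```shell
# Extract the job count from a parallel=N token in DEB_BUILD_OPTIONS,
# defaulting to a single job when the option is absent.
NUMJOBS=1
for opt in ${DEB_BUILD_OPTIONS:-}; do
    case $opt in
        parallel=*) NUMJOBS=${opt#parallel=} ;;
    esac
done
echo "building with $NUMJOBS job(s)"
```

Run with DEB_BUILD_OPTIONS="parallel=4 nocheck" it reports 4 jobs; with the variable unset it stays at 1.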
Cool Unix Features: paste
paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
One application for this came up today where some tool was called for several files at once and would spit out one line per file, but unfortunately not including the filename.
$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)
[1] See “J” in The ABCs of Unix
Cool Unix Features: stat
“stat” is “date +format” for files:
$ stat -c %s ~/me.jpg # size
520073
$ stat -c %U ~/me.jpg # owner
cbe
No more parsing of “ls” output or similar hacks.
It also shows detailed information about files.
$ stat ~/me.jpg
File: "/home/cbe/me.jpg"
Size: 520073 Blocks: 1024 IO Block: 4096 regular file
Device: fe03h/65027d Inode: 12427268 Links: 1
Access: (0600/-rw-------) Uid: ( 2062/ cbe) Gid: ( 2062/ cbe)
Access: 2010-06-06 12:58:07.000000000 +0200
Modify: 2010-04-09 22:38:46.000000000 +0200
Change: 2010-04-26 14:18:00.000000000 +0200
It supports similar features for stat’ing filesystems.
Firefox: HeaderControl
I’m a fan of LC_MESSAGES=de_DE and localized web pages – almost. Unfortunately, the wording of many translations in Debian often drives me nuts. This applies mostly to web pages and package descriptions. Apologies to all hard-working translators – the percentage of good translations does not outweigh the “wtf” cases for me. Still, I didn’t want to switch to English for all the web.
Rhonda pointed me to a Firefox ^W Iceweasel plugin: HeaderControl. Now *.debian.org is English for me (reading the constitution in German doesn’t solve DAM problems anyway). Yay!
seq is nice, but ...
$ for i in `seq 1 40` ; do ./something $i ; done
$ for i in $(seq 1 40) ; do ./something $i ; done
seq is nice for that, but the syntax feels a bit hard to type. If you don’t need sh compatibility (read: in bash/zsh), try:
$ for i in {1..40} ; do ./something $i ; done
PS: no spaces allowed inside the braces
PS 2: another useful feature is prefixes/suffixes as in “touch {1..40}.txt”
tenace 0.10 released
It has been a while since the last update for tenace, my bridge hand viewer. The highlight in version 0.10 is version 2.0 of the double dummy engine dds which has been updated to support parallel computation in multiple threads. The parscore computation in tenace now uses all available CPU cores. Even my notebook has two CPUs :).
More on the technical side, the GUI has been switched to use GtkBuilder which comes with Gtk so there is no external library needed anymore (previously libglade). The looks are pretty much the same as before, though.
The previous version 0.9 added Windows support via mingw. I would still appreciate it if people could test it and tell me which bits I need to improve.
Upgrading only installed packages
When there is a security update, just logging in to every host and issuing “apt-get install $pkg” doesn’t work, as the package might not be installed there. The fact that apt-get doesn’t understand “apt-get upgrade $pkg” has bugged me for a long time. Recent aptitude versions support that, but that’s not part of Lenny.
Here’s a shell function that does the trick:
upgrade () {
if [ "$*" ] ; then
set -- $(dpkg -l "$@" | grep ^ii | awk '{ print $2 }')
if [ "$*" ] ; then
echo "apt-get install $@"
sudo apt-get install "$@"
else
echo "Nothing to upgrade"
fi
else
sudo apt-get upgrade
fi
}
One application is upgrading a lot of hosts when logged in with clusterssh.
Using multiple IMAP accounts with Mutt
Mutt’s configuration is sometimes more a toolbox than something offering ready-made solutions, and “how do I use multiple accounts?” is one of the most frequently asked questions. Here’s a condensed version of my setup.
In the most simple solution, just ‘c’hange folders to the IMAP server:
c imaps://imap.example.com <enter>
c imaps://imap.otherdomain.tld <enter>
That’s cumbersome to type, so let’s automate it:
# .mutt/muttrc
macro index <f2> '<change-folder>imaps://imap.example.com<enter>'
macro index <f3> '<change-folder>imaps://imap.otherdomain.tld<enter>'
That would be the basic setup.
The two accounts have settings associated with them; we put these in two files:
# .mutt/account.example
set from=me@example.com
set hostname="example.com"
set folder="imaps://imap.example.com/"
set postponed="=Drafts"
set spoolfile="imaps://imap.example.com/INBOX"
set signature="~/.mutt/signature.example"
set smtp_url="smtp://me@mail.example.com" smtp_pass="$my_pw_example"
# .mutt/account.otherdomain
set from=myself@otherdomain.tld
set hostname="otherdomain.tld"
set folder="imaps://imap.otherdomain.tld/"
set postponed="=Drafts"
set spoolfile="imaps://imap.otherdomain.tld/INBOX"
set signature="~/.mutt/signature.otherdomain"
set smtp_url="smtp://myself@mail.otherdomain.tld" smtp_pass="$my_pw_otherdomain"
Now all that’s left to do is two folder-hooks to load the files:
# .mutt/muttrc
folder-hook 'example.com' 'source ~/.mutt/account.example'
folder-hook 'otherdomain.tld' 'source ~/.mutt/account.otherdomain'
# switch to default account on startup
source ~/.mutt/account.example
A slight variation of the macros also uses the account files:
macro index <f2> '<sync-mailbox><enter-command>source ~/.mutt/account.example<enter><change-folder>!<enter>'
macro index <f3> '<sync-mailbox><enter-command>source ~/.mutt/account.otherdomain<enter><change-folder>!<enter>'
To save entering the password all the time, we use account-hooks:
account-hook example.com 'set imap_user=me imap_pass=pw1'
account-hook otherdomain.tld 'set imap_user=myself imap_pass=pw2'
Putting passwords in configs isn’t something I like, so I pull them from the Gnome keyring:
set my_pw_example=`gnome-keyring-query get mutt_example`
set my_pw_otherdomain=`gnome-keyring-query get mutt_otherdomain`
account-hook example.com 'set imap_user=me imap_pass=$my_pw_example'
account-hook otherdomain.tld 'set imap_user=myself imap_pass=$my_pw_otherdomain'
(I found gnome-keyring-query in the Gentoo Wiki.)
Martti Rahkila has a more verbose article with similar ideas.
Update 2013-01-02: smtp_url and smtp_pass added
Dedicating Lenny to Thiemo
I just finished packing up the signature collection for dedication-5.0.txt. Hopefully I didn’t miss any signatures; saving about 400 attachments is tedious. Fortunately most people got the filenames right. On the down side, there were about 80 signatures that I couldn’t verify because the corresponding key could not be found on the key servers. I’ve mailed the owners, and about 20 have uploaded their key so far.
There are 355 valid signatures. Thanks to all for such a huge participation, that’s about twice the number (185) we got for dedicating Potato to Espy in 2000. (These were DD-only, though.)
Comparing to the various Debian keyrings, 207 active DDs participated, 3 former DDs, 12 Debian maintainers, and 133 other contributors.
Let’s hope Lenny will be as great as Thiemo would have liked it.
RFH: mutt
From: Christoph Berg <myon@debian.org>
To: Debian Bugs Submit <submit@bugs.debian.org>
Subject: Bug#512072: RFH: mutt -- text-based mailreader supporting MIME, GPG, PGP and threading
Package: wnpp
Hi,
the Debian Mutt package needs more maintainers.
There are almost 200 open bugs. Some of these are already forwarded upstream and might just need some triaging/poking. Some need forwarding. Some might be fixed with a trivial patch. Others are Debian specific, mostly for the extra patches we include. There’s duplicates and sub-wishlist items. We are using bts-link to link to dev.mutt.org’s trac, but that also needs more tweaking.
There’s a new upstream version 1.5.19 pending, we need people to check which bugs still apply.
The (in?)famous mutt-patched package needs love to get the sidebar patch updated to the new version, and to get some of mutt-ng’s extra gimmicks and add-ons integrated.
I’ve moved packaging to git since that seems to be the least scary choice nowadays:
http://git.debian.org/?p=pkg-mutt/mutt.git
The repository has been updated to 1.5.19. The mutt-patched build is disabled for now, as the patch doesn’t apply. Testers welcome :)
If you are interested in helping out, please get in contact with me. To get started, subscribe to mutt’s PTS feed. There’s no extra maintainer mailinglist, mailing mutt@packages.debian.org will work for PTS subscribers. If you prefer IRC, join #mutt on irc.freenode.net.
Thanks,
Christoph
Too much magic in Gentoo
Five years after the release of patch 2.5.9, patch 2.6 was released two weeks ago. The most interesting feature is that it will now produce reject files in unified format when the input is in that format. Lucky for those of us whose brain hurts when trying to read context diffs.
Of course there are more changes, and one of them is affecting Gentoo in particular: Gentoo service announcement: keep clear of GNU patch-2.6. The linked Gentoo bug #293570 has attachments to reproduce the bug:
benz:~/tmp/tripwire-2.3.1-2 $ patch -p0 -F3 < ../tripwire-friend-classes.patch
patching file tripwire-2.3.1-2-p1/src/fco/fconame.h
Hunk #1 succeeded at 48 with fuzz 3 (offset -1 lines).
patching file tripwire-2.3.1-2-p1/src/fco/fcosetimpl.h
Hunk #1 succeeded at 45 with fuzz 3 (offset -1 lines).
patching file tripwire-2.3.1-2-p1/src/tw/fcoreport.h
Hunk #1 succeeded at 84 with fuzz 3 (offset -1 lines).
benz:~/tmp/tripwire-2.3.1-2 $ ls -l
drwxr-xr-x 4 cbe credativ 4096 18. Nov 05:03 src/
drwxr-xr-x 3 cbe credativ 4096  2. Dez 12:23 tripwire-2.3.1-2-p1/
Note that it created a new tripwire-2.3.1-2-p1 directory inside the existing one, which is consistent with -p0. The newly created files contain just the “+” lines from the patch. (There are no “-” lines.) patch 2.5.9 would have refused the patch because it couldn’t find the files to patch.
This is arguably surprising behavior, but what really happens here is that -F3 (fuzz 3) explicitly tells patch to ignore up to 3 context lines if it cannot apply the patch otherwise. diff’s default behavior is to create patches with 3 lines of context; patch’s default fuzz factor is one less than that, 2. Now, 3 - 3 is zero, which doesn’t leave any context left.
Apparently lots of Gentoo ebuilds use -F3, and try to guess the correct number of directories to strip, starting with -p0, -p1, … No wonder that things break badly for them now, as patches will suddenly succeed with -p0.
I’m not sure I like the new patch behavior, but in Gentoo’s case I’d say they are using way too much magic in their build system, and now get bitten by options they shouldn’t have put there in the first place.
Useless statistics
projectb =>
SELECT
array_accum(SUBSTRING(source FROM 1 FOR 1)) AS first,
SUBSTRING(source FROM 2) AS rest
FROM
(SELECT DISTINCT source FROM source) AS source
GROUP BY rest
HAVING
COUNT(SUBSTRING(source FROM 1 FOR 1)) >= 4
ORDER BY rest;
first | rest
---------------------+---------
{c,f,j,l,p,s} | am
{a,g,n,y} | ap
{c,d,p,r,t,x} | ar
{b,d,l,s} | ash
{h,i,l,n,r,v} | at
{b,m,r,s} | c
{g,j,p,x} | cal
{e,g,p,x} | cb
{d,j,k,n,t} | cc
{a,f,i,v} | check
{a,d,e,g,m,u} | cl
{e,j,n,q,u,x} | d
{e,g,l,t,x} | db
{c,d,r,s} | dd
{j,l,o,u} | de
{f,g,l,s,w,x} | dm
{a,l,p,u} | dns
{e,l,n,q,r} | e
{j,s,t,x} | ed
{g,l,m,n} | edit
{c,e,f,x} | fingerd
{b,d,l,p,x} | fm
{g,k,p,y} | forth
{c,g,j,l} | ftp
{b,g,m,o,p} | identd
{f,k,t,v,z} | ile
{s,u,v,z} | im
{b,d,m,s,w} | ing
{b,g,w,z} | ip
{j,l,s,t} | irc
{a,c,f,j,u} | lex
{e,p,s,z} | lib
{f,k,r,x} | log
{i,m,o,q,s,v} | m
{c,q,s,w,x} | mail
{c,d,h,m,o,p,s,t,x} | make
{e,i,l,x} | mms
{b,g,x,z} | oo
{b,c,r,s} | play
{e,f,g,r} | pm
{g,p,t,w,x} | pp
{i,q,w,x} | print
{a,b,m,o} | sc
{b,c,d,k,p,z} | sh
{a,f,p,x} | sp
{a,h,s,x} | t
{g,u,x,y} | talk
{a,e,i,k,q,w,x} | term
{a,h,i,m,n,p} | top
{c,k,q,r} | torrent
{a,h,l,n,o} | tp
{g,i,j,o} | ts
{c,j,n,o,r} | unit
{f,g,l,p,w} | v
{c,k,p,r} | vm
{i,l,n,s,x} | watch
{9,b,d,j,l,p,t} | wm
{g,j,r,w,x} | zip
(58 rows)
… and the winner is make!
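For the curious, the same grouping can be sketched in a few lines of Python, run here over a tiny sample of package names taken from the table above (the real query of course scans the whole source table):

```python
from collections import defaultdict

# Group source package names by everything after the first letter and
# collect the differing first letters, keeping groups with >= 4 members.
# Sample names are taken from the "make" and "ed" rows of the table above.
packages = ["cmake", "dmake", "hmake", "mmake", "omake", "pmake", "smake",
            "tmake", "xmake", "jed", "sed", "ted", "xed", "mutt"]
groups = defaultdict(list)
for name in sorted(set(packages)):
    groups[name[1:]].append(name[0])
winners = {rest: first for rest, first in groups.items() if len(first) >= 4}
print(winners)  # {'make': ['c', 'd', 'h', 'm', 'o', 'p', 's', 't', 'x'], 'ed': ['j', 's', 't', 'x']}
```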
Arrogant database
Imagine every Unix command would…
$ mysqladmin foo 2>&1 | hd
00000000  07 6d 79 73 71 6c 61 64  6d 69 6e 3a 20 63 6f 6e  |.mysqladmin: con|
… prepend ^G to its error messages.
Cruftiness
Definition: A system’s cruftiness is the number of
“apt-get remove `deborphan` ” invocations needed to clean up.
Dear irssi
when a connection is lagging, and I type /reconnect, please don’t try to reconnect to the same IP in the rotation.
Thank you.
debcheckout saves the day
I have a small set of config files and cfengine scripts that I want to deploy on every machine I own. Of course, the Debian way is to put them into a package, and install that. So far so good.
Problems arise when trying to maintain this package on several different $HOME directories - depending on where I am when I want a change. Using $vcs (svn in this case) works, but I’d also like source packages. Only that I occasionally forget where I put the repository. Or uploading new source packages is too tedious.
The latter problem was solved by a cute combination of dput and reprepro. My .dput.cf has targets that rsync the package over to my server, and automatically install the packages into the repository there.
For the repository problem, Stefano Zacchiroli’s debcheckout comes to the rescue. Vcs-Svn: in debian/control points to my svn+ssh:// repository. Instant “DWIM” checkouts! (To actually make this work, the latest cfengine tweaks add deb-src lines for my local repository that were previously missing.)
Kernelspotting
I just discovered a gem in the watch(1) manpage:
You can watch for your administrator to install the latest kernel with
watch uname -r
(Just kidding.)
Mouseover titles
Best(*) Firefox extension ever: Long Titles
(Spotted on http://xkcd.org/about/)
(*) PS: Of course Open in browser and Generic URL creator are also way cool.
mutt-patched
There have been several long-term wishlist bugs on Mutt to include the (in?)famous “sidebar” patch. It adds a panel on the left side of the screen to list mailboxes with message counts.
We do ship several patches with Debian’s Mutt package that are not considered for inclusion by the upstream authors, but this one is different. It touches the core of the UI renderer and adds extensive counting of messages. There’s been quite a hype about the patch, it was even included in the (now discontinued [1]) mutt-ng fork. Unfortunately, the patch was technically questionable, it was written for the Mutt tarballs and didn’t easily apply to CVS, and most versions floating around contained lots of cruft like backup files and temporary editor files. Furthermore, it is far from bug-free and even segfaults occasionally.
Some months ago we decided we would build a second binary package to put the patch in so we don’t destabilize the standard package. Thanks to Dann Frazier, we got a clean patch; Adeodato had the right idea on how to build two binaries from the same, differently-patched source (make install would run havoc, the trick was to build the patched version before the regular one), and I resolved some more involved conflicts with the maildir-mtime patch.
So, finally, there it is: mutt-patched.
Disclaimer: As said, there are bugs. YMMV.
In related news, there’s now also a -dbg package for the victims of the IMAP and header cache segfaults that we still see sometimes.
Update: The sidebar patch was not the reason for the creation of mutt-ng, but many mutt-ng users were using it because of this patch.
Update 2008-01-08: [1] mutt-ng used to be a Mutt fork. It is now maintained as a patch collection for Mutt.
New toy
Soon after Bernd Zeimetz joined our company, he infected me with the geocaching virus. In the meantime, I bought a GPS receiver, and am now endeavoring around the town and the nearby woods, searching for caches.
There’s not that much free GPS and/or geocaching software around. gpsd works quite well, but I don’t take my computer to the woods ;) Viking is very nice for visualizing waypoints and sorting them in categories, but is still somewhere between alpha and beta state. gpsbabel reads and writes a zillion different file formats and talks to a vast array of receivers, but often needs several attempts to upload data to mine. Things are moving fast though and the GPS community looks active.
Last week I started looking into OpenStreetMap and already added a few roads in my neighborhood. Compared to commercial maps, there’s still much to do, especially for smaller tracks and features like foot bridges or woods outside bigger cities. I’ll keep collecting GPS data :) For the free software side, neither OSM’s online editor worked here (flash), nor the recommended tool for local use (java), so I tried merkaartor (qt4) which works nicely. It also has some bugs, but feels very usable (though sometimes slow). Starting from upstream’s debian/ dir, building packages was straightforward, and after some more editing, packages are now in unstable. (The svn repository is in collab-maint, contributors welcome.)
Packages up for adoption
(Advertisement)
I have several packages to give away:
- endeavour - file and disk management suite
- avscan - GTK frontend for the Clam AntiVirus scanner (ClamAV)
- cryopid - Dumps a process into a self-executing file
- xar - eXtensible ARchiver
- minimalist - MINImalistic MAiling LISTs manager
- dtmfdial - DTMF Tone Dialer
If you are interested in taking any of these, please contact me.
The first three have open RC bugs that need some attention; they have been orphaned for some weeks, with no takers. endeavour and avscan are already on the way to RM, so if you are interested in these, speak up now.
Thanks :)
RFC wanted
There should be a law or something forbidding this:
Root> ^D
(nothing happens)
Root> quit
Error 234: Invalid command
Root> exit
Error 234: Invalid command
Root> logout
Connection to remote host closed.
Tenace
In contrast to chess, there is very little free software for bridge players. Some deal generators have been around for years, but basically, that was all.
Nowadays the major online bridge site is bridgebase.com. Their Windows client actually runs quite well in cedega (and wine), with the notable exception that the built-in double dummy solver doesn’t work there. (A double dummy solver is a program that computes the optimal line of play with all four hands open (both sides “dummy”), and hence can compute the theoretically optimal score for a given board.)
Furthermore, viewing records of played boards is a bit tedious as launching the viewer of course requires wine as well. Editing boards is not possible. This is where I started writing some perl scripts that would at least dump the board records as text files. At the same time I thought about doing something interesting with Gnome’s glade UI builder. Over the past two years, I have been working on a GTK+ version of a bridge hand viewer and editor I called tenace.
When Bo Haglund released his double dummy solver library as GPL software, I packaged it for Debian and worked on integrating it into tenace. It can compute the “best” cards to play and determine par scores.
Now tenace should be stable enough so I can risk announcing it to the world.
Coincidentally, the screenshot shows a board from last week’s club championships. East can beat 3NT by returning a Spade, but at my table they didn’t and so the board became my first squeeze hand :)
Time between releases
19:19 <BTS> Closed #31581 in project by Christoph Berg (myon) «Time between releases is too long». http://bugs.debian.org/31581
Sounds like now I can claim I’ve fixed Debian all by myself.
Time machine
Wanted: a device that will rewind time, turn off greylisting in my mail server, wait for the other side to send the mail, and then resume business as before.
cfengine
I’ve given the idea of centrally configuring my hosts another go. Previously I had some meta packages that would pull in packages, but that’s not very interesting. Furthermore the archive I set up didn’t scale, dpkg-scanpackages plus makefiles aren’t really fun to use.
Now, I have a set of cfengine scripts that gets distributed as .deb. That sounds messy, but connecting reprepro with the right .dput.cf makes updating a breeze.
cfengine itself is a beast not easily tamed. It has some weird ideas about timeouts and when (not) to execute scripts. So far I’m only using “editfiles” in this setup to do tweaks like comment HashKnownHosts in ssh_config, add some sources.list entries, add %adm to sudoers, etc. Next step will be to also automatically push config into chroots, and to pull passwd.db and friends for use with libnss-db.
Datamining
I’m just digging through OFTC’s nickserv database to do some cleaning. We have a bit over 20k nicknames in the database on 18k accounts which means about 10% of registered nicks are linked to other master nicks.
By the power of sql, here’s some statistics on the domain names of the email addresses of our users:
com   9144
net   2241
org   1778
de    1016
uk     392
nl     288
fr     223
edu    217
au     203
it     193
br     190
ru     174
ca     165
se      88
dk      74
at      70
fi      62
cx      59
info    58
gmail.com              3997
hotmail.com            1405
yahoo.com               843
gmx.de                  221
gmx.net                 201
web.de                  156
debian.org              148
free.fr                  94
aol.com                  90
msn.com                  73
comcast.net              72
gentoo.org               71
mail.ru                  65
xs4all.nl                59
linuxmail.org            58
verizon.net              57
yahoo.co.uk              54
yahoo.com.br             45
googlemail.com           45
student.uq.edu.au        44
sbcglobal.net            36
earthlink.net            33
users.sourceforge.net    32
The numbers could use some aggregation as some providers use zillions of TLDs (yahoo, gmx).
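That aggregation step could be sketched like this (Python, using the yahoo/gmx counts from the list above; folding a provider's many domains into one bucket keyed by the first label):

```python
from collections import Counter

# Counts copied from the domain statistics above; the full table would
# of course include all domains, these two providers just illustrate it.
domains = {"yahoo.com": 843, "yahoo.co.uk": 54, "yahoo.com.br": 45,
           "gmx.de": 221, "gmx.net": 201}
by_provider = Counter()
for dom, count in domains.items():
    by_provider[dom.split(".")[0]] += count
print(by_provider.most_common())  # [('yahoo', 942), ('gmx', 422)]
```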
My personal favorite in there is root@localhorst :-)
DebConf: Arrived
You know you have left the continent when you see this:
I’ve arrived safely, only that I made the same mistake as always - leaving the house without writing down (or even looking up) the precise address where to go. Luckily the information centre at the EDI airport had a web browser. (And they didn’t get scared away by repeated “This page contains insecure media” warnings on debconf.org.)
DebConf: Flights booked
I just booked my Debconf flights. Airline websites are a major PITA. Germanwings wasn’t even available, “maintenance today from 23 to 2h” - sorry, no happy customer.
Ryanair’s appearance was the worst. I don’t mind crappy html as long as it works, but why do I have to choose between “Herr” (Mr), “Frau” (Mrs), “Mrs” (!), and “Miss”? When I said I lived in “Mönchengladbach”, they replied with the equivalent of “Please do not use special characters like { } | < > [ ].” Apparently I’m allowed some amount (“20kg 15kg for travel past November 2006”) of luggage (except if I’m below the age of 2), but later they charge for it. Then there’s a drop-down menu asking me in which country I live, but it doesn’t really mention that this is where I select whether I want travel insurance (I don’t). At the end of the page I’m asked to confirm that I accept the travel insurance terms (I still don’t), but that’s also the checkbox for their general terms of service. That flight would have gone through Prestwick (PIK), which is apparently fairly well connected to Edinburgh, but the flight back would leave at 7.50 am. I don’t think I like that.
In parallel, I tried easyjet.com. They fly from Dortmund, and while looking up where that airport is, I first tried “airport lounges”, which was obviously the wrong place. A bit more hidden was a list of airports, but it didn’t include Dortmund. Google to the rescue… Then they tell me “your total amount is 99,71€”, only to add another 7,50€ on the next page just because I’m paying by credit card - who doesn’t? Of course I’m willing to fill out personal information like my address etc, but why do they want my phone number and email address? And why do they claim “you haven’t entered a phone number” when I put one (rather a fake one) in the “mobile phone” field (below “home”)? What really drove me crazy was when they refused to accept the booking because I left the “we like to find out about our customers […] reason for your trip” field empty. That’s a drop-down menu with the choices “Business”, “Visiting friends or family and staying with them”, “Holiday”, and “Visiting friends or family but staying elsewhere”. No, I’m not going to tell them. (I randomly selected “Business”.) In the end, I booked there, but it wasn’t really fun.
Whatever. Meet you at Debconf :)
[Update] Oh, and I had to confirm I “have read, understood, and accept easyJet’s terms and conditions, including the new rules for hold baggage”. That’s a minimum of 10kB of legal blurb - a simple “accept” checkbox would have been enough…
DebConf: Home Again
I’m finally home again.
Thanks to all the people who make DebConf such a great experience.
DebConf: In Transit
10:51 -!- Irssi: Topic: -: So: Myon
10:51 -!- Irssi: Topic: +: Transit: Myon
Given the steady stream of “are you here yet?” questions in my irc client, it looks like I will meet a lot of old and new friends in EDI.
My flight will leave from DTM at 17:30 today, arriving at 18:15 in EDI. Still pondering if going by train or car, probably the former.
gpg --send
Although I missed the deadline for the DC7 KSP, I just imported the keyring. First I ran gpgsigs -r ksp-dc7.txt, and then gpg --import ksp-dc7.asc. The result is scary:
gpg: Total number processed: 182
gpg:               imported: 44  (RSA: 3)
gpg:              unchanged: 117
gpg:           new user IDs: 18
gpg:            new subkeys: 1
gpg:         new signatures: 64
In other words, there are about 18 people who have added a UID, but not sent it to a keyserver. One key wasn’t even there. This includes a fair number of people whom I usually trust to handle such things more carefully. Please consider running gpg --send :)
Update: On a closer look, at least one of the “new user IDs” was already present on subkeys.pgp.net. Maybe my gpg was just too stupid to handle its own keyrings. (The missing key was really not there, I’ve uploaded it in the meantime.)
IPv6
Thanks to sixxs.net I’m now connected to the web 6.0. Setting up the tunnel on the server machine was easy thanks to aiccu, only the openvpn routes for my other machines were a bit trickier. (Openvpn doesn’t support “mode server” for ipv6 yet.)
On a sidenote, freewrt is much nicer to use on my Asus WL500gp than openwrt - feels much more like Linux (even Debian) than the nvram stuff on openwrt.
Looking for a Window Manager
I had been using fvwm2 for some 10 years when around the beginning of this year, I thought it might be time for a change. My config was originally copied from some SuSE templates and then heavily tweaked over the years, but recently broke more and more in subtle ways with new fvwm upstream versions, e.g. moving windows suddenly required a different mouse button. Probably the config was just slightly out of spec and fvwm got “fixed”, but it was annoying.
The “tiled” window managers I had seen on others’ desktops made me curious, so I gave ion3 a try some months ago. The overall appearance was all cool, but it tried a bit too hard to squeeze all windows into tiled windows - of course there’s the floating workspace, but creating one was weird, and moving windows even weirder. And if only the (default) key bindings were more vi-like… I admit I never really bothered to read the documentation - probably everything would have been much nicer otherwise.
Then there was the big license “wtf” with ion, at which point I started looking around further. dwm looks clearly too l33t to be serious, so wmii was the next choice, 3.1 to be exact. vi key bindings, a nicely configurable status bar, easy workspace switching and window moving. wmii doesn’t have horizontally split windows (windows are always in columns) and no “tabbed” windows, though. It didn’t warp the pointer to the currently active window (something I got used to with fvwm), but some config tweaks mostly fixed that. This time, I read the documentation, but there’s only a 10-page PDF beginners document - then again, there’s not much to configure anyway.
Then came lenny. Testing and unstable currently feature version 3.6, which is a big disappointment. It is broken (the default config is unusable), and even after fixing that, it looks like they removed all the little details that I liked. The status bar is still there (though with a new location in the virtual filesystem), but the color cannot be changed anymore (unless changing some other color as well). The windows used to have slim 1-pixel (configurable) borders. Now they still do, but the border will be expanded if the window chooses a different size - xterm always rounds down to the next character size, so all my terminals have fat surroundings now. The last-active window in an inactive column used to have some markup so I knew which window Alt-Left/Right would return to; now I have to guess. I wouldn’t mind fixing my 3.1 config for 3.6, but I don’t think I will, given these issues look unfixable. The 3.1 wmiirc file looked like a sh script, the 3.6 one uses eval and a bunch of functions that frighten me. On a positive note, it is now possible to create new workspaces by just selecting them, which is much handier than start-program, wait, move-window-over, move-workspace.
I guess I will give ion (2, 3?) another try…
Mönchengladbach
To also announce it here, I’ve moved to Mönchengladbach to work at credativ, along with several other DDs, PostgreSQL people, and other open source folks. So far work has been pretty cool, and I’m fairly sure it will stay that way.
I’ve been mostly inactive lately because I didn’t have internet access at home yet. In fact, I officially still don’t, but my provider is so “nice” to let me connect to the DSL line, log in, and then tell me via http that username/password were wrong. But, at this point, port 53/udp is open :) Ganneff was so kind to set up an openvpn gateway for me and forward 22/tcp to my server, so I can ssh, and tunnel everything else I like via that.
Sorry to my NMs for any delays lately, I hope to catch up this week.
On a different note, I’m playing with ikiwiki and converting my blog, so please excuse if that causes flooding on planet.debian.org (which I of course hope to avoid by keeping the old timestamps, mmmv).
Mutt hack
When using a <limit> in mutt’s message index, I always try to hit ‘q’ to get back to the unlimited index view, but of course there’s nothing to quit. The macro below changes ‘l’ such that ‘q’ will unlimit, and a subsequent ‘q’ will then quit mutt:
macro index l '<enter-command>macro index q "<limit\>.<enter\><enter-command\>bind index q quit<enter\>"<enter><limit>' 'limit with quit enabled'
(The weird > quotes trick the parser into not parsing <fct> in the outer <enter-command> layer.)
New Printer: HL-2030
I finally decided I needed a printer at home. Of course it had to be a laser. When asked about Linux compatibility, the guys at the local store said “uhmmm… Brother… maybe”. They pointed me at some printer on special offer, a Brother HL-2030 for 110€. Naturally the box didn’t say anything about Linux, but I was promised I could return it if it didn’t work (unless I unpacked the toner cartridge, whatever).
Back at home, I just had to install cupsys, cupsys-client, foomatic-db-engine and foomatic-db (using etch), fetch the HL-2060 (sic) ppd from linuxprinting.org, do some clicks in the CUPS web interface, and everything worked out of the box. Setting the resolution to 1200x600 gave weird results though, so I'm sticking with 600x600 now.
OpenPGP keys in DNS
The latest addition to the mutt CVS tree is PKA support via gpgme. While trying to figure out how that works in mutt (I haven’t yet…) I configured my DNS server for PKA and CERT records.
PKA
PKA (public key association) puts a pointer where to obtain a key into a TXT record. At the same time that can be used to verify that a key belongs to a mail address. The documentation is at the g10code website (only in German so far). I put the following into the df7cb.de zone:
cb._pka IN TXT "v=pka1;fpr=D224C8B07E63A6946DA32E07C5AF774A58510B5A;uri=finger:cb@df7cb.de"
$ host -t TXT cb._pka.df7cb.de
cb._pka.df7cb.de descriptive text "v=pka1\;fpr=D224C8B07E63A6946DA32E07C5AF774A58510B5A\;uri=finger:cb@df7cb.de"
Now gpg can be told to use PKA to find the key:
$ echo foo | gpg --auto-key-locate pka --recipient cb@df7cb.de --encrypt -a
gpg: no keyserver known (use option --keyserver)
gpg: requesting key 58510B5A from finger:cb@df7cb.de
gpg: key 58510B5A: public key "Christoph Berg" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: automatically retrieved `cb@df7cb.de' via PKA
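For illustration, the pka1 TXT record is just a “;”-separated list of key=value fields, so picking it apart is trivial (record string copied from the zone above):

```python
# Parse the pka1 TXT record shown above into its key=value fields.
record = "v=pka1;fpr=D224C8B07E63A6946DA32E07C5AF774A58510B5A;uri=finger:cb@df7cb.de"
fields = dict(part.split("=", 1) for part in record.split(";"))
print(fields["fpr"])  # D224C8B07E63A6946DA32E07C5AF774A58510B5A
print(fields["uri"])  # finger:cb@df7cb.de
```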
CERT
CERT records work similarly. Records are generated by make-dns-cert (from the tools directory in the gnupg source). cb.gpg is a stripped-down gpg keyring (created with pgp-clean -s and converting from .asc to .gpg).
$ ./make-dns-cert -f D224C8B07E63A6946DA32E07C5AF774A58510B5A -n cb
cb TYPE37 \# 26 0006 0000 00 14 D224C8B07E63A6946DA32E07C5AF774A58510B5A
$ ./make-dns-cert -k cb.gpg -n cb
cb TYPE37 \# 1338 0003 0000 00 9901A20440 [...] 509C96D4BFF17B7
With a new bind and host (backports.org!) the format looks a bit nicer, that’s also what I copied into the zone file:
$ host -t CERT cb.df7cb.de
;; Truncated, retrying in TCP mode.
cb.df7cb.de has CERT record PGP 0 0 mQGiBECBGdAR [...] UDlCcltS/8Xtw==
cb.df7cb.de has CERT record 6 0 0 FNIkyLB+Y6aUbaMuB8Wvd0pYUQta
Again, gpg can be told to use that:
$ echo foo | gpg --auto-key-locate cert --recipient cb@df7cb.de --encrypt -a
gpg: key 58510B5A: public key "Christoph Berg" imported
gpg: Total number processed: 1
gpg:               imported: 1
gpg: automatically retrieved `cb@df7cb.de' via DNS CERT
Thanks to weasel for some hints on using CERT.
SSHFP
I’m also mentioning SSHFP records here since it fits in the topic - I have been using them for some months now:
$ host -t SSHFP tesla.df7cb.de
tesla.df7cb.de has SSHFP record 1 1 EE49B803541293656C33B86ECD781BD8F1D78AB5
tesla.df7cb.de has SSHFP record 2 1 3E82FB5EE8AA0205305F0D0186F94D6FB3E0E744
$ ssh -o 'VerifyHostKeyDNS yes' tesla.df7cb.de
The authenticity of host 'tesla.df7cb.de (88.198.227.218)' can't be established.
RSA key fingerprint is 5a:c9:38:ca:c0:2b:11:c1:c8:fb:f1:ad:73:a1:9c:8b.
Matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)?
The records are generated with ssh-keygen -r.
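What ssh-keygen -r emits for fingerprint type 1 is, per RFC 4255, the SHA-1 digest of the raw public key blob, i.e. the base64 column of the .pub file, decoded. A sketch, using a made-up placeholder blob instead of a real key:

```python
import base64
import hashlib

def sshfp_sha1(pubkey_line: str) -> str:
    # Decode the base64 key blob (second field of a .pub line) and hash it;
    # the hex digest is what appears in the SSHFP record.
    blob = base64.b64decode(pubkey_line.split()[1])
    return hashlib.sha1(blob).hexdigest().upper()

# Placeholder key line for illustration; a real one would come from
# /etc/ssh/ssh_host_rsa_key.pub.
print(sshfp_sha1("ssh-rsa " + base64.b64encode(b"not a real key").decode()))
```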
Postpone
I just finished implementing postpone, a wrapper that is intended to take an arbitrary command, fork into the background, wait until some lockfile is freed, and then run the command. Of course the idea is that the lockfile is /var/lib/dpkg/lock, and that postpone is used in maintainer scripts. (Update-menus already does that, and I’ve basically grabbed that code and generalized it as a separate program.)
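The core mechanism can be sketched like this (illustrative Python, not the real postpone code; in the dpkg case the lockfile would be /var/lib/dpkg/lock and the parent would be the maintainer script, which returns immediately):

```python
import fcntl
import os

def postpone(argv, lockfile):
    """Run argv in the background once lockfile can be locked."""
    pid = os.fork()
    if pid != 0:
        return pid                      # parent: continue immediately
    with open(lockfile, "a") as f:
        fcntl.lockf(f, fcntl.LOCK_EX)   # block while someone holds the lock
        fcntl.lockf(f, fcntl.LOCK_UN)   # free again: safe to proceed
    os.execvp(argv[0], argv)            # replace the child with the command
```

The child simply blocks on the lock, releases it again, and then execs the deferred command.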
As a test implementation, I modified the post{inst,rm} templates in the tex-common package and rebuilt texlive-lang-* using that. dpkg -i texlive-lang-*.deb takes over 4 minutes in the old version, but only a total of 60s with postpone used (35s for dpkg -i plus 25s for the background jobs).
A Debian package is currently sitting in NEW, let’s hope it will actually get used in maintainer scripts.
Screen window titles
0-$ bash 1$ bash 2*$ bash
If screen’s default ^A w status line isn’t really useful, put this in your bash prompt:
PS1='\[\033k\u@\h\033\\\] …'
In other words, ESCk ESC\ sets the screen window title. This is independent from the xterm title. Thanks to formorer for the pointer.
Update: fixed quoting in PS1
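Spelled out, the sequences look like this (sketch; the xterm variant is shown for contrast):

```python
def screen_title(title: str) -> str:
    # ESC k <title> ESC \  -- interpreted by screen as "set window title"
    return "\033k" + title + "\033\\"

def xterm_title(title: str) -> str:
    # ESC ] 0 ; <title> BEL -- the separate xterm window/icon title
    return "\033]0;" + title + "\007"

print(repr(screen_title("mutt")))  # '\x1bkmutt\x1b\\'
```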
The DM GR
I haven’t said anything in the DM threads yet because I still don’t know which actual problem the introduction of DMs is trying to solve.
IMHO the current process with sponsors reviewing and uploading packages has proven to work nicely, i.e. the amount of broken packages uploaded is not too high. Most of the perceived problems with this process stem from the fact that most of the packages offered on debian-mentors or #debian-mentors are initially crap and need lots of review cycles. Once people produce good packages, asking the last sponsor for another upload should work. (And at that point NM will be a breeze.)
In particular, I don’t like the fact that the “initial policy for an individual to be included in the keyring” does not include any check of any technical or non-technical skills besides having a gpg key and being able to tick 3 checkboxes. I fear this will lead to people blacklisting “DM” packages because they don’t want low-quality packages on their machines.
At the same time, the rest of the GR text is micro-managing every other detail of the process in a way that doesn’t leave much room for practical implementation decisions.
It appears to me that the DM concept as sketched in the GR is mainly meant to let NMs upload earlier, i.e. it tries to fix the fact that front-desk or DAM approval take too long. I think the fix for that is just to find someone besides Joerg to also read the AM reports. DMs as in the GR are a workaround, not a solution.
On a sidenote, I’m still wondering why front-desk (and afaict the DAMs) were never asked about their opinion while/after the GR was drafted. I had some chats with Anthony on IRC on the topic, but that was shortly after Debconf 6 (there was a related BoF), nothing in the past months.
(not with the front-desk hat on, but having it within reach)
PS: I voted “-1”.
Unix Locking
Note to myself: fcntl() locks vanish after a fork(). flock() works, but doesn’t work over NFS. Not that I care about the latter, but sometimes I wonder why Unix is so weird.
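The fork() gotcha can be demonstrated (Python sketch of the POSIX semantics: an fcntl() record lock belongs to exactly one process and is not inherited by the child, which sees it as someone else's lock):

```python
import fcntl
import os
import tempfile

def child_inherits_fcntl_lock():
    fd, path = tempfile.mkstemp()
    fcntl.lockf(fd, fcntl.LOCK_EX)      # parent takes an exclusive lock
    pid = os.fork()
    if pid == 0:
        try:
            # Conflicting non-blocking request from the child on the
            # inherited descriptor.
            fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            os._exit(1)                 # got it: the lock was inherited
        except OSError:
            os._exit(0)                 # denied: it is still the parent's
    status = os.waitpid(pid, 0)[1]
    return os.WEXITSTATUS(status) == 1

print(child_inherits_fcntl_lock())  # False
```

An flock() lock, by contrast, lives on the open file description and is shared with the child.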
Becoming an early riser
At the beginning of this year, becoming an early riser was a recurrent theme on planet.debian.org. At the same time, the urge to put some order in my messy sleeping pattern had grown, so I was very happy to read Steve Pavlina’s blog posting on that. Before, I would usually stay up until very late (3am), being over-tired, and then sleep 9 or even 10 hours, sometimes still being tired the next day. The advice Steve gives is very simple: Go to bed only when you’re too sleepy to stay up, and get up at a fixed time every morning.
This sounded exactly like what I wanted. Now, I'm still rather the night owl type, so I didn't pick something like 5am as Steve did, but a friendlier 9am. The first 2 or 3 days were somewhat hard, but I soon realized how much healthier the new rhythm was. I was getting up at the same time every day, including weekends, and going to bed around 1.30am. The downside was that 2..3am (CET) is often the best time on IRC and also very nice for hacking, but the additional time gained in the morning was well worth it.
The hard part is to actually notice when you are tired. For me, it works best to read a book, and then go to sleep when I can't concentrate on reading any more. The difficulty for us computer people is that staying in front of the screen somehow makes you unaware of how tired you actually are - just turn the machine off and do something else (doh!). In the morning, it's important to actually get up, since the body will otherwise adjust to the additional time in bed. What I'm doing is to set the radio's timer to 9am, and the alarm clock to 9.05, so I don't have to jump out of bed. Many times, I would wake up by myself just at that time.
Of course, when going out late, I don't get up at 9, but I try to get back to the rhythm immediately. Also, Debconf had thrown me off the rhythm for some weeks, but luckily I managed to get back in.
Debcamp6 Hacklab Webcam
I apologize for the far less than optimal placement of the camera, but since it is built into my Vaio, I cannot really put it higher above the table...
DPL candidates wishlist
My wishlist for the next DPL election ballot: (in random order, and incomplete)
- azeem
- bdale
- marga
- Joey
- liw
- mako
- ths
- aba
- otavio
- Q
- Maulkin
- Yoe
- fjp
- joeyh
- vorlon
- dondelelcaro
- Sesse
Incrementing DNS Zone Serials
$ cat .vim/after/ftplugin/dns.vim
nmap _a !!perl -pe '($y,$m,$d) = (localtime)[5,4,3]; $d = sprintf("\%04d\%02d\%02d", $y+1900, $m+1, $d); s/\b(?\!$d)(\d{8})(\d{2})/${d}00/'<cr><c-a>
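For readability, here is roughly what the mapping does, rendered in Python (the trailing <c-a> in the vim version performs the final increment):

```python
import datetime

def bump_serial(serial: str, today: datetime.date) -> str:
    # Zone serials in YYYYMMDDnn form: reset to <today>00 when the date
    # part is stale, then increment by one (vim's <c-a> does this last step).
    d = today.strftime("%Y%m%d")
    if not serial.startswith(d):
        serial = d + "00"
    return "%010d" % (int(serial) + 1)

print(bump_serial("2006123107", datetime.date(2007, 1, 1)))  # 2007010101
```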
Junior Bridge Camp in Rieneck
I spent the first 12 days of August at the German Junior Bridge camp[*] in Rieneck.
The reason I'm blogging about that in the "debian" section is that it felt very much like DebConf - lots of weird just-like-you people. We played about two tournaments per day, always using different scoring methods or other ways to make up pairs of players - we were not supposed to play with the same partner twice, though Frederic, my flatmate, and I were cast together for a second time in a young-plays-with-old tourney. In this sport, you are a junior until well past 30, and he was just above this event's age cut of 35. We also bribed the good fairy to let Eva and me play together in the final masterpoint tourney. We didn't score especially well since we were quite tired, but hey :*
The site is an old castle, now operated by a German Scouts organization. Somehow, the castle operates in a different time zone, breakfast was served at 12am, lunch at 7pm, and dinner at 11pm. DebConf should consider adopting that :) It would also be interesting to see if we could have a "performance" at the end, and who would run for Miss and Mister DebConf...
The pictures I took are in my gallery. Sadly I didn't take a picture of the coffee tally list where one guy made two ticks in the "flatrate" column 8-)
[*] beware, this is ugly html
Update: Next year's bridge camp will be from July 26th to August 5th - please pick a disjoint range for DebConf...
Oaxtepec, Debcamp, and Internet Mañana
The flight to Mexico last Saturday went pretty much all ok, except that the plane had to do a go-around at MEX because of some unexpected wind and the generally low air pressure at that altitude. While this delayed the landing by about 20 minutes, I had the opportunity for a second look at this incredibly large city. After getting through customs, we (with Herman, Tore, and Annabelle) met Neil McGovern in the airport lounge, and after Marcella and Luciano had arrived, we had a rather bumpy ride to Oaxtepec, amounting to a 20h journey for me.
The site here is really beautiful, with lots of green, palms, other plants, and of course the swimming pool. However, so far Debcamp has been rather unproductive; we didn't have net access until Monday evening, and even yesterday round trip times and packet loss were so high that neither ssh (IRC!) nor scp/ftp/http were really usable. Using irssi's proxy module allowed me to get rid of the lag locally for IRC, but the connection was still unstable. I wrote some patches which I have yet to submit and replied to most of my NM mail, but the next projects will all require access to the BTS and debian.org machines to be worked on. At the moment, network is down again because the antenna connecting us to the village fell down. The main activity for me so far has been to play Mao with Jesus, Adeodato, Marga, Graham, Gerfried, and others :-)
Todo: get some real work done; upload pictures; go to the pool.
Ah... network just went up again, yet still slow... mail queue flushed... network down again...
OFTC NetRep
15:10 [oftc] -!- Myon [cb@meitner.df7cb.de]
15:10 -!- Irssi: Got a positive response from NickServ/oftc
15:10 [oftc] !keid.oftc.net Activating Cloak: myon.netrep.oftc.net
15:10 [oftc] -!- Myon [cb@myon.netrep.oftc.net]
Today, I was accepted as OFTC network representative. Basically this means I’m now officially supposed to behave on IRC ;-)
More seriously, I'll try to make the network a better place by helping the OFTC staff answer user requests, mostly on the main #oftc channel and various #debian* channels.
Post-DebConf Woes
After getting back from DebConf and the 1-week trip afterwards, it took me a full week to get out of most of the chaos travelling usually leaves behind. I still haven’t unpacked all stuff from the suitcase, but at least managed to wash the dirty clothes. Oh, and I’ve signed all keys from the KSP - including madduck’s, he showed me his proper German ID card when I asked what this funny Transnational Republic was. (Oh, and if you were listed at the KSP, didn’t show up, but I’ve signed your key anyway, that’s because we met before and you have a new UID, in case you are wondering.) The pictures I took are still scattered over several home dirs on different hosts, I’ll try to sort/rotate/rename them over the next days. (Oh, and DebConf was great, the trip afterwards likewise - I’ll try to write a separate blog post on that later…)
At the moment I'm slowly starting to get into frontdesk work. I have some notes from Neil's and Enrico's AM BoF and from the discussion round about my "NM process future" proposal that I still have to write down once I'm finished answering my own NMs' mails. Both BoFs went pretty well, the major outcome being that we should discuss much more among AMs on -newmaint and IRC, have older AM reports available for reference by other AMs, and have some sort of "AM report template" that lists the points the DAMs want to see in a report. About the proposal, my feeling is that we shouldn't implement any major NM changes (like the DM idea) until we really understand what the problem with the current process is. (It takes too long, but that can be fixed by throwing out the not-ready applicants more aggressively.) Thanks to all the people who provided very valuable feedback during the BoFs and in the private conversations afterwards.
So far, I've assigned a few applicants to AMs, added comments to the database on the level of prior contribution of the new applicants, and worked a bit on the web pages. The major visible change is that the NM graphs are now included on nm.d.o. The 'big' thing I'd like to implement next is to let the applicants write some application email where they list what they've done for Debian so far before the AM assignment, so we can match NM-AM better, and also improve our judgement of the level of prior contribution. Also, we will probably require 2 advocates per applicant in the future, to make sure people have worked with more than a single DD before. Of course, all this must not get excessive, we don't want to make joining Debian harder than it already is.
Then there's also the usual 2k+ message backlog waiting in the Debian mailbox...
Spamassassin, RBLs, and Dialup
I’m using SMTP over dialup to my smarthost. Unfortunately that doesn’t stop spamassassin from thinking I am a spammer:
Received: from tesla.df7cb.de ([88.198.227.218] ident=postfix)
by merkel.debian.org with esmtp (Exim 4.50) id 1Gm6h1-0001Fw-26
for ???@qa.debian.org; Mon, 20 Nov 2006 03:48:03 -0700
Received: from volta.df7cb.de (dslb-084-058-218-241.pools.arcor-ip.net [84.58.218.241])
by tesla.df7cb.de (Postfix) with ESMTP id 2F5364475D
for <???@qa.debian.org>; Mon, 20 Nov 2006 11:48:24 +0100 (CET)
Received: by volta.df7cb.de (Postfix, from userid 1000)
id BBABE18E95; Mon, 20 Nov 2006 11:49:32 +0100 (CET)
0.1 FORGED_RCVD_HELO Received: contains a forged HELO
0.1 RCVD_IN_SORBS_DUL RBL: SORBS: sent directly from dynamic IP address
[84.58.218.241 listed in dnsbl.sorbs.net]
1.8 RCVD_IN_BL_SPAMCOP_NET RBL: Received via a relay in bl.spamcop.net
[Blocked - see <http://www.spamcop.net/bl.shtml?70.103.162.29>]
2.5 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
[84.58.218.241 listed in sbl-xbl.spamhaus.org]
1.7 RCVD_IN_NJABL_DUL RBL: NJABL: dialup sender did non-local SMTP
[84.58.218.241 listed in combined.njabl.org]
My previous solution was to use an openvpn tunnel to the smarthost, but my current one (provided by codebreaker) is a vserver, so that doesn’t work. Ganneff provided the workaround: make postfix drop the Received: header.
[0] cb@tesla:~ $ grep header /etc/postfix/main.cf
header_checks = pcre:/etc/postfix/header_checks
[0] cb@tesla:~ $ cat /etc/postfix/header_checks
/^Received: from [a-z]*\.df7cb\.de \(dslb-[0-9.-]*\.pools\.arcor-ip\.net/ IGNORE
Received: from tesla.df7cb.de ([88.198.227.218] ident=postfix)
by merkel.debian.org with esmtp (Exim 4.50) id 1GmX3t-0000Pw-4I
for ???@qa.debian.org; Tue, 21 Nov 2006 07:57:27 -0700
Received: by volta.df7cb.de (Postfix, from userid 1000)
id 69AB218EAC; Tue, 21 Nov 2006 15:58:57 +0100 (CET)
Of course this is a gross hack, but I’m happy with it :)
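As a quick sanity check that the header_checks expression really matches the offending line, Python's re module can stand in for Postfix's PCRE engine here, since this particular pattern is valid in both syntaxes (the header below is the one from the spamassassin report above):

```python
import re

# The pattern from /etc/postfix/header_checks; valid in both PCRE
# and Python's re syntax.
pattern = re.compile(
    r'^Received: from [a-z]*\.df7cb\.de '
    r'\(dslb-[0-9.-]*\.pools\.arcor-ip\.net'
)

# The header line that leaks the dynamic dialup IP address:
header = ('Received: from volta.df7cb.de '
          '(dslb-084-058-218-241.pools.arcor-ip.net [84.58.218.241])')

# Matches, so the IGNORE rule fires and the header is dropped;
# other Received: headers are left alone.
print(bool(pattern.match(header)))  # True
```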
Today's screen hack: bce
When you have trouble cut-and-pasting in screen because it copies full terminal lines filled up with spaces, put the following into your .screenrc:
defbce on
term screen-bce
Ganneff reports that the first line is enough, and even works at the : prompt of a running screen. Happy pasting :)
Good Times!
The sdate ITP sparked a little flamewar on -devel, but the feedback was positive enough so I went ahead. NEW processing is lightning fast these days:
18793 + 0,2K Sep 4254  9:58 Archive Adminis cb Processing of sdate_0.2_multi.changes
18794 + 1,5K Sep 4254 10:02 Debian Installe cb sdate_0.2_multi.changes is NEW
18795 + 0,4K Sep 4254 11:40 Debian Installe cb sdate_0.2_multi.changes ACCEPTED
Thanks to Marc Brockschmidt for sponsoring the upload and Joerg Jaspert for the ftp-master part.
Oh, here's the current time: So Sep 4254 13:33:12 CEST 1993
Debian Quiz
With the help of Alexander Schmehl I’ve created a Debian Quiz. Test your knowledge about Debian’s distribution, people, mailing lists, etc.!
PS: Alex, you could have as well told me the reason why I have to use a TP cable going to your router right now instead of blogging it :-)
Doing P&P and T&S
I was assigned an AM last Friday. Marc sent me the P&P questions on Sunday, and I was able to get through them all immediately, so I spent this afternoon on T&S. The first P&P half is really brain-damaging when you have to read all that legalese. The second half was definitely more fun, and so was the first T&S part. Now it’s getting harder again, but it looks like I should be able to get through the rest of T&S quickly. Then there’s the packages check left. The next endeavour2 should be released soon - let’s see if it comes in time.
Hacking DDPO
Over the last few weeks, I’ve implemented some new features in DDPO. The system, originally written by Igor Genibel, is a mix of Python, Perl, and PHP generated from WML, so it’s quite interesting to see how these languages interact.
The main new features are the ability to add arbitrary packages to the list displayed, and an automatic listing of all NMUs and sponsored uploads in the new "uploads" section. (Thanks to Ryan Murray, Joerg Jaspert, and Joey Schulze for helping fix the projectb for that!) I won't repeat the details here, read the d-d-a posting for that. Another nice thing is the link to Ian Lynagh's popcon graphs which I had mostly ignored so far.
Have a look at my DDPO page to see the new features - feedback welcome!
Killfiling
I don’t like killfiles and /ignore mainly due to the fact that it’s very confusing to see only one side of a conversation afterwards. I’m using some kind of “greylisting”. For email, I have a procmail recipe that looks like:
# killfile
:0fhw
* !^Status:
* ^From: .*<(noob@bla.com|luser@foo.org|troll@evil.net)>
| formail -i"Status: RO"
This will mark the messages as read but leaves all threads intact. (I’m using mbox folders.) That way, I’m not bothered to read the messages unless I really want to.
In irssi, I have a /hide macro:
hide color set $0 15 ; color save
Together with a patched version of nickcolor.pl, this will color the nick in light grey. Since that is hardly readable on white background, I can only see which troll is talking there if I have a close look.
LinuxDays.lu: Day 1
Yesterday evening we arrived in Luxembourg. It is a really beautiful city with lots of bridges and old castle-like buildings. The youth hostel there has a friendly staff and can really be recommended.
This morning we arrived at the LinuxDays.lu site and set up the Debian booth. We brought several boxes along, but didn't set them all up yet, since so far Alex and I are the only booth staff. Just opposite is the symlink.ch booth, and some guys from quintessenz are just setting up theirs. So far, I haven't really spotted one thing: visitors. It looks like all people here are either exhibitors or staff. Maybe the others are all hiding in the IPv6 tutorial...
In the meantime, I'm upgrading my old alpha box that suffered from random memory failures when I tried last. It looks like disassembling the case and cleaning all contacts helped. The harddisk is only 500MB, so I probably won't be able to upgrade to XFree 4 and Firefox, not to mention that it's still running a 2.2 kernel from 1999. It's still a nice ssh terminal, maybe also for XDMCP.
While writing this, some visitors are dropping in, but still not so many that we don't have time for reducing mail backlog and IRCing.
LinuxDays.lu: Day 2
Today, the booth was a bit busier than yesterday, but we still had plenty of time to spend on other stuff. Today’s good news was that mutt development seems to have started again and 1.5.7 should be released on Friday.
In the meantime, I'm back in Saarbrücken and put some pictures I took on the event on my website.
LinuxTag in Karlsruhe
When my brother and I arrived at the LinuxTag site on Tuesday evening, the booth was already half set up and they were waiting for the remaining parts to arrive. Most had already had dinner, so the few who hadn’t yet eaten left for Gigi’s restaurant (Gigi can be said to be a Debian fan, since he remembers the “swirl” guys from last year). When we returned, the vitrine was set up too :-)
During the last Debian events, we had always maintained a tick list for "when will Sarge arrive?" questions. In fact, this was also the most common question on Wednesday, since the DVDs created for LinuxTag didn't arrive as promised and we had to tell people to come back later.
Today was Debian Day I with talks by Joey on Debian security, Luk on i18n/l10n, Goswin on the Debian archive structure, Martin on volatile, Meike on debian-women, and Enrico on custom Debian distributions. Most of the talks were quite nice to hear and well organised. I took some pictures of the booth and the talks. (Sorry, not yet renamed/rotated/pruned.)
[Update 2005-06-24: Goswin's talk was on the Debian archive structure, new pictures added, pictures rotated.]
LinuxTag in Karlsruhe cont'd
Yesterday evening saw the KALUG party which resulted in quite a lot of funny pictures of Debian folks. Alexander Schmehl won an “Etch prerelease”. Later during the night, for what was declared the official German Sarge release party, Martin Zobel-Helas and Alexander Schmehl had prepared a Debian quiz show. People from the audience were “elected” to answer questions ranging from “bring sarge/slink/hamm/rex in the right order” to “how many signatures does Peter Palfrader have on his primary key?”
This morning, I took the level 101 LPI exam. The questions were mostly simple ("name a command that lists all PCI devices"), but some were outdated (USB on Linux 2.2) or wrong (the command "export set DISPLAY=:0.1" does not really make sense). The results will only be available in a few weeks, but I'm sure I passed even without any training before.
Next thing to happen: the social event.
List and Archive for planet.debian.org
Planet posters and readers might be interested in my proposal in #323227: new list: debian-planet to distribute planet.debian.org postings; archive to enable searching.
Please follow-up on -devel for comments.
myon@debian.org
$ finger myon@db.debian.org
[db.debian.org]
uid=myon,ou=users,dc=debian,dc=org
First name: Christoph
Last name: Berg
Email: Christoph Berg <myon@debian.org>
:-)
Thanks to all who sponsored uploads for me during the NM phase (in random order): René Engelhard, Frank Küster, Gerfried Fuchs, Peter Palfrader, Alexander Wirt, Brian Nelson, Jörg Jaspert, and Marc Brockschmidt. The folks on #debian.de always made me cheer up and are great fun to hang around with. Thank you girls and guys!
Next event: the QA meeting in Darmstadt.
NM finished
The remaining parts of my NM process went through surprisingly well. Marc had only a few points that I needed to clarify. He (HE) found some minor bugs in my debian/rules files and a missing copyright attribution that I fixed this evening. My application is now waiting for front desk approval. Thanks Marc for being such a responsive AM!
Afterwards, I went over the NM templates from alioth's CVS and corrected some stuff that I had spotted while I was answering the questions. The patch is mostly reformatting and grammar fixes, but got quite big:
[0] cb@planck:~/debian/newmaint/templates $ LC_ALL=C wc nm_* dak.txt | tail -1
  921  6452 39307 total
[0] cb@planck:~/debian/newmaint/templates $ wc nm-templates.patch
  918  6947 42773 nm-templates.patch

Let's hope it helps future applicants.
NM Graphs
I've created two graphs that show who-advocated-whom and who-was-whose-AM for the NM process. Hopefully, I'll appear in there soon, too :-)
NM graphs update
I’ve updated my NM graphs. There were some cases where the advocate and AM entries in the NM database didn’t match the Debian user name, so some people appeared with two different names. There are still lots of unconnected components in the advocate graph, which is because the “advocate” field in the NM database was added only later, and the older entries all have “sunset” as advocate which I skip when generating the graphs. If you want your advocate listed, or have advocated someone back then, please tell me so I can add that to my scripts.
Wouter mentioned that besides neato there was springgraph to create this type of graph. Coincidentally, I happened to adopt that package last week and as upstream hasn't updated it for some years, I'm probably also its new upstream. However, I haven't yet managed to produce any useful output on the NM graphs with it (it's still chewing on advocate.dot while I'm writing this), so I'll stick to neato for the time being. (Since graphviz is now free, I don't have any qualms about doing so.)
On a sidenote, I'm wondering how many software patents I'm touching with these scripts in our fine banana republic.
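For the curious, the advocate graph generation boils down to emitting DOT input for neato. A minimal sketch of that step, with made-up names standing in for the real (advocate, applicant) pairs from the NM database:

```python
# Hypothetical (advocate, applicant) pairs; the real data comes from
# the NM database. Entries with the "sunset" placeholder advocate are
# skipped, as described above.
pairs = [("alice", "bob"), ("alice", "carol"), ("dave", "erin"),
         ("sunset", "frank")]

def to_dot(pairs, skip=("sunset",)):
    """Emit a directed graph in DOT syntax, suitable for neato."""
    lines = ["digraph advocates {"]
    for advocate, applicant in pairs:
        if advocate in skip:
            continue  # older entries all have "sunset" as advocate
        lines.append('  "%s" -> "%s";' % (advocate, applicant))
    lines.append("}")
    return "\n".join(lines)

print(to_dot(pairs))
```

The resulting file can then be rendered with e.g. `neato -Tpng advocate.dot -o advocate.png`.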
QA Meeting in Darmstadt
Random list of things that happened:
- the day before, Werner Koch gave a talk on GnuPG
- gpg --list-keys has a short equivalent (gpg -k), it's just not documented
- I became an AM
- I got my first applicant
- entering an "appointment" in my new mobile phone does not make it ring at that time. Use "alarm" if you want to get woken up.
- thanks to weasel, most of us now have fancy black light LED thingies that we will use to really check IDs at keysigning-parties
- I uploaded a new signing-party release and at the same time, XTaran filed a new bug
- we discussed lots of stuff:
- ways of improving the PTS
- ways of improving the NM process
- ways of maintaining orphaned packages (which actually turned out to be "how can we make sponsoring easier")
- what the QA group can do for security
- how not to release
- barbecue, tea, beer
- keysigning
- I uploaded a package for XTaran and at the same time, djpig filed a new RC bug on it
- lack of sleep
- meeting lots of nice people
Photos to come.
[Update 23:42 CET] Photos are online.
Randomsort
Adam: there’s randomize-lines:
Package: randomize-lines
Maintainer: Arthur de Jong
Description: randomize lines of input text
 rl is a command-line tool that reads lines from an input file or stdin,
 randomizes the lines and outputs a specified number of lines. It does
 this with only a single pass over the input while trying to use as
 little memory as possible.
 .
 Currently randomize-lines is under development and command-line
 arguments may change slightly until a 1.0 release is made.
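The single-pass, low-memory behaviour described there is the classic reservoir sampling technique. A sketch of the idea (not rl's actual code):

```python
import random

def sample_lines(lines, k):
    """Pick k lines uniformly at random in a single pass, keeping at
    most k lines in memory (reservoir sampling) -- presumably the
    technique behind rl's single-pass, low-memory behaviour."""
    reservoir = []
    for i, line in enumerate(lines):
        if i < k:
            reservoir.append(line)
        else:
            # Keep the incoming line with probability k/(i+1),
            # evicting a random reservoir entry.
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = line
    return reservoir

# Works on any iterable, e.g. a file object or a generator:
print(sample_lines((str(n) for n in range(1000)), 3))
```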
Survived LinuxTag
The social event on Friday night was nice - this year, the queue in front of the buffet wasn’t 50m long. At one of the sillier moments we took pictures for Hot Alfie and Hot enrico (ITPs to be submitted X-).
The main event on Saturday was the keysigning party. Peter tried a little variation of the usual "all line up" scheme by initializing it with participants 70 to 1 on the right side and 71 to 154 on the left. Surprisingly, everyone found their place quite fast and we were finished in some 80 minutes. caff seems to be a real success this year, I've already received signatures from some 20 people using it. cabot is still used by some. I found most other systems either hard to handle (requiring me to sign a challenge several times per key or send mail "from" a uid) or broken (someone claimed one of my keys was signing-only which isn't true). caff cannot yet handle multiple keys in batch, so I will probably implement that before having to sign every key twice.
We started dismantling the booth around 5pm, which caused a last-minute rush for the Debian T-shirts. Apparently, we sold the coolest on the whole fair :-) Back at home at 9pm, I was quite exhausted, but managed to rename all pictures and remove some of the less interesting ones. If I got some names wrong or omitted some, please tell me.
Overall, Saturday saw more "end users" at the booth compared to the days before, but certainly less than 2004. I don't know how many visitors managed to get one of the free "eticket" invitations, but the default 15€ per day certainly kept some away. If this trend continues, and a good part of the LinuxTag staff is quitting as rumors say, maybe there won't be a LinuxTag 2006 - which would really be sad.
[Update 21:26 UTC: image URLs fixed]
Endeavour upload
Alexander Wirt sponsored the 2.4.6-1 version of endeavour. The Makefile previously defined CPP=g++, which works fine as long as you don’t touch it. If you do, you get weird breakages because suddenly the source is printed on stdout. Then you notice that using CXX instead would be saner. Ouch. Thanks Alex for uploading, and happy birthday!
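The trap here is the naming convention: in make, CPP conventionally means the C preprocessor (GNU make's built-in default is `$(CC) -E`), so overriding the variable and using it as the compiler merely preprocesses the source to stdout. A hypothetical before/after fragment to illustrate (not the actual endeavour Makefile):

```make
# Broken: CPP is conventionally the C preprocessor ($(CC) -E in GNU
# make's defaults). It "works" only as long as nothing overrides it;
# otherwise "compiling" with $(CPP) just prints preprocessed source.
CPP = g++

# Correct: CXX is the implicit-rule variable for the C++ compiler.
CXX = g++

foo: foo.cc
	$(CXX) $(CXXFLAGS) -o $@ $<
```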


