Debian is running a "vcswatch" service that keeps track of the status of all packaging repositories that have a Vcs-Git (and other VCSes) header set and shows which repos might need a package upload to push pending changes out.
Naturally, this is a lot of data, and the scratch partition on qa.debian.org had to be expanded several times, up to 300 GB in the last iteration. Attempts to reduce that size using shallow clones (git clone --depth=50) did not save more than a few percent of space. Running git gc on all repos helps a bit, but is tedious, and as Debian grows, the repos keep growing in both size and number. I ended up blocking all repos with checkouts larger than a gigabyte, and still the only cures were expanding the disk or lowering the blocking threshold.
Since we only need a tiny bit of info from the repositories, namely the content of debian/changelog and a few other files from debian/, plus the number of commits since the last tag on the packaging branch, it made sense to try to get that info without fetching a full repo clone. The question of whether we could grab it solely via the GitLab API at salsa.debian.org was never really answered. But then, in #1032623, Gábor Németh suggested using git clone --filter blob:none. As things go, the suggestion sat unattended in the bug report for almost a year, until the next "disk full" event made me give it a try.
The blob:none filter makes git clone omit all files, fetching only commit and tree information. Any blob (file content) needed at git run time is transparently fetched from the upstream repository and stored locally. It turned out to be a game-changer: the (largish) repositories I tried it on shrank to 1/100 of their original size.
Poking around, I figured we could do even better by using tree:0 as the filter. This additionally omits all trees from the clone, again fetching the information only at run time when needed. Some of the larger repos I tried it on shrank to 1/1000 of their original size.
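For illustration, here is a small, self-contained sketch of both filters, run against a throwaway local repository (all paths and file contents are made up for the demo; partial clone needs server support, which for a file:// remote we can enable ourselves via uploadpack.allowfilter):

```python
# Demo of git partial clone filters against a throwaway local repository.
import os
import subprocess
import tempfile

def git(*args):
    subprocess.run(["git", *args], check=True, capture_output=True)

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "src")
git("init", "-q", src)
git("-C", src, "config", "user.email", "demo@example.com")
git("-C", src, "config", "user.name", "Demo")
with open(os.path.join(src, "big.txt"), "w") as f:
    f.write("x" * 1000000)  # a blob worth omitting from the clone
git("-C", src, "add", "big.txt")
git("-C", src, "commit", "-q", "-m", "add big file")
git("-C", src, "config", "uploadpack.allowfilter", "true")

# blob:none omits file contents, tree:0 additionally omits tree objects;
# both kinds are fetched on demand from the "promisor" remote when needed
git("clone", "-q", "--filter=blob:none", "file://" + src, os.path.join(tmp, "blobless"))
git("clone", "-q", "--filter=tree:0", "file://" + src, os.path.join(tmp, "treeless"))
```

Both resulting clones record origin as a promisor remote (remote.origin.promisor = true), which is how git knows it may fetch missing objects later.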
I deployed the new option on qa.debian.org and scheduled all repositories to fetch a new clone on the next scan:
The initial dip from 100% to 95% is my first "what happens if we block repos > 500 MB" attempt. Over the week after that, the filtered clones reduced the overall disk consumption from almost 300 GB to 15 GB, a 20-fold reduction. Some repos shrank from several GB to below a MB.
Perhaps I should make all my git clones use one of the filters.
Back in 2015, when PostgreSQL 9.5 alpha 1 was released, I had posted the PostgreSQL data from Debian's popularity contest.
8 years and 8 PostgreSQL releases later, the graph now looks like this:
Currently, the most popular PostgreSQL on Debian systems is still PostgreSQL 13 (shipped in Bullseye), followed by PostgreSQL 11 (Buster). At the time of writing, PostgreSQL 9.6 (Stretch) and PostgreSQL 15 (Bookworm) share the third place, with 15 rising quickly.
I had been looking around for some time to see whether someone had already managed to get I/Q data from Icom's IC-7610 HF transceiver without going through the binary-only HDSDR driver provided by Icom, but couldn't find anything.
First attempts at doing that on Debian Linux using libftdi1 didn't work, so I resorted to the (again binary-only) libftd3xx driver from FTDI, and succeeded after some tinkering.
The program writes raw I/Q data to a file in int16-int16 format.
- ic7610ftdi: https://github.com/df7cb/ic7610ftdi
- Get libftd3xx from https://ftdichip.com/drivers/d3xx-drivers/
- IC-7610 I/Q port reference: https://www.icomjapan.com/support/manual/1792/
- The IC-7610 needs to be connected using a decent USB 3 cable, preferably without any hub in-between
- If the SuperSpeed-FIFO Bridge disappears after some time, re-plug the cable or power-cycle the transceiver
$ lsusb | grep IC
Bus 002 Device 024: ID 0c26:0029 Prolific Technology Inc. IC-7610 SuperSpeed-FIFO Bridge
$ make
cc -Wall -g -c -o ic7610ftdi.o ic7610ftdi.c
cc ic7610ftdi.o -lftd3xx -o ic7610ftdi
$ ./ic7610ftdi iq.cs16
Device[0]
Flags: 0x4 [USB 3] | Type: 600 | ID: 0x0C260029
SerialNumber=23001123
Description=IC-7610 SuperSpeed-FIFO Bridge
fe fe 98 e0 1a 0b fd ff
fe fe e0 98 1a 0b 00 fd IQ data output: 0
fe fe 98 e0 1a 0b 01 fd
fe fe e0 98 fb fd ff ff OK
RX 42 MiB ^C
fe fe 98 e0 1a 0b 00 fd
fe fe e0 98 fb fd ff ff OK
$ ls -l iq.cs16
-rw-rw-r-- 1 myon myon 44040192 26. Aug 22:37 iq.cs16
$ inspectrum -r 1920000 iq.cs16 &
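For post-processing a recording outside of inspectrum, the int16-int16 ("cs16") format is straightforward to parse. A sketch using only the Python standard library; the demo file name and sample values are made up:

```python
# Sketch of a reader for the interleaved int16 I/Q format written by ic7610ftdi.
import array
import os
import struct
import sys
import tempfile

def read_cs16(path):
    """Read interleaved little-endian int16 I/Q pairs as complex samples."""
    a = array.array("h")
    with open(path, "rb") as f:
        a.frombytes(f.read())
    if sys.byteorder == "big":
        a.byteswap()  # the file is little-endian
    return [complex(a[i], a[i + 1]) for i in range(0, len(a) - 1, 2)]

# write a tiny demo file with two samples: 1000-2000j and 3+4j
path = os.path.join(tempfile.mkdtemp(), "demo.cs16")
with open(path, "wb") as f:
    f.write(struct.pack("<4h", 1000, -2000, 3, 4))

samples = read_cs16(path)
print(samples)
```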
To talk to my IC-7610 via WireGuard, I set up UDP port forwarding for ports 50001-50003 on a Raspberry Pi (ferm configuration):
domain (ip) {
table filter {
chain FORWARD {
policy ACCEPT;
}
}
table nat {
chain PREROUTING {
interface (wg0) {
proto udp dport 50001 DNAT to 192.168.0.6;
proto udp dport 50002 DNAT to 192.168.0.6;
proto udp dport 50003 DNAT to 192.168.0.6;
}
}
chain POSTROUTING {
outerface (eth0) {
proto udp dport 50001 SNAT to-source 192.168.0.3;
proto udp dport 50002 SNAT to-source 192.168.0.3;
proto udp dport 50003 SNAT to-source 192.168.0.3;
}
}
}
}
LoRa APRS iGate
The official documentation of https://github.com/lora-aprs/LoRa_APRS_iGate uses the PlatformIO plugin for Microsoft Visual Studio Code. Here are the commands to get it running without the GUI:
git clone https://github.com/lora-aprs/LoRa_APRS_iGate.git
cd LoRa_APRS_iGate
- Edit data/is-cfg.json with your station info
- Edit platformio.ini:
board = ttgo-lora32-v21
pip3 install platformio
pio run
...
Building .pio/build/lora_board/firmware.bin
...
pio run --target upload
...
Uploading .pio/build/lora_board/firmware.bin
...
pio run --target uploadfs
...
Building SPIFFS image from 'data' directory to .pio/build/lora_board/spiffs.bin
/is-cfg.json
...
Uploading .pio/build/lora_board/spiffs.bin
...
LoRa APRS Tracker
The procedure for the tracker is the same, but the GPS module might need a reset first:
git clone https://github.com/lora-aprs/TTGO-T-Beam_GPS-reset.git
cd TTGO-T-Beam_GPS-reset
pio run -e ttgo-t-beam-v1
pio run --target upload -e ttgo-t-beam-v1
# screen /dev/ttyACM0 115200
... and then upload https://github.com/lora-aprs/LoRa_APRS_Tracker.git
Classic ham radio transceivers have physical connectors for morse keys and microphones. When the transceiver is a software defined radio (SDR) device, voice operation is easy by attaching a headset, but solutions to connect a morse key, be it a straight key or paddles, to a modern PC are rare. In the old times, machines had serial ports with RTS/DTR lines, but these do not exist anymore, so a new interface is needed.
I am using a LimeSDR as ground station for the QO-100 satellite, and naturally also wanted to do CW operation there. I started with SDRangel, which has a built-in morse generator, but of course wanted to connect a real CW key. At first sight, all the bits are there: a tune button that could be used as a straight key, as well as keyboard bindings for dots and dashes. But the delay from key to local audio is almost a full second, so that's a no-go. I then hacked my K3NG keyer to output ^ (high) and _ (low) signals on the USB interface, and had a smallish Python program read that and send SDRangel REST API requests. It worked, but that solution always felt "too big" to me, plus the sidetone from the buzzer inside the Arduino case could be heard in the whole house. And the total TX-RX delay was well over a second.
Next I tried building GNU Radio flowgraphs to solve the same problem, but they all had the same trouble: the buffers grew way too big to allow the sidetone to be used for keying. At the same time, I switched the transceiver from SDRangel to another GNU Radio flowgraph, which reduced the overall TX-RX delay considerably, but the local audio delay was still too long for CW.
So after some back and forth, I came up with this solution: the external interface from the CW paddles to the PC is a small DigiSpark board programmed to output MIDI signals, and on the (Linux) PC side, a Python program listens for MIDI and acts as an iambic CW keyer. The morse dots and dashes are uploaded as "samples" to PulseAudio, where they are played both on the local sidetone channel (usually headphones) and on the audio channel driving the SDR transceiver. There is no delay. :)
DigiSpark hardware
The DigiSpark is a very small embedded computer that can be programmed using the Arduino toolchain.
Of the 6 I/O pins, two are used for the USB bus, two connect the dit and dah lines of the CW paddle, one connects to a potentiometer for adjusting the keying speed, and the last one is unconnected in this design, but could be used for keying a physical transceiver. (The onboard LED uses this pin.)
+---------------+
| P5 o -- 10k potentiometer middle pin
===== Attiny85 P4 o -- USB (internal)
USB ----- P3 o -- USB (internal)
----- P2 o -- dah paddle
===== 78M05 P1 o -- (LED/TRX)
| P0 o -- dit paddle
+---o-o-o-------+
There is an extra 27 kΩ resistor in the ground connection of the potentiometer to keep the P5 voltage > 2.5 V, or else the DigiSpark resets. (This could be changed by blowing some fuses, but is not necessary.)
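As a sanity check of that divider (assuming the DigiSpark's nominal 5 V supply, which is not stated above), the worst-case wiper voltage stays well above 2.5 V:

```python
# Worst case: wiper at the bottom end of the pot, so the whole 10k is above it
# and only the 27k resistor sits between wiper and ground.
# The 5 V supply voltage is an assumption.
vcc = 5.0        # DigiSpark supply (V)
r_pot = 10e3     # potentiometer end-to-end resistance (ohm)
r_bottom = 27e3  # extra resistor between pot and ground (ohm)

v_min = vcc * r_bottom / (r_pot + r_bottom)
print(f"minimum P5 voltage: {v_min:.2f} V")  # comfortably above 2.5 V
```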
The Arduino sketch for the keyer uses the DigisparkMIDI library. The code is quite simple: if a paddle is pressed, send a MIDI note_on event (dit = note 1, dah = note 2); when it is released, send note_off. When the potentiometer is changed, send a control_change event (control 3); the value read is conveniently scaled to wpm speeds between 8 and 40.
if (dit)
midi.sendNoteOn(NOTE_DIT, 1);
else
midi.sendNoteOff(NOTE_DIT, 0);
if (dah)
midi.sendNoteOn(NOTE_DAH, 1);
else
midi.sendNoteOff(NOTE_DAH, 0);
if (new_speed != old_speed)
midi.sendControlChange(CHANNEL_SPEED, new_speed);
The device uses a generic USB id that is recognized by Linux as a MIDI device:
$ lsusb
Bus 001 Device 008: ID 16c0:05e4 Van Ooijen Technische Informatica Free shared USB VID/PID pair for MIDI devices
$ amidi -l
Dir Device Name
IO hw:2,0,0 MidiStomp MIDI 1
$ aseqdump -l
Port Client name Port name
24:0 MidiStomp MidiStomp MIDI 1
$ aseqdump --port MidiStomp
Source Event Ch Data
24:0 Control change 0, controller 3, value 24
24:0 Note on 0, note 1, velocity 1
24:0 Note on 0, note 2, velocity 1
24:0 Note off 0, note 1, velocity 0
24:0 Note off 0, note 2, velocity 0
24:0 Control change 0, controller 3, value 25
24:0 Control change 0, controller 3, value 26
Python and PulseAudio software
On the Linux host side, a Python program listens for MIDI events and acts as an iambic CW keyer that converts the stream of note on/off events into CW signals.
Instead of providing a full audio stream, dit and dah "samples" are uploaded to PulseAudio, and triggered via the pulsectl library. On speed changes, new samples are uploaded. The samples are played on two channels, one for the sidetone on the operator headphones, and one on the audio input device for the SDR transmitter.
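The element timing the keyer has to generate follows the standard CW convention that one dit at N wpm lasts 1.2/N seconds (the PARIS definition of words per minute). A sketch of that arithmetic, with function names of my own choosing:

```python
# Standard CW timing: one dit at N wpm lasts 1.2/N seconds (PARIS convention).
def dit_seconds(wpm):
    return 1.2 / wpm

def element_lengths(wpm):
    """Durations in seconds of the building blocks the keyer has to produce."""
    dit = dit_seconds(wpm)
    return {
        "dit": dit,             # tone, 1 unit
        "dah": 3 * dit,         # tone, 3 units
        "element_gap": dit,     # silence between dits/dahs, 1 unit
        "letter_gap": 3 * dit,  # silence between letters, 3 units
        "word_gap": 7 * dit,    # silence between words, 7 units
    }

print(element_lengths(24))
```

On every control_change from the potentiometer, the keyer only needs to recompute these durations and re-upload the dit and dah samples.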
The virtual "tx0" audio device can be created on boot using this systemd config snippet:
# $HOME/.config/systemd/user/pulseaudio.service.d/override.conf
[Service]
ExecStartPost=/usr/bin/pacmd load-module module-null-sink sink_name=tx0 sink_properties=device.description=tx0
The CW text sent is printed on stdout:
$ ./midicwkeyer.py
TX port is tx0 (3)
Sidetone port is Plantronics Blackwire 3225 Series Analog Stereo (7)
CQ CQ DF7CB
Download
Needless to say, this is open source: https://github.com/df7cb/df7cb-shack/tree/master/midicwkeyer
pg_dirtyread
Earlier this week, I updated pg_dirtyread to work with PostgreSQL 14. pg_dirtyread is a PostgreSQL extension that allows reading "dead" rows from tables, i.e. rows that have already been deleted or updated. Of course, this works only if the table has not yet been cleaned up by a VACUUM command or autovacuum, PostgreSQL's garbage collection machinery.
Here's an example of pg_dirtyread in action:
# create table foo (id int, t text);
CREATE TABLE
# insert into foo values (1, 'Doc1');
INSERT 0 1
# insert into foo values (2, 'Doc2');
INSERT 0 1
# insert into foo values (3, 'Doc3');
INSERT 0 1
# select * from foo;
id │ t
────┼──────
1 │ Doc1
2 │ Doc2
3 │ Doc3
(3 rows)
# delete from foo where id < 3;
DELETE 2
# select * from foo;
id │ t
────┼──────
3 │ Doc3
(1 row)
Oops! The first two documents have disappeared.
Now let's use pg_dirtyread to look at the table:
# create extension pg_dirtyread;
CREATE EXTENSION
# select * from pg_dirtyread('foo') t(id int, t text);
id │ t
────┼──────
1 │ Doc1
2 │ Doc2
3 │ Doc3
(3 rows)
All three documents are still there, but only one of them is visible.
pg_dirtyread can also show PostgreSQL's system columns with the row location and visibility information. For the first two documents, xmax is set, which means the row has been deleted:
# select * from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text);
ctid │ xmin │ xmax │ id │ t
───────┼──────┼──────┼────┼──────
(0,1) │ 1577 │ 1580 │ 1 │ Doc1
(0,2) │ 1578 │ 1580 │ 2 │ Doc2
(0,3) │ 1579 │ 0 │ 3 │ Doc3
(3 rows)
Undelete
Caveat: I'm not promising that any of the ideas described below will actually work in practice. There are a few pitfalls, and a good portion of intricate knowledge of PostgreSQL internals might be required to succeed. Consider consulting your favorite PostgreSQL support channel for advice if you need to recover data on any production system. Don't try this at work.
I always had plans to extend pg_dirtyread to include some "undelete" command to make deleted rows reappear, but never got around to trying that. But rows can already be restored by using the output of pg_dirtyread itself:
# insert into foo select * from pg_dirtyread('foo') t(id int, t text) where id = 1;
This is not a true "undelete", though - it just inserts new rows from the data read from the table.
pg_surgery
Enter pg_surgery, a new PostgreSQL extension shipped with PostgreSQL 14. It contains two functions to "perform surgery on a damaged relation". As a side effect, they can also make deleted tuples reappear.
As I have now discovered, one of these functions, heap_force_freeze(), works nicely together with pg_dirtyread. It takes a list of ctids (row locations) and marks those tuples as "frozen", and at the same time as "not deleted".
Let's apply it to our test table, using the ctids that pg_dirtyread can read:
# create extension pg_surgery;
CREATE EXTENSION
# select heap_force_freeze('foo', array_agg(ctid))
from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text) where id = 1;
heap_force_freeze
───────────────────
(1 row)
Et voilà, our deleted document is back:
# select * from foo;
id │ t
────┼──────
1 │ Doc1
3 │ Doc3
(2 rows)
# select * from pg_dirtyread('foo') t(ctid tid, xmin xid, xmax xid, id int, t text);
ctid │ xmin │ xmax │ id │ t
───────┼──────┼──────┼────┼──────
(0,1) │ 2 │ 0 │ 1 │ Doc1
(0,2) │ 1578 │ 1580 │ 2 │ Doc2
(0,3) │ 1579 │ 0 │ 3 │ Doc3
(3 rows)
Disclaimer
Most importantly, none of the above methods will work if the data you just deleted has already been purged by VACUUM or autovacuum. These actively zero out reclaimed space. Restore from backup to get your data back.
Since both pg_dirtyread and pg_surgery operate outside the normal PostgreSQL MVCC machinery, it's easy to create corrupt data using them. This includes duplicated rows, duplicated primary key values, indexes being out of sync with tables, broken foreign key constraints, and others. You have been warned.
pg_dirtyread does not work (yet) if the deleted rows contain any toasted values. Possible other approaches include using pageinspect and pg_filedump to retrieve the ctids of deleted rows.
Please make sure you have working backups and don't need any of the above.
The apt.postgresql.org repository has been extended to cover the arm64 architecture.
We had occasionally received user requests to add "arm" in the past, but it was never really clear which kind of "arm" made sense to target for PostgreSQL. In terms of Debian architectures, there are (at least) armel, armhf, and arm64. Furthermore, Raspberry Pis are very popular (and indeed what most users seemed to be asking about), but the Raspbian "armhf" port is incompatible with the Debian "armhf" port.
Now that most hardware has moved to 64-bit, it was becoming clear that "arm64" was the way to go. Amit Khandekar made it happen that HUAWEI Cloud Services donated an arm64 build host with enough resources to build the arm64 packages at the same speed as the existing amd64, i386, and ppc64el architectures. A few days later, all the build jobs were done, including passing all test suites. Very few arm-specific issues were encountered, which makes me confident that arm64 is a solid architecture to run PostgreSQL on.
We are targeting Debian buster (stable), bullseye (testing), and sid (unstable), and Ubuntu bionic (18.04) and focal (20.04). To use the arm64 archive, just add the normal sources.list entry:
deb https://apt.postgresql.org/pub/repos/apt buster-pgdg main
Ubuntu focal
At the same time, I've added the next Ubuntu LTS release to apt.postgresql.org: focal (20.04). It ships amd64, arm64, and ppc64el binaries.
deb https://apt.postgresql.org/pub/repos/apt focal-pgdg main
Old PostgreSQL versions
Many PostgreSQL extensions still support older server versions that are EOL. For testing these extensions, server packages need to be available. I've built packages for PostgreSQL 9.2+ on all Debian distributions and all Ubuntu LTS distributions. 9.1 will follow shortly.
This means people can move to newer base distributions in their .travis.yml, .gitlab-ci.yml, and other CI files.
Users had often asked where they could find older versions of packages from apt.postgresql.org. I had been collecting these since about April 2013, and in July 2016, I made the packages available via an ad-hoc URL on the repository master host, called "the morgue". There was little repository structure, all files belonging to a source package were stuffed into a single directory, no matter what distribution they belonged to. Besides this not being particularly accessible for users, the main problem was the ever-increasing need for more disk space on the repository host. We are now at 175 GB for the archive, of which 152 GB is for the morgue.
Our friends from yum.postgresql.org have had a proper archive host (yum-archive.postgresql.org) for some time already, so it was about time to follow suit and implement a proper archive for apt.postgresql.org as well, usable from apt.
So here it is: apt-archive.postgresql.org
The archive covers all past and current Debian and Ubuntu distributions. The apt sources.list entries are similar to those of the main repository, just with "-archive" appended to the host name and the distribution:
deb https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
deb-src https://apt-archive.postgresql.org/pub/repos/apt DIST-pgdg-archive main
The oldest PostgreSQL server versions covered there are 8.2.23, 8.3.23, 8.4.17, 9.0.13, 9.1.9, 9.2.4, 9.3beta1, and everything newer.
An example:
$ apt-cache policy postgresql-12
postgresql-12:
  Installed: 12.2-2.pgdg+1+b1
  Candidate: 12.2-2.pgdg+1+b1
  Version table:
 *** 12.2-2.pgdg+1+b1 900
        500 http://apt.postgresql.org/pub/repos/apt sid-pgdg/main amd64 Packages
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
        100 /var/lib/dpkg/status
     12.2-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.2-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.1-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.0-2.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12.0-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~rc1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta4-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta3-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta2-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
     12~beta1-1.pgdg+1 500
        500 https://apt-archive.postgresql.org/pub/repos/apt sid-pgdg-archive/main amd64 Packages
Because this is hosted on S3, browsing directories is only supported indirectly by static index.html files, so if you want to look at some specific URL, append "/index.html" to see it.
The archive is powered by a PostgreSQL database and a bunch of python/shell scripts, from which the apt index files are built.
Archiving old distributions
I'm also using the opportunity to remove some long-retired distributions from the main repository host. The following distributions have been moved over:
- Debian etch (4.0)
- Debian lenny (5.0)
- Debian squeeze (6.0)
- Ubuntu lucid (10.04)
- Ubuntu saucy (13.10)
- Ubuntu utopic (14.10)
- Ubuntu wily (15.10)
- Ubuntu zesty (17.04)
- Ubuntu cosmic (18.10)
They are available as "DIST-pgdg" from the archive, e.g. squeeze:
deb https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main
deb-src https://apt-archive.postgresql.org/pub/repos/apt squeeze-pgdg main
paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
One application for this came up today: some tool was called for several files at once and would output one line per file, but unfortunately without including the filename.
$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)
[1] See "J" in The ABCs of Unix
[PS: I meant to blog this in 2011, but apparently never committed the file...]
After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I'm stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.
Thanks for the fish!
Now that Salsa is in beta, it's time to import projects (= GitLab speak for "repository"). This is probably best done automated. Head to Access Tokens and generate a token with "api" scope, which you can then use with curl:
$ cat salsa-import
#!/bin/sh
set -eux
PROJECT="${1%.git}"
DESCRIPTION="$PROJECT packaging"
ALIOTH_URL="https://anonscm.debian.org/git"
ALIOTH_GROUP="collab-maint"
SALSA_URL="https://salsa.debian.org/api/v4"
SALSA_GROUP="debian" # "debian" has id 2
SALSA_TOKEN="yourcryptictokenhere"
# map group name to namespace id (this is slow on large groups, see https://gitlab.com/gitlab-org/gitlab-ce/issues/42415)
SALSA_NAMESPACE=$(curl -s https://salsa.debian.org/api/v4/groups/$SALSA_GROUP | jq '.id')
# trigger import
curl -f "$SALSA_URL/projects?private_token=$SALSA_TOKEN" \
  --data "path=$PROJECT&namespace_id=$SALSA_NAMESPACE&description=$DESCRIPTION&import_url=$ALIOTH_URL/$ALIOTH_GROUP/$PROJECT&visibility=public"
This will create the GitLab project in the chosen namespace, and import the repository from Alioth.
Pro tip: To import a whole Alioth group to GitLab, run this on Alioth:
for f in *.git; do sh salsa-import $f; done
(Update 2018-02-04: Query namespace ID via the API)
About a week ago, I extended vcswatch to also look at tags in git repositories.
Previously, it was solely paying attention to the version number in the top paragraph of debian/changelog, and would alert if that version didn't match the package version in Debian unstable or experimental. The idea is that "UNRELEASED" versions will keep nagging the maintainer (via DDPO) not to forget that some day this package needs an upload. This works for git, svn, bzr, hg, cvs, mtn, and darcs repositories (in decreasing order of actual usage in Debian; I had tried to add arch support as well, but that VCS is so weird that it wasn't worth the trouble).
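That changelog check boils down to parsing the top line of debian/changelog. A rough sketch (regex and function name are my own, not vcswatch's actual code):

```python
# Parse a Debian changelog header line like
#   package (version) distribution; urgency=medium
import re

def parse_changelog_head(line):
    """Split 'package (version) distribution; ...' into its parts."""
    m = re.match(r"(\S+) \(([^)]+)\) (\S+);", line)
    if not m:
        raise ValueError("not a changelog header: " + line)
    return m.groups()  # (package, version, distribution)

pkg, version, dist = parse_changelog_head("hello (2.10-3) UNRELEASED; urgency=medium")
print(pkg, version, dist, "pending upload:", dist == "UNRELEASED")
```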
There are several shortcomings in that simple approach:
- Some packages update debian/changelog only at release time, e.g. auto-generated from the git changelog using git-dch
- Missing or misplaced release tags are not detected
The new mechanism fixes this for git repositories by also looking at the output of git describe --tags. If there are any commits since the last tag, and the vcswatch status according to debian/changelog would otherwise be "OK", a new status "COMMITS" is set. DDPO will report e.g. "1.4-1+2", to be read as "2 commits since the tag [debian/]1.4-1".
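The counting itself can be sketched with plain git commands, here driven from Python against a throwaway repository (the tag name and commit messages are made up):

```python
# Count commits since the last tag, vcswatch-style, in a throwaway repository.
import subprocess
import tempfile

def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
git("init", "-q", repo)
git("-C", repo, "config", "user.email", "demo@example.com")
git("-C", repo, "config", "user.name", "Demo")
git("-C", repo, "commit", "-q", "--allow-empty", "-m", "release 1.4-1")
git("-C", repo, "tag", "debian/1.4-1")
git("-C", repo, "commit", "-q", "--allow-empty", "-m", "fix")
git("-C", repo, "commit", "-q", "--allow-empty", "-m", "another fix")

tag = git("-C", repo, "describe", "--tags", "--abbrev=0")       # last tag
count = git("-C", repo, "rev-list", "--count", tag + "..HEAD")  # commits since
print(f"{count} commits since {tag}")
```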
Of the 16644 packages using git in Debian, currently 7327 are "OK", 2649 are in the new "COMMITS" state, and 4227 are "NEW". 723 are "OLD" and 79 are "UNREL" which indicates that the package in Debian is ahead of the git repository. 1639 are in an ERROR state.
So far the new mechanism works for git only, but other VCSes could be added as well.
I knew it was about this time of the year 10 years ago when my Debian account was created, but I couldn't remember the exact date until I looked it up earlier this evening: today :). Rene Engelhard had been my advocate, and Marc Brockschmidt my AM. Thanks guys!
A lot of time has passed since then, and I've worked in various parts of the project. I became an application manager almost immediately, and quickly got into the NM front desk as well, revamping parts of the NM process which had become pretty bureaucratic (I think we are now, 10 years later, back where we should be, thanks to almost all of the paperwork being automated, thanks Enrico!). I've processed 37 NMs, most of them between 2005 and 2008, later I was only active as front desk and eventually Debian account manager. I've recently picked up AMing again, which I still find quite refreshing as the AM will always also learn new things.
Quality Assurance was and is the other big field. Starting by doing QA uploads of orphaned packages, I attended some QA meetings around Germany, and picked up maintenance of the DDPO pages, which I still maintain. The link between QA and NM is the MIA team where I was active for some years until they kindly kicked me out because I was MIA there myself. I'm glad they are still using some of the scripts I was writing to automate some things.
My favorite MUA is mutt, of which I became co-maintainer in 2007, and later maintainer. I'm still listed in the uploaders field, but admittedly I haven't really done anything there lately.
Also in 2007 I started working at credativ, after having been a research assistant at the university, which meant turning my Debian work professional. Of course it also meant more real work and less time for the hobby part, but I was still very active around that time. In 2010 I got married, and we had two kids, at which point family was of course much more important, so my Debian involvement dropped to a minimum. (Mostly lurking on IRC ;)
Being a PostgreSQL consultant at work, it was natural to start looking into the packaging, so I started submitting patches to postgresql-common in 2011, and became a co-maintainer in 2012. Since then, I've mostly been working on PostgreSQL-related packages, of which far too many have my (co-)maintainer stamp on them. To link the Debian and PostgreSQL worlds together, we started an external repository (apt.postgresql.org) that contains packages for the PostgreSQL major releases that Debian doesn't ship. Most of my open source time at the moment is spent on getting all PostgreSQL packages in shape for Debian and this repository.
According to minechangelogs, currently 844 changelog entries in Debian mention my name, or were authored by me. Scrolling back yields memories of packages that are long gone again from unstable, or I passed on to other maintainers. There are way too many people in Debian that I enjoy(ed) working with to list them here, and many of them are my friends. Debian is really the extended family on the internet. My last DebConf before this year had been in Mar del Plata - I had met some people at other conferences like FOSDEM, but meeting (almost) everyone again in Heidelberg was very nice. I even remembered all basic Mao rules :D.
So, thanks to everyone out there for making Debian such a wonderful place to be!
Today saw the release of PostgreSQL 9.5 Alpha 1. Packages for all supported Debian and Ubuntu releases are available on apt.postgresql.org:
deb http://apt.postgresql.org/pub/repos/apt/ YOUR_RELEASE_HERE-pgdg main 9.5
The package is also waiting in NEW to be accepted for Debian experimental.
Being curious which PostgreSQL releases have been in use over time, I pulled some graphics from Debian's popularity contest data:
Before we included the PostgreSQL major version in the package name, "postgresql" contained the server, so that line represents the installation count of the pre-7.4 releases at the left end of the graph.
Interestingly, 7.4 reached its installation peak well past 8.1's. Does anyone have an idea why that happened?
At this year's FOSDEM I gave a talk in the PostgreSQL devroom about Large Scale Quality Assurance in the PostgreSQL Ecosystem. The talk included a graph about the growth of the apt.postgresql.org repository that I want to share here as well:
The yellow line at the very bottom is the number of different source package names, currently 71. From that, a somewhat larger number of actual source packages is built (blue), including the "pgdgXX" version suffixes targeting the various distributions we support. The number of different binary package names (green) is in about the same range. The dimension explosion then happens for the actual number of binary packages (black, almost 8000) targeting all distributions and architectures.
The red line is the total size of the pool/ directory, currently a bit less than 6 GB.
(The graphs sometimes decrease when packages in the -testing distributions are promoted to the live distributions and the old live packages get removed.)
If you think this is confusing ...
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=1008105,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,relatime,size=1616484k,mode=755)
/dev/mapper/benz-root on / type ext4 (rw,relatime,errors=remount-ro,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda1 on /boot type ext2 (rw,relatime)
/dev/mapper/benz-home on /home type ext4 (rw,relatime,data=ordered)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
tmpfs on /run/user/124 type tmpfs (rw,nosuid,nodev,relatime,size=808244k,mode=700,uid=124,gid=131)
/dev/mapper/benz-home on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26 type ext4 (rw,relatime,data=ordered)
proc on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/sys type sysfs (rw,nosuid,nodev,noexec,relatime)
udev on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=1008105,mode=755)
devpts on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
/dev/mapper/benz-home on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/home type ext4 (rw,relatime,data=ordered)
/dev/mapper/benz-root on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/tmp type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/mapper/benz-home on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/srv type ext4 (rw,relatime,data=ordered)
/dev/mapper/benz-root on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/media type ext4 (rw,relatime,errors=remount-ro,data=ordered)
//newton/credativ on /credativ/credativ type cifs (rw,nosuid,nodev,noexec,relatime,vers=1.0,cache=strict,username=cbe,domain=CREDATIV,uid=0,noforceuid,gid=0,noforcegid,addr=172.26.14.2,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1,user)
//newton/cbe on /credativ/cbe type cifs (rw,nosuid,nodev,noexec,relatime,vers=1.0,cache=strict,username=cbe,domain=CREDATIV,uid=0,noforceuid,gid=0,noforcegid,addr=172.26.14.2,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1,user)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run/user/2062 type tmpfs (rw,nosuid,nodev,relatime,size=808244k,mode=700,uid=2062,gid=2062)
gvfsd-fuse on /run/user/2062/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=2062,group_id=2062)
/dev/mapper/benz-home on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2 type ext4 (rw,relatime,data=ordered)
proc on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/sys type sysfs (rw,nosuid,nodev,noexec,relatime)
udev on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=1008105,mode=755)
devpts on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
/dev/mapper/benz-home on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/home type ext4 (rw,relatime,data=ordered)
/dev/mapper/benz-root on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/tmp type ext4 (rw,relatime,errors=remount-ro,data=ordered)
/dev/mapper/benz-home on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/srv type ext4 (rw,relatime,data=ordered)
/dev/mapper/benz-root on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/media type ext4 (rw,relatime,errors=remount-ro,data=ordered)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
devpts on /var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2/dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
... try mounttree:
$ mounttree
/ rootfs rw
/ /dev/mapper/benz-root (ext4) rw,relatime,errors=remount-ro,data=ordered
  boot /dev/sda1 (ext2) rw,relatime
  credativ/cbe //newton/cbe (cifs) rw,nosuid,nodev,noexec,relatime,vers=1.0,cache=strict,username=cbe,domain=CREDATIV,uid=0,noforceuid,gid=0,noforcegid,addr=172.26.14.2,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1
  credativ/credativ //newton/credativ (cifs) rw,nosuid,nodev,noexec,relatime,vers=1.0,cache=strict,username=cbe,domain=CREDATIV,uid=0,noforceuid,gid=0,noforcegid,addr=172.26.14.2,unix,posixpaths,serverino,acl,rsize=1048576,wsize=65536,actimeo=1
  dev udev (devtmpfs) rw,relatime,size=10240k,nr_inodes=1008105,mode=755
    hugepages hugetlbfs rw,relatime
    mqueue mqueue rw,relatime
    pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
    pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
    pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
    pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
    pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
    shm tmpfs rw,nosuid,nodev
  home /dev/mapper/benz-home (ext4) rw,relatime,data=ordered
  proc proc rw,nosuid,nodev,noexec,relatime
    sys/fs/binfmt_misc systemd-1 (autofs) rw,relatime,fd=22,pgrp=1,timeout=300,minproto=5,maxproto=5,direct
    sys/fs/binfmt_misc binfmt_misc rw,relatime
  run tmpfs rw,nosuid,relatime,size=1616484k,mode=755
    lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k
    user/124 tmpfs rw,nosuid,nodev,relatime,size=808244k,mode=700,uid=124,gid=131
    user/2062 tmpfs rw,nosuid,nodev,relatime,size=808244k,mode=700,uid=2062,gid=2062
      gvfs gvfsd-fuse (fuse.gvfsd-fuse) rw,nosuid,nodev,relatime,user_id=2062,group_id=2062
  sys sysfs rw,nosuid,nodev,noexec,relatime
    fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755
      blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio
      cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
      cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset
      devices cgroup rw,nosuid,nodev,noexec,relatime,devices
      freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer
      net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
      perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event
      systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib/systemd/systemd-cgroups-agent,name=systemd
    fs/fuse/connections fusectl rw,relatime
    fs/pstore pstore rw,nosuid,nodev,noexec,relatime
    kernel/debug debugfs rw,relatime
    kernel/security securityfs rw,nosuid,nodev,noexec,relatime
  var/lib/schroot/mount/sid-amd64-ed5e5176-2c0f-4708-b259-663d20011b26 /dev/mapper/benz-home (ext4) rw,relatime,data=ordered
    dev udev (devtmpfs) rw,relatime,size=10240k,nr_inodes=1008105,mode=755
      pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
      pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
      pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
      pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
    home /dev/mapper/benz-home (ext4) rw,relatime,data=ordered
    media /dev/mapper/benz-root (ext4) rw,relatime,errors=remount-ro,data=ordered
    proc proc rw,nosuid,nodev,noexec,relatime
    srv /dev/mapper/benz-home (ext4) rw,relatime,data=ordered
    sys sysfs rw,nosuid,nodev,noexec,relatime
    tmp /dev/mapper/benz-root (ext4) rw,relatime,errors=remount-ro,data=ordered
  var/lib/schroot/mount/wheezy-i386-a6e9e772-1061-4587-8321-c1615f21ebb2 /dev/mapper/benz-home (ext4) rw,relatime,data=ordered
    dev udev (devtmpfs) rw,relatime,size=10240k,nr_inodes=1008105,mode=755
      pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
      pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
    home /dev/mapper/benz-home (ext4) rw,relatime,data=ordered
    media /dev/mapper/benz-root (ext4) rw,relatime,errors=remount-ro,data=ordered
    proc proc rw,nosuid,nodev,noexec,relatime
    srv /dev/mapper/benz-home (ext4) rw,relatime,data=ordered
    sys sysfs rw,nosuid,nodev,noexec,relatime
    tmp /dev/mapper/benz-root (ext4) rw,relatime,errors=remount-ro,data=ordered
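The core idea can be sketched in a few lines of plain shell (a rough approximation, not the actual mounttree implementation): sort the mounts by path and indent each mount point by its nesting depth. This assumes Linux with a readable /proc/mounts.

```shell
#!/bin/sh
# Rough sketch of a tree-ish mount listing: sort /proc/mounts by mount
# point and indent each entry by its path depth.
mount_tree() {
    sort -k2,2 /proc/mounts | while read -r dev mnt type opts _; do
        if [ "$mnt" = / ]; then
            depth=0
        else
            # count '/' characters to estimate nesting depth
            depth=$(printf '%s' "$mnt" | tr -cd '/' | wc -c)
        fi
        printf "%$((depth * 2))s%s %s (%s) %s\n" "" "$mnt" "$dev" "$type" "$opts"
    done
}

mount_tree
```

Unlike the real mounttree, this does not shorten child mount points to paths relative to their parent, but it already makes the hierarchy visible.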
Following Enrico's terminal-emulators comparison, I wanted to implement "start a new terminal tab in my current working directory" for rxvt-unicode aka urxvt. As Enrico notes, this functionality is something between "rather fragile" and non-existent, so I implemented it myself. Martin Pohlack had the right hint, so here's the patch:
--- /usr/lib/urxvt/perl/tabbed	2014-05-03 21:37:37.000000000 +0200
+++ ./tabbed	2014-07-09 18:50:26.000000000 +0200
@@ -97,6 +97,16 @@
          $term->resource (perl_ext_2 => $term->resource ("perl_ext_2") . ",-tabbed");
       };
 
+      if (@{ $self->{tabs} }) {
+         # Get the working directory of the current tab and append a -cd to the command line
+         my $pid = $self->{cur}{pid};
+         my $pwd = readlink "/proc/$pid/cwd";
+         #print "pid $pid pwd $pwd\n";
+         if ($pwd) {
+            push @argv, "-cd", $pwd;
+         }
+      }
+
       push @urxvt::TERM_EXT, urxvt::ext::tabbed::tab::;
 
       my $term = new urxvt::term
@@ -312,6 +322,12 @@
    1
 }
 
+sub tab_child_start {
+   my ($self, $term, $pid) = @_;
+   $term->{pid} = $pid;
+   1;
+}
+
 sub tab_start {
    my ($self, $tab) = @_;
 
@@ -402,7 +418,7 @@
 # simply proxies all interesting calls back to the tabbed class.
 
 {
-   for my $hook (qw(start destroy key_press property_notify)) {
+   for my $hook (qw(start destroy key_press property_notify child_start)) {
       eval qq{
          sub on_$hook {
            my \$parent = \$_[0]{term}{parent}
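To try the patched extension without touching the system copy, you can drop the modified tabbed script into ~/.urxvt/ext (urxvt searches there in addition to the system-wide perl library directory) and enable it via X resources, e.g.:

```
URxvt.perl-ext-common: tabbed
```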
On RedHat/CentOS/rpm systems, there's no dpkg --compare-versions available - sort -V can help to compare version numbers:
version_lt () {
newest=$( ( echo "$1"; echo "$2" ) | sort -V | tail -n1)
[ "$1" != "$newest" ]
}
$ version_lt 1.5 1.1 && echo yes
$ version_lt 1.5 1.10 && echo yes
yes
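With GNU sort, the pipe can also express the non-strict comparison directly: sort -C (--check=silent) only reports via its exit status whether the input is already sorted. A sketch of a version_le helper built on that (assumes GNU coreutils with -V support):

```shell
# version_le: true if $1 <= $2 in version-sort order.
# sort -C -V exits 0 iff the two lines are already in order.
version_le () {
    printf '%s\n%s\n' "$1" "$2" | sort -C -V
}

version_le 1.5 1.10 && echo yes
```

Note that unlike version_lt above, this variant treats equal versions as "in order" and returns true.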
Yesterday saw the first beta release of the new PostgreSQL version 9.4. Along with the sources, we uploaded binary packages to Debian experimental and apt.postgresql.org, so there are now packages ready to be tested on Debian wheezy, squeeze, testing/unstable, and Ubuntu trusty, saucy, precise, and lucid.
If you are using one of the release distributions of Debian or Ubuntu, add this line to /etc/apt/sources.list.d/pgdg.list, with codename replaced by your distribution's codename, to have 9.4 available:
deb http://apt.postgresql.org/pub/repos/apt/ codename-pgdg main 9.4
On Debian jessie and sid, install the packages from experimental.
Happy testing!
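The line above follows the repository's codename-pgdg naming scheme, so on wheezy, for example, it reads deb http://apt.postgresql.org/pub/repos/apt/ wheezy-pgdg main 9.4. A tiny (hypothetical) helper to generate it for any codename:

```shell
# pgdg_line: print the apt.postgresql.org sources.list line for the
# given codename (falls back to lsb_release -cs if none is given).
pgdg_line() {
    codename=${1:-$(lsb_release -cs)}
    printf 'deb http://apt.postgresql.org/pub/repos/apt/ %s-pgdg main 9.4\n' "$codename"
}

pgdg_line wheezy
```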
Over the past few weeks, new distributions have been added on apt.postgresql.org: Ubuntu 13.10 codenamed "saucy" and the upcoming Ubuntu LTS release 14.04 codenamed "trusty".
Adding non-LTS releases for the benefit of developers using PostgreSQL on their notebooks and desktop machines has been a frequently requested item since we created the repository. I had some qualms about targeting a new Ubuntu release every 6 months, but with more and more parts of the repository infrastructure automated and the bootstrapping process now being painless, the distributions are available for use. Technically, trusty started out empty, so it doesn't have all packages yet, but of course all the PostgreSQL server packages are there, along with pgAdmin. Saucy started as a copy of precise (12.04), so it has all packages. Not all packages have been rebuilt for saucy, but the included precise packages (you can tell by the version number ending in .pgdg12.4+12 rather than .pgdg13.10+1) will work, unless apt complains about dependency problems. I have rebuilt the packages I was aware of that needed it (most notably the postgresql-plperl packages) - if you spot problems, please let us know on the mailing list.
Needless to say, last week's PostgreSQL server updates are already included in the repository.
More and more packages are getting autopkgtest aka DEP-8 test suites these days. Thanks to Antonio Terceiro, ci.debian.net is running the tests.
Last weekend, I've added a "CI" column on DDPO that shows the current test results for your packages. Enjoy, and add tests to your packages!
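For reference, a minimal DEP-8 test suite is just two files in the source package: debian/tests/control declaring the test, and the test script itself. A sketch (the package name and test body are placeholders to adjust):

```
# debian/tests/control
Tests: smoke
Depends: @

# debian/tests/smoke (executable); "Depends: @" installs all binary
# packages built from this source before the test runs
#!/bin/sh
set -e
mypackage --version
```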
My ASUS Transformer TF101 had suddenly started flickering in all sorts of funny colors some weeks ago. As tapping it gently on the table in the right angle made the problem go away temporarily, it was clear the problem was about a loose cable, or some other hardware connection issue.
As I needed to go on a business trip the other day, I didn't look up the warranty expiration date until later that week. Then, Murphy struck: the tablet was now 2 years + 1 day old! Calling ASUS, a friendly guy there suggested I still try to get ASUS to accept it under warranty, because the tablet had been with them for 5 days last year, so if they added that, it would still be within the warranty period. I filled out the RMA form, but one hour later the reply was that they rejected it because it was out of warranty. Another guy on the phone then said they would probably only do the adding if it had been with them for maybe 10 days, or actually really 30 days, or whatever.
Some googling suggested that the loose cable theory was indeed worth a try, so I took it apart. Thanks to a forum post I could then locate the display connector and fix it.
Putting the case back together was actually harder than disassembling it because some plastic bits got stuck, but now everything is back to normal.
This blog is powered by ikiwiki.