Planet Closed Fist

June 16, 2022

Emmanuele Bassi


In the beginning…

In 1997, I downloaded my first MP3 file. It was linked on a website, and all I had was a 56k modem, so it took me ages to download the nearly 4 megabytes of 128 kbit/s music goodness. Before that file magically appeared on my hard drive, if we exclude a brief dalliance with MOD files, the only music I had on my computer came either in MIDI or in WAV format.

In the nearly 25 years since that seminal moment, my music collection has steadily increased in size — to the point that I cannot comfortably keep it on my laptop’s internal storage without cutting into the space available for everything else, and without taking ages every time I copy it to a new machine; and if I had to upload it to a cloud service, I’d end up paying monthly storage fees that would definitely not make me happy. Plus, I like being able to listen to my music without a network connection — say, when I’m travelling. For these reasons, I keep my music collection on a dedicated USB3 drive and on various 128 GB SD cards that I use when travelling, to avoid bumping around a spinning rust drive.

In order to listen to that first MP3 file, I also had to download a music player, and back in 1997 there was this little software called Winamp, which apparently really whipped the llama’s ass. Around that same time I was also dual-booting between Windows and Linux, and, obviously, Linux had its own Winamp clone called x11amp. This means that, since late 1997, I’ve also tested more or less all mainstream, GTK-based Linux music players—xmms, beep, xmms2, Rhythmbox, Muine, Banshee, Lollypop, GNOME Music—and various less mainstream/non-GTK ones—shout out to ma boi mpg123. I also used iTunes on macOS and Windows, but I don’t speak of that.

Turns out that, with the very special exception of Muine, I can’t stand any of them. They are all fairly inefficient when it comes to managing my music collection; or they are barely maintained; or (but, most often, and) they are just iTunes clones—as if cloning iTunes was a worthy goal for anything remotely connected to music, computing, or even human progress in general.

I did enjoy using Banshee, up to a point; it wasn’t overly offensive to my eyes and pointing devices, and had the advantage of being able to minimise its UI without getting in the way. It just bitrotted with the rest of the GNOME 2 platform even before GNOME bumped major version, and it still wasn’t as good as Muine.

A detour: managing a music collection

I’d like to preface this detour with a disclaimer: I am not talking about specific applications; specific technologies/libraries; or specific platforms. Any resemblance to real projects, existing or abandoned, is purely coincidental. Seriously.

Most music management software is, I feel, predicated on the fallacy that the majority of people don’t bother organising their files, and are thus willing to accept a flat storage with complex views built at run time on top of that; while simultaneously being willing to spend a disproportionate amount of time classifying those files—without, of course, using a hierarchical structure. This is a fundamental misunderstanding of human nature.

By way of an example: if we perceive the Universe in a techno-mazdeist struggle between a πνεῦμα which creates fool-proof tools for users; and a φύσις, which creates more and more adept fools; then we can easily see that, for the entirety of history until now, the pneuma has been kicked squarely in the nuts by the physis. In other words: any design or implementation that does not take into account human nature in that particular problem space is bound to fail.

While documents might benefit from additional relations that are not simply inferred by their type or location on the file system, media files do not really have the same constraints. Especially stuff like music or videos. All the tracks of an album are in the same place not because I decided that, but because the artist or the music producers willed it that way; all the episodes of a series are in the same place because of course they are, and they are divided by season because that’s how TV series work; all the episodes of a podcast are in the same place for the same reason, maybe divided by year, or by season. If that structure already exists, then what’s the point of flattening it and then trying to recreate it every time out of thin air with a database query?

The end result of constructing a UI that is just a view on top of a database is that your UI will be indistinguishable from a database design and management tool; which is why all music management software looks very much like Microsoft Access from circa 1997 onwards. Of course you can dress it up however you like, by adding fancy views of album covers, but at the end of the day it’s just an Excel spreadsheet that occasionally plays music.

Another side effect of writing a database that contains the metadata of a bunch of files is that you’ll end up changing the database instead of changing the files; you could write the changes to the files, but reconciling the files with the database is a hard problem, and it also assumes you have read-write access to those files. Now that you have locked your users into your own database, switching to a new application becomes harder, unless your users enjoy figuring out what they changed over time.

A few years ago, before I started backing up everything to three separate storage locations, I had a catastrophic failure on my primary music hard drive; after recovering most of my data, I realised that a lot of the changes I had made in the early years weren’t written out to the music files, but were stored in some random SQLite database somewhere. I am still recovering from that particular disaster.

I want my music player to have read-only access to my music. I don’t want anything that isn’t me writing to it. I also don’t want to re-index my whole music collection just because I fixed the metadata of one album, and I don’t want to lose all my changes when I find a better music player.

Another detour: non-local media

Yes, yes: everyone listens to streamed media these days, because media (and software) companies are speed-running Adam Smith’s The Wealth of Nations and have just arrived at the bit about rentier economy. After all, why should they want to get paid once for something, when media conglomerates can “reap where they never sowed, and demand a rent even for its natural produce”.

You know what streaming services don’t like? Custom, third party clients that they can’t control, can’t use for metrics, and can’t use to serve people ads.

You know what cloud services that offer to host music don’t like? Duplicate storage, and service files that may potentially infringe the IP of a very litigious industry. Plus, of course, third party clients that they can’t use to serve you ads, as that’s how they can operate at all, because this is the Darkest Timeline, and adtech is the modern Moloch to which we must sacrifice as many lives as we can.

You may have a music player that streams somebody’s music collection, or even yours if you can accept the remote service making a mess of it, but you’re always a bad IPO or a bad quarterly revenue report away from losing access to everything.

Writing a music player for fun and no profit

For the past few years I’ve been meaning to put some time into writing a music player, mostly for my own amusement; I also had the idea of using this project to learn the Rust programming language. In 2015 I was looking for a way to read the metadata of music files with Rust, but since I couldn’t find anything decent, I ended up writing the Rust bindings for taglib. I kept noodling at this side project for the following years, but I was mostly hitting the limits of GTK3 when it came to dealing with my music collection; every single iteration of the user interface ended up with a GtkTreeView and a replica of iTunes 1.0.

In the meantime, though, the Rust ecosystem got exponentially better, with lots of crates dedicated to parsing music file metadata; GTK4 got released with new list widgets; libadwaita is available to take care of nice UI layouts; and the Rust bindings for GTK have become one of the most well curated and maintained projects in the language bindings ecosystem.

Another few things that happened in the meantime: a pandemic, a year of unemployment, and zero conferences, all of which pushed me to streaming my free and open source software contributions on Twitch, as a way to break the isolation.

So, after spending the first couple of months of 2022 on writing the beginners tutorial for the GNOME developer documentation website, in March I began writing Amberol, a local-only music player that has no plans of becoming more than that.

Desktop mode

Amberol’s scope sits in the same grand tradition of Winamp, and while its UI started off as a Muine rip-off—down to the same key shortcuts—it has evolved into something that more closely resembles the music player I have on my phone.

Mobile mode

Amberol’s explicit goal is to let me play music on my desktop the same way I typically do when I am using my phone, which is: shuffling all the songs in my music collection; or, alternatively, listening to all the songs in an album or from an artist from start to finish.

Amberol’s explicit non goals are:

  • managing your music collection
  • figuring out your music metadata
  • building playlists
  • accessing external services for stuff like cover art, song lyrics, or the artist’s Wikipedia page

The actual main feature of this application is that it has forced me to figure out how to deal with GStreamer after 15 years.

I did try to write this application in a way that reflects the latest best practices of GTK4:

  • model objects
  • custom view widgets
  • composite widgets using templates
  • property bindings/expressions to couple model/state to its view/representation
  • actions and actionable widgets

The ability to rely on libadwaita has allowed me to implement the recoloring of the main window without having to deal with breakage coming from rando style sheets:

The main thing I did not expect was how good a fit Rust would be for all of this. The GTK bindings are top notch, and constantly improving; the type system has helped me much more than it has hindered me, a poor programmer whose mind has been twisted by nearly two decades of C. Good idiomatic practices for GTK are entirely in the same ballpark as idiomatic practices for Rust, especially for application development.

On the tooling side, Builder has been incredibly helpful in letting me concentrate on the project—starting from the basic template for a GNOME application in Rust, to dealing with the build system; from the Flatpak manifest, to running the application under a debugger. My work was basically ready to be submitted to Flathub from day one. I did have some challenges with the AppData validation, mostly caused by appstream-util’s undocumented validation rules, but luckily it’s entirely possible to remove the validation step once you’ve dealt with the basic manifest.

All in all, I am definitely happy with the results of basically two months of hacking and refactoring, mostly off and on (and with two weeks of COVID in the middle).

by ebassi at June 16, 2022 12:46 PM

April 19, 2022

Chris Lord

WebKit frame delivery

Part of my work on WebKit at Igalia is keeping an eye on performance, especially when it concerns embedded devices. In my experience, there tend to be two types of changes that cause the biggest performance gains – either complete rewrites (e.g. WebRender for Firefox) or tiny, low-hanging fruit that get overlooked because the big picture is so multi-faceted and complex (e.g. making unnecessary buffer copies, removing an errant sleep(5) call). I would say that it’s actually the latter kind of bug that’s harder to diagnose and fix, even though the code produced at the end of it tends to be much smaller. What I’m writing about here falls firmly into the latter group.

For a good while now, Alejandro García has been producing internal performance reports for WPE WebKit. A group of us will gather and do some basic testing of WPE WebKit on the same hardware – a Raspberry Pi 3B+. This involves running things like MotionMark and going to popular sites like YouTube and Google Maps and noting how well they work. We do this periodically and it’s a great help in tracking down regressions or obvious, user-facing issues. Another part of it is monitoring things like memory and CPU usage during this testing. Alex noted that we have a lot of idle CPU time during benchmarks, and at the same time our benchmark results fall markedly behind Chromium. Some of this is expected, as Chromium has a more advanced rendering architecture that makes better use of the GPU, but a well-written benchmark should come close to saturating the CPU, and we clearly had CPU time to spare to improve our results. Based on this idea, Alex crafted a patch that would pre-emptively render frames (this isn’t quite what it did, but for the sake of brevity, that’s how I’m going to describe it). This showed significant improvements in MotionMark results and proved at the very least that it was legitimately possible to improve our performance in this synthetic test without any large changes.

This patch couldn’t land as it was due to concerns with it subtly changing the behaviour of how requestAnimationFrame callbacks work, but the idea was sound and definitely pointed towards an area where there may be a relatively small change that could have a large effect. This laid the groundwork for us to dig deeper and debug what was really happening here. Being a fan of computer graphics and video games, frame delivery/cadence is quite a hot topic in that area. I had the idea that something was unusual, but the code that gets a frame to the screen in WebKit is spread across many classes (and multiple threads) and isn’t so easy to reason about. On the other hand, it isn’t so hard to write a tool to analyse it from the client side and this would also give us a handy comparison with other browsers too.

So I went ahead and wrote a tool that would help us analyse exactly how frames are delivered from the client side, including under load, and the results were illuminating. The tool visualises the time between frames, the time between requesting a frame and receiving the corresponding callback, and the difference between the current time at the start of a callback and the timestamp received as the callback parameter. To simulate ‘load’, it busy-waits until the given amount of time has elapsed since the start of the callback before returning control to the main loop. If you want to sustain a 60fps update, you have about 16ms to finish whatever you’re doing in your requestAnimationFrame callback. If you exceed that, you won’t be able to maintain 60fps. Here’s Firefox under a 20ms load:

Firefox frame-times under a 20ms rendering load

And here’s Chrome under the same parameters:

Chrome frame-times under a 20ms rendering load

Now here’s WPE WebKit, using the Wayland backend, without any patches:

WebKit/WPE Wayland under a 20ms rendering load

One of these graphs does not look like the others, right? We can immediately see that when a frame exceeds the 16.67ms/60fps rendering budget in WebKit/WPE under the Wayland backend, it drops hard to 30fps. Other browsers don’t wait for a vsync to kick off rendering work, and so are able to achieve frame-rates between 30fps and 60fps when measured over multiple frames (all of these browsers are locked to the screen refresh; there is no screen tearing present). The other noticeable thing is that the green line is missing on the WebKit test – this shows that the timestamp delivered to the frame callback is exactly the same as the current time at the start of the callback, whereas the timestamps in both Chrome and Firefox appear to be the time of the last vsync. This makes sense from an animation point of view, and would mean that animations timed using this timestamp move at a rate that’s more consistent with the screen refresh when under load.
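For illustration, the heart of such a client-side probe can be sketched in plain JavaScript. This is my own simplification, not the actual tool, and all the names are made up; it records the three quantities described above for each frame, and busy-waits to simulate rendering load:

```javascript
// Per-frame measurements: the frame-to-frame interval, the delay between
// requesting a frame and its callback firing, and the skew between the
// callback's timestamp argument and the current time. The `now` clock is
// injected so the logic can be exercised outside a browser.
function recordFrame(samples, state, callbackTs, now) {
  const start = now();
  samples.push({
    interval: state.lastStart === null ? 0 : start - state.lastStart,
    requestDelay: start - state.requestedAt,
    timestampSkew: start - callbackTs,
  });
  state.lastStart = start;
}

// Simulate a rendering load by spinning until `loadMs` has elapsed.
function busyWait(start, loadMs, now) {
  while (now() - start < loadMs) { /* pretend to render */ }
}

// Browser-only driver: collect `frames` samples under a `loadMs` load.
function runProbe(loadMs, frames, done) {
  const samples = [];
  const state = { lastStart: null, requestedAt: performance.now() };
  requestAnimationFrame(function tick(ts) {
    recordFrame(samples, state, ts, () => performance.now());
    busyWait(state.lastStart, loadMs, () => performance.now());
    if (samples.length < frames) {
      state.requestedAt = performance.now();
      requestAnimationFrame(tick);
    } else {
      done(samples);
    }
  });
}
```

On a 60Hz display with a 20ms load, a backend that waits for the next vsync after a missed deadline will show intervals clustering around 33.3ms, while one that kicks off rendering immediately will show intervals just above 20ms.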

What I’m omitting from this write-up is the gradual development of this tool, the subtle and different behaviour of different WebKit backends and the huge amount of testing and instrumentation that was required to reach these conclusions. I also wrote another tool to visualise the WebKit render pipeline, which greatly aided in writing potential patches to fix these issues, but perhaps that’s the topic of another blog post for another day.

In summary, there are two identified bugs here, or at least major differences in behaviour from other browsers, both of which affect fluidity as well as synthetic test performance. I’m not too concerned about the latter, but it’s a hard sell to a potential client that’s pointing at concrete numbers saying WebKit is significantly worse than some competing option. The first bug is that if a frame goes over budget and we miss a screen refresh (a vsync signal), we wait for the next one before kicking off rendering again; this is what causes the hard drop from 60fps to 30fps. On Linux this only affects the Wayland WPE backend, because that’s the only backend that implements vsync signals fully, so GTK and other WPE backends are unaffected. The second bug is less of a bug, as a reading of the spec (steps 9 and 11.10) would indicate that WebKit is doing the correct thing here: the timestamp given to requestAnimationFrame callbacks is the current time, and not the vsync time as in other browsers (which makes more sense for timing animations). I have patches for both of these issues; they’re tracked in bug 233312 and bug 238999.

With these two issues fixed, this is what the graph looks like.

Patched WebKit/WPE with a 20ms rendering load

And another nice consequence is that MotionMark 1.2 has gone from:

WebKit/WPE MotionMark 1.2 results

to:

Patched WebKit/WPE MotionMark 1.2 results

Much better 🙂

No ETA on these patches landing; perhaps I’ve drawn some incorrect conclusions, or done something in a way that won’t work long term, or is wrong in some fashion that I’m not aware of yet. Also, this will most affect users of WPE/Wayland, so don’t get too excited if you’re using GNOME Web or similar. Fingers crossed though! A huge thanks to Alejandro García who worked with me on this and did an awful lot of debugging and testing (as well as the original patch that inspired this work).

by Chris Lord at April 19, 2022 12:08 PM

January 06, 2022

Emmanuele Bassi

GNOME Keyring Kills Babies

I’ve just downloaded the gnome-keyring-manager application, written for the GNOME Love effort, in order to see its new UI; along the way, I decided to give a look to the gnome-keyring library.

The more I look at gnome-keyring.h, the more I regret having had breakfast this morning.

Using gpointers all over the place, no GObjects, no signals and properties: basically impossible to bind for use with languages other than C. It looks like someone took each point I dislike in the current GNOME-VFS implementation in order to build a library.

It’s utter madness, and a lack of proper software engineering, that drives this library; and its authors really did propose it for inclusion in the Developer Platform? For Sauron’s sake, do not include it!

I know it’s meant to be a private library, but please: we should try to enforce a certain coding policy for GNOME libraries, even private ones like this; as it is, gnome-keyring is something I’d rather not touch with a ten-foot pole.

by ebassi at January 06, 2022 11:15 PM

December 28, 2021

Tomas Frydrych

Scotland and Sitka

I see that Chris Packham and others are making waves about the amount of Sitka being planted in Scotland. Funny that. There is nothing new here. I raised this issue years back, when the Scottish Government first published its Draft Climate Change Plan (2016?), and various environmental groups and outdoor influencers were praising its tree planting objectives without bothering to scrutinise the numbers (simply putting the tree planting numbers next to the timber production targets, tucked away in a different section of the DCCP, made it clear that most of the new trees were going to be Sitka). At the time I got some fairly condescending responses to those concerns from some who should have known better.

But here is the thing: Sitka might not be doing much for the romantic image of Scotland as a would-be wilderness, but we need it. Timber is one of the keys to dealing with climate change, as we need to wean ourselves off our dependency on concrete, which accounts for something like 15% of worldwide CO2 emissions (the estimates vary somewhat, but it’s a lot). In Britain we happen to import most of our timber, and not all of it is grown sustainably elsewhere. We should really be self-sufficient, and Sitka is perhaps the most productive source of construction timber.

The other thing people don’t seem to get is that natural forests do not provide for long term carbon capture simply because the natural life cycle of a tree is carbon neutral. As the stump said to the seedling, ‘remember, from CO2 thou came and into CO2 thou shall return’. Planting new forests gives us some initial carbon capture, but that trails off as the forest matures into its regular life cycle. To provide ongoing capture requires removing the wood and locking the carbon in the form of timber.

That is not to say we don’t need (many) more native trees, and ultimately established mature woodlands, but we need those because at our latitude they are key to biodiversity, and the loss of biodiversity is as much of an existential threat to us as climate change. The real problem in Scotland is not that we are planting too much Sitka, but simply that we do not plant enough trees. The Scottish Government’s tree planting numbers are not ambitious enough; we could easily be aiming for at least double that, with a better balance between timber and native woodlands.

(A bigger issue is that the Scottish Government’s environmental policies are reduced largely to cutting CO2 emissions; they haven’t cottoned on yet that retaining biodiversity matters at least as much, and, unfortunately, the actions needed to address the latter are at times at odds with the former. Just now the Scottish Government is obsessed with the expansion of renewable energy installations, and, as it happens, windfarms and forests don’t mix, so every new windfarm built is a forest not planted. This environmental reductionism is going to hurt us badly in the not-so-distant future.)

by tf at December 28, 2021 10:20 AM

November 19, 2021

Emmanuele Bassi

Fair Weather Friends

Today I released libgweather-3.90.0, the first developers snapshot of GWeather 4:

Behold! A project logo

This release is mostly meant to be used as a target for porting existing code to the new API, and verifying that everything works as it should.

The major changes from GWeather-3.0 are:

  • the GTK3 widgets have gone to a farm upstate, so you’ll have to write your own UI for searching locations
  • GWeatherLocation is a GObject type, so you can use it with GListModel and friends, which should help with the point above
  • the deprecated API has been removed
  • the API that will be part of GWeather 4.0 will be stable, and regular API/ABI stability guarantees will apply

If you are using libgweather in your application, you should head over to the migration guide and check out what changed.

There are still things in the GWeather API that would ideally be cleaned up, for instance:

  • GWeatherTimezone parses the tzdata file directly, which comes with its own set of issues, like having to track whether the time zone database has changed or not; we should use GTimeZone instead, but the API provided by the two types do not match entirely. I need to check the current users of that API, and if possible, just drop the whole type.
  • GWeatherInfo has a bunch of getter functions that return the bare values and that can fail, and additional getter functions that always return a formatted string, and cannot fail; it’s not a great API.
  • GWeatherLocation returns the localised names by default, and has additional getters for the “English” (really: POSIX C locale) names.

If you encounter issues when porting, please: file an issue on GitLab.

by ebassi at November 19, 2021 09:18 PM

October 15, 2021

Emmanuele Bassi

GWeather next

tl;dr Libgweather, the small GNOME library that queries weather services, is getting a major version bump to allow applications using it to be ported to GTK4.

In the beginning, there was a weather applet in the GNOME panel. It had a bunch of code that poked at a couple of websites to get the weather information for a given airport or weather observation stations, and shipped with a list of locations and their nearest METAR code.

In 2007, the relevant code was moved to its own separate repository, so that other applications and system settings could reuse the same code as the panel applet: the libgweather library was born. Aside from the basic weather information and location objects, libgweather also had a couple of widgets: one for selecting a location (with autocompletion), and one for selecting a timezone using a location.

Since libgweather was still very much an ad hoc library for a handful of applications, there was no explicit API and ABI stability guarantee made by its maintainers; in fact, in order to use it, you had to “opt in” with a specific C pre-processor symbol.

Time passed, and a few more applications appeared during the initial GNOME 3 cycles—like Weather, followed by Clocks a month later. Most of the consumers of libgweather were actually going through a language binding, which meant they were not really “opting into” the API through the explicit pre-processor symbol; it also meant that changes in the API and ABI could end up being found only after a libgweather release, instead of during a development cycle. Of course, back then, we only had a single CI/CD pipeline for the whole project, with far too little granularity and far too wide scope. Still, the GWeather consumers were few and far between, and the API was not stabilised.

Fast forward to now.

The core GNOME applications using GWeather are in the process of being ported to GTK4, but GWeather still ships with two GTK3 widgets. Since you cannot have GTK3 and GTK4 types in the same process, this requires either porting GWeather to GTK4 or dropping the widgets. As it turns out, the widgets are not really shared across applications using libgweather, and all of them have also been redesigned, or are using the libadwaita/GTK4 port as a chance to refresh their overall appearance. This makes our life a little bit easier, as we can drop the widgets without really losing any actual functionality that people care about.

For GNOME 42, the plan for libgweather is:

  • bump up the API version to 4.0, and ensure parallel installability with the older libgweather-3; this requires renaming things like the pkg-config file and the settings schema, alongside the shared library
  • drop the GTK widgets, and some old API that hasn’t been working in years, like getting the radar image animation
  • stabilise the API, and turn libgweather into a proper library, with the usual API and ABI stability guarantees (deprecations and new symbols added only during development cycles, no changes/removals until the following major API bump)
  • make it easier to use libgweather objects with GListModel-based API
  • document the API properly
  • clean up the internals from various years of inconsistent coding style and practices

I’m also going through the issues imported from Bugzilla and closing the ones that have long since been fixed.

In the meantime, the old libgweather-3 API is going to be frozen, for the tools that still use it and won’t be ported to GTK4 any time soon.

For more information, you can read:

If you’re using libgweather, I strongly recommend using the 40.0 release, or building from the libgweather-3 branch, until you are ready to port to GTK4.

If you’re distributing libgweather, I recommend you package the new libgweather under a new name, given that it’s parallel installable with the old one; my recommendation is to use libgweather4 or libgweather-4 as the name of the package.

by ebassi at October 15, 2021 09:54 AM

October 06, 2021

Emmanuele Bassi

More documentation changes

It’s been nearly a month since I’ve talked about gi-docgen, my little tool to generate API references from introspection data. In between my blog post and now, a few things have changed:

  • the generated API reference has had a few improvements, most notably the use of summaries in all index pages
  • all inheritable types now show the properties, signals, and methods inherited from their ancestors and from the implemented interfaces; this should hopefully make the reference much more useful for newcomers to GTK
  • we allow cross-linking between dependent namespaces; this is done using an optional URL map, with links re-written on page load. Websites hosting the API reference would need only to provide an urlmap.js file to rewrite those links, instead of doing things like parsing the HTML and changing the href attribute of every link cough library-web cough
  • we parse custom GIR attributes to provide better cross-linking between methods, properties, and signals.
  • we generate an index file with all the possible end-points, and a dictionary of terms that can be used for searching; the terms are stemmed using the Porter stemming algorithm
  • the default template will let you search using the generated index; the search supports scoping, so using method:show widget will look for all the symbols in which the term show appears in method descriptions, alongside the widget term
  • we also generate a DevHelp file, so theoretically DevHelp can load up the API references built by gi-docgen; there is still work to be done, there, but thanks to the help of Jan Tojnar, it’s not entirely hopeless
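As an illustration of that scoped-search syntax, a toy query parser might look like this. This is my own sketch, not gi-docgen’s actual code, and the set of scope names is a guess:

```javascript
// Split a search query like "method:show widget" into scoped terms,
// which only match within one category, and bare terms, which match
// anywhere. The scope names here are illustrative.
const SCOPES = new Set(["method", "property", "signal", "func", "type"]);

function parseQuery(query) {
  const scoped = [];
  const bare = [];
  for (const token of query.trim().split(/\s+/)) {
    const colon = token.indexOf(":");
    const prefix = colon > 0 ? token.slice(0, colon) : null;
    if (prefix !== null && SCOPES.has(prefix)) {
      scoped.push({ scope: prefix, term: token.slice(colon + 1) });
    } else {
      // Unknown prefixes are treated as ordinary search terms.
      bare.push(token);
    }
  }
  return { scoped, bare };
}
```

For the example query above, parseQuery("method:show widget") yields one scoped term, { scope: "method", term: "show" }, plus the bare term "widget".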

Thanks to all these changes, both Pango and GTK have switched from gtk-doc to gi-docgen for their API references in their respective main development branches.

Now, here’s the part where it gets complicated.

Using gi-docgen

Quick reminder: the first and foremost use case for gi-docgen is GTK (and some of its dependencies). If it works for you, I’m happy, but I will not go out of my way to make your use case work—especially if it comes at the expense of Job #1, i.e. generating the API reference for GTK.

Since gi-docgen is currently a slightly moving target, I strongly recommend using it as a Meson subproject. I also strongly recommend vendoring it inside your release tarballs, using:

meson dist --include-subprojects

when generating the distribution archive. Do not try and depend on an installed copy of gi-docgen.

Additionally, it’s possible to include the gi-docgen API reference into the Meson tarball by using a dist script. The API reference will be re-generated when building, but it can be extracted from the tarball, like in the good old gtk-doc-on-Autotools days.
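As an example, vendoring gi-docgen as a Meson subproject amounts to dropping a wrap file under subprojects/. The fields below are illustrative; pin a release tag you have verified instead of a branch:

```ini
# subprojects/gi-docgen.wrap (illustrative; pin a verified release tag)
[wrap-git]
directory = gi-docgen
url = https://gitlab.gnome.org/GNOME/gi-docgen.git
revision = main
```

With the wrap file in place, meson dist --include-subprojects will ship the vendored copy inside the release tarball.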

Publishing your API reference

The tool we use to generate the documentation website, library-web, is unmaintained and, quite frankly, fairly broken. It is a Python 2 script that got increasingly more complicated without actually getting more reliable; it got progressively more broken once we started having more than two GTK modules, and then it got severely broken once we started using Meson and CMake instead of Autotools. These days, you’ll be lucky to get your API reference uploaded to the documentation website (as a separate archive), and you can definitely forget about cross-linking, because the tool will most likely get things wrong in its quest to restyle any HTML it finds, and then fix the references to what it thinks is the correct place:

The support for Doxygen (which is used by the C++ bindings) is minimal, and it ended up breaking a few times. Switching away from gtk-doc to gi-docgen is basically the death knell for the whole thing:

  • first of all, it cannot match the documentation module with the configuration for it, because gi-docgen does not have the concept of a “documentation module”; at most, it has a project configuration file.
  • additionally, we really don’t want library-web messing about with the generated HTML, especially if the end result breaks stuff.

So, the current solution is to try and make library-web detect if we’re using gi-docgen, by looking for toml files in the release archive, and then upload the various files as they are. It’s a bad, fragile stopgap, but it’s the best we can do without breaking everything in even more terrible ways.

For GNOME 41 my plan is to sidestep the whole thing, and send library-web to a farm upstate. We’re going to use gnome-build-meta to build the API references of the projects we have in our SDK, and then publish them according to the SDK version.

My recommendation for library authors, in any case, is to build the API reference for the development branch of their project as part of their CI, and then publish it to the GitLab pages space. For instance:
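A sketch of what such a CI job can look like, assuming a Meson build that puts the generated reference under _build/docs (all paths, option names, and the library name are illustrative):

```yaml
pages:
  stage: deploy
  script:
    - meson setup _build -Dgtk_doc=true
    - meson compile -C _build
    - mkdir public
    - cp -r _build/docs/mylib-1.0/* public/
  artifacts:
    paths:
      - public
  only:
    - main
```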

This way, you’ll always have access to the latest documentation.

Sadly, we can’t have per-branch references, because GitLab pages are nuked every time a branch gets built; for that, we’d have to upload the artifacts somewhere else, like an S3 bucket.

Things are going to get better in the near future, after 10 years of stagnation; sadly, this means we’re living in Interesting Times, so I ask of you to please be patient while we transition towards a new and improved way to document our platform.

by ebassi at October 06, 2021 11:54 PM

September 21, 2021

Emmanuele Bassi

Properties, introspection, and you

It is a truth universally acknowledged, that a GObject class in possession of a property, must be in want of an accessor function.

The main issue with that statement is that it’s really hard to pair the GObject property with the accessor functions that set the property’s value, and retrieve it.

From a documentation perspective, tools might not establish any relation (gtk-doc), or they might require some additional annotation to do so (gi-docgen); but at the introspection level there’s nothing in the XML or the binary data that lets you go from a property name to a setter, or a getter, function. At least, until now.

GObject-introspection 1.70, released alongside GLib 2.70 and GNOME 41, introduced various annotations for both properties and methods that let you go from one to the other; additionally, new API was added to libgirepository to allow bindings to dynamic languages to establish that relation at run time.


If you have a property, and you document it as you should, you’ll have something like this:

/**
 * YourWidget:your-property:
 *
 * A property that does something amazing.
 */

If you want to associate the setter and getter functions to this property, all you need to do is add the following identifier annotations to it:

/**
 * YourWidget:your-property: (setter set_your_property) (getter get_your_property)
 *
 * A property that does something amazing.
 */

The (setter) and (getter) annotations take the name of the method that is used to set, and get, the property, respectively. The method name is relative to the type, so you should not pass the C symbol.

On the accessor methods side, you have two additional annotations:

/**
 * your_widget_set_your_property: (set-property your-property)
 * @self: your widget
 * @value: the value to set
 *
 * Sets the given value for your property.
 */


/**
 * your_widget_get_your_property: (get-property your-property)
 * @self: your widget
 *
 * Retrieves the value of your property.
 *
 * Returns: the value of the property
 */


Of course, you’re now tempted to go and add those annotations to all your properties and related accessors. Before you do that, though, you should know that the introspection scanner will try and match properties and accessors by itself, using appropriate heuristics:

  • if your object type has a writable, non-construct-only property, and a method that is called set_<property>, then the property will have a setter and the method will be matched to the property
  • if your object type has a readable property, and a method that is called get_<property>, then the property will have a getter and the method will be matched to the property
  • additionally, if the property is read-only and the property type is boolean, the scanner will also look for a method with the same name as the property; this is meant to catch getters like gtk_widget_has_focus(), which accesses the read-only property has-focus


All of the above ends up in the introspection XML, which is used by documentation tools and code generators. Bindings for dynamic languages using libgirepository can also access this information at run time, by using the API in GIPropertyInfo to retrieve the setter and getter function information for a property; and the API in GIFunctionInfo to retrieve the property being set.


Ideally, with this information, language bindings should be able to call the accessor functions instead of going through the generic g_object_set_property() and g_object_get_property() API, except as a fallback. This should speed up the property access in various cases. Additionally, bindings could decide to stop exposing C accessors, and only expose the property, in order to make the API more idiomatic.

On the documentation side, this will ensure that tools like gi-docgen will be able to bind the properties and their accessors more reliably, without requiring extra attributes.

And one more thing

One thing that did not make it in time for the 1.70 release, but will land early in the next development cycle for gobject-introspection, is validation for properties. Language bindings don’t really like it when the C API exposes properties that have the same name as methods and virtual functions; we already have a validation pass ready to land, so expect warnings in the near future.

Another feature that will land early in the cycle is the (emitter) annotation, which will bind a method emitting a signal with the signal name. This is a feature taken from Vala’s metadata, and should improve the quality of life of people using introspection data with Vala, as well as removing the need for another attribute in gi-docgen.

Finally, if you maintain a language binding: please look at !204, and make sure you’re not calling g_assert_not_reached() or g_error() when encountering a new scope type. The forever scope cannot land if it breaks every single binding in existence.

by ebassi at September 21, 2021 08:23 PM

August 26, 2021

Emmanuele Bassi

Publishing your documentation

The main function of library-web, the tool that published the API reference of the various GNOME libraries, was to take release archives and put their contents in a location that would be visible to a web server. In 2006, this was the apex of automation, of course. These days? Not so much.

Since library-web is going the way of the Dodo, and we do have better ways to automate the build and publishing of files with GitLab, how do we replace library-web in 2021? The answer is, unsurprisingly: continuous integration pipelines.

I will assume that you’re already building—and testing—your library using GitLab’s CI; if you aren’t, then you have bigger problems than just publishing your API.

So, let’s start with these preconditions:

  • your library is hosted on GNOME’s GitLab instance
  • your library is built with Meson
  • your API reference is generated with gi-docgen or gtk-doc

If your project doesn’t satisfy these preconditions you might want to work on doing so; alternatively, you can implement your own CI pipeline.

Let’s start with a simple job template:

# Expected variables:
# PROJECT_DEPS: the dependencies for your own project
# MESON_VERSION: the version of Meson you depend on
# MESON_EXTRA_FLAGS: additional Meson setup options
#   you wish to pass to the configuration phase
# DOCS_FLAGS: the Meson setup option for enabling the
#   documentation, if any
# DOCS_PATH: the path of the generated reference,
#   relative to the build root
.gidocgen-build:
  image: fedora:latest
  before_script:
    - export PATH="$HOME/.local/bin:$PATH"
    - dnf install -y python3-pip
    - dnf install -y ${PROJECT_DEPS}
    - pip3 install meson==${MESON_VERSION} gi-docgen
  script:
    - meson setup ${MESON_EXTRA_FLAGS} ${DOCS_FLAGS} _docs .
    - meson compile -C _docs
    - |
      pushd "_docs/${DOCS_PATH}" > /dev/null
      tar cfJ ${CI_PROJECT_NAME}-docs.tar.xz .
      popd > /dev/null
    - mv _docs/${DOCS_PATH}/${CI_PROJECT_NAME}-docs.tar.xz .
  artifacts:
    when: always
    name: 'Documentation'
    expose_as: 'Download the API reference'
    paths:
      - ${CI_PROJECT_NAME}-docs.tar.xz

This CI template will:

  • download all the required dependencies for building the API reference using gi-docgen
  • build your project, including the API reference
  • create an archive with the API reference
  • store the archive as a CI artefact that you can easily download

Incidentally, by adding a meson test -C _docs step to the script section, you can easily run your test suite as well; and if you have a test() target in your build that runs gi-docgen check, then you can verify that your documentation is always complete.
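As a sketch, such a test() target might look like this in your meson.build (the gi-docgen check invocation is real; the configuration file name and the mylib_gir variable are placeholders for whatever your build defines):

```meson
# Fail the test suite if the documentation is incomplete or malformed
gidocgen = find_program('gi-docgen')

test('doc-check', gidocgen,
  args: [
    'check',
    '--config', meson.current_source_dir() / 'mylib.toml',
    mylib_gir[0].full_path(),
  ],
  suite: 'docs',
)
```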

Now, all you have to do is create your own CI job that inherits from the template inside its own stage. I will use JSON-GLib as a reference:

stages:
  - docs

api-reference:
  stage: docs
  extends: .gidocgen-build
  needs: []
  variables:
    MESON_VERSION: 0.55.3
    DOCS_FLAGS: -Dgtk_doc=true
    DOCS_PATH: docs/json-glib-1.0

“What about gtk-doc!”, I hear from the back of the room. Well, fear not, because there’s a similar template you can use if you’re still using gtk-doc in your project:

# Expected variables:
# PROJECT_DEPS: the dependencies for your own project
# MESON_VERSION: the version of Meson you depend on
# MESON_EXTRA_FLAGS: additional Meson setup options you
#   wish to pass to the configuration phase
# DOCS_FLAGS: the Meson setup option for enabling the
#   documentation, if any
# DOCS_TARGET: the Meson target for building the
#   documentation, if any
# DOCS_PATH: the path of the generated reference,
#   relative to the build root
.gtkdoc-build:
  image: fedora:latest
  before_script:
    - export PATH="$HOME/.local/bin:$PATH"
    - dnf install -y python3-pip ninja-build
    - dnf install -y ${PROJECT_DEPS}
    - pip3 install meson==${MESON_VERSION}
  script:
    - meson setup ${MESON_EXTRA_FLAGS} ${DOCS_FLAGS} _docs .
    # This is exceedingly annoying, but sadly it's how
    # gtk-doc works in Meson
    - ninja -C _docs ${DOCS_TARGET}
    - |
      pushd "_docs/${DOCS_PATH}" > /dev/null
      tar cfJ ${CI_PROJECT_NAME}-docs.tar.xz .
      popd > /dev/null
    - mv _docs/${DOCS_PATH}/${CI_PROJECT_NAME}-docs.tar.xz .
  artifacts:
    when: always
    name: 'Documentation'
    expose_as: 'Download the API reference'
    paths:
      - ${CI_PROJECT_NAME}-docs.tar.xz

And now you can use extends: .gtkdoc-build in your api-reference job.

Of course, this is just half of the job: the actual goal is to publish the documentation using GitLab’s Pages. For that, you will need another CI job in your pipeline, this time using the deploy stage:

stages:
  - docs
  - deploy

# ... the api-reference job goes here ...

pages:
  stage: deploy
  needs: ['api-reference']
  script:
    - mkdir public && cd public
    - tar xfJ ../${CI_PROJECT_NAME}-docs.tar.xz
  artifacts:
    paths:
      - public
  only:
    - master
    - main

Now, once you push to your main development branch, your API reference will be built by your CI pipeline, and the results published in your project’s Pages space—like JSON-GLib.

The CI pipeline and GitLab Pages are also useful for building complex, static websites presenting multiple versions of the documentation; or presenting multiple libraries. An example of the former is libadwaita’s website, while an example of the latter is the GTK documentation website. I’ll write a blog post about them another time.

Given that the CI templates are pretty generic, I’m working on adding them into the GNOME ci-templates repository, so you will be able to use something like:

include: ''


include: ''

without having to copy-paste the template in your own .gitlab-ci.yml file.

The obvious limitation of this approach is that you will need to depend on the latest version of Fedora to build your project. Sadly, we cannot use Flatpak and the GNOME run time images for this, mainly because we are building libraries, not applications; and because extracting files out of a Flatpak build after it has completed isn’t entirely trivial. Another side effect is that if you bump up the dependencies of your project to something on the bleeding edge and currently not packaged on the latest stable Fedora, you will need to have it included as a Meson sub-project. Of course, you should already be doing that, so it’s a minor downside.

Ideally, if GNOME built actual run time images for the SDK, we could install gtk-doc, gi-docgen, and all their dependencies into the SDK itself, and avoid depending on a real Linux distribution for the libraries in our platform.

by ebassi at August 26, 2021 12:01 AM

August 05, 2021

Chris Lord

OffscreenCanvas update

Hold up, a blog post before a year’s up? I’d best slow down, don’t want to over-strain myself 🙂 So, a year ago, OffscreenCanvas was starting to become usable but was missing some key features, such as asynchronous updates and text-related functions. I’m pleased to say that, at least for Linux, it’s been complete for quite a while now! It’s still going to be a while, I think, before this is a truly usable feature in every browser. Gecko support is still forthcoming, support for non-Linux WebKit is still off by default and I find it can be a little unstable in Chrome… But the potential is huge, and there are now double the number of independent, mostly-complete implementations that prove it’s a workable concept.

Something I find I’m guilty of, and I think that a lot of systems programmers tend to be guilty of, is working on a feature but never actually using that feature. With that in mind, I’ve been spending some time in the last couple of weeks trying to bring together demos and information on the various features that the WebKit team at Igalia has been working on. To that end, I’ve written a little OffscreenCanvas demo. It should work in any browser, but is a bit pointless if you don’t have OffscreenCanvas, so maybe spin up Chrome or a canary build of Epiphany.

OffscreenCanvas fractal renderer demo, running in GNOME Web Canary

Those of us old-skool computer types probably remember running fractal renderers back on our old home computers, whatever they may have been (PC for me, but I’ve seen similar demos on Amigas, C64s, Amstrad CPCs, etc.) They would take minutes to render a whole screen. Of course, with today’s computing power, they are much faster to render, but they still aren’t cheap by any stretch of the imagination. We’re talking 100s of millions of operations to render a full-HD frame. Running on the CPU on a single thread, this is still something that isn’t really real-time, at least implemented naively in JavaScript. This makes it a nice demonstration of what OffscreenCanvas, and really, Worker threads allow you to do without too much fuss.

The demo, for which you can look at my awful code, splits that rendering into 64 tiles and gives each tile to the first available Worker in a pool of rendering threads (different parts of the fractal are much more expensive to render than others, so it makes sense to use a work queue, rather than just shoot them all off distributed evenly amongst however many Workers you’re using). Toggle one of the animation options (palette cycling looks nice) and you’ll get a frame-rate counter in the top-right, where you can see the impact on performance that adding Workers can have. In Chrome, I can hit 60fps on this 40-core Xeon machine, rendering at 1080p. Just using a single worker, I barely reach 1fps (my frame-rates aren’t quite as good in WebKit, I expect because of some extra copying – there are some low-hanging fruit around OffscreenCanvas/ImageBitmap and serialisation when it comes to optimisation). If you don’t have an OffscreenCanvas-capable browser (or a monster PC), I’ve recorded a little demonstration too.
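The tiling and work-queue idea can be sketched independently of the browser APIs; this is illustrative code of mine, not the demo’s actual implementation, and it models the Worker pool as a synchronous callback:

```javascript
// Split a width × height canvas into a grid of tiles: these are the work
// items that would each be handed to the first available Worker.
function makeTiles(width, height, grid = 8) {
  const tiles = [];
  const tw = Math.ceil(width / grid);
  const th = Math.ceil(height / grid);
  for (let y = 0; y < height; y += th) {
    for (let x = 0; x < width; x += tw) {
      tiles.push({ x, y, w: Math.min(tw, width - x), h: Math.min(th, height - y) });
    }
  }
  return tiles;
}

// Work queue: tiles are pulled off a shared queue as workers become free,
// so a few expensive tiles cannot stall the cheap ones. In the real demo
// renderTile would be a worker.postMessage()/onmessage round trip.
function drainQueue(tiles, workerCount, renderTile) {
  const queue = tiles.slice();
  const done = [];
  let next = 0;
  while (queue.length > 0) {
    const worker = next++ % workerCount; // next free worker in the pool
    done.push(renderTile(worker, queue.shift()));
  }
  return done;
}
```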

The important thing in this demo is not so much that we can render fractals fast (this is probably much, much faster to do using WebGL and shaders), but how easy it is to massively speed up a naive implementation with relatively little thought. Google Maps is great, but even on this machine I can get it to occasionally chug and hitch – OffscreenCanvas would allow this to be entirely fluid with no hitches. This becomes even more important on less powerful machines. It’s a neat technology and one I’m pleased to have had the opportunity to work on. I look forward to seeing it used in the wild in the future.

by Chris Lord at August 05, 2021 03:33 PM

August 02, 2021

Emmanuele Bassi

Documenting GNOME for developers

You may have just now noticed that the GNOME developers documentation website has changed after 15 years. You may also have noticed that it contains drastically less content than it used to. Before you pick up torches and pitchforks, let me give you a short tl;dr of the changes:

  • Yes, this is entirely intentional
  • Yes, I know that stuff has been moved
  • Yes, I know that old URLs don’t work
  • Yes, some redirections will be put in place
  • No, we can’t go back

So let’s recap a bit the state of the developers documentation website in 2021, for those who weren’t in attendance at my GUADEC 2021 presentation:

  • library-web is a Python application, which started as a Summer of Code project in 2006, whose job was to take Autotools release tarballs, explode them, fiddle with their contents, and then publish files on the infrastructure.
  • library-web relies heavily on Autotools and gtk-doc.
  • library-web does a lot of pre-processing of the documentation to rewrite links and CSS from the HTML files it receives.
  • library-web is very much a locally sourced, organic, artisanal pile of hacks that revolve very much around the GNOME infrastructure from around 2007-2009.
  • library-web is incredibly hard to test locally, even when running inside a container, and the logging is virtually non-existent.
  • library-web is still running on Python 2.
  • library-web is entirely unmaintained.

That should cover the infrastructure side of things. Now let’s look at the content.

The developers documentation is divided in four sections:

  • a platform overview
  • the Human Interface guidelines
  • guides and tutorials
  • API references

The platform overview is slightly out of date; the design team has been reviewing the HIG and using a new documentation format; the guides and tutorials still include GTK 1 and GTK 2 content, or explain how to port GNOME 2 applications to GNOME 3, or how to write a Metacity theme.

This leaves us with the API references, which are a grab bag of miscellaneous things, listed by version numbers. Outside of the C API documentation, the only other references hosted there are the C++ bindings—which, incidentally, use Doxygen; when they aren’t broken by library-web messing about with the HTML, they have their own franken-style mash-up of styles.

Why didn’t I know about this?

If you’re asking this question, allow me to be blunt for a second: the reason you never noticed that the developers documentation website was broken is that you never actually experienced it for its intended use case. Most likely, you either just looked in a couple of well known places and never ventured outside of those; and/or you are a maintainer, and you never really cared how things worked (or didn’t work) after you uploaded a release tarball somewhere. Like all infrastructure, it was somebody else’s problem.

I completely understand that we’re all volunteers, and that things that work can be ignored because everyone has more important things to think about.

Sadly, things change: we don’t use Autotools (that much), which means release archives do not contain the generated documentation any more; this means library-web cannot be updated, unless somebody modifies the configuration to look for a separate documentation tarball that the maintainer has to generate manually and upload in a magic location on the file server—this has happened for GTK4 and GLib for the past two years.

Projects change the way they lay out the documentation, or gtk-doc changes something, and that causes library-web to stop extracting the right files; you can look at the ATK reference for the past year and a half for an example.

Projects bump up their API, and now the cross-referencing gets broken, like the GTK3 pages linking GDK2 types.

Finally, projects decide to change how their documentation is generated, which means that library-web has no idea how to extract the HTML files, or how to fiddle with them.

If you’re still using Autotools and gtk-doc, haven’t done an API bump in 15 years, and all you care about is copying a release archive to the infrastructure, I’m sure all of this will come as a surprise, and I’m sorry you’re just now being confronted with a completely broken infrastructure. Sadly, the infrastructure was broken for everybody else long before this point.

What did you do?

I tried to make library-web deal with the changes in our infrastructure. I personally built and uploaded multiple versions of the documentation for GLib (three different archives for each release) for a year and a half; I configured library-web to add more “extra tarball” locations for various projects; I tried making library-web understand the new layout of various projects; I even tried making library-web publish the gi-docgen references used by GTK, Pango, and other projects.

Sadly, every change broke something else—and I’m not just talking about the horrors of the code base. As library-web is responsible for determining the structure of the documentation, any change to how the documentation is handled leads to broken URLs, broken links, or broken redirections.

The entire castle of cards needed to go.

Which brings us to the plan.

What are you going to do?

Well, the first step has been made: the new website does not use library-web. The content has been refreshed, and more content is on the way.

Again, this leaves the API references. For those, there are two things that need to happen—and are planned for GNOME 41:

  1. all the libraries that are part of the GNOME SDK run time, built by gnome-build-meta, must also build their documentation, which will be published as part of the org.gnome.Sdk.Docs extension; the contents of the extension will also be published online.
  2. every library that is hosted on GNOME infrastructure should publish its documentation through its CI pipeline; for that, I’m working on a CI template file and image that should take care of the easy projects, and will act as a model for projects that are more complicated.

I’m happy to guide maintainers to deal with that, and I’m also happy to open merge requests on various projects.

In the meantime, the old documentation is still available as a static snapshot, and the sysadmins are going to set up some redirections to bridge us from the old platform to the new—and hopefully we’ll soon be able to redirect to each project’s GitLab pages.

Can we go back, please?

Sadly, since nobody has ever bothered picking up the developers documentation when it was still possible to incrementally fix it, going back to a broken infrastructure isn’t going to help anybody.

We also cannot keep the old website and add a new one, of course; we’d end up with two websites: one broken, unmaintained, and linked all over the place, and a new one that nobody knows exists.

The only way is forward, for better or worse.

What about Devhelp?

Some of you may have noticed that I picked up the maintenance of Devhelp, and landed a few fixes to ensure that it can read the GTK4 documentation. Outside of some visual refresh for the UI, I’m also working on making it load the contents of the org.gnome.Sdk.Docs run time extension, which means it’ll be able to load all the core API references. Ideally, we’re also going to see a port to GTK4 and libadwaita, as soon as WebKitGTK for GTK4 is more widely available.

by ebassi at August 02, 2021 09:32 AM

July 28, 2021

Tomas Frydrych

The BBC's 'Riding the North Coast 500'

The Beeb has a photographic piece on cycling the NC500. Unfortunately, these serene images, featuring four cyclists riding quiet scenic roads, are completely misleading as to the reality of cycling on the NC500.

I have just spent two weeks cycling on a section of the route in the North West, and the simple truth is the creation of the NC500 has turned what once might have been one of the best cycle touring destinations in the UK into a busy, unpleasant, thoroughfare.

The two-lane sections of the route are particularly awful to cycle on. The two-lane roads in the NW frequently lend themselves to driving at speeds well above the speed limit, even though the winding and undulating nature of the road often means very limited visibility. Performance car racing along the NC500 is rampant, and Police Scotland seem unperturbed (I did not see a single police car on the open road). As a cyclist you will be repeatedly passed by cars travelling at idiotic speeds, often passing very close.

On the single track roads the traffic usually moves a bit slower and you at least have the option to prevent close passing by systematically riding in the primary position. But traffic jams are common, as the traffic bunches up and passing places are far apart, and this makes all of the drivers just that bit more inconsiderate. Again and again you will run into drivers who will not wait at passing places for you, expecting you to dismount and step off the road. The worst offenders in this regard are the large camper vans, even though these vehicles barely fit the narrow tarmac lanes alone, and it is literally impossible for a cyclist to pass them.

I have always wanted to do a proper bike tour of the NW, but after those couple of weeks this July I largely lost the appetite for it; perhaps during the off season months it still might be a nice trip.

I have made a formal complaint to the BBC over the above piece: the selection of images, presented without any commentary, misrepresents what the NC500 is like, and I'd like to see a bit more balanced coverage of it.

by tf at July 28, 2021 02:11 PM

Emmanuele Bassi

Final Types

The type system at the base of our platform, GType, has various kinds of derivability:

  • simple derivability, where you’re allowed to create your derived version of an existing type, but you cannot derive your type any further;
  • deep derivability, where you’re allowed to derive types from other types;

An example of the first kind is any type inheriting from GBoxed, whereas an example of the second kind is anything that inherits from GTypeInstance, like GObject.

Additionally, any derivable type can be marked as abstract; an abstract type cannot be instantiated, but you can create your own derived type which may or may not be “concrete”. Looking at the GType reference documentation, you’ll notice various macros and flags that exist to implement this functionality—including macros that were introduced to cut down the boilerplate necessary to declare and define new types.

The G_DECLARE_* family of macros, though, introduced a new concept in the type system: a “final” type. Final types are leaf nodes in the type hierarchy: they can be instantiated, but they cannot be derived any further. GTK 4 makes use of this kind of types to nudge developers towards composition, instead of inheritance. The main problem is that the concept of a “final” type is entirely orthogonal to the type system; there’s no way to programmatically know that a type is “final”—unless you have access to the introspection data and start playing with heuristics about symbol visibility. This means that language bindings are unable to know without human intervention if a type can actually be inherited from or not.

In GLib 2.70 we finally plugged the hole in the type system, and we introduced the G_TYPE_FLAG_FINAL flag. Types defined as “final” cannot be derived any further: as soon as you attempt to register your new type that inherits from a “final” type, you’ll get a warning at run time. There are macros available that will let you define final types, as well.

Thanks to the “final” flag, we can also include this information in the introspection data; this will allow language bindings to warn you if you attempt to inherit from a “final” type, likely using language-native tools, instead of getting a run time warning.

If you are using G_DECLARE_FINAL_TYPE in your code you should bump up your GObject dependency to 2.70, and switch your implementation from G_DEFINE_TYPE and friends to G_DEFINE_FINAL_TYPE.

by ebassi at July 28, 2021 10:02 AM

June 10, 2021

Ross Burton

Faster image transfer across the network with zsync

Those of us involved in building operating system images using tools such as OpenEmbedded/Yocto Project or Buildroot don't always have a powerful build machine under our desk or in the same building on gigabit Ethernet. Our build machine may be in the cloud, or in another office over a VPN running over a slow residential ADSL connection. In these scenarios, repeatedly downloading gigabyte-sized images for local testing can get very tedious.

There are some interesting solutions if you use Yocto: you could expose the shared state over the network and recreate the image locally, which, if the configurations are the same, will result in no local compilation. However, this isn't feasible if your local machine isn't running Linux, or if you just want to download the image without any other complications. This is where zsync is useful.

zsync is a tool similar to rsync but optimised for transferring single large files across the network. The server generates metadata containing the chunk information, and then shares both the image and the metadata over HTTP. The client can then use any existing local file as a seed file to speed up downloading the remote file.

On the server, run zsyncmake on the file to be transferred to generate the .zsync metadata. You can also pass -z if the file isn't already compressed to tell it to compress the file first.

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic

$ zsyncmake -z core-image-minimal-*.wic

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 4.7K Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic.zsync
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic
-rw-r--r-- 1 ross ross  53M Jun 10 13:45 core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz

Here we have ~420MB of disk image, which compressed down to a slight 53MB, and just ~5KB of metadata. This image compressed very well as the raw image is largely empty space, but for the purposes of this example we can ignore that.

The zsync client downloads over HTTP and has some non-trivial requirements so you can't just use any HTTP server, specifically my go-to dumb server (Python's integrated http.server) isn't sufficient. If you want a hassle-free server then the Node.js package http-server works nicely, or any other proper server will work. However you choose to do it, share both the .zsync and .wic.gz files.

$ npm install -g http-server
$ http-server -p 8080 /path/to/images

Now you can use the zsync client to download the images. Sadly zsync isn't actually magical, so the first download will still need to download the full file:

$ zsync http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.zsync
No relevent local data found - I will be downloading the whole file.
downloading from http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz:
#################### 100.0% 7359.7 kBps DONE

verifying download...checksum matches OK
used 0 local, fetched 55208393

However, subsequent downloads will be a lot faster, as only the differences will be fetched. Say I decide that core-image-minimal is too, well, minimal, and build core-image-sato, which is a full stack instead of just busybox. After building the image and metadata we now have a ~730MB image:

-rw-r--r-- 1 ross ross 729M Jun 10 14:17 core-image-sato-fvp-base-20210610125939.rootfs.wic
-rw-r--r-- 1 ross ross 118M Jun 10 14:18 core-image-sato-fvp-base-20210610125939.rootfs.wic.gz
-rw-r--r-- 1 ross ross 2.2M Jun 10 14:19 core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync

Normally we'd have to download the full 730MB, but with zsync we can just fetch the differences. By telling the client to use the existing core-image-minimal as a seed file, we can fetch the new core-image-sato:

$ zsync -i core-image-minimal-fvp-base-20210610124230.rootfs.wic  http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync
reading seed file core-image-minimal-fvp-base-20210610124230.rootfs.wic
core-image-minimal-fvp-base-20210610124230.rootfs.wic. Target 70.5% complete.
downloading from http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.gz:
#################### 100.0% 10071.8 kBps DONE     

verifying download...checksum matches OK
used 538800128 local, fetched 70972961

By using the seed file, zsync determined that it already has 70% of the file on disk, and downloaded just the remaining chunks.

For incremental builds the differences can be very small when using the Yocto Project: thanks to the reproducible-builds effort, recompiles introduce no spurious changes (such as embedded timestamps or non-deterministic compilation).

Now, obviously I don't recommend doing all of this by hand. For Yocto Project users, as of right now there is a patch queued for meta-openembedded adding a recipe for zsync-curl, and a patch queued for openembedded-core adding zsync and gzsync image conversion types (for IMAGE_FSTYPES, for example wic.gzsync) to generate the metadata automatically. Bring your own HTTP server and you can fetch without further effort.
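Once those patches land, enabling this from a Yocto build should be little more than adding the conversion type to the image configuration, for example in local.conf (the exact names depend on the final form of the queued patches, so treat this as a sketch):

```conf
# Build a gzip-compressed wic image with zsync metadata alongside it
IMAGE_FSTYPES += "wic.gzsync"
```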

by Ross Burton at June 10, 2021 03:28 PM

May 31, 2021

Tomas Frydrych

New Year Reflection


The year has begun splendidly: the sun is shining, the sky is blue, the hills around are capped in snow, the reservoir part-frozen, and under my wheels the satisfying crunch of hard ice. I turn onto the minor single track road that takes me over the hills. Here in the shade of the frost-decorated evergreens it’s noticeably colder. I leave behind a stuck 4x4, pedalling out of the trees into the sun, grinning. At this moment there is nowhere else I’d rather be, nothing else I’d rather be doing.

Deciding to get a road bike again, after some 25+ year pause, was one of the best decisions of 2020, and today is the best thirty miles I have done on this bike yet. Here in the relative solitude among sheep and cows, the hum of my tyres echoing the hum of the wind turbines on the hills, as you do on a New Year’s Day, I reminisce about the years (and bikes) gone by.

My first bike was a birthday present, I think the year before I started school. It had 22" wheels, a coaster brake, the frame was of the girl’s type and it was pink, but it was mine. My parents didn’t believe in stabilisers. That day Dad simply took me up to the top of the gentle hill that was our street and said ‘I am going to hold you by the saddle’ ... and then let go of me. And that was that.

My second bike came a few years later. A standard 700c, alas also of the girl’s type; this time I did mind, but at least it was blue (it was only years later I came to understand how hard bikes were to get in 1970s Czechoslovakia; plenty were made, but the majority were exported).

During those years, the bike was primarily a means to an end, which came into its own during the summer holidays: a way to get between the grannies and aunties, to the forest hunting for mushrooms, to the river for a swim. But my real interests lay elsewhere, and on foot.

That changed in the year of my 20th birthday. The autumn before, I was diagnosed with an early onset arthritis in my knees and was told to, for the rest of my life, forget about sport, except for swimming and cycling. It wasn’t what I wanted to hear, but the pain was unbearably real. And so that January I scoured the For Sale ads until I found a ‘real’ bike I could afford.

I took three buses across the city, expecting to cycle back. The seller wasn’t in, but his wife took me to a barn where the bike was kept—neglected, long unused, the pedals not even attached. I knew nil about bikes at the time, but all in all I thought it could be fixed up, and so the bike and me took three buses back home.

As I realised later, the frame came from a track bike that someone (not too gently) spread to accommodate a 6-speed cassette; this resulted in the wheels not being entirely aligned, but that didn’t bother me too much. The frame had super short chain stays and, to match, the fork had zero rake, so that my toes overlapped the front wheel by more than an inch; that took some getting used to. But the whole thing was stupidly light compared to even the top-end bike that could be bought in a shop.

I spent that winter fixing it up: hand painted it with white enamel paint, fitted hooded brake levers, nice bar tape, new skinny tubulars, new pedals, and, with particular pride, a new aluminium crankset. Then I saved up for a proper pair of cycling shoes; they gave me a few scary moments at traffic lights to start with, but I got the hang of the toe clip straps in the end.

As I said before, I knew nothing about bikes at this point, so I didn’t know tubulars had to be glued onto the rims. Fortunately, there was enough glue residue left on the rims from the original tyres to keep me in one piece until a friend set me right.

That bike took me to new places and friendships, provided a means of keeping sane, of escaping bleak industrial landscapes; I rode it on our first date with Linda. Then in my late twenties my knee problems went away, and I turned back to my old passions, and to mountain bikes.

Over the years that passed I have forgotten what a simple pleasure there is in riding a road bike, but like my early bikes, this bike is more than anything else a means to an end, a bike ‘to ride to somewhere’ rather than just ‘to ride’. It’s about maintaining sanity in a world where a car is suddenly not a viable option: an old fashioned steel frame, mudguards, a rack to take my cameras and sandwiches, a flask of coffee. And just now it’s time for a cup.

[Just found this in drafts folder :-)]

by tf at May 31, 2021 09:09 AM

April 24, 2021

Tomas Frydrych

Return the National Parks to the Tribes

This piece in the Atlantic is well worth a read, cutting right through the cosy myth of 19th-century conservationism. There are lessons in it for present-day Scotland, for sure.

by tf at April 24, 2021 08:21 AM

April 18, 2021

Tomas Frydrych

A Photographer’s Quest for Purpose


I rarely enter photographic competitions, but the Scottish Landscape Photographer of the Year is an important point in my photographic calendar: Scottish landscape is my primary photographic interest and it’s always worthwhile to be able to place my own work in the wider context of other people's imagery.

While the competition has been of a very high standard in the four years I have been taking part, the images that have come through to the final this year seem to be particularly so. I have much enjoyed browsing through the galleries, and there are a few images there that have really made me pause and reflect on the beauty of this wee land of ours.

But, Friday a week ago, as I was browsing the landscape and treescape galleries, I was struck not simply by what is in these images, but also, and perhaps more so, by what is not. By the absence of the very hallmarks of contemporary Scottish landscape: the wind turbines, the hydro schemes, the electricity lines; the dirt roads, the detritus of industrial scale clear felling. After all, in today’s Scotland there is hardly a vista where at least some of these signs of human intrusion would not be visible.

Now, I quite understand why these man-made objects are largely absent from contemporary Scottish landscapes. They are not the things we seek to see, they are not the bits of Scotland we feel emotionally attached to, nor are they what the tourists come to Scotland for. And of course, not least for the reasons just stated, it’s bloody hard to take an engaging photo of a wind farm, never mind the visual mess left behind by forestry operations.

This year, two of my images made it into the second round, and while neither of them made it any further, the Birth of a Storm will be part of the SLPOTY exhibition. This image is a quintessential Scottish landscape: a loch, a well-known hill, some clouds. It was an awkward image to print, one which challenged my darkroom technique, and I have really enjoyed the process. But on an emotional level this image doesn’t mean much to me; I have taken better images of the same subject, and images like that abound.

The other image, called Scotland’s Bright Future?, captures the pylons of the Beauly to Denny electricity line. It’s not an image I felt thrilled taking, nor is it a subject matter I’d necessarily want to see on my living room wall. Yet, I felt strangely compelled to print it at A3 size, and I felt compelled to include it among the 15 images of my SLPOTY portfolio. I admit I was much surprised it made it into the second round, and not at all surprised that it didn’t make it any further. It’s not that good; I have taken better images this year. Yet, for me this is an image with a high emotional charge, an image that expresses something of my growing angst for this land of ours.

If we as landscape photographers systematically choose not to capture these human intrusions, are we not guilty of perpetuating a mythical landscape of Bonnie Scotland that has not existed for some time now? Does such a myth not contribute to our society’s blasé attitude to this land of ours? Is the purpose of landscape photography to simply entertain, or should it also be asking probing questions about what lies out there?

These questions are not new for me. I have touched elsewhere on my discovery of the ‘other Scotland’, the one that doesn’t make it into tourist guidebooks, and if anything the last year has made these questions more pressing. Being stuck for a year exploring timber plantations and wind farms has not been without value: I have a much better idea of what hides behind the ‘renewables’ moniker, and what an explosive growth of them would mean for the land. And I am left with much more doubt that the nation as a whole, and those in power in particular, understand or care.

But perhaps a more balanced and realistic landscape photography can make a degree of difference?

by tf at April 18, 2021 02:27 PM

February 21, 2021

Emmanuele Bassi

Documentation changes

Back in the late ‘90s, people working on GTK had the exact same problem we have today: how do we document the collection of functions, types, macros, and assorted symbols that we call “an API”. It’s all well and good to strive for an API that can be immediately grasped by adhering to a set of well defined conventions and naming; but nothing is, or really can be, “self documenting”.

When GTK 1.0 was released, the documentation was literally stored in handwritten Texinfo files; the API footprint of GTK was small enough to still make it possible, but not really maintainable in the longer term. In 1998, a new system was devised for documenting GTK 1.2:

  • a script, to parse the source files for the various declarations and dump them into machine parseable “templates”, that would then be modified to include the actual documentation, and committed to the source repository
  • a script that would generate, compile, and run a small program to introspect the type system for things like hierarchy and signals
  • a script to take the templates, the list of symbols divided into logical “sections”, an index file, and generate a bunch of DocBook XML files
  • finally, a script to convert DocBook to HTML or man pages, via xsltproc and an XML stylesheet

Whenever somebody added a new symbol to GTK, they would need to run the script, find the symbol in the template files, write the documentation using DocBook tags if necessary, and then commit the changes alongside the rest of the code.

Since this was 1998, and the scripts had to parse a bunch of text files using regular expressions, they were written in Perl and strung together with a bunch of Makefile rules.

Thus, gtk-doc was born.

Of course, since other libraries needed to provide an API reference to those poor souls using them, gtk-doc ended up being shared across the GNOME platform. We even built part of our website and release infrastructure around it.

At some point between 1998 and 2009, gtk-doc gained the ability to generate those template files incrementally, straight from the C sources; this allowed moving the preambles of each section into the corresponding source file, thus removing the templates from the repository, and keeping the documentation close to the code it references, in the hope it would lead to fewer instances of docs drift.

Between 2009 and 2021, a few things happened:

  1. gobject-introspection has become “a thing”; g-i also parses the C code, and does it slightly more thoroughly than gtk-doc, to gather the same information: declarations, hierarchy, interfaces, properties, and signals, and even documentation, which is all shoved into a well-defined XML file; on top of that, g-i needs annotations in the source to produce a machine-readable description of the C ABI of a library, which can then be used to generate language bindings
  2. turns out that DocBook is pretty terrible, and running xsltproc on large, complex DocBook files is really slow
  3. Perl isn’t really a Hot Language™ like it was in the late ‘90s; many Linux distributions dropped it from the core installation, and not many people speak it that fluently, which means not many people will want to help with a large Perl application

To cope with issue (1), gtk-doc had to learn to parse introspection annotations.

Issue (2) led to replacing DocBook tags inside the inline documentation with a subset of Markdown, augmented with custom code blocks and intra-document anchors for specific sections.

Issue (3) led to a wholesale rewrite in Python, in the hope that more people would contribute to the maintenance of gtk-doc.

Sadly, all three solutions ended up breaking things in different ways:

  1. gtk-doc never really managed to express the introspection information in the generated documentation, outside of references to an ancillary appendix. If an annotation says “this argument can be NULL”, for instance, there’s no need to write “or NULL” in the documentation itself: the documentation tool can write it out for you.
  2. the move to Markdown means that existing DocBook tags in the documentation are now ignored or, worse, misinterpreted for HTML and not rendered; this requires porting all the documentation in every library, in a giant flag day, to avoid broken docs; on top of that, DocBook’s style sheet to generate HTML started exhibiting regressions after a build system change, which led, among other things, to the disappearance of per-version symbols indices
  3. the port to Python probably came too late, and ended up having many, many regressions; gtk-doc is still a pretty complex tool, and it still caters to many different use cases, spanning two decades; as much as its use is documented and tested, its internals are really not, meaning that it’s not an easy project to pick up

Over the past 10 years various projects started migrating away from gtk-doc; gobject-introspection itself shipped a documentation tool capable of generating API references, though it mostly is a demonstrator of potential capabilities more than an actual tool. Language bindings, on the other hand, adopted the introspection data as the source for their documentation, and you can see it in Python, JavaScript, and Rust.

As much as I’d like to contribute to gtk-doc, I’m afraid we have reached the point where we might want to experiment with something more radical, instead of patching things up and ending up breaking what’s left.

So, since we’re starting from the bottom up, let’s figure out what are the requirements for a tool to generate the documentation for GTK:

  • be fast. Building GTK’s API reference takes a long time. The API footprint of GDK, GSK, and GTK is not small, but there’s no reason why building the documentation should take a comparable amount of time as building the library. We moved to Meson because it has improved the build times of GTK, we don’t want to get into a bottleneck now.
  • no additional source parsing. We already parse the C sources in order to generate the introspection data, we don’t need another pass at that.
  • tailored for GTK. Whenever GTK changes, the tool must change with GTK; the output must adapt to the style of documentation GTK uses.
  • integrated with GTK. We don’t want an external dependency that makes it harder to deploy the GTK documentation on non-Linux platforms. Using it as a sub-project would be the best option, followed by being able to install it everywhere without additional, Linux-only dependencies.

The explicit non-goal is to create a general purpose documentation tool. We don’t need that; in fact, we’re actively avoiding it. Regardless of what you’ve been taught at university, or what your geeky instincts tell you, not every problem requires a generic solution. The whole reason why we are in this mess is that we took a tool for generating the GTK documentation and then generalised the approach until it fell apart under its own weight.

If you want a general purpose documentation tool for C and C++ libraries, there are many to choose from.

There’s also gtk-doc: if you’re using it already, I strongly recommend helping out with its maintenance.

Back in November 2020, as a side project while we were closing in on the GTK 4.0 release date, I started exploring the idea of parsing the introspection data to generate the C API reference for GTK. I wanted to start from scratch, and see how far I could go, so I deliberately avoided taking the GIR parser from gobject-introspection; armed only with the GIR schema and a bunch of Python, I ended up writing a decent parser that would be able to load the GTK introspection XML data, including its dependencies, and dump the whole tree of C identifiers and symbols. After a break, at the end of January 2021, I decided to take a page out of the static website generator rule book, and plugged the Jinja templates into the introspection data. The whole thing took about a couple of weeks to go from this:

Everybody loves a tree-like output on the command line

to this:

Behold! My stuff!
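The parsing side of that experiment needs nothing beyond the standard library. Here is a minimal sketch of the idea; the namespace URIs are the real ones from the GIR schema, but the XML fragment and all the names in it are made up for illustration:

```python
import xml.etree.ElementTree as ET

# The real GIR XML namespaces; the document below is an invented example.
CORE = "http://www.gtk.org/introspection/core/1.0"
C = "http://www.gtk.org/introspection/c/1.0"

GIR = f"""<repository version="1.2" xmlns="{CORE}" xmlns:c="{C}">
  <namespace name="Gtk" version="4.0">
    <class name="Widget" c:type="GtkWidget">
      <method name="show" c:identifier="gtk_widget_show"/>
    </class>
    <class name="Window" parent="Widget" c:type="GtkWindow">
      <method name="present" c:identifier="gtk_window_present"/>
    </class>
  </namespace>
</repository>"""

def load_namespace(xml_text):
    """Parse a GIR document and map each class to its methods' C identifiers."""
    root = ET.fromstring(xml_text)
    ns = root.find(f"{{{CORE}}}namespace")
    classes = {}
    for cls in ns.findall(f"{{{CORE}}}class"):
        methods = [m.get(f"{{{C}}}identifier")
                   for m in cls.findall(f"{{{CORE}}}method")]
        classes[cls.get("name")] = methods
    return classes

classes = load_namespace(GIR)
for name, methods in sorted(classes.items()):
    print(name, "->", ", ".join(methods))
```

From a tree like this, feeding each class and its symbols through a template engine such as Jinja is a short step.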

My evil plan of generating something decent enough to be usable and then showing it to people with actual taste and web development skills paid off, because I got a whole merge request from Martin Zilz to create a beautiful theme, with support for responsive layout and even for a dark variant:

Amazing what actual taste and skill can accomplish

Like night and day

Turns out that when you stop parsing C files and building small binaries to introspect the type system, and remove DocBook and xsltproc from the pipeline, things get fast. Who knew…

Additionally, once you move the templates and the styling outside of the generator, you can create more complex documentation hierarchies, while retaining the ability for people who are not programmers to change the resulting HTML.

The interesting side effect of using introspection data is that our API reference now matches what language bindings are able to see and consume, and oh boy, do we suck at that. Part of the fault lies in the introspection parser not being able to cover some of the nastiest parts of C, like macros (though, hopefully, that will improve in the near future); but a lot of issues come from our own API design. Even 10 years after the introduction of introspection, we’re still doing some very dumb things when it comes to C API, and let’s ignore stuff that happened 20 years ago that we haven’t been able to fix yet. Hopefully, the documentation slapping us in the face is going to help us figure things out before they hit stable releases.

What’s missing from gi-docgen? Well, you can look at the 2021.1 milestone on GitLab:

  • more documentation on the ancillary files used for project and template configuration; stabilising the key/value pairs would also be part of the documentation effort
  • client-side search, with symbols exposed to tools like GNOME Builder through something that isn’t quite as tragic as DevHelp files
  • automatic cross-linking with dependencies documented by gi-docgen, especially for libraries with multiple namespaces, like GTK and Pango
  • generating proper dependency files, for the consumption of build tools like Meson

In the meantime, what’s missing for GTK to use this? Mainly, porting the documentation away from the various gtk-doc-isms, like marking symbols with sigils, or using |[ ... ]| to define code blocks. Additionally, since the introspection scanner only attaches SECTION blocks to the documentation element of a class, all the sections that operate as “grab bag of related symbols” need to be moved to a separate Markdown file.
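The mechanical part of that porting is amenable to simple scripting. As an illustrative sketch only (the real conversion covers far more gtk-doc-isms than this, and the function names here are invented), turning |[ ... ]| code blocks with their optional language hint into Markdown fences can be done with a regular expression:

```python
import re

# Match |[ ... ]| blocks, optionally led by a <!-- language="C" --> hint,
# as used in gtk-doc comments.
BLOCK_RE = re.compile(
    r'\|\[\s*(?:<!--\s*language="(\w+)"\s*-->)?\n(.*?)\]\|',
    re.DOTALL)

FENCE = chr(96) * 3  # three backticks, spelled out to keep this example readable

def port_code_blocks(comment):
    """Rewrite gtk-doc |[ ... ]| blocks as Markdown code fences."""
    def repl(m):
        lang = (m.group(1) or "").lower()
        return f"{FENCE}{lang}\n{m.group(2)}{FENCE}"
    return BLOCK_RE.sub(repl, comment)

old = 'Shows the widget:\n|[<!-- language="C" -->\ngtk_widget_show (widget);\n]|'
new = port_code_blocks(old)
print(new)
```

A real porting script would also handle sigil-marked symbols, SECTION blocks, and the other gtk-doc conventions mentioned above, but the pattern-and-replace shape stays the same.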

It must needs be remarked: gi-docgen is not a generic solution for documenting C libraries. If your library does not have introspection, doesn’t use type classes, or has a very different ABI exposed through introspection than the actual C API, then you’re not going to find it useful—and I don’t have any plans to cater to your use cases either. You should keep using gtk-doc if it still works for you; or you may want to consider other documentation tools.

Anything that complicates the goal of this tool—generating the API reference for GTK and ancillary libraries—is completely out of scope.

by ebassi at February 21, 2021 01:34 AM

February 17, 2021

Emmanuele Bassi

History of GNOME: Table of Contents

  1. Episode 0: Prologue
    • What
    • Why
    • Who
    • How

Chapter 1: Perfection, achieved

  1. Episode 1.1: The GNU Network Object Model Environment
    • Linux and Unix
    • X11, and the desktop landscape
    • KDE and the Qt licensing
    • GNU Image Manipulation Program
    • GTK
    • Guile and language bindings
    • Network Object Model
  2. Episode 1.2: Desktop Wars
    • KDE vs GNOME
    • C++ vs C vs Perl vs Scheme vs ObjC vs …
    • Qt vs GTK
    • Red Hat vs the World
    • Project Bob
    • GTK 1.2 and themes
    • GNOME 1.0
  3. Episode 1.3: Land of the bonobos
    • Helix Code, and Red Carpet
    • Eazel, and Nautilus
    • Components, components, components
    • GNOME 1.2
  4. Episode 1.4: Founding the Foundation
    • GUADEC
    • The GNOME Foundation
    • The GNOME logo
    • Working on GNOME: CVS
    • Working on GNOME: Bugzilla
  5. Episode 1.5: End of the road
    • The window manager question
    • Sawmill/Sawfish
    • GConf
    • GNOME 1.4
    • Sun, accessibility, and clocks
    • Dot bomb
    • Eazel’s last whisper
    • Outside-context problem: OSX
  6. Side episode 1.a: GTK 1
  7. Side episode 1.b: Language bindings
  8. Side episode 1.c: GNOME applications

Chapter 2: Perfection, Improved

  1. Episode 2.0: Retrospective
  2. Episode 2.1: On brand
    • GTK 2.0: GTK Harder
    • Fonts, icons, and Unicode
    • Configuration woes
  3. Episode 2.2: Release day
    • Design, usability, accessibility, and brand
    • Human Interface Guidelines
    • The cost of settings
    • Time versus features
    • GNOME 2.0 reactions
  4. Side episode 2.a: Building GNOME

by ebassi at February 17, 2021 06:19 PM

December 31, 2020

Tomas Frydrych

Onwards and Upwards


It’s been an odd year, to say the least. A year that brought out a pinch of community spirit in all of us, and alongside it a large dose of selfishness.

For us it started quite well, Linda was training for the Highland Fling and by February was in great shape. Then I picked up ‘a virus’ the first week in March, skiing in the north of Italy ... the symptoms came four days after coming back, just as we were about to head to Aberdeen for the D33. At this stage the UK wasn’t taking COVID seriously (thanks a lot, Boris), there was no testing, and events like the D33 were still held; Linda sensibly pulled out, and her full symptoms came three days later.

We both had what you would call a ‘mild COVID’. I was over the worst of it in 48h, thinking I would go for a short run the next day ... wishful thinking. I managed a 4k jog ten days later, but then needed a full week for my legs to recover (up to this point I’d not have considered 4k worth lacing the shoes for). Linda’s experience was similar; it took us both over a month to start getting back to some sort of normal, but with no fitness left whatsoever, and I have been left with an irritable throat that still flares up from time to time when I get tired. As I said, a mild COVID, and we count our blessings.

The biggest regret of the year has been the three lost holidays in March, June and November; the last one in particular rankled, as our area should have been in Tier 2, but was kept in 3 to avoid the Weegies flocking to the pubs (not wholly unjustified given how overrun the Falkirk shops were with Glasgow tourists before Christmas). But again one needs to keep things in perspective, given what might have been, mustn’t grumble.

There were upsides to the year too: discovering some nice places within walking / running / cycling distance of home, growing veggies, eating well, joining the MAMILs. We have learnt to appreciate the little pleasures more (like sitting with a Kelly Kettle in the garden), and that, I think, is a good thing.

Of course, there is the whole Brexit saga finally coming to an end. My take on this has been pretty simple (never forget, never forgive), and nothing I have seen this year has made me change that view. There is only one way forward out of this mess: let England to its own devices, Scotland can do so much better ... bring on IndyRef2!

2021? I expect this coming year to be not dissimilar to 2020, regardless of what Boris says. Like everything that comes out of that imbecile’s mouth, ‘Freedom by Easter’ (just like ‘Normality by Christmas’) is manifestly bollocks. To achieve herd immunity requires ~70% of the herd to be immune; using a vaccine with 90% efficacy, that would require nearly 80% of the nation to be immunised. I don’t see this happening by the autumn, because (a) the UK Government is not planning to fully vaccinate anywhere near those numbers, and (b) it would require the vaccinations to be mandatory to achieve that. There is also the fact that the efficacy of the Oxford vaccine (which presumably will be the main vaccine used in the UK) is still unknown, with some reports suggesting it could be only in the 60% region. Nor do we know how long the immunity might last. The vaccines are a wee light at the end of the tunnel, but you know what they say about lights at the end of a tunnel ...
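For what it’s worth, the back-of-the-envelope arithmetic behind that estimate is just the herd-immunity threshold divided by the vaccine efficacy:

```python
# Back-of-the-envelope check of the herd-immunity arithmetic above.
herd_threshold = 0.70   # ~70% of the population needs to be immune
efficacy = 0.90         # assumed vaccine efficacy
coverage_needed = herd_threshold / efficacy
print(f"required vaccination coverage: {coverage_needed:.0%}")
```

which gives roughly 78%, i.e. the “nearly 80%” figure.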

Anyway, have a good one, keep safe, make most of what it brings.

by tf at December 31, 2020 09:10 AM

August 27, 2020

Chris Lord

OffscreenCanvas, jobs, life

Hoo boy, it’s been a long time since I last blogged… About 2 and a half years! So, what’s been happening in that time? This will be a long one, so if you’re only interested in a part of it (and who could blame you), I’ve titled each section.

Leaving Impossible

Well, unfortunately my work with Impossible ended, as we essentially ran out of funding. That’s really a shame; we worked on some really cool, open-source stuff, and we’ve definitely seen similar innovations in the field since we stopped working on it. We took a short break (during which we also, unsuccessfully, searched for further funding), after which Rob started working on a cool, related project of his own that you should check out, and I, being a bit less brave, started seeking out a new job. I did consider becoming a full-time musician, but business wasn’t picking up as quickly as I’d hoped it might in that down-time, and with hindsight, I’m glad I didn’t (Covid-19 and all).

I interviewed with a few places, which was certainly an eye-opening experience. The last ‘real’ job interview I did was for Mozilla in 2011, which consisted mainly of talking with engineers that worked there, and working through a few whiteboard problems. Being a young, eager coder at the time, this didn’t really faze me back then. Turns out either the questions have evolved or I’m just not quite as sharp as I used to be in that very particular environment. The one interview I had that involved whiteboard coding was a very mixed bag. It seemed a mix of two types of questions: those that are easy to answer (but unless you’re in the habit of writing very quickly on a whiteboard, slow to write down) and those that were pretty impossible to answer without specific preparation. Perhaps this was the fault of recruiters, but you might hope that interviews would be catered somewhat to the person you’re interviewing, or the work they might actually be doing, neither of which seemed to be the case? Unsurprisingly, I didn’t get past that interview, but in retrospect I’m also glad I didn’t. Igalia’s interview process was much more humane, and involved mostly discussions about actual work I’ve done, hypothetical situations and ethics. They were very long discussions, mind, but I’m very glad that they were happy to hire me, and that I didn’t entertain different possibilities. If you aren’t already familiar with Igalia, I’d highly recommend having a read about them/us. I’ve been there a year now, and the feeling is quite similar to when I first joined Mozilla, but I believe with Igalia’s structure, this is likely to stay a happier and safer environment. Not that I mean to knock Mozilla, especially now, but anyone that has worked there will likely admit that along with the giddy highs, there are also some unfortunate lows.


I joined Igalia as part of the team that works on WebKit, and that’s what I’ve been doing since. It almost makes perfect sense in a way. Surprisingly, although I’ve spent overwhelmingly more time on Gecko, I did actually work with WebKit first while at OpenedHand, and for a short period at Intel. While celebrating my first commit to WebKit, I did actually discover it wasn’t my first commit at all; I’d contributed a small embedding-related fix-up in 2008. So it’s nice to have come full-circle! My first work at Igalia was fixing up some patches that Žan Doberšek had prototyped to allow direct display of YUV video data via pixel shaders. Later on, I was also pleased to extend that work somewhat by fixing some vc4 driver bugs and GStreamer bugs, to allow for hardware decoding of YUV video on Raspberry Pi 3b (this, I believe, is all upstream at this point). WebKitGTK and WPE WebKit may be the only Linux browser backends that leverage this pipeline, allowing for 1080p30 video playback on a Pi3b. There are other issues making this less useful than you might think, but either way, it’s a nice first achievement.


After that introduction, I was pointed at what could be fairly described as my main project, OffscreenCanvas. This was also a continuation of Žan’s work (he’s prolific!), though there has been significant original work since. This might be the part of this post that people find most interesting or relevant, but having not blogged in over 2 years, I can’t be blamed for waffling just a little. OffscreenCanvas is a relatively new web standard that allows the use of canvas API disconnected from the DOM, and within Workers. It also makes some provisions for asynchronously updated rendering, allowing canvas updates in Workers to bypass the main thread entirely and thus not be blocked by long-running processes on that thread. The most obvious use-case for this, and I think the most practical, is essentially non-blocking rendering of generated content. This is extremely handy for maps, for example. There are some other nice use-cases for this as well – you can, for example, show loading indicators that don’t stop animating while performing complex DOM manipulation, or procedurally generate textures for games, asynchronously. Any situation where you might want to do some long-running image processing without blocking the main thread (image editing also springs to mind).

Currently, the only complete implementation is in Blink. Gecko has a partial implementation that only supports WebGL contexts (and, last time I tried, crashed the browser on creation…), but as far as I know, that’s it. I’ve been working on this, with encouragement and cooperation from Apple, on and off for the past year. In fact, as of August 12th, it’s even partially usable, though a fair bit is still missing. I’ve been concentrating on the 2d context use-case, as I think it’s by far the most useful part of the standard. It’s at the point where it’s mostly usable, minus text rendering and minus some edge-case colour parsing. Asynchronous updates are also not yet supported, though I believe that’s fairly close for Linux. OffscreenCanvas is available behind the experimental features flag, for those who want to try it out.

My next goal, after asynchronous updates on Linux, is to enable WebGL context support. I believe these aren’t particularly tough goals, given where it is now, so hopefully they’ll happen by the end of the year. Text rendering is a much harder problem, but I hope that between us at Igalia and the excellent engineers at Apple, we can come up with a plan for it. The difficulty is that both styling and font loading/caching were written with the assumption that they’d run on just one thread, and that that thread would be the main thread. A very reasonable assumption in a pre-Worker and pre-many-core-CPU world, of course, but increasingly less so now, and very awkward for this particular piece of work. Hopefully we’ll persevere, though; this is a pretty cool technology, and I’d love to contribute to making it feasible to use widely, and to lessening the gap between native and the web.

And that’s it from me. Lots of non-work related stuff has happened in the time since I last posted, but I’m keeping this post tech-related. If you want to hear more of my nonsense, I tend to post on Twitter a bit more often these days. See you in another couple of years 🙂

by Chris Lord at August 27, 2020 08:56 AM

Tomas Frydrych

The Unsustainable Outdoors

The Unsustainable Outdoors

COVID has, so I hear, brought new levels of antisocial behaviour into the Outdoors: stories about irresponsible campers seem to appear daily in the news, and a petition asking for curbing of rights to camp has been lodged with the Scottish Parliament.

Yet, what surprises me most about this is the surprise itself, for there is nothing new here: abandoned camping equipment and human faeces have plagued our outdoors for years, as have the endless fire pits and disposable barbecues, and the chopping down of trees for fuel. These issues have long been widespread; just ask any bothy maintainer. Nor are the problems around the idiocy that is the NC500 new; they have always been there.

The only thing that has changed is scale; I have discussed the unsustainability of contemporary outdoor practices at mass scale elsewhere, so won’t go into it again. The Pandemic has brought out a new spawn of Adventurers, folk who have no relationship with the outdoors and no emotional affinity with nature. But, and this is the crux of the matter, their attitudes are representative of our society’s general attitudes to nature and the environment: ‘Let Someone Else Clean it Up’ and ‘Carpe Diem’ are the mantras we have been living by for a long time.

In other words, this is what mass participation in the outdoors looks like, and no amount of platitudes, sloganry or codes of practice will fix it. This isn’t a few bad apples; this is the real us. And to be frank, I don’t see how things could possibly change for the better in my lifetime. You don’t have to dig very deep to see that the environmental crisis of our time, in all its shapes and forms, has a single root -- consumerism. And, unfortunately, consumerism has become the economic foundation of our world.

To the multi-billion Outdoor Industry I want to say this: screw your philanthropy and your environmental funds; for a generation now you have been ruthlessly and unashamedly pursuing the consumerisation of the outdoors while hiding behind the superficial etiquette of Leave No Trace® and the like. Well you have succeeded. What did you think was going to happen when the weekend camping kit costs less than a tank of fuel? To the ‘Influencers’ and ‘Ambassadors’, I hope the 30 pieces of silver were worth it, alas sales of souls are final.

The above-mentioned petition will have caused a lot of unease among established outdoor practitioners; there will be concerns over the erosion of our hard-won access rights. Those are entirely justified; there is very little doubt in my mind that it’s the landed lobbies that are pulling the strings on this in the background. But that said, an Iceland-style, country-wide ban on car camping outside of official campsites is long overdue, tent and camper van alike, for the latter are as much of a problem as the former, if not more.

by tf at August 27, 2020 07:55 AM

June 25, 2020

Tomas Frydrych

Care for the Elderly

How many phone calls does it take to organise a COVID-19 test for a ninety+ year old who needs an emergency admission to a care home?

Linda’s been at it all day today ... and she is at 14 and counting. At this point she has a vague promise of a test, sometime. Today? Tomorrow? Next week? Nobody knows.

[The good news: while I was typing this a tester turned up and took the swabs; achievement unlocked.]

But here is the thing: sorting out this sort of stuff is what Linda does for a living; she knows what to say to whom, when to push, and, ultimately, when not to take no for an answer.

Following her progress over the course of the last eight hours it became clear to me that had it been me trying to get this organised I’d have fallen by the wayside well before lunch. And I don’t think the way things went today is atypical. A colleague of Linda’s went through a similar situation recently ... 13 calls were needed — I can’t see our parents ever managing.

Free social care for the elderly looks great on the glossy paper of political manifestos, but the reality of it fails expectations. The councils don’t have enough money to implement it properly, and so not only is what is on offer rather meagre, but the system actively discourages uptake: the automated phone menus are endless, the forms are incomprehensible, the processes are protracted ad absurdum.

Call me cynical, but I don’t believe that this is just a bureaucratic inefficiency. We have been through this enough times in the last year and a half to start picking up the patterns, and the message is clear: you are a nuisance, be nice and go away. And I bet many do.

by tf at June 25, 2020 06:56 PM

June 10, 2020

Tomas Frydrych

'If You Elect a Clown ...

... expect a circus.'

Chessington World of Adventures by Beans on Toast, as good a summary of the UK situation as you will find (and some great photos by Pitlad).

by tf at June 10, 2020 08:38 AM

June 07, 2020

Tomas Frydrych

The Five Miles

There has been much hoohaa in recent days over people breaking the recommendation not to travel further than five miles for their recreation. But while the images of weekend scenes at Balmaha and Arrochar are quite ridiculous, the politicians need to get off their high horses and lose that self-righteous indignation, for the real problem is not with us people.

It has been clear from the beginning that the various do-and-don’t rules are being created by people living in some sort of an alternative-reality cocoon. Take the, now thankfully abolished, 60-minutes-a-day exercise rule—no wonder the nation is experiencing an obesity epidemic if our politicians reckon that’s all we need.

And let there be no doubt, these are political constraints. At no point have we been presented with any evidence to back the restrictions on our outdoor activities other than the need to keep 2m distance: no explanation why 60 minutes and not 90, why once a day and not twice a day; the very fact that the restriction didn’t apply to walking a dog is a demonstration of its arbitrariness.

The latest five mile rule is no different, at best a number conjured up by a bureaucratic pen, at worst made up on the fly by Nicola right there behind the pulpit. For many, if not most of us, a five mile radius doesn’t encompass our ‘local area’: we can’t do our shopping within five miles, meet our closest friends, never mind reach a proper outdoor space.

Now, I understand why the Scottish Government are reluctant to ease the lockdown: they don’t have the epidemic under control. But by now we all know that’s mainly the case because the lockdown was introduced way too late and because contact tracing was abandoned when it mattered most. And on top of it, for the better part of three months the Scottish Government, as much as the English one, didn’t make any preparations for eventually getting us out of lockdown.

So forgive me if I, for one, am sick and tired of being patronised, of being told that ‘we understand people want to head in to the places they love, but they will still be there when it’s over’. This shows so little grasp of what outdoor recreation is about that I don’t know whether to scream or just cry quietly. A great many of us don’t head to the outdoors because we love it in some abstract platonic sense. We do so because we need to, because it’s the only way we know to maintain our sanity; and that’s why, in turn, we love these places. And by now, we are crawling up the walls.

If we are having a problem with controlling the spread of the virus, it’s not because we, the people, are being irresponsible; it’s because of the half-baked guidelines coming to us from up on high. It is inevitable that when the official rules don’t make enough sense for real people living real lives, folk resort to making their own individual choices. And when we do, it becomes each one for themselves, and that’s the last thing we need.

by tf at June 07, 2020 06:09 AM

June 02, 2020

Emmanuele Bassi

Type instances

Let us assume we are writing a library.

The particular nature of our work is up for any amount of debate, but the basic fact of it comes with a few requirements, and they are by and large inevitable if you wish to be a well-behaved, well-integrated member of the GNOME community. One of which is: “please, think of the language bindings”. These days, luckily for all of us, this means writing introspectable interfaces that adhere to fairly sensible best practices and conventions.

One of the basic conventions has to do with types. By and large, types exposed by libraries fall into these two categories:

  • plain old data structures, which are represented by what’s called a “boxed” type; these are simple types with a copy and a free function, mostly meant for marshalling things around so that language bindings can implement properties, signal handlers, and abide by ownership transfer rules. Boxed types cannot have sub-types.
  • object types, used for everything else: properties, emitting signals, inheritance, interface implementation, the whole shebang.

Boxed and object types cover most of the functionality in a modern, GObject-based API, and people can consume the very same API from languages that are not C.

Except that there’s a third, kind of niche data type:

  • fully opaque, with instance fields only known within the scope of the project itself
  • immutable, or at least with low-mutability, after construction
  • reference counted, with optional cloning and serialization
  • derivable within the scope of the project, typically with a base abstract class
  • without signals or properties


One strategy used to implement this niche type has been to use a boxed type, and then invent some private, ad hoc derivation technique, with some structure containing function pointers used as a vtable, for instance:

boxed-type.c [Lines 3-79]
/* {{{ Base */
typedef struct _Base            Base;
typedef struct _BaseClass       BaseClass;

struct _BaseClass
{
  const char *type_name;
  gsize instance_size;

  void  (* finalize)    (Base *self);
  void  (* foo)         (Base *self);
  void  (* bar)         (Base *self);
};

struct _Base
{
  const BaseClass *base_class;

  // Shared field
  int some_field;
};

// Allocate the instance described in the vtable
static Base *
base_alloc (const BaseClass *vtable)
{
  // Simple check to ensure that the derived instance includes the
  // parent base type
  g_assert (vtable->instance_size >= sizeof (Base));

  // Use the atomic refcounted boxing to allocate the requested
  // instance size
  Base *res = g_atomic_rc_box_alloc0 (vtable->instance_size);

  // Store the vtable
  res->base_class = vtable;

  // Initialize the base instance fields
  res->some_field = 42;

  return res;
}

static void
base_finalize (Base *self)
{
  // Allow derived types to clear up their own instance data
  self->base_class->finalize (self);
}

Base *
base_ref (Base *self)
{
  return g_atomic_rc_box_acquire (self);
}

void
base_unref (Base *self)
{
  g_atomic_rc_box_release_full (self, (GDestroyNotify) base_finalize);
}

void
base_foo (Base *self)
{
  self->base_class->foo (self);
}

void
base_bar (Base *self)
{
  self->base_class->bar (self);
}

// Add a GType for the base type
G_DEFINE_BOXED_TYPE (Base, base, base_ref, base_unref)
/* }}} */

The code above lets us create derived types that conform to the base type API contract, while providing additional functionality; for instance:

boxed-type.c [Lines 81-123]
/* {{{ DerivedA */
typedef struct {
  Base parent;

  char *some_other_field;
} DerivedA;

static void
derived_a_finalize (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  g_free (self->some_other_field);
}

static const BaseClass derived_a_class = {
  .type_name = "DerivedA",
  .instance_size = sizeof (DerivedA),
  .finalize = derived_a_finalize,
  .foo = derived_a_foo, // defined elsewhere
  .bar = derived_a_bar, // defined elsewhere
};

Base *
derived_a_new (const char *some_other_field)
{
  Base *res = base_alloc (&derived_a_class);

  DerivedA *self = (DerivedA *) res;

  self->some_other_field = g_strdup (some_other_field);

  return res;
}

const char *
derived_a_get_some_other_field (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  return self->some_other_field;
}
/* }}} */

Since the Base type is also a boxed type, it can be used for signal marshallers and GObject properties at zero cost.

This whole thing seems pretty efficient, and fairly simple to wrap your head around, but things fall apart pretty quickly as soon as you make this API public and tell people to use it from languages that are not C.

As I said above, boxed types cannot have sub-types; the type system has no idea that DerivedA implements the Base API contract. Additionally, since the whole introspection system is based on conventions applied on top of some C API, there is no way for language bindings to know that the derived_a_get_some_other_field() function is really a DerivedA method, meant to operate on DerivedA instances. Instead, you’ll only be able to access the method as a static function, like:

obj = Namespace.derived_a_new()
field = Namespace.derived_a_get_some_other_field(obj)

instead of the idiomatic, and natural:

field = obj.get_some_other_field()

In short: please, don’t use boxed types for this, unless you’re planning to hide this functionality from the public API.

Typed instances

At this point the recommendation would be to switch to GObject for your type; make the type derivable in your project’s scope, avoid properties and signals, and you get fairly idiomatic code, and a bunch of other features, like weak references, toggle references, and keyed instance data. You can use your types for properties and signals, and you’re pretty much done.

But what if you don’t want to use GObject…

Well, in that case GLib lets you create your own type hierarchy, with its own rules, by using GTypeInstance as the base type.

GTypeInstance is the common ancestor for everything that is meant to be derivable; it’s the base type for GObject as well. Implementing a GTypeInstance-derived hierarchy doesn’t take much effort: it’s mostly low level glue code:

instance-type.c [Lines 4-189]
typedef struct _Base            Base;
typedef struct _BaseClass       BaseClass;
typedef struct _BaseTypeInfo    BaseTypeInfo;

#define BASE_TYPE               (base_get_type())

// Simple macro that lets you chain up to the parent type's implementation
// of a virtual function, e.g.:
//   BASE_SUPER (self)->finalize (obj);
#define BASE_SUPER(obj)         ((BaseClass *) g_type_class_peek (g_type_parent (G_TYPE_FROM_INSTANCE (obj))))

struct _BaseClass
{
  GTypeClass parent_class;

  void  (* finalize)    (Base *self);
  void  (* foo)         (Base *self);
  void  (* bar)         (Base *self);
};

struct _Base
{
  GTypeInstance parent_instance;

  gatomicrefcount ref_count;

  // Shared field
  int some_field;
};

// A structure to be filled out by derived types when registering
// themselves into the type system; it copies the vtable into the
// class structure, and defines the size of the instance
struct _BaseTypeInfo
{
  gsize instance_size;

  void  (* finalize)    (Base *self);
  void  (* foo)         (Base *self);
  void  (* bar)         (Base *self);
};

// Forward declarations for functions defined further down in the file
static void     base_class_init (BaseClass *klass);
static void     base_init       (Base *self);

Base *          base_ref        (Base *self);
void            base_unref      (Base *self);

// GValue table, so that you can initialize, compare, and clear
// your type inside a GValue, as well as collect/copy it when
// dealing with variadic arguments
static void
value_base_init (GValue *value)
{
  value->data[0].v_pointer = NULL;
}

static void
value_base_free_value (GValue *value)
{
  if (value->data[0].v_pointer != NULL)
    base_unref (value->data[0].v_pointer);
}

static void
value_base_copy_value (const GValue *src,
                       GValue       *dst)
{
  if (src->data[0].v_pointer != NULL)
    dst->data[0].v_pointer = base_ref (src->data[0].v_pointer);
  else
    dst->data[0].v_pointer = NULL;
}

static gpointer
value_base_peek_pointer (const GValue *value)
{
  return value->data[0].v_pointer;
}

static char *
value_base_collect_value (GValue      *value,
                          guint        n_collect_values,
                          GTypeCValue *collect_values,
                          guint        collect_flags)
{
  Base *base = collect_values[0].v_pointer;

  if (base == NULL)
    {
      value->data[0].v_pointer = NULL;
      return NULL;
    }

  if (base->parent_instance.g_class == NULL)
    return g_strconcat ("invalid unclassed Base pointer for "
                        "value type '",
                        G_VALUE_TYPE_NAME (value),
                        "'",
                        NULL);

  value->data[0].v_pointer = base_ref (base);

  return NULL;
}

static gchar *
value_base_lcopy_value (const GValue *value,
                        guint         n_collect_values,
                        GTypeCValue  *collect_values,
                        guint         collect_flags)
{
  Base **base_p = collect_values[0].v_pointer;

  if (base_p == NULL)
    return g_strconcat ("value location for '",
                        G_VALUE_TYPE_NAME (value),
                        "' passed as NULL",
                        NULL);

  if (value->data[0].v_pointer == NULL)
    *base_p = NULL;
  else if (collect_flags & G_VALUE_NOCOPY_CONTENTS)
    *base_p = value->data[0].v_pointer;
  else
    *base_p = base_ref (value->data[0].v_pointer);

  return NULL;
}

// Register the Base type
GType
base_get_type (void)
{
  static volatile gsize base_type__volatile;

  if (g_once_init_enter (&base_type__volatile))
    {
      // This is a derivable type; we also want to allow
      // its derived types to be derivable
      static const GTypeFundamentalInfo finfo = {
        G_TYPE_FLAG_CLASSED |
        G_TYPE_FLAG_INSTANTIATABLE |
        G_TYPE_FLAG_DERIVABLE |
        G_TYPE_FLAG_DEEP_DERIVABLE,
      };

      // The gunk for dealing with GValue
      static const GTypeValueTable value_table = {
        value_base_init,
        value_base_free_value,
        value_base_copy_value,
        value_base_peek_pointer,
        "p", value_base_collect_value,
        "p", value_base_lcopy_value,
      };

      // Base type information
      const GTypeInfo base_info = {
        // Class
        sizeof (BaseClass),
        (GBaseInitFunc) NULL,
        (GBaseFinalizeFunc) NULL,
        (GClassInitFunc) base_class_init,
        (GClassFinalizeFunc) NULL,
        NULL,

        // Instance
        sizeof (Base),
        0,
        (GInstanceInitFunc) base_init,

        // GValue
        &value_table,
      };

      // Register the Base type as a new, abstract fundamental type
      GType base_type =
        g_type_register_fundamental (g_type_fundamental_next (),
                                     g_intern_static_string ("Base"),
                                     &base_info, &finfo,
                                     G_TYPE_FLAG_ABSTRACT);

      g_once_init_leave (&base_type__volatile, base_type);
    }

  return base_type__volatile;
}

Yes, this is a lot of code.

The base code stays pretty much the same:

instance-type.c [Lines 191-243]
// Fetch the BaseClass vtable out of an instance
#define BASE_GET_CLASS(obj)     (G_TYPE_INSTANCE_GET_CLASS ((obj), BASE_TYPE, BaseClass))

static void
base_real_finalize (Base *self)
{
  g_type_free_instance ((GTypeInstance *) self);
}

static void
base_class_init (BaseClass *klass)
{
  klass->finalize = base_real_finalize;
}

static void
base_init (Base *self)
{
  // Initialize the base instance fields
  g_atomic_ref_count_init (&self->ref_count);
  self->some_field = 42;
}

static Base *
base_alloc (GType type)
{
  g_assert (g_type_is_a (type, base_get_type ()));

  // Instantiate a new type derived by Base
  return (Base *) g_type_create_instance (type);
}

Base *
base_ref (Base *self)
{
  g_atomic_ref_count_inc (&self->ref_count);

  return self;
}

void
base_unref (Base *self)
{
  if (g_atomic_ref_count_dec (&self->ref_count))
    BASE_GET_CLASS (self)->finalize (self);
}

void
base_foo (Base *self)
{
  BASE_GET_CLASS (self)->foo (self);
}

void
base_bar (Base *self)
{
  BASE_GET_CLASS (self)->bar (self);
}


The main differences from the boxed type version are that:

  • the reference counting is explicit, as we must use g_type_create_instance() and g_type_free_instance() to allocate and free the memory associated with the instance
  • you need to get the class structure from the instance using the GType macros instead of direct pointer access

Finally, you will need to add code to let you register derived types; since we want to tightly control the derivation, we use an ad hoc structure for the virtual functions, and we use a generic class initialization function:

instance-type.c [Lines 245-290]
static void
base_generic_class_init (gpointer g_class,
                         gpointer class_data)
{
  BaseTypeInfo *info = class_data;
  BaseClass *klass = g_class;

  klass->finalize = info->finalize;
  klass->foo = info->foo;
  klass->bar = info->bar;

  // The info structure was copied, so we now need
  // to release the resources associated with it
  g_free (class_data);
}

// Register a derived type of Base
static GType
base_type_register_static (const char         *type_name,
                           const BaseTypeInfo *type_info)
{
  // Simple check to ensure that the derived instance includes the
  // parent base type
  g_assert (type_info->instance_size >= sizeof (Base));

  GTypeInfo info;

  // All derived types have the same class and cannot add new virtual
  // functions
  info.class_size = sizeof (BaseClass);
  info.base_init = NULL;
  info.base_finalize = NULL;

  // Fill out the class vtable from the BaseTypeInfo structure
  info.class_init = base_generic_class_init;
  info.class_finalize = NULL;
  info.class_data = g_memdup2 (type_info, sizeof (BaseTypeInfo));

  // Instance information
  info.instance_size = type_info->instance_size;
  info.n_preallocs = 0;
  info.instance_init = NULL;
  info.value_table = NULL;

  return g_type_register_static (BASE_TYPE, type_name, &info, 0);
}

Otherwise, you could re-use the G_DEFINE_TYPE macro—yes, it does not require GObject—but then you’d have to implement your own class initialization and instance initialization functions.

After you have defined the base type, you can structure your types in the same way as the boxed type code:

instance-type.c [Lines 294-347]
typedef struct {
  Base parent;

  char *some_other_field;
} DerivedA;

static void
derived_a_finalize (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  g_free (self->some_other_field);

  // We need to chain up to the parent's finalize() or we're
  // going to leak the instance
  BASE_SUPER (self)->finalize (base);
}

static const BaseTypeInfo derived_a_info = {
  .instance_size = sizeof (DerivedA),
  .finalize = derived_a_finalize,
  .foo = derived_a_foo, // defined elsewhere
  .bar = derived_a_bar, // defined elsewhere
};

GType
derived_a_get_type (void)
{
  static volatile gsize derived_type__volatile;

  if (g_once_init_enter (&derived_type__volatile))
    {
      // Register the type
      GType derived_type =
        base_type_register_static (g_intern_static_string ("DerivedA"),
                                   &derived_a_info);

      g_once_init_leave (&derived_type__volatile, derived_type);
    }

  return derived_type__volatile;
}

Base *
derived_a_new (const char *some_other_field)
{
  Base *res = base_alloc (derived_a_get_type ());

  DerivedA *self = (DerivedA *) res;

  self->some_other_field = g_strdup (some_other_field);

  return res;
}
The nice bit is that you can tell the introspection scanner how to deal with each derived type through annotations, and keep the API simple to use in C while idiomatic to use in other languages:

instance-type.c [Lines 349-363]
/**
 * derived_a_get_some_other_field:
 * @base: (type DerivedA): a derived #Base instance
 *
 * Retrieves the `some_other_field` of a derived #Base instance.
 *
 * Returns: (transfer none): the contents of the field
 */
const char *
derived_a_get_some_other_field (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  return self->some_other_field;
}


Of course, there are costs to this approach. In no particular order:

  • The type system boilerplate is a lot; the code size more than doubled from the boxed type approach. This is quite annoying, but at least it is a one-off cost, and you won’t likely ever need to change it. It would be nice to have it hidden by some magic macro incantation, but it’s understandably hard to do so without imposing restrictions on the kind of types you can create; since you’re trying to escape the restrictions of GObject, it would not make sense to impose a different set of restrictions.

  • If you want to be able to use this new type with properties and you cannot use G_TYPE_POINTER as a generic, hands-off container, you will need to derive GParamSpec, and add ad hoc API for GValue, which is even more annoying boilerplate. I’ve skipped it in the example, because that would add about 100 more lines of code.

  • Generated signal marshallers, and the generic one using libffi, do not know how to marshal typed instances; you will need custom written marshallers, or you’re going to use G_TYPE_POINTER everywhere and assume the risk of untyped boxing. The same applies to anything that uses the type system to perform things like serialization and deserialization, or GValue boxing and unboxing. You decided to build your own theme park on the Moon, and the type system has no idea how to represent it, or access its functionality.

  • Language bindings need to be able to deal with GTypeInstance and fundamental types; this is not always immediately necessary, so some maintainers do not add the code to handle this aspect of the type system.

The benefit is, of course, the fact that you are using a separate type hierarchy, and you get to make your own rules on things like memory management, lifetimes, and ownership. You can control the inheritance chain, and the rules on the overridable virtual functions. Since you control the whole type, you can add things like serialization and deserialization, or instance cloning, right at the top of the hierarchy. You could even implement properties without using GParamSpec.


Please, please use GObject. Writing type system code is already boring and error prone, which is why we added a ton of macros to avoid people shooting themselves in both their feet, and we hammered away all the special snowflake API flourishes that made parsing C API to generate introspection data impossible.

I can only recommend you go down the GTypeInstance route if you’ve done your due diligence on what that entails, and are aware that it is a last resort if GObject simply does not work within your project’s constraints.

by ebassi at June 02, 2020 03:06 PM

May 25, 2020

Tomas Frydrych

England’s Favourite Ned*

... did nothing wrong. Of course not. They never do, do they? No matter the vandalism, the destruction in their wake, the trail of buckie shards left behind, someone else is to blame.

Personally, I am not sure whether I am more concerned that our collective fate lies in the hands of someone who at the first whiff of trouble runs to mummy and daddy, or someone who thinks driving 30 miles is an appropriate way to ascertain he is fit to drive.

But more importantly, if we learnt anything at all from this episode, it’s that the Government’s Herd Immunity strategy really was nothing other than a cavalier disregard for the lives of other (ordinary) people. For politicians who wish to pursue a Coventry strategy in the name of the greater good cannot shrink away from paying a personal price for that greater good. And one thing is sure, this lot doesn’t have the moral backbone, if they have any backbone at all.

Of course, England got the Prime Minister it wanted, and Dom was always part of that package. As for the rest of us ... dog shite trodden into the house on someone else’s shoes.

[*] I think down south they call them chavs.

by tf at May 25, 2020 06:37 PM

May 11, 2020

Tomas Frydrych

Stay Alert!

Take a comfortable lockdown seat, stick some popcorn into the microwave and watch: remember the overweight BoJo stuck on a zip wire waving the Union Jack? You’ve seen nothing yet; England and her Tories are about to find out the true cost of putting an inept clown in charge.

Of course, this is not a third-rate comedy, but a national tragedy in which we happen to be caught up and in which some 30,000-50,000 of us (depending on whose count you believe) have already died. And, predictably, that human cost is being disproportionately distributed across the socioeconomic spectrum, which suits the Tories to the bone. I guess not much of a surprise there; Compassionate Conservatism is as much of an oxymoron as Socialism with a Human Face was.

I am glad to see that the Scottish Government have finally shed that pathological fear of being accused of politicising the pandemic and stood up for Scotland. There is little doubt it’s far too early to lift the lockdown.

We hear a lot about the R number: a great theoretical indicator, which unfortunately can’t be accurately calculated. But there is another number, one which is not an estimate, that’s worth paying attention to: the number of newly diagnosed cases. As of yesterday this stood at 181 in Scotland; I expect for practical reasons this will need to drop into low double digits in order to bootstrap the test and trace strategy (which is why suddenly dropping tracing in early March was grossly negligent).

So, yeah, I can’t but ask again, are the tracers getting recruited and being trained, so Scotland can hit the ground running when the conditions are right?

by tf at May 11, 2020 07:18 AM

May 02, 2020

Tomas Frydrych

Netflix and Chill

At the top of this week’s Netflix ‘for you’ list came the star-studded 1995 classic Outbreak. I cringed a bit, but watched it in the end, it being the lockdown and all that, and I am glad I did, for I finally understand where the UK Government got its COVID-19 strategy from! This Hollywood disaster cliche has it all: the politicians’ cavalier attitude to loss of life, the war imagery, the pseudoscience, the vaccine produced in days, the triumphal ending.

Also this week, in order to meet Hancock’s arbitrary, and entirely meaningless, 100,000-test target, we saw the UK Government clumsily fiddling the figures to get to that magic number. While the UK might not be particularly high up in the world’s school-leaver literacy and maths rankings, not even a furloughed UK six-year-old would be fooled by this particular sleight of hand—yet another confirmation that our dear leaders don’t care about anything but loss of face, the Government’s daily briefings a continuous stream of inane slogans, spin and outright lies.

There is one phrase in particular that sums up the moral bankruptcy of the UK Government: Protect the NHS — it is not the job of the public to be protecting the NHS! It’s for the NHS to protect us, that’s its job description and its vocation.

The fact the health service is unable to cope is solely the fault of the Tory governments of the past decade, making the daily routine of hollow thanks from the governmental pulpit a hypocrisy of the highest order. (And the NHS is not coping; a slow suffocation is a terrible way to die. In a supposedly civilised society nobody should be allowed to go through it without appropriate hospital care—the next time you hear how many people are dying of this virus outside of a hospital, ask yourself why they are not in one if things are so splendid?)

Luckily the situation is a wee bit better in Scotland than south of the border. There are two reasons for this: (a) we are somewhat behind the English curve here, so the start of the lockdown came relatively earlier up here, and (b) the Scottish Government is spending a bit more on the NHS than they do down south. But, of course, there are limits to how much we can spend on public services up here, for we are chained to the spending policies of our English overlords by the Barnett formula, our budget held hostage to the Tory worldview. This cannot go on; this crisis is making that, yet again, clear.

And still no practical steps taken toward ending the lockdown ... we can’t go on like this indefinitely, not even for months. When it is obvious our Governments have no plan, it is unavoidable that we start exercising our individual judgement on what we have to do to get ourselves and our families safely through this, and that’s not ideal. So rather than lamenting the rise in traffic, it’s time for our politicians to pull their finger out.

by tf at May 02, 2020 08:54 AM

April 25, 2020

Tomas Frydrych

A Grownup Conversation?

This week’s attempt by the Scottish Government to start a grown-up conversation on how we deal with the ongoing COVID-19 crisis is most welcome, as far as it goes, which is not far enough.

I have a lot of time and respect for Nicola, and I have been a loyal SNP voter for over 20 years. But when it comes to this crisis, even I am beginning to lose patience with the Scottish Government’s reluctance to break away from the ideologically driven policies of England’s Prime Minister and the ineffectual English Parliament.

For there can be no doubt that ideology is at the bottom of the present UK crisis. The current state of the NHS and the lack of epidemic readiness are the products of a decade of ‘Compassionate Conservatism’. And for all that talk of ‘best scientific advice’, even without today’s revelation that Cummings takes part in SAGE meetings, the ideological paw prints on No 10’s handling of the crisis have been evident since the beginning, the herd immunity strategy a convenient solution to the growing costs of social and medical care.

Furthermore, I know many scientists don’t want to hear this, but science is not apolitical. No human endeavour is, the very assertion of being apolitical is in itself a profound political statement. We only need to look back a bit to see how science is marked by politics: alongside the long list of amazing scientific advances of the 20th century comes a long list of scientists who ended up on the wrong side of history, working for unsavoury ideological causes, both left and right.

Science costs money, lots of money. And in our world money is concentrated in the hands of a very few who pull the strings of power; democracy, as we know it in the west today, is an illusion, in reality we live in a plutocracy. The plutocrats are the people who will emerge from this crisis richer at the expense of the rest of us, and they are the people who shape the focus and direction of scientific endeavour. It is for this reason that the science the Government appeals to must be open to scrutiny, for science without peer review is no science at all, but mere alchemy.

The problem with having a grown-up conversation is that it needs to lead to grown-up answers, and in this instance there is only one such answer, and we have known what it is since the beginning: mass testing, rigorous contact tracing and targeted precision isolation. This is the only viable way we can get from where we are to the point where a vaccine might fix this problem permanently.

The vaccine solution is a long way away, for it’s not just a question of how long the science of developing an effective, safe vaccine might take, but also of costs (some people will be getting very rich off this, make no mistake), and of mass global vaccination of some 7 billion people (for vaccination to be effective, around 70% of the population must receive it) — I’d be very surprised if life is back to any sort of normality before 2025.
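For what it’s worth, the ~70% figure isn’t arbitrary: it follows from the textbook herd immunity threshold, 1 - 1/R0. A quick back-of-the-envelope sketch (my own illustration, not from the post; the R0 value of 3.3 is an assumption chosen to reproduce the 70% figure):

```python
# Classic herd immunity threshold: the fraction of the population that must
# be immune for each infection to cause, on average, less than one new one.
# This is an illustrative sketch, not a figure from the post itself.

def herd_immunity_threshold(r0):
    """Immune fraction p needed so that effective R = r0 * (1 - p) < 1."""
    return 1.0 - 1.0 / r0

# With an assumed R0 of about 3.3 for this virus:
print(round(herd_immunity_threshold(3.3), 2))  # prints 0.7
```

The more transmissible the virus (the higher R0), the larger the fraction that must be vaccinated before the epidemic dies out on its own.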

Ongoing mass lockdown is not a solution. So if we are to have an adult conversation, the first thing we need to discuss is why, three months into this thing, we are still nowhere near being able to test, trace and isolate. And no more alchemy bullshit, please.

by tf at April 25, 2020 10:41 AM

April 11, 2020

Tomas Frydrych

Reaping the Herd Immunity Strategy Costs

With BoJo out of critical care, and yesterday’s UK COVID-19 deaths just shy of 1,000, the highest ever recorded in Europe, it’s time to ask the PM ‘how’s the herd immunity working for ya?’ For as Marina Hyde noted, the people dying today were most likely infected in that critical time when BoJo and co. were telling us this was nothing serious to worry about.

More disconcertingly, Herd Immunity remains the Government’s strategy; the only thing that has really changed is that the ‘herd immunity’ designation is no longer verbalised. The press really need to pull their collective finger out and start doing their job of holding the politicians accountable, instead of endlessly speculating about how many more weeks of lockdown there might be ...

(The answer to that one is quite simple: unless the Government starts mass testing, contact tracing and targeted isolating, the lockdown will have to last until the nation has been vaccinated, for that’s the only way to achieve herd immunity without deaths in six figures. So that would be in the order of years, not weeks.)

by tf at April 11, 2020 07:13 AM

April 05, 2020

Tomas Frydrych

One Set of Rules for Us, Another for Them

Seems the lockdown rules don’t apply to Dr Calderwood, Scotland’s Chief Medical Officer. This is incredibly serious and there is only one credible way in which Nicola Sturgeon can handle the situation and that is to sack the CMO.

The excuse that the CMO has been working hard and took the opportunity for a break will not wash. There are scores of others working as hard, if not harder, than any government bureaucrat — the nurses, the doctors, the cleaners, the administrators desperately trying to keep the NHS afloat, the supermarket workers, the delivery drivers. The majority of them get paid a fraction of what the CMO earns and can but dream of second homes. And all of them are expected to follow the rules or else.

The lockdown is the lynchpin of the Government’s strategy; there really is only one thing Nicola can do here. To do otherwise is to signal that the lockdown is an unnecessary overreach.

Since I wrote this earlier today, Dr Calderwood has apologised for an ‘error of judgement’, while Nicola Sturgeon has said we all make ‘mistakes’. Neither is an appropriate description of what has happened.

It comes down to this: either Dr Calderwood doesn’t believe the current policy is necessary to fight the pandemic, in which case she should say so, and the Government should put in place a better policy for all of us. Or she does believe this is the necessary and correct policy, in which case this seems to me like serious professional misconduct.

It has further emerged that she visited the family dacha last weekend as well, and that she was merely issued with a police warning. Sorry Nicola, this really does look like different rules for different people.

by tf at April 05, 2020 09:07 AM

April 04, 2020

Tomas Frydrych

The Lies are Killing Us

I don't know about you, but I have reached a point where I can't take the daily No. 10 briefings. There is no end to the half-truths, omissions and outright lies; it's the £350m red bus all over again.

Earlier this week much was made of the rate of hospital admissions becoming, for a couple of days, constant -- an indication that the Government is doing the right thing, that the rate of infections is slowing down. While a possible explanation, it is, as has been the Government's habit throughout, a case of the most optimistic interpretation possible.

There is, of course, another explanation, one that, given what we know about the state of the NHS in general, and what we are hearing from people on the cutting edge of things, seems quite likely: we might have reached the real NHS capacity.

This capacity is not simply the number of hospital beds, nor even the number of ventilators. It is also how many qualified nurses and doctors there are to man those beds, and how many of them are well enough to work. It's how many ventilators the oxygen supplies in a hospital can cope with, the number of ambulances to bring sick people into hospitals, the number of paramedics to drive them, the space in the mortuaries, etc., etc.

At last night's briefing the question was asked whether by the time the peak comes there will be enough ventilators. Not to worry, we are nowhere near that limit. Really? How stupid do they take us for? By the Government's own figures, there are some 40,000 confirmed cases. Given that testing is pretty much limited to people who are ill enough to require hospital admission and the UK has some 8,000 ventilators, no computer model is needed to get the picture.

When Hancock declared that the peak is expected in a week's time, not even Nicola Sturgeon could fall in line (I really wish the Scottish Government had the guts to break away from the ideologically-driven English strategy). The curse of computer models.

As I said before, mathematical models are only as good as the data they are built from and that is fed into them, and the UK data quality is very poor. It is now emerging that even the numbers of deaths are not what they seem to be, e.g., the 159 deaths reported on 30 March have now been revised to some 460, i.e., the curve we are seeing at the daily briefings, and that so much is being made of, is completely disconnected from reality -- this virus cannot, will not, be defeated by computer modelling.

Then there is the whole masks business. We are told masks don't work, the virus is not airborne, plus most people don't put them on correctly anyway. That might be, but it's only half the story, the half where you wear a mask not to get infected. But here is the other half: wearing a mask makes it pretty hard to sneeze all over the apples in Tesco.

There is, of course, a more pragmatic reason why the general population shouldn't wear masks just now -- there aren't enough of them in the UK for the front line staff; I get that, and I imagine the majority of the UK population do as well, but why not be straight with us?

Every evening, again and again, much is being made of the drop in the number of people traveling, a sure sign the infection rate is slowing down. Again, there is an obvious problem with this. By the Government's own figures, which I imagine are, as has been the general trend, on the optimistic side of things, there are some 400,000 infected people in the community. That means that every day thousands of infected key workers go on spreading the virus (the NHS alone employs nearly 1% of the UK population). Folk on the cutting edge are reporting 25%, 30%, even more NHS staff being currently off ... this can only get worse.

Now, I do believe the lockdown is the right thing to do, and no matter how frustrating you might find it, how angry and distrusting you are of the Government, please, please, stick with it. Not for them, but for mum, dad, granny, the old guy next door, the asthmatic pal.

But lockdown is not going to save us; the way to get ahead of this is to follow the WHO protocol: test, contact trace, isolate; test, contact trace, isolate. And in spite of the inevitable u-turn on testing, the basis of the Government strategy is still the misguided idea of herd immunity.

But the biggest problem the Government is facing is the erosion of trust. I, for one, don't trust them as far as I could throw them, and if BoJo wants to turn this around, then here are the things that need to happen at the political end:

  1. Cummings (as the Government's chief strategist), Vallance and Whitty (as the two scientists ultimately responsible for the scientific advice), and Hancock (as the health minister on whose watch this mess came to be) need to be sacked, as does the posse of 'nudgers' and behavioural quacks. (This is not a question of whether they acted in good faith, I expect they did, but we can leave that for the Public Enquiry that is undoubtedly coming.)

  2. We need an experienced epidemiologist in charge, someone who is not so arrogant as to think the WHO advice is for third world countries only (as was actually said at one of the recent No. 10 briefings).

  3. The Government needs to start being straight with us. This is not the '40s, and we are not a nation of idiots.

Do I expect this to happen? Given the blame game has started already (and no, the UK situation is neither the fault of the Chinese, nor the Germans), I suspect not, but it's what it would take for me to give the Government some benefit of the doubt.

PS: The attentive reader has undoubtedly observed that all the links in this piece are from the Guardian. FWIW, I am not a Guardian reader, and I take an issue with lots of its politics. But as far as COVID-19 journalism goes, I am finding theirs to be the most comprehensive coverage of any of the UK media outlets.

by tf at April 04, 2020 11:01 AM

April 01, 2020

Tomas Frydrych

The Race is On

Which of our politicians is going to come up with the worst COVID-19 joke today, I wonder? All bets are off on this one, of fools we have no shortage.

by tf at April 01, 2020 06:03 AM

March 26, 2020

Tomas Frydrych

Not Dyson, FFS!

The UK Government has yet to put a foot right in its handling of the COVID-19 epidemic, and the decision to order 10,000 yet to be invented ventilators from a vacuum cleaner manufacturer while refusing to join the EU procurement effort is almost certainly another misstep on the way to the mother of all Public Inquiries.

If you need to rapidly manufacture a large number of pieces of medical equipment, then you need to do two things: (1) you need to provide (unlimited) support to the existing manufacturers of such equipment to scale up production, and (2) you need some temporary solution that can be constructed with minimal investment and expertise from existing off-the-shelf components (e.g., the OxVent), to tide you over while (1) is ramping up. The one thing (3) you absolutely don't want to do is to set out to invent a completely new solution, and commission it from people with zero expertise and experience in this highly specialised field.

One doesn't have to be a medical scientist to grasp that pursuing option (3) instead of (2) makes no sense (though, in the interest of full disclosure, I do have a degree in Biomedical Engineering). This is yet another giant gamble, the UK Government yet again basing policy on the most optimistic outcomes. (Or perhaps just BoJo, in his Churchillian self-delusion, on a quest for his 'Spitfire', combined with a dose of Brexiteer cronyism.)

We need to, and can, do better in Scotland. Hopefully the establishment of a separate scientific committee to advise the Scottish Government is a signal that Bute House is, finally, willing to break away from London's inadequate policies, and I hope they will look seriously at the OxVent project.

by tf at March 26, 2020 10:05 AM

March 20, 2020

Tomas Frydrych

SLPOTY 2019 Commendation

The Scottish Landscape Photographer of the Year 2019 results have been announced, and I am glad to note I got a commendation in the monochrome category for Assynt Evening.

Must do better next time ... no, seriously, I am pretty chuffed, the standard this year was clearly very high, as you will be able to judge for yourself.

I am rather taken by the Wild Caledonia image from Kenny Muir's winning portfolio, it's beautifully, sensitively, done and represents the Scotland that I know, love and seek out, yet, also the Scotland that could be but isn't really.

However, my firm favourite from this year's winners gallery is Rocky Shore by Katrina Brayshaw, the image that won the Monochrome category.

by tf at March 20, 2020 09:38 AM

March 19, 2020

Tomas Frydrych

7 Weeks in ...

... and still not following the WHO guidelines. But what does the WHO know, eh? It takes an astonishing level of arrogance to ignore the WHO and at the same time to claim to follow the best scientific advice. But then, this is the guy who gave us the £350m red bus.

by tf at March 19, 2020 07:16 AM

March 17, 2020

Tomas Frydrych

Why Testing Matters

As I noted yesterday morning, testing is at the heart of the WHO procedures for dealing with the COVID-19 pandemic, and it is rather disappointing that both the UK and Scottish Governments are still choosing to ignore this. We are already reaping the consequences of this, and it will rapidly snowball.

Why is WHO so emphatic about the need to test, test, test? I can think of at least four good reasons:

  1. Computer models are only as good as the data they are built from. If the data is poor, so will the predictions be, and so will any policies built on them. Not testing means poor models and no idea what to expect next.

  2. Not testing all suspected cases leads to unnecessary self-isolation. This has become particularly acute with the new 14-day household isolation policy, due to which a large number of NHS and other critical services workers have not been able to go to work today. How many of these would have been able to work if their unwell family members had been tested?

  3. Currently there is no test to tell if a person has had the infection and recovered. UK officials are assuming once recovered we will have some immunity, meaning we could return to work, not need to self-isolate if other family members develop symptoms, etc. But without being tested while ill, we won't be able to tell, and so end up in an endless circle of self-isolation.

  4. There is emerging evidence from the Chinese data that suggests that the majority (perhaps as much as 90%) of all infections are spread by people who are asymptomatic. If this is the case, there is absolutely no way to get the spread under control without testing.

In other words, we can neither control the spread of the virus, nor keep our society working unless we follow the WHO guidelines and test, test, test.

by tf at March 17, 2020 05:21 PM

March 16, 2020

Tomas Frydrych

Governed by Misfits and Weirdos

If, like me, you were curious what to expect from a government seeking to employ misfits and weirdos, the wait is over. It means nothing more than ignoring prevailing scientific opinion when formulating government policy.

The UK Government's disregard for the World Health Organisation guidelines when dealing with the COVID-19 pandemic has laid this bare beyond any reasonable doubt. These guidelines are the best of the current science; there is no way to spin this otherwise.

The only reason the Government's Herd Immunity strategy (and there is little doubt in my mind that until this weekend that was The Strategy, it is the one thing that explains everything this government has, or rather has not, done until now), has not made us the laughing stock of the world is because there is nothing funny in the potential outcomes.

(Though we have come close; a Harvard epidemiologist and his colleagues assumed at first that the reports coming from the UK were satire).

Sadly, we know that the WHO approach works, for the Asian countries that are rigorously sticking to it have managed to get or keep their epidemics under control. We also know the opposite is true, we only need to look at Italy for an example of what a slow initial response leads to.

The WHO strategy hinges on large scale testing and contact tracing to provide effective isolation prior to individuals becoming symptomatic. In the UK we seem to have failed to apply these guidelines in the critical early stages, and have now already given up on them again; the official UK guidelines seem to amount to little more than 'wash your hands' and 'self-isolate'.

The early laxness I am inclined to put down to Johnson's natural contrarianism, the current mess to lack of NHS capacity -- successive Tory Governments have systematically run down the NHS with a view to cheap privatisation; we are about to reap the fruit of that.

So when I hear those speaking on behalf of the Government declaring the UK is following the best scientific advice, I can't avoid thinking this is nothing but desperate spin.

For in my mind the disregard of the WHO guidelines raises questions not only of our Government's competency (no surprises there), but ultimately of whether Profs Vallance and Whitty are the right people to spearhead the UK's response; I really have no desire for myself and my elderly relatives to be guinea pigs in a high risk public health experiment, no matter how individually brilliant the dissenting scientists might be.

P.S. When the car makers are producing those ventilators, could the UK Government please take steps to ensure they don't include any defeat devices?

by tf at March 16, 2020 10:52 AM

February 16, 2020

Tomas Frydrych

Updated Photies Site

I finally revamped my photos site. I have meant to do this for over a year, but didn't get to it until now. The biggest change is no more SmugMug.

SmugMug seemed like a good idea a couple of years ago, but it failed to live up to expectations. Like so many other services with a similar licensing model, you don't get to find out what the service is really like until you take out a paid subscription, and it turns out the really good features, such as the Loxley Colour integration, are severely crippled below the (prohibitively expensive) top tier subscription.

The main issues for me were that even with the (not cheap) tier-2 Portfolio subscription I could not control printing (e.g., paper selection) on a per-image basis, which made it completely unusable. In general customising the site was awkward, and SmugMug would roll out updates that would break your customised site without you even knowing. And last, but certainly not least, you cannot get rid of that stupid name; even at the Portfolio level it's always there, and it keeps popping up in the browser bar if you use a custom domain.

Anyway, I don't have particularly complex requirements, so I ended up using a self-hosted Ghost, which has served me well in the past, with a bespoke theme called Elphin specifically focusing on imagery (WIP, but you are welcome to make use of it).

And yes, you can now get a normal RSS feed for my photos!

by tf at February 16, 2020 03:54 PM

February 12, 2020

Tomas Frydrych

Updating the Photos Site

I am updating just now, and had to take the site off-line for a bit, so if you have landed here looking for it, it will be back in a day or two.

by tf at February 12, 2020 07:28 PM

January 29, 2020

Tomas Frydrych

Scotland the Invisible

All in all 2019 wasn’t a bad year by any means, but nevertheless, for a variety of reasons I needn’t bore you with, it sticks in my mind as a year of journeys unrealised and photographs untaken, of unfinished business and, not least, a year of blogs unpublished. Free time was harder to find, and even harder to set aside.

As such I was reluctant to waste my life on long car journeys, and much of my time outdoors was spent exploring what I call ‘The Invisible Places’: places without an obvious charisma to merit our closer attention, places I’d not usually think of visiting (and nobody else seems to either). Spots that perhaps caught my attention reading a map knowing I had no more than three hours to spare, or ones that I got curious about in years gone by from the window of the car going some place officially worthy. Or just somewhere I found myself in for whatever other reasons.

I suspect that to the contemporary adventurer this might sound sad, for such places offer limited opportunities to deploy all the necessary paraphernalia a serious outdoor person can’t be seen without. Nor do they promise to offer a suitable backdrop for the jaw dropping audiovisuals our modern day heroic deeds require. But in fact these little ventures not only brought me a lot of pleasure, but turned out to be a bit of an eye opener.

I watched eagles soar and listened to the noiseless flight of owls. There were curlews and snipes, ravens and snow buntings. I had my coffee and pieces by the most picturesque of streams, came across stunning gorges in the unlikeliest of places, and saw waterfalls that by their size and majesty can compete with any in the land, yet which you will find in no guide book (and I, for sure, ain’t telling).

These little outings made me realise that there is much more to this land of ours than the Glencoes, the An Teallachs, the Lairig Ghrus. And at the same time I came to see that the perceived beauty of this land, the one that we get from the places where award winning photographs are made and faked, the places our outdoor culture revolves around, where the influencers make their money, and hence campaign for, that beauty is but skin deep.

For I have also seen feed buckets in the prettiest of streams and would have dared to drink from none. There were giant tyres right by ‘Please Take Your Litter Home’ signs, and washing machines at the bottom of dramatic ravines. Quaint beaches decorated by discarded fishing nets and fish farm detritus; I marvelled over decomposing cars in the middle of postcard vistas.

Strongbow tinnies abounded, and alongside them I collected enough balloons to have a party of my own. I have, I kid you not, stumbled on half a shotgun buried deep in a river bed, and lost count of the land wrecked by hoofs, bulldozers and forestry machinery; of bogs churned up by the argocats of hunters too posh to walk.

And so, reluctantly, I came to see that this, not the other, is the real Scotland. That for over two decades my preoccupation with the other, the tourist honeypots, blinded me to the sorry state most of this land is really in. If you only stop and listen, you can hear it groaning, praying for redemption.

There is a lot of talk of Climate Change and CO2 reduction, but that’s only half of the picture. We are facing a second, equally serious, existential crisis in a rapid loss of biodiversity, an issue that is not getting enough practical attention. And, unfortunately, biodiversity is at times directly at odds with the steps we take to address Climate Change (e.g., in Scotland we are obsessed with the expansion of renewables, which invariably comes at the loss of habitat; this will get worse with the growing adoption of electric vehicles).

But the real problem is not so much that renewables are encroaching on biodiversity (though better checks and balances are needed, and the Scottish Government appears to be failing to see the bigger picture here), but rather that the land as a whole is in a very poor ecological condition to start with, because for a very long time we have collectively not given a crap. What is needed is a radical, nationwide programme of rapid ecological restoration, e.g., along the lines of the Land Stewardship model proposed two and a half years ago by the Scottish Wildlife Trust. Yet this is not happening; it isn’t even on the distant horizon.

The state of the land reflects how we, as a nation, really feel about it, and the fact that we are dragging our feet doesn’t say much for us. Go and see for yourself, you might be surprised at what you find.


P.S. The tile image is called Ceci n'est pas une microbead, and is not from one of the ‘invisible’ places.

by tf at January 29, 2020 05:32 PM

January 28, 2020

Tomas Frydrych

The Cheery World of B&W

My current interest in black and white photography goes back to a three night wild camping trip in my favourite part of the world a couple of years ago. I had planned it for some time, with one particular image in mind, but alas, happened to run into an early heatwave, with all that it brings -- largely cloudless skies, haze that even a polarizer can't do much about, thunder, and rapidly forming, all encompassing, evening inversions.

The weather was so unconducive to photography that on day 2 I simply left the camera in the bag just after breakfast. This, as often, turned out to be a mistake, as I nearly missed the one worthwhile image that came from this trip.

Now the colour original of this picture is rather uninspiring, and my first response seeing it on the big screen was a disappointed ‘doh’. But a B&W conversion opened up different possibilities, and ultimately made me wonder what it would be like to do this properly, i.e., work with B&W as a landscape medium in its own right, rather than using it as a last ditch effort to save a disappointing picture.

So what have I learnt since? Two things: (a) B&W landscape photography is very hard, and (b) taking a half decent B&W picture generally requires thinking in B&W terms before taking it.

Key Issues in B&W Landscape

When the colour dimension is eliminated, the building blocks that I am left with for my image are luminance, texture, and structure. Should be plenty, right? Well, it’s not that simple.

I came to realise rather quickly that the two dominant colour tones that make up the bulk of Scottish landscape for much of the year -- the yellow-greens and orange-browns -- happen to have quite similar luminance levels, i.e., they translate into rather similar shades of grey.

More so, colours of similar luminance cannot be separated very effectively by the use of traditional B&W filters if they are close to each other on the colour wheel (filters work best for colours across the wheel, e.g., yellow filter for blue and yellow; digital, with its ability to manipulate isolated narrow colour bands, has a definite advantage here, but that comes with its own pitfalls, on that shortly).
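To make the luminance point concrete, here is a small sketch (my own illustration, not from the original post; the RGB swatch values are hypothetical, picked to be roughly typical of moorland vegetation rather than sampled from any photograph), using the Rec. 709 relative luminance weighting that most digital B&W conversions start from:

```python
# Why yellow-greens and orange-browns land on near-identical greys:
# compare their Rec. 709 relative luminance, the standard starting
# point for a filterless B&W conversion.

def luminance(r, g, b):
    """Rec. 709 relative luminance for linear RGB values in 0..1."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Hypothetical swatches, chosen for illustration only:
yellow_green = (0.55, 0.55, 0.20)
orange_brown = (0.75, 0.47, 0.18)

print(round(luminance(*yellow_green), 3))  # prints 0.525
print(round(luminance(*orange_brown), 3))  # prints 0.509
```

Two visibly different colours end up within a couple of percent of the same grey, which is exactly why, without filtration or good light, the conversion flattens the Scottish palette.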

And so the quality of the grey scale in a B&W image comes mostly from the quality of the external light, and while good light is key to all photography, it’s doubly so for B&W. I will go as far as saying that without interesting light there can be no interesting B&W landscape.

The other thing I have discovered is that the various textures naturally found in landscape, particularly those associated with vegetation, are in fact also quite homogeneous -- it would seem that what I often perceive as a different texture at first glance is really a difference in colour. This leaves structure (the larger patterns) in the landscape as just about the only WYSIWYG attribute of the landscape as I initially perceive it.

Visualisation, Visualisation, ... Visualisation

The necessary fallout from the above observations is that it is virtually impossible to take a good B&W landscape without giving some advance thought to what the result will look like in grey scale.

Like the image that started my current quest, most of the B&W images I tend to come across these days are, quite obviously, a final attempt to salvage something from a picture that has not come out too well in colour. The aforementioned ability of digital post processing to manipulate narrow bands of colour in the B&W mix often makes things worse, because it allows me to break the natural tonality of the image; sometimes this works, but more often it doesn’t.

I have been reflecting on why that is, and have come to the conclusion that when we are looking at a monochrome image, our brain processes it in a similar way to how it processes natural low light scenes, and it baulks when it comes across something it doesn’t expect. For example, my brain seems quite unperturbed by solid black sky, but a steep contrast curve in the clouds (the sort of ‘dramatic’ sky you get with a bit of Lightroom ‘clarity’ thrown in) jars to no end.

Digital does make it possible to experiment with what might and might not work, but personally I prefer working with B&W film as it forces me to focus on the fundamentals and on the landscape itself. (And yes, I do prefer the film look.)

It’s been said that practice makes perfect, and learning to see in B&W certainly comes one image at a time. To me that’s part of the fun, and one of the things that draws me to it.

by tf at January 28, 2020 10:02 PM

November 18, 2019

Tomas Frydrych

The Camera Needn’t Lie


The question of the role and the importance of realism in landscape photography is one that keeps coming up over and over again. On the one hand we have the purists, holding onto old fashioned notions of truth, arguing that landscapes should be true to the land; on the other the free creatives, raising that millennia old question ‘What is Truth?’. And, broadly speaking, the former seem to have long lost the argument.

This was brought into a clear focus for me about a year ago listening to the Alex Neil and Erin Babnik debate on the Matt Payne podcast. I was rooting for Alex, for I tend to identify with the traditionalist side of the argument (in spite of, or perhaps because of, being a child of postmodernity, and having long given up on simplistic notions of objective truth). But that afternoon Erin won the argument, and did so comprehensively.

So over the last year, while waiting for the light, or lugging the gear up somewhere in a hope of a good picture, I have been pondering why it is that, in spite of all the arguments (and in spite of understanding the inadequacy of the old modernist and positivist models of truth), realism in landscape still matters to me, and why its widespread absence in contemporary photography hasn’t stopped bothering me. So here are some thoughts.

The gist of the creative side of the argument is that photography is always a distortion of reality, and hence the quest for realism is naive and bound to fail. This assertion is, on the most basic level, true. That is, it is true the same way as the assertion that the earth is not a sphere. However, it fails the same way naive postmodernism did, by dismissing the question of a degree of distortion as inconsequential.

The distortion of reality in a photographic image has several different origins, and in my own thinking I have come to split it into five categories: fundamental, systematic, compensatory, interpretative and re-interpretative.

Fundamental Distortion is one that comes from the most basic aspects of producing a photographic image, and is always present, regardless of how, or by whom, the image is taken. The most obvious fundamental distortion is the flattening of a three dimensional reality into a two dimensional space.

Systematic Distortion is down to the characteristics of the technological process used to generate the image; that is, it is not introduced by the photographer per se but by the equipment. It will vary between different equipment, but is independent of the operator. This includes things such as the loss of colour information when using B&W film, lack of dynamic range and/or colour shift due to the physical limitations of a CMOS sensor, or a geometrical distortion introduced by the lens.

Compensatory Distortion is the result of steps taken to counteract the effects of the fundamental and systematic distortion to better match the perception of a human observer. Some of this is done automatically by the equipment (lens profile, sensor calibration, auto ISO, etc.), some of it as a conscious choice of the photographer (graduated filters, colour cast removal, etc.). Given several photographers with different equipment, following compensatory adjustments the images would all be quite similar (but not identical, hence this too is a form of distortion).

Interpretative Distortion is where the photographer’s ego fully enters the equation: it’s where we are no longer taking a photograph, but start making it. Interpretative changes take different forms and involve different tools, but they are essentially about emphasis and mood.

Re-interpretative Distortion is where the connection between the image and reality is severed and the image becomes an expression of the creator’s vision inspired by a place, rather than being an image of the place per se.

The categorisation above is not about the tools used, but simply about the kind and degree of distortion they introduce. And while the boundaries are somewhat fuzzy, they are distinguishable from each other, so that, e.g., if I were running a photographic competition, I would be able to write up a set of rules such as to clearly define acceptable entries.

For example, a +/- 1 stop HDR, or a 1 stop burn-in can be well argued to be compensatory for the dynamic range limitations of sensor or photographic paper, but +/- 5 stop HDR takes us well beyond the usable dynamic range of the human eye, and hence beyond that which can be humanly observed. A removal of a transient object (a passing car) can be interpretative, since the object is not a property of the landscape itself, but the removal of a building is re-interpretative, for it presents a scene which cannot be observed.

To put it differently, for me the basic question with reference to the landscape genre is this: if a third party observer present at the taking of the image were shown the final product, would they say ‘yes, this is what we saw’? If the answer is yes, then the distortion in the image is no more than interpretative; if the answer is no, then we have departed the genre of landscape photography by any meaningful definition, for such a definition could be applied to any and every photographic image.

(The other point worth noting here is that the fundamental and systematic distortions are qualitatively different from the rest: you don't get to remove a building and justify it by the limited dynamic range of your sensor; this kind of argument is incongruous, yet incredibly common in this debate.)

But why should any of this matter? If my images are simply about expressing my creative urge, or, perhaps, bringing pleasure to others (for me photography is largely about the former), then all of this is of no importance. But is this really all there is to a landscape photograph?

To come back to the aforementioned podcast, the most important point in the discussion there is made by Erin Babnik noting that unrealistic expectations of realism in photography by the viewer are dangerous: if we believe photographs to represent reality, we are sooner or later going to be deceived.

This observation is valid, but the underlying analysis, and the conclusions drawn from it are not. The issue at hand is very similar to the problem of ambiguity of language in human communication, and just as the fact that language is always imprecise and ambiguous doesn’t mean that we can’t ever understand each other, so the inherent distortion present in all photography doesn’t mean that all images of a place are equally false representations of it, and that no photograph can ever be a realistic representation of an external reality (you do have a photograph in your passport, don’t you?).

The problem with landscape photography for me is that, whether I like it or not, a landscape photograph is not merely an image, but also an act of communication. It doesn’t just entertain, but also informs, lets the viewer experience land that they might not have first-hand knowledge of. Furthermore, a landscape photograph is a record not solely in space but also in time -- our knowledge of land as it once was is entirely dependent on such imagery.

The earlier observation that severing the tangible connection with observable reality voids the landscape genre of meaning leads to the necessary conclusion that the landscape genre comes with an implicit contract between the viewer and the photographer that what is being shown is real. The issue here is not the legitimacy of creative post-processing; I have no quibble with that, do what makes you happy. What I do object to is presenting creative fantasies, no matter how good and aesthetically pleasing, as landscapes.

Breaching the genre contract has consequences for the viewer: on the more innocuous end of the spectrum is the disappointment when the wild place you travel a long distance to looks nothing like the photographs that brought you there (someone thoughtfully cloned out all the car parks), on the more insidious end our carefully manicured ‘photos’ obscure the real state of the planet, its environment and our impact on it. (The former doesn't bother me too much, but the latter at least should give us a pause for thought.)

But there are also collective consequences for the photographer: the contemporary cavalier attitude toward reality is breeding a generation of cynical viewers who increasingly see less and less value in photography -- I suspect contemporary landscape photographers are unwittingly Photoshopping themselves out of existence.

by tf at November 18, 2019 06:40 PM

August 15, 2019

Emmanuele Bassi

Another layer

Five years (and change) ago I was looking at the data types and API that were needed to write a 3D-capable scene graph API; I was also learning about SIMD instructions and compiler builtins on IA and ARM, as well as a bunch of math I didn’t really study in my brushes with formal higher education. The result was a small library called Graphene.

Over the years I added more API, moved the build system from Autotools over to Meson, and wrote a whole separate library for its test suite.

In the meantime, GStreamer started using Graphene in its GL element; GTK 3.9x is very much using Graphene internally and exposing it as public API; Mutter developers are working on reimplementing the various math types in their copies of Cogl and Clutter using Graphene; and Alex wrote an entire 3D engine using it.

Not bad for a side project.

Of course, now I’ll have to start maintaining Graphene like a proper grownup, which also means reporting its changes, bug fixes, and features when I’m at the end of a development cycle.

While the 1.8 development cycle consisted mostly of bug fixes with no new API, there have been a few major internal changes during the development cycle towards 1.10:

  • I rewrote the Euler angles conversion to and from quaternions and matrices; the original implementation I cribbed from here and there was not really adequate, and broke pretty horribly when you tried to roundtrip from Euler angles to a transformation matrix and back. This also affected the conversion between Euler angles and quaternions. The new implementation is more correct, and as a side effect it now includes not just the Tait–Bryan angles, but also the classic Euler angles. All possible orders are available in both the intrinsic and extrinsic axes variants.
  • We’re dealing with floating point comparison and with infinities a bit better, now; this is usually necessary because the various vector implementations may have different behaviour, depending on the toolchain in use. A shout out goes to Intel, who bothered to add an instruction to check for infinities only in AVX 512, making it pointless for me, and causing a lot more grief than necessary.
  • The ARM NEON implementation of graphene_simd4f_t has been fixed and tested on actual ARM devices (an old Odroid I had lying around for ARMv7 and a Raspberry Pi3 for Aarch64); this means that the “this is experimental” compiler warning has been removed. I still need to run the CI on an ARM builder, but at least I can check if I’m doing something dumb, now.
  • As mentioned in the blog posts above, the whole test suite has been rewritten using µTest, which dropped a dependency on GLib; you still need GLib to get the integration with GObject, but if you’re not using that, Graphene should now be easier to build and test.
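For the curious, the roundtrip that the Euler rewrite has to get right can be sketched in plain JavaScript. This is the standard intrinsic z-y-x (Tait–Bryan) convention from the textbook formulas, not Graphene’s actual implementation:

```javascript
// Euler angles -> quaternion for the intrinsic z-y-x (yaw-pitch-roll)
// Tait-Bryan convention; plain JS sketch, not Graphene's C code.
function eulerToQuaternion(roll, pitch, yaw) {
  const cr = Math.cos(roll / 2), sr = Math.sin(roll / 2);
  const cp = Math.cos(pitch / 2), sp = Math.sin(pitch / 2);
  const cy = Math.cos(yaw / 2), sy = Math.sin(yaw / 2);
  return {
    w: cr * cp * cy + sr * sp * sy,
    x: sr * cp * cy - cr * sp * sy,
    y: cr * sp * cy + sr * cp * sy,
    z: cr * cp * sy - sr * sp * cy,
  };
}

// Quaternion -> Euler angles, inverting the convention above.
function quaternionToEuler({ w, x, y, z }) {
  return {
    roll: Math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y)),
    pitch: Math.asin(2 * (w * y - z * x)),
    yaw: Math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z)),
  };
}

// Roundtripping should recover the original angles.
const q = eulerToQuaternion(0.3, 0.5, -1.2);
const e = quaternionToEuler(q);
```

Away from the pitch = ±90° singularity, converting to a quaternion and back recovers the original angles; it was exactly this roundtrip that the old implementation broke.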

On the API side:

  • there are a bunch of new functions for graphene_rect_t, courtesy of Georges Basile Stavracas Neto and Marco Trevisan
  • thanks to Marco, the graphene_rect_round() function has been deprecated in favour of the more reliable graphene_rect_round_extents()
  • graphene_quaternion_t gained new operators, like add(), multiply() and scale()
  • thanks to Alex Larsson, graphene_plane_t can now be transformed using a matrix
  • I added equality and near-equality operators for graphene_matrix_t, and a getter function to retrieve the translation components of a transformation matrix
  • I added interpolation functions for the 2, 3, and 4-sized vectors
  • I’m working on exposing the matrix decomposition code for Gthree, but that requires some untangling of messy code, so it’ll be in the next development snapshot.

On the documentation side:

  • I’ve reworked the contribution guide, and added a code of conduct to the project; doesn’t matter how many times you say “patches welcome” if you also aren’t clear on how those patches should be written, submitted, and reviewed, and if you aren’t clear on what constitutes acceptable behaviour when it comes to interactions between contributors and the maintainer
  • this landed at the tail end of 1.8, but I’ve hopefully clearly documented the conventions of the matrix/matrix and matrix/vector operations, to the point that people can use the Graphene API without necessarily having to read the code to understand how to use it

This concludes the changes that will appear with the next 1.10 stable release, which will be available by the time GNOME 3.34 is out. For the time being, you can check out the latest development snapshot available on Github.

I don’t have many plans for the future, to be quite honest; I’ll keep an eye out for what GTK and Gthree need, and I expect that once Mutter starts using Graphene I’ll start receiving bug reports.

One thing I did try was moving to a “static” API reference using Markdeep, just like I did for µTest, and drop yet another dependency; sadly, since we need to use gtk-doc annotations for the GObject introspection data generation, we’re going to depend on gtk-doc for a little while longer.

Of course, if you are using Graphene and you find some missing functionality, feel free to open an issue, or a merge request.

by ebassi at August 15, 2019 09:38 AM

June 20, 2019

Tomas Frydrych

The Fitzroys

My shoulder hurts from the seatbelt, plumes of steam billowing from underneath the bonnet; I am sure I can smell petrol. Shaking fingers fumbling with the buckle. The driver side door won’t budge. Scrambling out over the gear stick. I put some distance between myself and the car, expecting it to burst into flames any moment.

Too many Hollywood movies.

There in the boot are some of my most treasured worldly possessions, not least a near new pair of Scarpa Fitzroy boots; even at the heavily discounted £99, a purchase I could ill afford. As the initial adrenaline spike wears off in the chill of the crisp winter morning, I pluck up the courage to retrieve them, then watch a glorious dawn over the distant hills. It will be a cracker, as Heather predicted.

In the years that followed, the Fitzroys and I trod all over the Scottish hills, hundreds of miles of fine ridges, and even finer bogs; in the heat of the summer, and through the winters. Later on I added the Cumbraes for the winter climbing days, and Mescalitos for the summer, but the latter turned out to be terrible for walking, and so the Fitzroys remained my main summer boot. 20+ years later I still have them, still use them from time to time. One of the better £99 I have ever spent; they don’t make them like they used to.

Some twenty minutes later a car comes down the icy ungritted road, and the driver kindly lends me his mobile phone to call the AA. When there is no sign of the recovery truck an hour later I walk over to a nearby farm where the lights are now on, and ask to use their phone. As I step out back into the fine morning a couple of minutes later, the yellow truck rolls over the hill — ‘Wow, that was quick!’ says the farmer. I just laugh.

It’s a slow journey home, and I prattle, worried what Linda will say about me writing off our car; money is tight and the car is (or rather ‘was’) the key to maintaining our sanity. The AA man brings me back to reality: ‘You are lucky, son, I have seen less damage where the people did not make it.’

For a long time that is to be my first waking up thought; every day in the hills a bonus, none of them taken for granted.

by tf at June 20, 2019 11:03 AM

June 14, 2019

Tomas Frydrych

North Coast 500 — An Alternative View

The NC500 is a travesty. The idea of car touring holidays harkens back to the environmental ignorance of mid 20th century and is wholly unfit for these days of an unfolding environmental catastrophe. VisitScotland, the Scottish Government, and all those who lend their name to promoting this anachronism, should be ashamed of themselves.

The principal contribution of the NC500 to the northwest highlands is pollution — particulates from tyres and Diesel engines, chemical toilet waste dumped into roadside streams and on beaches, camping detritus and fire rings decorating any and every scenic spot reachable by car. Quiet villages along the north coast have been turned into noisy thoroughfares. During the main season the stream of traffic is incessant.

Campervans wherever you look. Invariably two up front (plus a dog), the back loaded with stacks of Heinz beans from supermarkets down south. The professional types in their shiny T5s with eye-wateringly expensive conversions, the retired couples in ever bigger mobile homes, at times towing a small car behind. Even now, still off season, you will find them hogging passing places along every minor Assynt road from an early afternoon on.

Once upon a time the quiet roads of the northwest highlands provided some of the best cycle touring in the country. No more, thanks to the ever growing volume of traffic and vehicle size. Many of these vehicles are too wide for the single track roads to safely pass a pedestrian, never mind a cyclist. An opportunity missed.

This is not sustainable: the putative benefits to the local communities are failing to materialise (or so I hear from the locals); the real beneficiaries are the car makers, the campervan hire places, the oil companies. Politicians congratulate themselves on being seen doing something for the Highlands without spending any money. Meanwhile the roads crumble.

It will not (cannot) work. We only need to look at the Icelandic experience to see what’s coming.

Following the economic crash of 2008 the Icelanders threw themselves wholeheartedly into developing tourism; it was all that was left. Myriads of campervan hire companies sprang up chasing the tourist buck. It didn’t take long for the country to be overrun and completely overwhelmed by them. It dawned rapidly that they bring nothing to local communities while causing no end of environmental damage; legislation banning all forms of car-camping outwith officially designated campsites followed.

This is what needs to happen in Scotland, now.

Some will undoubtedly come out to blame the locals and the Highland Council for the associated problems: lack of adequate (read ‘free’) campervan facilities. Let’s think about this for a moment: people in purpose built luxury vehicles worth tens of thousands of pounds expecting the highlanders to, directly, or indirectly through their council tax, subsidise their holidays. Not sure what the right word for this is. Entitlement? Greed?

If the economic model for the revival of the highlands is to be tourism (which shouldn’t be automatically assumed to be the right answer, BTW), it needs to be built on bringing people in, rather than, as the NC500 does, taking them through. The Scottish Government and VisitScotland should be promoting places and communities, not roads; physical activities such as walking, cycling, paddling, not driving. This area has so much to offer that reducing it to iPhone snaps taken from a car window, as the NC500 ultimately does, is outright insulting.

I am sure my view that we should ban all roadside car camping as they did in Iceland, and heavily tax camper vans and mobile homes, will not be very popular. The campervan has become the ultimate middle class outdoor accessory, virtually all my friends have one. You will be hard pressed to find an outdoor influencer that doesn’t; vested interests creating blind spots big enough to park a Hymer in.

But I’ll voice it anyway, the Highlands deserve better.

by tf at June 14, 2019 03:36 PM

June 12, 2019

Damien Lespiau

jk - Configuration as code with TypeScript

Of all the problems we have confronted, the ones over which the most brain power, ink, and code have been spilled are related to managing configurations.

-- Borg, Omega, and Kubernetes - Lessons learned from three container-management systems over a decade

This post is the first of a series introducing jk. We will start the series by showing a concrete example of what jk can do for you.

jk is a javascript runtime tailored for writing configuration files. The abstraction and expressive power of a programming language makes writing configuration easier and more maintainable by allowing developers to think at a higher level.

Let’s pretend we want to deploy a billing micro-service on a Kubernetes cluster. This micro-service could be defined as:

  name: billing
  description: Provides the /api/billing endpoints for frontend.
  namespace: billing
  port: 80
    path: /api/billing
    - service.RPS.HTTP
    - service.RPS.HTTP.HighErrorRate

From this simple, reduced definition of what a micro-service is, we can generate:

  • Kubernetes Namespace, Deployment, Service and Ingress objects.
  • A ConfigMap with dashboard definitions that grafana can detect and load.
  • Alerts for Prometheus using the PrometheusRule custom resource defined by the Prometheus operator.
apiVersion: v1
kind: Namespace
metadata:
  name: billing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: billing
  name: billing
  namespace: billing
spec:
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: billing
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: billing
    spec:
      containers:
      - image:
        name: billing
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: billing
  name: billing
  namespace: billing
spec:
  ports:
  - port: 80
  selector:
    app: billing
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
  name: billing
  namespace: billing
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: billing
          servicePort: 80
        path: /api/billing
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: billing
  name: billing-dashboards
  namespace: billing
data:
  dashboard: '[{"annotations":{"list":[]},"editable":false,"gnetId":null,"graphTooltip":0,"hideControls":false,"id":null,"links":[],"panels":[{"aliasColors":{},"bars":false,"dashLength":10,"dashes":false,"datasource":null,"fill":1,"gridPos":{"h":7,"w":12,"x":0,"y":0},"id":2,"legend":{"alignAsTable":false,"avg":false,"current":false,"max":false,"min":false,"rightSide":false,"show":true,"total":false,"values":false},"lines":true,"linewidth":1,"links":[],"nullPointMode":"null","percentage":false,"pointradius":5,"points":false,"renderer":"flot","repeat":null,"seriesOverrides":[],"spaceLength":10,"stack":false,"steppedLine":false,"targets":[{"expr":"sum
    by (code)(sum(irate(http_request_total{job=billing}[2m])))","format":"time_series","intervalFactor":2,"legendFormat":"{{code}}","refId":"A"}],"thresholds":[],"timeFrom":null,"timeShift":null,"title":"billing
    sum(rate(http_request_duration_seconds_bucket{job=billing}[2m])) by (route) *
    1e3","format":"time_series","intervalFactor":2,"legendFormat":"{{route}} 99th
    percentile","refId":"A"},{"expr":"histogram_quantile(0.50, sum(rate(http_request_duration_seconds_bucket{job=billing}[2m]))
    by (route) * 1e3","format":"time_series","intervalFactor":2,"legendFormat":"{{route}}
    median","refId":"B"},{"expr":"sum(rate(http_request_total{job=billing}[2m])) /
    sum(rate(http_request_duration_seconds_count{job=billing}[2m])) * 1e3","format":"time_series","intervalFactor":2,"legendFormat":"mean","refId":"C"}],"thresholds":[],"timeFrom":null,"timeShift":null,"title":"billing
    \u003e billing","uid":"","version":0}]'
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: billing
    prometheus: global
    role: alert-rules
  name: billing
spec:
  groups:
  - name: billing-alerts.rules
    rules:
    - alert: HighErrorRate
      annotations:
        description: More than 10% of requests to the billing service are failing
          with 5xx errors
        details: '{{$value | printf "%.1f"}}% errors for more than 5m'
        service: billing
      expr: |-
        / rate(http_request_duration_seconds_count{job=billing}[2m]) * 100 > 10
      for: 5m
      labels:
        severity: critical

What’s interesting to me is that such an approach shifts writing configuration files from a big flat soup of properties to a familiar API problem: developers in charge of the platform get to define the high level objects they want to present to their users, can encode best practices and hide details in library code.
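As an illustration, such a high-level object could be sketched in plain JavaScript. The `microservice` helper and its fields below are hypothetical, not jk’s actual library API:

```javascript
// Hypothetical platform-team helper: expand a small, high-level service
// description into full Kubernetes objects. Plain JavaScript sketch,
// not code from the jk repository.
function microservice({ name, namespace, port }) {
  const labels = { app: name };
  return [
    { apiVersion: 'v1', kind: 'Namespace', metadata: { name: namespace } },
    {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name, namespace, labels },
      spec: {
        selector: { matchLabels: labels },
        template: {
          metadata: { labels },
          spec: { containers: [{ name, ports: [{ containerPort: port }] }] },
        },
      },
    },
    {
      apiVersion: 'v1',
      kind: 'Service',
      metadata: { name, namespace, labels },
      spec: { selector: labels, ports: [{ port }] },
    },
  ];
}

// The short definition at the top of this post would then expand to
// a Namespace, a Deployment and a Service, all consistently labelled.
const objects = microservice({ name: 'billing', namespace: 'billing', port: 80 });
```

The point is not the specific objects, but that conventions (labels, naming, namespaces) live in one reviewed function instead of being repeated in every YAML file.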

For the curious minds, the jk script used to generate these Kubernetes objects can be found in the jk repository.

Built for configuration

We’re building jk in an attempt to advance the configuration management discussion. It offers a different take on existing solutions:

  • jk is a generation tool. We believe in a strict separation of configuration data and how that data is being used. For instance we do not take an opinionated view on how you should deploy applications to a cluster, and leave that design choice in your hands. In a sense, jk is a pure function transforming a set of inputs into configuration files.

  • jk is cross domain. jk generates JSON, YAML, HCL as well as plain text files. It allows the generation of cross-domain configuration. In the micro-service example above, grafana dashboards and Kubernetes objects are part of two different domains that are usually treated differently. We could augment the example further by defining a list of AWS resources needed for that service to operate (eg. an RDS instance) as Terraform HCL.

  • jk uses a general purpose language: javascript. The configuration domain attracts a lot of people interested in languages and the result is many new Domain Specific Languages (DSLs). We do not believe those new languages offer more expressive power than javascript and their tooling is generally lagging behind. With a widely used general purpose language, we get many things for free: unit test frameworks, linters, api documentation, refactoring tools, IDE support, static typing, ecosystem of libraries, …

  • jk is hermetic. Hermeticity is the property of producing the same output given the same input, no matter the machine the program runs on. This seems like a great property for a tool generating configuration files. We achieve this with a custom v8-based runtime exposing as little as possible from the underlying OS. For instance you cannot access the process environment variables nor read files anywhere on the filesystem with jk.

  • jk is fast! By being an embedded DSL and using v8 under the hood, we’re significantly faster than the usual interpreters powering DSLs.

Hello, World!

The jk “Hello, World!” example generates a YAML file from a js object:

// Alice is a developer.
const alice = {
  name: 'Alice',
  beverage: 'Club-Mate',
  monitors: 2,
  languages: [
    'python',
    'haskell',
    'c++',
    '68k assembly', // Alice is cool like that!
  ],
};

// Instruct to write the alice object as a YAML file.
export default [
  { value: alice, file: `developers/${alice.name.toLowerCase()}.yaml` },
];

Run this example with:

$ jk generate -v alice.js
wrote developers/alice.yaml

This results in the developers/alice.yaml file:

beverage: Club-Mate
languages:
- python
- haskell
- c++
- 68k assembly
monitors: 2
name: Alice

Typing with TypeScript

The main reason to use a general purpose language is to benefit from its ecosystem. With javascript we can leverage typing systems such as TypeScript or flow to help authoring configuration.

Types help in a number of ways, including when refactoring large amounts of code or defining and documenting APIs. I’d also like to show that it helps at authoring time by providing context-aware auto-completion:

Types - autocompletion

In the screenshot above we’re defining a container in a Deployment and the IDE only offers the fields that are valid at the cursor position along with the accompanying type information and documentation.

Similarly, typing can provide some level of validation:

Types - autocompletion

The IDE is telling us we haven’t quite defined a valid apps/v1 Deployment. We are missing the mandatory selector field.

Status and Future work

While jk is still young, we believe it is already useful enough to be a contender in the space. There’s definitely a lot of room for improvement, though:

  • Helm integration: we’d like jk to be able to render Helm charts client side and expose the result as js objects for further manipulation.
  • Jsonnet integration: similarly, it should be possible to consume existing jsonnet programs.
  • Native TypeScript support: currently developers need to run the tsc transpiler by hand. We should be able to make jk consume TypeScript files natively a la deno.
  • Kubernetes strategic merging: the object merging primitives are currently quite basic and we’d like to extend the object merging capabilities of the standard library to implement Kubernetes strategic merging.
  • Expose type generation for Kubernetes custom resources.
  • More helper libraries to generate Grafana dashboards, custom resources for the Prometheus operator, …
  • Produce more examples: it’s easy to feel a bit overwhelmed when facing a new language and paradigm. More examples would make jk more approachable.

Try it yourself!

It’s easy to download jk from the github release page and try it yourself. You can also peruse the (currently small set of) examples.

June 12, 2019 10:21 AM

June 01, 2019

Emmanuele Bassi

More little testing

Back in March, I wrote about µTest, a Behavior-Driven Development testing API for C libraries, and that I was planning to use it to replace the GLib testing API in Graphene.

As I was busy with other things in GTK, it took me a while to get back to µTest—especially because I needed some time to set up a development environment on Windows in order to port µTest there. I managed to find some time over various weekends and evenings, and ended up fixing a couple of small issues here and there, to the point that I could run µTest’s own test suite on my Windows 10 box, and then get the CI build job I have on Appveyor to succeed as well.

Setting up MSYS2 was the most time consuming bit, really

While at it, I also cleaned up the API and properly documented it.

Since depending on gtk-doc would defeat the purpose, and since I honestly dislike Doxygen, I was looking for a way to write the API reference and publish it as HTML. As luck would have it, I remembered a mention on Twitter about Markdeep, a self-contained bit of JavaScript capable of turning a Markdown document into a half decent HTML page client side. Coupled with GitHub pages, I ended up with a fairly decent online API reference that also works offline, falls back to a Markdown document when not running through JavaScript, and can get fixed via pull requests.

Now that µTest is in a decent state, I ported the Graphene test suite over to it, and now I can run it on Windows using MSVC—and MSYS2, as soon as the issue with GCC gets fixed upstream. This means that, hopefully, we won’t have regressions on Windows in the future.

The µTest API is small enough, now, that I don’t plan major changes; I don’t want to commit to full API stability just yet, but I think we’re getting close to a first stable release soon; definitely before Graphene 1.10 gets released.

In case you think this could be useful for you: feedback, in the form of issues and pull requests, is welcome.

by ebassi at June 01, 2019 11:46 AM

April 19, 2019

Tomas Frydrych

Let It Burn


A flame stretching up to heaven. The newsrooms can’t get enough, a journalist’s dream come true. You, me, everyone, glued to our screens, riveting stuff. (Honey, make us some popcorn, will you?)

“We shall rebuild it!”, echoes through the corridors of power, “Money is no object!”.

I have no doubt.

A flame stretching up to heaven. The newsroom’s embarrassed. A 15 year old making noise, a traffic jam, a pensioner chained to railings. (Nutters. Collective shrugging of shoulders.)

“Full force of the law!”, echoes through the corridors of power (money is the object).

I have no doubt.

A skeleton thin polar bear 400 miles from home.

by tf at April 19, 2019 07:25 AM

April 14, 2019

Emmanuele Bassi

(New) Adventures in CI

One of the great advantages of moving the code hosting in GNOME to GitLab is the ability to run per-project, per-branch, and per-merge request continuous integration pipelines. While we’ve had a CI pipeline for the whole of GNOME since 2012, it is limited to the master branch of everything, so it only helps catch build issues post-merge. Additionally, we haven’t been able to run test suites on Continuous since early 2016.

Being able to run your test suite is, of course, great—assuming you do have a test suite, and you’re good at keeping it working; gating all merge requests on whether your CI pipeline passes or fails is incredibly powerful, as it not only keeps you from unknowingly merging broken code, but it also nudges you in the direction of never pushing commits directly to the master branch. The downside is that it lacks nuance; if your test suite is composed of hundreds of tests you need a way to know at a glance which ones failed. Going through the job log is kind of crude, and it’s easy to miss things.

Luckily for us, GitLab has the ability to create a cover report for your test suite results, and present it on the merge request summary, if you generate an XML report and tell the CI machinery where you put it:

artifacts:
  reports:
    junit:
      - "${CI_PROJECT_DIR}/_build/report.xml"

Sadly, the XML format chosen by GitLab is the one generated by JUnit, and we aren’t really writing Java classes. The JUnit XML format is woefully underdocumented, with only an unofficial breakdown of the entities and structure available; on top of that, GitLab has its own quirks in how it parses it.

Okay, assuming we have nailed down the output, how about the input? Since we’re using Meson on various projects, we can rely on machine parseable logs for the test suite log. Unfortunately, Meson currently outputs something that is not really valid JSON—you have to break the log into separate lines, and parse each line into a JSON object, which is somewhat less than optimal. Hopefully future versions of Meson will generate an actual JSON file, and reduce the overhead in the tooling consuming Meson files.
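The shape of the translation is easy to sketch. The following is a rough Python illustration, not the actual script GTK uses; the per-line fields shown here (`name`, `result`, `duration`) are assumptions about what Meson’s testlog contains and may differ between Meson versions:

```python
import json
import xml.etree.ElementTree as ET

def parse_testlog(text):
    # Meson writes one JSON object per line rather than a single JSON
    # document, so each line has to be parsed on its own
    return [json.loads(line) for line in text.splitlines() if line.strip()]

def to_junit(results, suite_name="gtk"):
    # Build a minimal JUnit-style <testsuite>; only tests that did not
    # pass (or expectedly fail) get a <failure> child element
    suite = ET.Element("testsuite", name=suite_name)
    failures = 0
    for res in results:
        case = ET.SubElement(suite, "testcase",
                             name=res["name"],
                             time=str(res.get("duration", 0)))
        if res["result"] not in ("OK", "EXPECTEDFAIL"):
            failures += 1
            ET.SubElement(case, "failure", message=res["result"])
    suite.set("tests", str(len(results)))
    suite.set("failures", str(failures))
    return ET.tostring(suite, encoding="unicode")

# Hypothetical testlog contents, for illustration only
log = (
    '{"name": "gtk:a11y / button", "result": "OK", "duration": 0.1}\n'
    '{"name": "gtk:gsk / shader", "result": "FAIL", "duration": 0.2}\n'
)
print(to_junit(parse_testlog(log)))
```

The real script also has to deal with GitLab’s parser quirks—grouping cases into suites per job, and mapping Meson’s skip and timeout results onto something the parser accepts.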

Nevertheless, after an afternoon of figuring out Meson’s output, and reverse engineering the JUnit XML format and the GitLab JUnit parser, I managed to write a simple script that translates Meson’s testlog.json file into a JUnit XML report that you can use with GitLab after you ran the test suite in your CI pipeline. For instance, this is what GTK does:

set +e

xvfb-run -a -s "-screen 0 1024x768x24" \
    meson test \
        -C _build \
        --timeout-multiplier 2 \
        --print-errorlogs \
        --suite=gtk \
        --no-suite=gtk:gsk

# Save the exit code, so we can reuse it
# later to pass/fail the job
exit_code=$?

# We always run the report generator, even
# if the tests failed
$srcdir/.gitlab-ci/ \
        --project-name=gtk \
        --job-id="${CI_JOB_NAME}" \
        --output=_build/${CI_JOB_NAME}-report.xml

exit $exit_code

Which results in this:

Some assembly required; those are XFAIL reftests, but JUnit doesn’t understand the concept

The JUnit cover report in GitLab is only shown inside the merge request summary, so it’s not entirely useful if you’re developing in a branch without opening an MR immediately after you push to the repository. I prefer working on feature branches and getting the CI to run on my changes without necessarily having to care about opening the MR until my work is ready for review—especially since GitLab is not a speed demon when it comes to MRs with lots of rebases/fixup commits in them. Having a summary of the test suite results in that case is still useful, so I wrote a small conversion script that takes the testlog.json and turns it into an HTML page, with a bit of Jinja templating thrown into it to avoid hardcoding the whole thing into string chunks. Like the JUnit generator above, we can call the HTML generator right after running the test suite:

$srcdir/.gitlab-ci/ \
        --project-name=GTK \
        --job-id="${CI_JOB_NAME}" \
        --output=_build/${CI_JOB_NAME}-report.html

Then, we take the HTML file and store it as an artifact:

artifacts:
  when: always
  paths:
    - "${CI_PROJECT_DIR}/_build/${CI_JOB_NAME}-report.html"

And GitLab will store it for us, so that we can download it or view it in the web UI.
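The HTML generation itself is simple templating. Here is a minimal Python sketch of the idea, using the stdlib’s string.Template as a stand-in for Jinja; the template, field names, and job id are illustrative, not what the actual GTK script uses:

```python
import json
from string import Template

# Keeping the markup in templates avoids hardcoding the page into
# string chunks inside the script itself
PAGE = Template("""<html><body>
<h1>Test report: $job</h1>
<ul>
$rows
</ul>
<p>$passed passed, $failed failed</p>
</body></html>""")

ROW = Template('<li class="$cls">$name: $result</li>')

def render_report(job_id, testlog_lines):
    # testlog.json is line-delimited JSON, one object per test
    results = [json.loads(line) for line in testlog_lines if line.strip()]
    rows = []
    passed = failed = 0
    for res in results:
        ok = res["result"] == "OK"
        passed += ok
        failed += not ok
        rows.append(ROW.substitute(cls="pass" if ok else "fail",
                                   name=res["name"],
                                   result=res["result"]))
    return PAGE.substitute(job=job_id, rows="\n".join(rows),
                           passed=passed, failed=failed)

html = render_report("msys2-mingw32", [
    '{"name": "gtk:gdk / rectangle", "result": "OK"}',
    '{"name": "gtk:gsk / shadow", "result": "FAIL"}',
])
print(html)
```

A real Jinja template buys you loops and conditionals inside the markup itself, which is what makes the collapsible-sections idea below practical.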

There are additional improvements that can be made. For instance, the reftests test suite in GTK generates images, and we’re already uploading them as artifacts; since the image names are stable and determined by the test name, we can create a link to them in the HTML report itself, so we can show the result of the failed tests. With some more fancy HTML, CSS, and JavaScript, we could have a nicer output, with collapsible sections hiding the full console log. If we had a place to upload test results from multiple pipelines, we could even graph the trends in the test suite on a particular branch, and track our improvements.

All of this is, of course, not incredibly novel; nevertheless, the network effect of having a build system in Meson that lends itself to integration with additional tooling, and a code hosting infrastructure with native CI capabilities in GitLab, allows us to achieve really cool results with minimal glue code.

by ebassi at April 14, 2019 04:02 PM

April 12, 2019

Tomas Frydrych

The ‘Truly Clean Green Energy’ Fallacy


A recent UKH Opinion piece dealing with the ecological cost of the forthcoming Glen Etive micro hydro, and micro hydro in general, includes this statement: ‘[we can] produce large amounts of truly “clean and green” energy ... through solar, offshore ... and tidal energy solutions’. I have come across permutations of this argument before, and it strikes me that our assessment of the environmental cost of renewables, and our understanding of renewables in general, is somewhat simplistic, glossing over what it is that renewables actually do.

Power plants, like the rest of our world, are subject to the law of conservation of energy. They don’t make energy; they convert energy from one form into another, so that it can be transported and released elsewhere. Renewables differ in one important respect: their source energy is being extracted directly from the ecosystems of their installation. In other words, renewables are principally mining operations of a resource that is an intrinsic part of active ecological processes; the fact that we cannot see the thing being mined with the naked eye doesn’t make it any less so.

Thus, renewables have a built-in ecological cost, for it is not possible to extract a significant amount of energy from an ecosystem without effecting a material change in it -- there are no ‘truly clean and green’ renewables. This is something that is not talked about, I suspect not least because the renewables industry is more about the source energy being ‘free’ than ‘clean’. Yet, we cannot afford to assume this to be inconsequential.

The scale of the problem is easier to grasp when viewed from the other end of the transaction, so let’s talk wind. A typical wind turbine today is rated a bit over 2MW, so a large farm of 150 turbines, such as the Clyde Wind Farm alongside the M74, generates around 350MW. We know that when the 300,000 homes this represents consume this energy, it radically alters the ecosystem around them, changing ambient temperature, humidity, light and noise levels, etc. It is unreasonable to expect that the ecosystems from which this energy was extracted to start with will not experience a transformation of a comparable magnitude.

Sticking with wind, it’s been known for some time that wind farms create their own micro-climates, with one of the easily measurable effects being an increase in temperature at ground level. A study by Harvard researchers Miller and Keith published last year concluded that if all US power generation was switched to wind, this would lead to an overall continental US temperature increase of 0.24C.

The above study has been misrepresented by the popular press as ‘wind turbines cause global warming’, which is, obviously, not what it says; what it does show is that renewables have their own, significant, ecological impact. As Keith summarised it,

If your perspective is the next 10 years, wind power actually has—in some respects—more climate impact than coal or gas, if your perspective is the next thousand years, then wind power is enormously cleaner than coal or gas.

The problem is that the 1,000 year perspective currently focuses strictly on combating (the easily commercialised) effects of warming, while glossing over more subtle questions of ecological integrity. This needs to change.

There is an additional issue with renewables. Since no engineering process is, or can be, 100% efficient, not all of the source energy extracted is converted to electricity. Some of it leaks back into the ecosystem in other forms. The classic example of this is noise. We know that people living in proximity to wind turbine installations report health issues, but the effect is far wider. We are aware of major effects on birds, bats, and, moving off shore, marine mammals; and, of course, these ecosystems are home to a myriad of other tightly interconnected species.

This is not meant to be a diatribe against renewables. Given that the public opinion is firmly against nuclear energy, renewables are all we have in combating Climate Change, for Climate Change does change everything: if we don’t get it under control, nothing else is of consequence. Nor is this a diatribe against wind power; I have used wind as a convenient example, but these problems are, I believe, intrinsic to the whole class of renewable energy, and I expect we will be hearing more and more on this subject in the future.

What I want to do here is simply to draw attention to the fact that all renewables come with intrinsic ecological costs that go beyond the obvious and visible things like loss of habitat. I draw two conclusions from these observations:

  1. The single most important thing in addressing Climate Change is not switching to renewables but a radical cut to our energy consumption. Renewables might allow us to keep the planet cooler, but they do not necessarily preserve its ecosystems.

  2. I find the ‘renewable X bad, renewable Y good’ line of reasoning a form of NIMBY-ism, missing the big picture. We can’t afford to exclude any form of renewables en masse, because when it comes to renewables too much of any one thing is going to be bad news for something somewhere. And in Scotland our renewables portfolio is too heavily weighted toward wind, it needs to be more balanced.

by tf at April 12, 2019 09:29 AM

April 06, 2019

Tomas Frydrych

On Delayed Gratification


Some of the photographers of old get rather upset when folk say ‘film slows you down’, so I won’t say that, but I’ll say it slows me down for sure. It’s not just the ‘on location’ pace, but also the time it takes before I get to see what I tried to visualise.

It starts with the negative development. While the process itself is not particularly time consuming, for reasons both environmental and economic, I tend to develop my B&W negatives in batches of two, and I rarely shoot more than a roll a week, and sometimes life permits a lot less than that.

The delay between imagining and seeing becomes even more pronounced with colour. The chemicals are designed to be mixed in batches for six rolls each, and once mixed don’t have a very long shelf life. And so in the fridge the exposed films go, until I have at least four of them, which, considering I am mostly focusing on B&W these days, can take a while (this weekend I am developing slides I took in October and November of last year; quite exciting, a couple of frames there that I thought at the time had some promise).

And then there is the printing (for me photography is mainly about the print; I have nothing against digital display of photos, but it doesn’t do it for me personally). Sometimes the print takes four hours in the darkroom, sometimes fifteen, and I can maybe find a day or two a month for this. All in all, in the last six months I have made eight photographs, and have another four or so waiting to be printed.

I expect you, the reader, might be asking why on earth would anyone do this in this day and age? I could give here a whole list of reasons, but the simple and most honest answer is ‘because I enjoy it’; the tactile nature of it, the very fact it doesn’t involve a computer (which I spend far too much time with as is).

Of course, this delayed gratification can (and does) at times turn into a delayed disappointment; the cover image is my witness. But all in all I am finding that in this instant world of ours the lack of immediate feedback is more of a benefit than a hindrance; the inability to see the result right here and now makes photography sort of an exercise in patience, even faith, for as it was said a long time ago

Faith is confidence in what we hope for and assurance about what we do not see. This is what the ancients were commended for.

And faith, in turn, inspires dreams, and dreams are what the best photographs are made of.

by tf at April 06, 2019 03:08 PM

March 15, 2019

Emmanuele Bassi

A little testing

Years ago I started writing Graphene as a small library of 3D transformation-related math types to be used by GTK (and possibly Clutter, even if that didn’t pan out until Georges started working on the Clutter fork inside Mutter).

Graphene’s only requirement is a C99 compiler and a decent toolchain capable of either using SSE builtins or supporting vectorization on appropriately aligned types. This means that, unless you decide to enable the GObject types for each Graphene type, Graphene doesn’t really need GLib types or API—except that’s a bit of a lie.

As I wanted to test what I was doing, Graphene has an optional build time dependency on GLib for its test suite; the library itself may not use anything from GLib, but if you want to build and run the test suite then you need to have GLib installed.

This build time dependency makes testing Graphene on Windows a lot more complicated than it ought to be. For instance, I need to install a ton of packages when using the MSYS2 toolchain on the CI instance on AppVeyor, which takes roughly 6 minutes each for the 32bit and the 64bit builds; and I can’t build the test suite at all when using MSVC, because then I’d have to download and build GLib as well—and just to access the GTest API, which I don’t even like.

What’s wrong with GTest

GTest is kind of problematic—outside of Google hijacking the name of the API for their own testing framework, which makes looking for it a pain. GTest is a lot more complicated than a small unit testing API needs to be, for starters; it was originally written to be used with a specific harness, gtester, in order to generate a very brief HTML report using gtester-report, including some timing information on each unit—except that gtester is now deprecated because the build system gunk to make it work was terrible to deal with. So, we pretty much told everyone to stop bothering, add a --tap argument when calling every test binary, and use the TAP harness in Autotools.

Of course, this means that the testing framework now has a completely useless output format, and with it, a bunch of default behaviours driven by said useless output format, and we’re still deciding if we should break backward compatibility to ensure that the supported output format has a sane default behaviour.

On top of that, GTest piggybacks on GLib’s own assertion mechanism, which has two major downsides:

  • it can be disabled at compile time by defining G_DISABLE_ASSERT before including glib.h, which, surprise, people tend to use when releasing; thus, you can’t run tests on builds that would most benefit from a test suite
  • it literally abort()s the test unit, which breaks any test harness in existence that does not expect things to SIGABRT midway through a test suite—which includes GLib’s own deprecated gtester harness

To solve the first problem we added a lot of wrappers around g_assert(), like g_assert_true() and g_assert_no_error(), that won’t be disabled depending on your build options and thus won’t break your test suite—and if your test suite is still using g_assert(), you’re strongly encouraged to port to the newer API. The second issue is still standing, and makes running GTest-based test suite under any harness a pain, but especially under a TAP harness, which requires listing the amount of tests you’ve run, or that you’re planning to run.

The remaining issues of GTest are the convoluted way to add tests using a unique path; the bizarre pattern matching API for warnings and errors; the whole sub-process API that relaunches the test binary and calls a single test unit in order to allow it to assert safely and capture its output. It’s very much the GLib test suite, except when it tries to use non-GLib API internally, like the command line option parser, or its own logging primitives; it’s also sorely lacking in the GObject/GIO side of things, so you can’t use standard API to create a mock GObject type, or a mock GFile.

If you want to contribute to GLib, then working on improving the GTest API would be a good investment of your time; since my project does not depend on GLib, though, I had the chance of starting with a clean slate.

A clean slate

For the last couple of years I’ve been playing off and on with a small test framework API, mostly inspired by BDD frameworks like Mocha and Jasmine. Behaviour Driven Development is kind of a buzzword, like test driven development, but I particularly like the idea of describing a test suite in terms of specifications and expectations: you specify what a piece of code does, and you match results to your expectations.

The API for describing the test suites is modelled on natural language (assuming your language is English, sadly):

  describe("your data type", function() {
    it("does something", () => {
      expect(doSomething()).toBe(true);
    });

    it("can greet you", () => {
      let greeting = getHelloWorld();
      expect(greeting).not.toBe("Goodbye World");
    });
  });

Of course, C is more verbose than JavaScript, but we can adopt a similar mechanism:

static void
something (void)
{
  expect ("doSomething",
    bool_value (do_something ()),
    to_be, true,
    NULL);
}

static void
greet (void)
{
  const char *greeting = get_hello_world ();

  expect ("getHelloWorld",
    string_value (greeting),
    not, to_be, "Goodbye World",
    NULL);
}

static void
type_suite (void)
{
  it ("does something", something);
  it ("can greet you", greet);
}

/* in the test binary's entry point: */
describe ("your data type", type_suite);

If only C11 had gotten blocks from Clang, this would look a lot less clunky.

The value wrappers are also necessary, because C is only type safe as long as every type you have is an integer.

Since we’re good C citizens, we should namespace the API, which requires naming this library—let’s call it µTest, in a fit of unoriginality.

One of the nice bits of Mocha and Jasmine is the output of running a test suite:

$ ./tests/general 

    contains at least a spec with an expectation
      ✓ a is true
      ✓ a is not false

      2 passing (219.00 µs)

    can contain multiple specs
      ✓ str contains 'hello'
      ✓ str contains 'world'
      ✓ contains all fragments

      3 passing (145.00 µs)

    should be skipped
      - skip this test

      0 passing (31.00 µs)
      1 skipped

5 passing (810.00 µs)
1 skipped

Or, with colors:

Using colors means immediately taking this more seriously

The colours go automatically away if you redirect the output to something that is not a TTY, so your logs won’t be messed up by escape sequences.
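The underlying trick is conventional: check whether the output stream is an interactive terminal before emitting escape sequences. µTest itself is C, so this small Python sketch only illustrates the concept:

```python
import io

# ANSI escape sequences for green text and reset
GREEN = "\033[32m"
RESET = "\033[0m"

def colorize(text, color, stream):
    # isatty() is false for pipes, files, and in-memory buffers,
    # so redirected logs stay free of escape sequences
    if getattr(stream, "isatty", lambda: False)():
        return color + text + RESET
    return text

# a StringIO buffer is not a TTY, so the text stays plain
buf = io.StringIO()
buf.write(colorize("ok 1 a is true", GREEN, buf) + "\n")
print(buf.getvalue(), end="")
```

In C the same check is a call to isatty(fileno(stdout)) before printing the escape codes.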

If you have a test harness, then you can use the MUTEST_OUTPUT environment variable to control the output; for instance, if you’re using TAP you’ll get:

$ MUTEST_OUTPUT=tap ./tests/general
# General
# contains at least a spec with an expectation
ok 1 a is true
ok 2 a is not false
# can contain multiple specs
ok 3 str contains 'hello'
ok 4 str contains 'world'
ok 5 contains all fragments
# should be skipped
ok 6 # skip: skip this test

Which can be passed through to prove to get:

$ MUTEST_OUTPUT=tap prove ./tests/general
./tests/general .. ok
All tests successful.
Files=1, Tests=6,  0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
Result: PASS

I’m planning to add some additional output formatters, like JSON and XML.

Using µTest

Ideally, µTest should be used as a sub-module or a Meson sub-project of your own; if you’re using it as a sub-project, you can tell Meson to build a static library that won’t get installed on your system, e.g.:

mutest_dep = dependency('mutest-1',
  fallback: [ 'mutest', 'mutest_dep' ],
  default_options: ['static=true'],
  required: false,
  disabler: true,
)

# Or, if you're using Meson < 0.49.0
mutest_dep = dependency('mutest-1', required: false)
if not mutest_dep.found()
  mutest = subproject('mutest',
    default_options: [ 'static=true', ],
    required: false,
  )

  if mutest.found()
    mutest_dep = mutest.get_variable('mutest_dep')
  else
    mutest_dep = disabler()
  endif
endif

Then you can make the tests conditional on mutest_dep.found().

µTest is kind of experimental, and I’m still breaking its API in places, as a result of documenting it and trying it out, by porting the Graphene test suite to it. There’s still a bunch of API that I’d like to land, like custom matchers/formatters for complex data types, and a decent way to skip a specification or a whole suite; plus, as I said above, some additional formatted output.

If you have feedback, feel free to open an issue—or a pull request wink wink nudge nudge.

by ebassi at March 15, 2019 02:02 PM

March 14, 2019

Emmanuele Bassi

Episode 1.a: The GIMP Toolkit

The history of the GNOME project is also the history of its core development platform. After all, you can’t have a whole graphical environment without a way to write not only applications for it, but its own components. Linux, for better or worse, does not come with its own default GUI toolkit; and if we’re using GNU as the userspace environment, we still lack something that can do the job of putting things on the screen for people to use.

While we can’t tell the history of GNOME without GTK, we also cannot tell the history of GTK without GNOME; the two projects are fundamentally entwined, both in terms of origins and in terms of direction. Sure, GTK can be used in different environments, and on different platforms, now; and having a free-as-in-free software GUI toolkit was definitely the intent of the separation from GIMP’s own code base; nevertheless, GTK as we know it today would not exist without GNOME, just like GNOME would not have been possible without GTK.

We’ve talked about GTK’s origin as the replacement toolkit for Motif created by the GIMP developers, but we haven’t spent much time on it during the first chapter of the history of the GNOME project.

If you managed to fall into a time hole, and ended up in 1996 trying to write a GUI application on Linux or any commercial Unix, you’d have different choices depending on:

  • the programming language you wish to use in order to write your application
  • the license you wish to use once you release your application

The academic and professional space on Unix was dominated by OpenGroup’s Motif toolkit, which was mostly a collection of widgets and utilities on top of the X11 toolkit, or Xt. Xt’s API, like the APIs of all pre-Xorg X11 libraries, is very 1987: pointers hidden inside type names; global locks; explicit event loop. Xt is also notable because it lacks basically any feature outside of “initialise an application singleton”; “create this top level window”, either managed by a window manager compatible with the Inter-Client Communication Conventions Manual, or an unmanaged “pop up” window that is left to the application developer to handle; and an event dispatch API, in order to write your own event loop. Motif integrated with Xt to provide everything else: from buttons to text entries; from menus to scroll bars.

Motif was released under a proprietary license, which required paying royalties to the Open Group.

If you wanted to release your application under a copyleft license, you’d probably end up writing something using another widget collection, released under the same terms as X11 itself, and like Motif based on the X toolkit, called the “X Athena widgets”—which is something I would not wish on my worst enemy; you could also use the GNUstep project toolkit, a reimplementation of the NeXT frameworks; but like on NeXT, you’d have to use the (then) relatively niche Objective C language.

If you didn’t want to suffer pain and misery with the Athena widgets, or you wanted to use anything except Objective C, you’d have to write your own layout, rendering, and input handling layer on top of raw Xlib calls — an effort commonly known amongst developers as: “writing a GUI toolkit”.

GIMP developers opted for the latter.

Since GTK was a replacement for an extant toolkit, some of the decisions that shaped its API were clearly made with Motif terminology in mind: managed windows (“top levels”), and unmanaged windows (“pop ups”); buttons and toggle buttons; menus and menu shells; scale and spin widgets; panes and frames. Originally, GTK objects were represented by opaque integer identifiers, like objects in OpenGL; that was, fortunately, quickly abandoned in favour of pointers to structures. The widget hierarchy was flat, and you could not derive new widgets from the existing ones.

GTK as a project provided, and was divided into three basic and separate libraries:

  • GLib, a C utility library, meant to provide the fundamental data structures that the C standard library does not have: linked lists, dynamic arrays, hash tables, trees
  • GDK, or the GIMP drawing toolkit, which wrapped Xlib function calls and data types, like graphic contexts, visuals, colormaps, and display connections
  • GTK, or the GIMP toolkit, which contained the various UI elements, from windows, to buttons, to labels, to menus

Once GTK was spun off into its own project in late 1996, GTK gained the necessary new features needed to write applications and libraries. The widget-specific callback mechanism to handle events was generalised into “signals”, which are just a fancy name for the ability to call a list of functions tied to a well known name, like “clicked” for buttons, or “key-press-event” for key presses. Additionally, and more importantly, GTK introduced a run time type system that allowed deriving new widgets from existing ones, as well as creating new widgets outside of the GTK source tree, to allow applications to define their own specialised UI elements.
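The signal mechanism described above is easy to sketch outside of C; this toy Python illustration captures the concept only (none of these names are GTK API):

```python
class Signal:
    """A signal dispatcher: callbacks registered under a well-known
    name, invoked in connection order when the signal is emitted."""

    def __init__(self):
        self._handlers = {}

    def connect(self, name, callback):
        # register a callback under a named signal, e.g. "clicked"
        self._handlers.setdefault(name, []).append(callback)

    def emit(self, name, *args):
        # call every function tied to that name, in order
        for callback in self._handlers.get(name, []):
            callback(*args)

button = Signal()
calls = []
button.connect("clicked", lambda: calls.append("first"))
button.connect("clicked", lambda: calls.append("second"))
button.emit("clicked")
print(calls)  # → ['first', 'second']
```

GTK’s real signals add much more on top of this: typed arguments, return value accumulation, and default class handlers that derived widgets can override.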

The new GTK gained a “plus”, to distinguish it from the in-tree GTK of the early GIMP.

Between 1996 and 1997, GTK development was mostly driven by the needs of GIMP, which is why you could find specialised widgets such as a ruler or a color wheel in the standard API, whereas things like lists and trees of widgets were less developed, and amounted to simple containers that would not scale to large sets of items—as any user of the file selection widget at the time would be able to tell you.

Aside from the API being clearly inspired by Motif, the appearance of GTK in its 0.x and 1.0 days was very much Motif-like; blocky components, 1px shadows, aliased arrows and text. Underneath it all, X11 resources like windows, visuals, server-side allocated color maps, graphic contexts, and rendering primitives. Yes, back in the old days, GTK was fully network transparent, like every other X11 toolkit. It would take a few more years, and two major API cycles, for GTK to fully move away from that model, and towards its own custom, client-side rendering.

GTK’s central tenets about how the widget hierarchy worked were also borrowed in part from Motif; you had a tree of widgets, starting from the top level window, down to the leaf widgets like text labels. Widgets capable of holding other widgets, or containers, were mostly meant to be used as layout managers, that is, UI elements encoding a layout policy for their children widgets—like a table, or a horizontal box. The layout policies provided by GTK eschewed the more traditional “pixel perfect” positioning and sizing, as provided by the Windows toolkits; or the “struts and springs” model, provided by Apple’s frameworks. With GTK, you packed your widgets inside different containers, and those would be sized by their contents, as well as by packing options, such as a child filling all the available space provided by its parent; or allowing the parent widget to expand, and aligning itself within the available space.

As we’ve seen in episode 1.2, one of the first things the Red Hat Advanced Development labs did was to get GTK 1.0 out of the door in advance of the GNOME 1.0 release, in order to ensure that the GNOME platform could be consumed both by desktop components and by applications alike. While that happened, GTK started its 1.2 development cycle.

The 1.2 development effort went into stabilisation, performance, and the occasional new widget. GLib was split off from GTK’s repository at the start of the development cycle, as it was useful for other, non-GUI C projects.

As a result of “Project Bob”, Owen Taylor took the code initially written by Carsten Haitzler, and wrote a theming engine for GTK. GTK 1.0 gave you API inside GDK to draw the components of any widget: lines, polygons, ovals, text, and shadows; the GDK API would then call into the Xlib one, and send commands over the wire to the X server. In GTK 1.2, instead of using the API inside GDK, you’d go through a separate layer inside GTK; you had access to the same primitives, augmented by a state tracker for things like colors, background pixmaps, and fonts. That state was defined via an ancillary configuration file, called a “resource” file, with its own custom syntax. You could define the different colors for each different state for each widget class, and GTK would store that information inside the style state tracker, ready to be used when rendering.

Additionally, GTK allowed loading “engines” at run time: small pieces of compiled C code that would be injected into each GTK application, and that replaced the default drawing implementation provided by GTK. Engines would have access to the style information, and, critically, as we’re going to see in a moment, to the windowing system surface that the widget would be drawn into.

It’s important to note that theme engines in GTK 1.2 could only influence colors and fonts; additionally, widgets would be built in isolation one from the other, from disjoint primitives. As a theme engine author you had no idea if the text you’re drawing is going to end up in a button, or inside a label; or if the background you’re rendering is going to be a menu or a top level window. Which is why theme engine authors tried to be sneaky about this; due to how GTK and GDK were split, the GDK windowing system surface needed to have an opaque back pointer to the GTK widget that owned it. If you knew this detail, you could obtain the GTK widget currently being drawn, and determine not only the type of the widget, but also its current position within the scene graph. Of course, this meant that theme engines, typically developed in isolation, would need to be cognisant of the custom widgets inside applications, and that any bug inside an engine would either break new applications, which had no responsibility towards maintaining an internal state for theme engines to poke around with, or simply crash everything that loaded it. As we’re going to see in the future, this approach to theming—at the same time limited in what it could achieve out of the box, and terrifyingly brittle as soon as it was fully exploited—would be a problem that both GTK developers and GNOME theme developers would try to address; and to the surprise of absolutely nobody, the solution made a bunch of people deeply unhappy.

Given that GTK 1.2 had to maintain API and ABI compatibility with GTK 1.0, nothing much changed over the course of its development history. By the time GNOME 1.2 was released, the idea was to potentially release GTK 1.4 as an interim version. A new image loading library, called GdkPixbuf, was written to replace the aging imlib, using the GTK type system. Additionally, as we saw in episode 1.5, Owen Taylor was putting the finishing touches on a text shaping library called Pango, capable of generating complex text layouts from Unicode strings. Finally, Tim Janik was hard at work on a new type system, to be added to GLib, capable of being used for more than just GUI development. All of these changes would require a clean break with the backward compatibility of the 1.x series, which meant that the 1.3 development cycle would result in GTK 2.0, and thus would be part of the GNOME 2.0 development effort—but this is all in the future. Or in the past, if you think fourth dimensionally.

Next week, we’re going to see what happens when people that really don’t want to use C to write their applications need to deal with the fact that the whole toolkit and core platform they are consuming is written in object oriented C, in the side episode about language bindings.


by ebassi at March 14, 2019 03:06 PM

February 24, 2019

Emmanuele Bassi

Episode 2.a: Building GNOME

In the GNOME project community, the people building the code are represented by two separate but equally important groups: the maintainers, who release the code, and the release team, who release GNOME. These are their stories.

Developing a software project can be hard, but building one ought to be simpler, right? After all, it’s kind of a binary state for software projects after any change: either the change broke the build, or it didn’t—and broken builds do not get released, right?

Oh, you sweet summer child. Of course it’s not that simple.

Building software gets even more complicated when it comes to complex, interdependent projects composed of multiple modules, like GNOME; each module has dependencies lower in the stack, and reverse dependencies—that is, modules that depend on the interfaces it provides—higher in the stack.

If you’re working on a component low in the stack, especially in 2002, chances are you only have system dependencies, and those system dependencies are generally shipped by your Linux distribution. Those dependencies do not typically change very often, or at the very least they assume you’re okay with not targeting the latest, bleeding edge version. If push comes to shove, you can always write a bunch of fallback code that gets only ever tested by people running on old operating systems—if they decide on a whim to try and compile the latest and greatest application, instead of just waiting until their whole platform gets an upgrade.

Moving upwards in the GNOME stack, you start having multiple dependencies, but generally speaking those dependencies move at the same speed as your project, so you can keep track of them. Unlike other projects, the GNOME platform never had issues with the multiplication of dependencies—something that will come back to bite the maintainers later in the 2.x development cycle—but even so, with GNOME offering code hosting to like-minded developers, it didn’t use to be hard to keep track of things; if everything happens on the same source code repository infrastructure you can subscribe to the changes for your dependencies, and see what happens almost in real time.

Applications, finally, are another beast entirely; here, dependencies can be many, spanning different system services, different build systems, different code hosting services, and different languages. To minimise the causes of headaches, you can decide to target the versions packaged by the Linux distribution you’re using, unless you’re in the middle of a major API version shift in the platform—like, say, the one that happened between GNOME 1 and 2. No Linux distribution packager in their right mind would ship releases of highly unstable core platform libraries, especially when the development cadence of those libraries outpaces the cadence of the distribution packaging and update process. For those cases, using snapshots of the source under revision control is the only option left for you as an upstream maintainer.

So, at this point, your options are limited. You want to install the latest and greatest version of your dependencies—unstable as they might be—on your local system, but you don’t want to mess up the rest of the system in case the changes introduce a bug. In other words: if you’re running GNOME as your desktop, and you upgrade a dependency for your application, you might end up breaking your own desktop, and then spend time undoing the unholy mess you made, instead of hacking on your own project.

This is usually the point where programmers start writing scripts to modify the environment, and build dependencies and applications into their own separate prefix, using small utilities that describe where libraries put their header files and shared objects in order to construct the necessary compiler and linker arguments. GNOME libraries standardised to one of these tools, called pkg-config, before the 2.0 release, replacing the per-project tools like glib-config or gtk-config that were common during the GNOME 1.x era.
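The way pkg-config works is simple: each library installs a small metadata file describing where its headers and shared objects ended up, and the tool turns that description into compiler and linker flags. A hypothetical mylib.pc (the library name, prefix, and flags are made up for illustration) looks like this:

```
# mylib.pc — illustrative pkg-config metadata, installed under $libdir/pkgconfig
prefix=/opt/gnome
libdir=${prefix}/lib
includedir=${prefix}/include

Name: mylib
Description: An example library
Version: 1.0.0
Requires: glib-2.0
Cflags: -I${includedir}/mylib
Libs: -L${libdir} -lmylib
```

Building against it then becomes a matter of `cc app.c $(pkg-config --cflags --libs mylib) -o app`, with the PKG_CONFIG_PATH environment variable pointing at your private prefix when you build into a separate tree.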

Little by little, the scripts each developer used started to get shared, and improved; for instance, instead of hard coding the list of things to build, and the order in which they should be built, you may want to be able to describe a whole project in terms of a list of modules to be built—each module pointing to its source code repository, its configuration and build time options, and its dependencies. Another useful feature is the ability to set up an environment capable of building and running applications against the components you just built.

The most successful script for building and running GNOME components, jhbuild, emerged in 2002. Written by James Henstridge, the maintainer of the GTK bindings for Python, jhbuild took an XML description of a set of modules, built the dependency tree for each component, and went through the set in the right order, building everything it could, assuming it knew how to handle the module’s build system—which, at the time, was mostly a choice between Autotools and Autotools. Additionally, it could spawn a shell and let you run what you built, or compile additional code as if you installed every component in a system location. If you wanted to experience GNOME at its most bleeding edge, you could build a whole module set into a system prefix like /opt, and point your session manager to that location. Running a whole desktop environment out of a CVS snapshot: what could possibly go wrong, right? Well, at least you had the option of going back to the safe harbours of your Linux distribution’s packaged version of GNOME.
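A jhbuild module set is just that XML description; a minimal sketch (the repository URL and module names are illustrative, and the real schema has grown many more attributes over the years) might look like:

```xml
<?xml version="1.0"?>
<moduleset>
  <!-- where the source code lives -->
  <repository type="git" name="gnome.org"
              href="https://gitlab.gnome.org/GNOME/"/>

  <autotools id="glib">
    <branch repo="gnome.org" module="glib"/>
  </autotools>

  <autotools id="gtk+">
    <branch repo="gnome.org" module="gtk"/>
    <!-- jhbuild uses these to compute the build order -->
    <dependencies>
      <dep package="glib"/>
    </dependencies>
  </autotools>
</moduleset>
```

Running `jhbuild build gtk+` against a set like this would walk the dependency tree and build glib first; `jhbuild shell` would then drop you into an environment where the freshly built components get picked up.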

Little by little, over the years, jhbuild acquired new features, like the ability to build projects that were not using Autotools. Multiple module sets appeared over the years, one for each branch of GNOME, plus one for tracking the latest and greatest, as well as additional module sets for building applications, hosted both on the GNOME infrastructure and outside of it. Jhbuild was even used on non-Linux platforms, like macOS, to build the core GNOME platform stack, and let application developers port their work there. Additionally, other projects, like X11 and the stack that we’ll see in a future episode, would publish their own module sets, as many of the developers in GNOME moved through the stack and brought their tools with them.

With jhbuild consuming sets of modules in order to build the GNOME stack, the question becomes: who maintained those sets? Ideally, the release team would be responsible for keeping the modules up to date whenever a new dependency was added, or an old dependency removed. As the release team was responsible for deciding which modules belonged in the GNOME release, they would be the ones best positioned to update the jhbuild sets. There was a small snag in this plan, though: the release team already had its own tool for building GNOME from release archives produced and published by the module maintainers, in the correct order, to verify that the whole GNOME release would build and produce something that could be packaged by Linux (and non-Linux) distributors.

The release team’s tool was called GARNOME, and was based on the GAR architecture designed by Nick Moffitt, which itself was largely based on the BSD ports system. GARNOME was developed as a way for the release team to build and test alpha releases of GNOME during the 2.0 development cycle. The main difference between jhbuild and GARNOME was the latter’s focus on release archives, compared to the former’s focus on source checkouts. The main goal of GARNOME was really to replicate the process of distributors taking various releases and packaging them up in their preferred format. While editing jhbuild’s module sets was a simple matter of changing some XML, the GARNOME recipes were a fairly complicated set of Makefiles, with magic variables and magic include directives that would lead to building and installing each module, using Make rules to determine the dependencies. All of this meant that there was not only no overlap between who used jhbuild and who used GARNOME, but also no overlap between who contributed to which project.

Both jhbuild and GARNOME assumed you had a working system and development toolchain for all the programming languages needed to build GNOME components; they also relied on a whole host of system dependencies, especially when it came to talking to system services and hardware devices. While this was relatively less important for GARNOME, whose role was simply to build the whole of GNOME, jhbuild started to suffer from these limitations as soon as GNOME projects began interacting much more with the underlying services offered by the operating system.

It’s important to note that none of this stuff was automated; it all relied on human intervention for testing that things would not blow up in interesting ways. We were far, far away from any concept of a continuous integration pipeline. Individual developers had to hunt down breakage in library releases that would have repercussions down the line when building other libraries, system components, or applications. The net result was that building GNOME was only possible if everything was built out of release archives; anything else was deeply unstable, and proved ever harder to handle for seasoned developers and new contributors alike as more complexity was piled onto the project.

GARNOME was pretty successful, and ended up being used for a majority of the GNOME 2 releases, until it was finally retired in favour of jhbuild itself, using a special module set that pointed to release archives instead of source code repositories. The module set was maintained by the release team, and published for every GNOME release, to let other developers and packagers reproduce and validate the process.

Jhbuild is still used to this day, mostly for the development of system components like GTK, or the GNOME Shell; application building has largely shifted towards containerised systems, like Flatpak, which have the advantage of being easily automated in a CI environment. These systems are also much easier to use from a newcomer’s perspective, and are noticeably more reliable when it comes to the stability of the underlying middleware.

The release team switched away from jhbuild for validating and publishing GNOME releases in 2018, long into the GNOME 3 release cycle, using a new tool called BuildStream, which not only builds the GNOME components but also builds the lower layers of the stack, including the compiler toolchain, to ensure a level of build reproducibility that jhbuild and GARNOME never had.


by ebassi at February 24, 2019 07:18 PM

Episode 2.2: Release Day

For all intents and purposes, the 2.0 release process of GNOME was a reboot of the project; as such, it was a highly introspective event that just so happened to result in a public release of a software platform used by a large number of people. The real result of this process was not in the bits and bytes that were compiled, or interpreted, into desktop components, panel applets, or applications; it was, instead, a set of design tenets, a project ethos, and a powerful marketing brand that exist to this day.

The GNOME community of developers, documenters, translators, and designers, was faced with the result of Sun’s user testing, and with the feedback on documentation, accessibility, and design, and had two choices: double down on what everyone was doing, and maintain the existing contributor and user bases; or adapt, take a gamble, and change—possibly even lose contributors, in the hope of gaining more users, and more contributors, down the road.

The community opted for the latter gamble.

The decision was not without internal strife.

This kind of decision is fairly common in all projects that have reached a certain critical mass, especially if they don’t have a central figure keeping everything together, and being the ultimate arbiter of taste and direction. Miguel de Icaza was already really busy with Ximian, and even if the Foundation created the release team in order to get a “steering committee” to make informed, executive decisions in case of conflicts, or decide who gets to release what and when, in order to minimise disruption over the chain of dependencies, this team was hardly an entity capable of deciding the direction of the project.

If effectively nobody is in charge, the direction of the project becomes an emergent condition. People have a more or less vague sense of direction, and a more or less defined end goal, so they will move towards it. Maybe not all at the same time, and maybe not all on the same path; but the sum of all vectors is not zero. Or, at least, it’s not zero all the time.

Of course, there are people that put more effort in order to balance the equation, and those are the ones that we tend to recognise as “the leaders”, or, in modern tech vernacular, “the rock stars”.

Seth Nickell and Calum Benson clearly fit that description. Both worked on usability and design, and both worked on user testing—with Calum specifically working on the professional workstation user given his position at Sun. Seth was the GNOME Usability project lead, and alongside the rest of the usability team, co-authored the GNOME Human Interface Guidelines, or HIG. The HIG was both a statement of intent on the design direction of GNOME, as well as a checklist for developers to go through and ensure that all the GUI applications would fit into the desktop environment, by looking and behaving consistently. At the very basic core of the HIG sat a few principles:

  1. Design your application to let people achieve their goals
  2. Make your application accessible to everyone
  3. Keep the design simple, pretty, and consistent
  4. Keep the user in control, informed on what happens, and forgive them for any mistake

These four tenets tried to move the needle of the design for GNOME applications from the core audience of standard nerds to the world at large. In order to design your application, you need to understand the audience that you wish to target, and how to ensure you won’t get in their way; this also means you should never limit your audience to people that are as able-bodied as you are, or coming from the same country or socio-economic background; your work should be simple and reliable, in the sense that it doesn’t do two similar things in two different ways; and users should be treated with empathy, and never with contempt. Even if you didn’t have access to the rest of the document, which detailed how to deal with whitespace, or alignment, or the right widget for the right action, you could already start working on making an application capable of fitting in with the rest of GNOME.

Consistency and forgiveness in user interfaces also reflected changes in how those interfaces should be configured—or if they should be configured at all.

In 2002 Havoc Pennington wrote what is probably the most influential essay on how free and open source software user interfaces ought to be designed, called “Free software UI”. The essay was the response to the evergreen question: “can the free and open source software methodology lead to the creation of a good user interface?”, posed by Matthew Paul Thomas, a designer and volunteer contributor at Mozilla.

Free and open source software design and usability suffer from various ailments, most notably:

  • there are too many developers and not enough designers
  • designers can’t really submit patches

In an attempt at fixing these two issues the GNOME project worked on establishing a design and usability team, with the help of companies to jump start the effort; the presence of respected designers in leadership positions also helped create and foster a culture where project maintainers would ask for design review and testing. We’re still a bit far from asking for design input instead of a design review, which usually comes with pushback now that the code is in place. Small steps, I guess.

The important part of the essay, though, is on the cost of preferences—namely that there’s no such thing as “just adding an option”.

Adding an option, on a technical level, requires adding code to handle all the potential states of the option; it requires handling failure cases; a user interface for setting and retrieving the option; it requires testing, and QA, for all the states. Each option can interact with other options, which means a combinatorial explosion of potential states, each with its own failure mode. Options are optimal for a certain class of users, because they provide the illusion of control; they are also optimal for a certain class of developers, because they tickle the instinct of making general solutions to solve classes of problems, instead of fixing the actual problem.

In the context of software released “as is”, without even the implied warranty of being fit for purpose, like most free and open source software is, it removes stress because it allows the maintainer to abdicate responsibility. It’s not my fault that you frobnicated the foobariser and all your family photos have become encoded versions of the GNU C library; you should have been holding a lit black candle in your left hand and a curved blade knife in your right hand if you didn’t want that to happen.

More than that, though: what you definitely don’t want is having preferences to “fix” your application. If you have a bug, don’t add an option to work around it. If somebody is relying on a bug for their workflow, then: too bad. Adding a preference to work around a bug introduces another bug because now you encoded a failure state directly into the behaviour of your code, and you cannot ever change it.

Finally, settings should never leave you in an error state. You should never have a user break their system just because they put invalid data in a text field; or toggled the wrong checkbox; or clicked the wrong button. Recovering is good, but never letting the user put bad values into the system in the first place is the best approach, because it is more resilient.

From a process standpoint, the development cycle of GNOME 1, and the release process of GNOME 2.0, led to a complete overhaul of how the project should be shepherded into releasing new versions. Releasing GNOME 2.0 led to many compromises: features were not complete, known bugs ended up in the release notes, and some of the underlying APIs provided by the development platform were not really tested enough before freezing them for all time. It is hard to decide when to shout “pencils down” when everyone is doing their own thing in their own corner of the world. It’s even harder when you have a feedback loop between a development platform that provides API for the rest of the platform, and needs validation for the changes while they can still be made; and applications that need the development platform to sit still for five minutes so that they can be ported to the new goodness.

GNOME was the first major free software project to switch away from a feature-based development cycle and towards a time-based one, thanks to the efforts of Jeff Waugh on behalf of the release team; the whole 2.0 development cycle was schedule driven, with constant reminders of the various milestones and freeze dates. Before the final 2.0 release, a full plan for the development cycle was proposed to the community of maintainers; the short version of the plan was:

The key elements of the long-term plan: a stable branch that is extremely locked-down, an unstable branch that is always compilable and dogfood-quality, and time-based releases at 6 month intervals.

Given that the 2.0 release happened at the end of June, that would put the 2.2 release in December, which would have been problematic. Instead, it was decided to have a slightly longer development cycle, to catch all the stragglers that couldn’t make the cut for 2.0, and release 2.2 in February, followed by the 2.4 release in September. This period of adjustment led to the now familiar release cadence of a March and a September release every year. Time based releases and freezing the features, API, translatable strings—and code, around the point-zero release date—ensured that only features working to the satisfaction of the maintainers, designers, and translators would end up in the hands of the users. Or, at least, that was the general plan.

It’s important to note that all of these changes in the way GNOME as a community saw itself, and the project they were contributing to, probably constitute the biggest event in the history of the project itself—and, possibly, in the history of free and open source software. The decision to focus on usability, accessibility, and design shaped the way people contributing to and using GNOME think about GNOME; it even changed the perception of GNOME for people not using it, for good or ill. GNOME’s brand was solidified into one of caring about design principles, and that perception continues to this day. If something user visible changes in GNOME it is assumed that design, usability, or accessibility had something to do with it—even when it really didn’t; it is assumed that designers sat together, did user studies, finalised a design, and then, in one of the less charitable versions, lobbed it over the wall to module maintainers for its implementation with no regard for objections.

That version of reality is so far removed from ours it might as well have superheroes flying around battling monsters from other dimensions; and, yet, the GNOME brand is so established that people will take it as an article of faith.

For all that, though, GNOME 2 did have usability studies conducted on it prior to release; the Human Interface Guidelines were written in response to those studies, and to established knowledge in the interaction design literature and community; the changes in the system settings and the menu structures were done after witnessing users struggle with the equivalent bits of GNOME 1. The unnecessary settings that littered the desktop, whether as a way to escape making decisions, or as a way to provide some sort of intellectual challenge to the developers, were removed because, in the end, settings are not the goal of a desktop environment that’s just supposed to launch applications and provide an environment for those applications to exist.

This was peak GNOME brand.

On June 27, 2002, GNOME 2.0 was released. The GNOME community worked days and nights for more than a year after releasing 1.4, and for more than two years after releasing 1.2. New components were created, projects were ported, documentation was written, screenshots were taken, text was translated.

Finally, the larger community of Linux users and enthusiasts would be able to witness the result of all this amazing work, and their reaction was: thanks, I hate it.

Well, no, that’s not really true.

Yes, a lot of people hated it—and they made damn well sure you knew that they hated it. Mailing lists, bug trackers, articles and comments on news websites were full of people angrily demanding their five clocks back; or their heavily nested menu structure; or their millisecond-precision animation settings; or their “Miscellaneous” group of settings in a “Miscellaneous” tab of the control centre.

A lot of people simply sat in front of GNOME 2, and managed to get their work done, before turning off the machine and going home.

A few people, though, saw something new; the potential of the changes, and of the focus of the project. They saw beyond the removed configuration options; the missing features left for a future cycle; the bugs caused by the massive changes in the underlying development platform.

Those few people were the next generation of contributors to GNOME; new developers, sure, but also new designers; new documentation writers; new translators; new artists; new maintainers. They were inspired by the newly refocused direction of the project, by its ethos, to the point of deciding to contribute to it. GNOME needed to be ready for them.

Next week the magic and shine of the release starts wearing off, and we’re back to flames and long discussions on features, media stacks, inclusion of applications in the release, and what happens when Novell decides to go on a shopping spree, in the episode titled “Honeymoon phase”.

by ebassi at February 24, 2019 07:18 PM

Episode 2.1: On Brand

In the beginning of the year 2000, with the 1.2 release nearly out of the door, the GNOME project was beginning to lay down the groundwork for the design and development of the 2.0 release cycle. The platform had approached a good level of stability for the basic, day to day use; whatever rough edges were present, they could be polished in minor releases, while the core components of the platform were updated and transitioned incrementally towards the next major version. Before mid-2000, we already started seeing a 1.3 branch for GTK, and new libraries like Pango and GdkPixbuf were in development to provide advanced text rendering and replace the old and limited imlib image loading library, respectively. Of course, once you get the ball rolling it’s easy to start piling features, refactorings, and world breaking fixes on top of each other.

If X11 finally got support for loading True Type fonts, then we would need a better API to render text. If we got a better API to render text, we would need a better API to compose Unicode glyphs coming from different writing systems, outside of the plain Latin set.

If we had a better image loading library, we could use better icons. We could even use SVG icons all over the platform, instead of using the aging XPM format.

By May 2000, what was supposed to be GTK 1.4 turned into the development cycle for the first major API break since the release of GTK 1.0, which happened just the year before. Of course, the 1.2 cycle would continue, and would now cover not just GNOME 1.2, but the 1.4 release as well.

It wasn’t GTK itself that led the charge, though. In a way, breaking GTK’s API was the side effect of a much deeper change.

As we’ve seen in the first chapter’s side episode on GTK, GLib was a simple C utility library for the benefit of GTK itself; the type system for writing widgets and other toolkit-related data using object orientation was part of GTK, and required depending on GTK for writing any type of object oriented C. It turns out that various C projects in the GNOME ecosystem—projects like GdkPixbuf and Pango—liked the idea of having a type system written in C and, more importantly, the ability to easily build language bindings for any API based on that type system. Those projects didn’t really need, or want, a dependency on a GUI toolkit, with its own dependency on image loading libraries, or windowing systems. Moving the type system to a lower level library, and then reusing it in GTK would have neatly solved the problem, and made it possible to create a semi-standard C library shared across various projects.

Of course, since no solution survives contact with software developers, people depending on GLib for basic data structures, like a hash table, didn’t want to suddenly have a type system as well. For that reason, GLib acquired a second shared library containing the GTK type system, upgraded and on steroids; the “signal” mechanism to invoke a named list of functions, used by GTK to deliver windowing system events; and a base object class, called GObject, to replace GTK’s own GtkObject. On top of that, GObject provided properties associated to object instances; interface types; dynamic types for loadable modules; type wrappers for plain old data types; generic function wrappers for language bindings; and a richer set of memory management semantics.

Another important set of changes finding their way down to GLib was the portability layer needed to make sure that GTK applications could run on both Unix-like, and non-Unix-like operating systems. Namely, Windows, for which GTK was getting a backend, alongside additional backends for BeOS, macOS, and direct framebuffer rendering. The Windows backend for GTK was introduced to make GIMP build and work on that platform, and increase its visibility, and possibly the amount of contributions—something that free and open source software communities always strive towards, even if it does increase the amount of feature requests, bug reports, and general work for the maintainers. It’s important to note that GTK wasn’t suddenly becoming a cross-platform toolkit, meant to be used to write applications targeting multiple platforms; the main goal was always to allow extant Linux applications to be easily ported to other operating systems first, and write native, non-Linux applications as a distant second.

With a common, low level API provided by GLib and GObject, we start to see the beginning of a more comprehensive software development platform for GNOME; if all the parts of the platform, even the ones that are not directly tied to the windowing system, follow the same semantics when it comes to memory management, type inheritance, properties, signals, and coding practices, then writing documentation becomes easier; developing bindings for various programming languages becomes a much more tractable problem; creating new, low level libraries for, say, sending and receiving data from a web server, and leveraging the existing community becomes possible. With the release of GLib and GTK 2.0, the GNOME software development platform moves from a collection of libraries with a bunch of utilities built on top of GTK to a comprehensive set of functionality centered on GLib and GObject, with GTK as the GUI toolkit, and a collection of libraries fulfilling specific tasks to complement it.

Of course, that still means having libraries like libgnome and libgnomeui lying around, as a way to create GNOME applications that integrate with the GNOME ecosystem, instead of just GTK applications. GTK gaining more GNOME-related features, or more GNOME-related integration points, was a source of contention inside the community. GTK was considered by some GNOME contributors to be a “second party” project; it had its own release schedule, and its own release manager; it incorporated feedback from GNOME application and library developers, but it was also trying to serve non-GNOME communities, by providing a useful generic GUI toolkit out of the box. On the other side of the spectrum, some GNOME developers wanted to keep the core as lean as possible, and accrue functionality inside the GNOME libraries, like libgnome and libgnomeui, even if those libraries were messier and didn’t receive as much scrutiny as GLib and GTK.

Most of 2001 was spent developing GObject and Pango, with the latter proving to be one of the lynchpins of the whole platform release. As we’ve seen in episode 1.5, Pango provided support for complex, non-Latin text, a basic requirement for creating GUI applications that could be used outside of the US and Europe; in order to get there, though, various pieces of the puzzle had to come together first.

The first, big piece was adding the ability for applications to render TrueType fonts on X11, using fontconfig to configure, enumerate, and load the fonts installed in a system, and the Xft library for rendering glyphs. Before Xft, X applications only had access to core bitmap fonts, which may look impressive if all you have is a thin terminal in 1987, but compared to the font rendering available on Windows and macOS they were already painfully out of date by about 10 years. During the GNOME 1.x cycle some components with custom rendering code already started using Xft directly, instead of going through GTK’s text rendering wrappers around X11’s core API; this led to the interesting result of, for instance, the text rendering in Nautilus pre-1.0 looking miles better than every other GTK 1 application, including the rest of the desktop components.

The other big piece of the puzzle was Unicode. Up until GTK 2.0, all text inside GTK applications was pretty much passed as it was to the underlying windowing system text rendering primitives; that mostly meant either ASCII, or one of the then common ISO encodings; this not only imposed restrictions on what kind of text could be rendered, but it also introduced additional hilarity when it came to running applications localized by somebody in Europe on a computer in the US, or in Russia, or in Japan.

Taking advantage of Unicode to present text meant adding various pieces of API to GLib, mostly around the Unicode tables, text measurement, and iteration. More importantly, though, it meant changing all text used inside GTK and GNOME applications to UTF-8. It meant that file system paths, translations, and data stored on non-Unicode systems had to be converted—if the original encoding was available—or entirely rewritten, if you didn’t want your GUI to be a tragedy of unintelligible text.

If the written word was in the process of being transformed to another format, pictures were not in a better position. The state of the art for raster images in a GTK application was still XPM, a bitmap format that was optimised for storage inside C sources, and compiled with the rest of the application. Sadly, the results were less than stellar, compared to more common formats like JPEG and PNG; additionally, the main library used to read image assets, imlib, was very much outdated, and mostly geared towards loading image data into X11 pixmaps. Imlib provided entry points for integrating with the GTK drawing API, but it was a separate project, part of the Enlightenment window manager—which was already moving to its replacement, imlib2. The work to replace imlib with a new GNOME-driven library, called GdkPixbuf, began in 1999, and was merged into the GTK repository as a way to provide API to load images in various formats like PNG, GIF, JPEG, and Windows BMP and ICO files directly into GTK widgets. As an additional feature, GdkPixbuf had an extensible plugin system which allowed writing out of tree modules for loading image formats without necessarily adding new dependencies to GTK. Another feature of GdkPixbuf was a transformation and compositing API, something that imlib did not provide.

All in all, it took almost 2 years of development for GTK 1.3 to turn into GTK 2.0, with the first stable releases of GLib, Pango, and GTK cut in March 2002, after a few months of feature freeze needed to let GNOME and application developers catch up with the changes, and port their projects to the new API. The process of porting went without a hitch, with everyone agreeing that the new functionality was incredibly easy to use, and much better than what came before. Developers were so eager to move to the new libraries that they started doing so during the development cycle, and constantly kept up with the changes so that every source code change was just a few lines of code.

Yes, of course I’m lying.

Porting proceeded in fits and starts, and it was either done at the very beginning, when the differences between major versions of the libraries were minimal and gave a false impression of what the job entailed; or it happened at the very end of the cycle, with constant renegotiation of every single change in the platform, and a constant barrage of questions about why platform developers were trying to impose misery on the poor application developers, why won’t you think of the application developers…

Essentially, like every single major version change in every project, ever.

On top of the usual woes of porting, we have to remember that GNOME 1.4 started introducing technology previews for GNOME 2.0, like GConf, the new configuration storage. While Havoc Pennington had written GConf between 2000 and 2001, and concentrated mostly on the development of GTK 2 after that, it was now time to start using GConf as part of the desktop itself, as a replacement for the configuration storage used by components like the Panel, and by applications using libgnome—which is when things got heated.

Ximian developer Dieter Maurer thought that some of the trade-offs of the GConf implementation, mostly related to its use of CORBA’s type system, were enough of a road block that they decided to write their own configuration client API, called bonobo-config, which re-implemented the GConf design with a more pervasive use of CORBA types and interfaces, thanks to the libbonobo work that was pursued as part of the GNOME componentisation effort. Bonobo-config had multiple backends, one of which would wrap GConf, as, you might have guessed already, a way to route around a strongly opinionated maintainer working for a different company.

The last bit is probably the first real instance of a massive flame war caused by strife between commercial entities vying for the direction of the project.

The flames started off as a disagreement in design and technical direction between the GConf and bonobo-config maintainers, confined to the GConf development mailing list, but soon spiralled out of control once libgnome was changed to use bonobo-config, engulfing multiple mailing lists in fires that burned brighter than a thousand suns. Accusations of attempting to destroy the GNOME project flew through the air, followed by rehashing of every single small roadblock ever put in the way of each interested party. The newly appointed release manager for GNOME 2.0 and committer of the dependency on bonobo-config inside libgnome, Martin Baulig, dramatically quit his role and left the community (only to come back a bit later), leaving Anders Carlsson to revert the controversial change—followed by a new round of accusations.

Totally normal community interactions in a free software project.

In the end, concessions were made, hatred simmered, and bonobo-config was left behind, to be used only by applications that decided to opt into its usage, and even then it was mostly used as a wrapper around GConf.

“Fonts, Unicode, and icons” seems like a small set of user-visible features for a whole new major version of a toolkit and desktop environment. Of course, that byline kind of ignores all the work laid down behind the scenes, but end users don’t care about that, right? We’ll see what happened when a major refactoring paid down the technical debt accrued over the first 5 years of GNOME, and cleared the rubble to build something that didn’t make designers, documentation writers, and QA testers cry tears of blood, in the next episode, “Release Day”, which will be out in two weeks’ time, as next week I’m going to be at the GTK hackfest, trying to pay down the technical debt accrued over the 3.0 development cycle of the toolkit. I’m sure it’ll be fine, this time. It’s fine. We’re going to be fine. It’s fine. We’re fine.


by ebassi at February 24, 2019 07:18 PM

Episode 2.0: Retrospective

Hello, everyone, and welcome back to the History of GNOME! If you’re listening to this in real time, I hope you had a nice break over the end of 2018, and are now ready to begin the second chapter of our main narrative. I had a lovely time over the holidays, and I’m now back working on both the GNOME platform and on the History of GNOME, which means reading lots of code by day, and reading lots of rants and mailing list archives in the evening—plus, the occasional hour or so spent playing videogames in order to decompress.

Before we plunge right back into the history of GNOME 2, though, I wanted to take a bit of your time to recap the first chapter, and to prepare the stage for the second one. This is a slightly opinionated episode… Well, slightly more opinionated than usual, at least… As I’m trying to establish the theme of the first chapter as a starting point for the main narrative for the future. Yes, this is an historical perspective on the GNOME project, but history doesn’t serve any practical purposes if we don’t glean trends, conflicts, and resolutions of past issues that can apply to the current period. If we don’t learn anything from history then we might as well not have any history at all.

The first chapter of this history of the GNOME project covered roughly the four years that go from Miguel de Icaza’s announcement in August 1997 to the release of GNOME 1.4 in April 2001—and we did so in about 2 hours, if you count the three side forays on GTK, language bindings, and applications that closed the first block of episodes.

In comparison, the second chapter of the main narrative will cover the 9 years of the 2.x release cycles that went from 2001 to 2010. The second major cycle of GNOME is not just more than twice as long as the first, it’s also more complicated, as a result of the increased complexity for any project that deals with creating a modern user experience—one not just for the desktop but also for the mobile platforms that were suddenly becoming more important in the consumer products industry, as we’re going to see during this chapter. The current rough episode count is, at the moment I’m reading this, about 12, but as I’m striving to keep the length of each episode in the 15 to 20 minutes range, I’m not entirely sure how many actual episodes will make up the second chapter.

Looking back at the beginning of the project we can say with relative certainty that GNOME started as a desktop environment in a time when desktops were simpler than they are now; at the time of its inception, the bar to clear was represented by Windows 95, and while it was ostensibly a fairly high bar to clear for any volunteer-driven effort, by the time GNOME 1.4 was released to the general public of Linux enthusiasts and Unix professionals, it was increasingly clear that a new point of comparison was needed, mostly courtesy of Apple’s OS X and Microsoft’s Windows XP. Similarly, the hardware platforms started off as simpler iterations over the PC compatible space, but vendors quickly moved the complexity further and further into the software stack—like anybody with a WinModem in the late ‘90s could tell you. Since Linux was barely a blip on the radar of hardware vendors, new hardware targeted Windows first and foremost, and support for Linux appeared only whenever some enterprising volunteer would manage to reverse engineer the chipset du jour, if it appeared at all.

As we’ve seen in the first episode of the first chapter, the precursors to what would become a “desktop environment” in the modern sense of the term were made of smaller components, bolted on top of each other according to the needs, and whims, of each user. A collection of LEGO bricks, if you will, if only the bricks were made by a bunch of different vendors and you had to glue them together to build something. KDE was the very first environment for Linux that tried to mandate a more strict integration between its parts, by developing and releasing all of its building blocks as comprehensive archives. GNOME initially followed the same approach, with libraries, utilities, and core components sharing the same CVS repositories, and released inside shared distribution archives. Then, something changed inside GNOME; and figuring out what changed is central to understanding the various tensions inside a growing free and open source software project.

If desktop environments are the result of a push towards centralisation, and comprehensive, integrated functionality exposed to the people using, but not necessarily contributing to them, splitting off modules into their own repositories, using their own release schedules, their own idiosyncrasies in build systems, options, coding styles, and contribution policies, ought to run counter to that centralising effort. The decentralisation creates strife between projects, and between maintainers; it creates modularisation and API barriers; it generates dependencies, which in turn engender the possibility of conflict, and barriers to not just contribution, but to distribution and upgrade.

Why, then, does this happen?

The mainstream analytical framework of free and open source software tells us that communities consciously end up splitting off components, instead of centralising functionality, once it reaches critical mass; community members prefer delegation and composition of components with well-defined edges and interactions between them, instead of piling functionality and API on top of a hierarchy of poorly defined abstractions. They like small components because maintainers value the design philosophy that allows them to provide choice to people using their software, and gives discerning users the ability to compose an operating system tailored to their needs, via loosely connected interfaces.

Of course, all I said above is a complete and utter fabrication.

You have no idea of the number of takes I needed to get through all of that without laughing.

The actual answer would be Conway’s Law:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations

We have multiple contributors, typically highly opinionated, typically young or, at least, without lots of real world experience. Worst case, the only experience available comes from years of computer science lessons, where object orientation reigns supreme, and it’s still considered a good idea despite all the evidence to the contrary.

These multiple contributors end up carving their own spaces, because the required functionality is large, and the number of people working on it is never enough for 100% coverage. New functionality is added; older modules are dropped because “broken”, or “badly designed”; new dependencies are created to provide shared functionality, or introduced as abstraction layers to paper over multiple modules offering slightly different takes on how some functionality ought to be implemented, or what kind of dependencies they require, or what kind of language or licensing terms ought to be used.

Complex free software projects with multiple contributors working on multiple components favour smaller modules, because they make it easier for each maintainer to keep stuff in their head without going stark raving mad. Smaller modules make it easier to insulate a project against strongly opinionated maintainers, and let other, strongly opinionated maintainers route around the things they don’t like. Self-contained modules make niche problems tractable, or at least they contain the damage.

Of course, if we declared this upfront, it would make everybody’s life easier as it would communicate a clear set of expectations; it would, on the other hand, have the side effect of revealing the wardrobe malfunction of the emperor, which means we have to dress up this unintended side effect of Conway’s Law as “being about choice”, or “mechanism, not policy”, or “network object model”.

The first chapter in the history of the GNOME project can be at least partially interpreted within this framework; the idea that you can take a complex problem space and partition it until each issue becomes tractable individually, and then build up the solution out of the various bits and pieces you managed to solve, letting it combine and recombine as best as it can to suit the requirements of the moment, platform, or use case. Throw in CORBA as an object model for good measure, and you end up with a big box of components that solve arbitrarily small issues on their own, and that can theoretically scale upwards in complexity. This, of course, ignores the fact that combinatorial explosions of interactions make things very interesting for anybody developing, testing, and using these components—and I use “interesting” in the “oh god oh god we’re all going to die” sense of the word.

More importantly, and on a social level, this framework allows project maintainers to avoid having to make a decision on what should work and what shouldn’t; what is supported and what isn’t; and even what is part of the project and what falls outside of it. If there is some part of the stack that is misbehaving, wrap it up; even better, if there are multiple competing implementations, you can always paper over them with an abstraction layer. As long as the API surface is well defined, functionality is somebody else’s problem; and if something breaks, or mysteriously doesn’t work, then I’m sure the people using it are going to be able to fix it.

Well, it turns out that all the free software geeks capable of working on a desktop environment are already working on one, which by definition means that they are the only ones that can fix the issues they introduced.

Additionally, and this is a very important bit that many users of free and open source software fail to grapple with: volunteer work is not fungible—that is, you cannot tell people doing things on their spare time, and out of the goodness of their hearts, to stop doing what they are doing, and volunteer on something else. People just don’t work that way.

So, if “being about choice” is on the one end of the spectrum, what’s at the other? Maybe a corporate-like structure, with a project driven by the vision of a handful of individuals, and implemented by everyone else who subscribes to that vision—or, at least, that gets paid to implement it.

Of course, the moment somebody decides to propose their vision, or work to implement it, or convince people to follow it, is the moment when they open themselves up to criticism. If you don’t have a foundational framework for your project, nobody can accuse you of doing something wrong; if you do have it, though, then the possibilities fade away, and what’s left is something tangible for people to grapple with—for good or ill.

At the beginning of the GNOME project we had very few individuals, with a vision for the desktop; while it was a vision made of components interoperating to create something flexible and adaptable to various needs, it still adhered to specific design goals, instead of just putting things together from disparate sources, regardless of how well the interaction went. This led to a foundational period, where protocols and interfaces were written to ensure that the components could actually interoperate, which led to a somewhat lacklustre output; out of three 1.x minor releases all we got was a panel, a bunch of clock applets, and a control centre. All the action happened on the lower layers of the stack. GTK became a reasonably usable free software GUI toolkit for Linux and other Unix-like operating systems; the X11 world got a new set of properties and protocols to deal with modern workflows, in the form of the EWMH; applications and desktop modules got shared UI components using CORBA to communicate between them.

On a meta level, the GNOME project gave itself a formal structure, with the formation of a release team and a non-profit foundation that would work as a common place to settle the internal friction between maintainers, and the external contributions from companies and the larger free and open source software world.

Going back to our frame of reference to interpret the development of GNOME as a community of contributors, we can see this as an attempt to rein in the splintering and partition of the various components of the project, and as a push towards its new chapter. This tension between the two efforts—one to create an environment with a singular vision, even if driven by multiple people; and the other, to create a flexible environment that respected the various domains of each individual maintainer, if not each individual user—defined the first major cycle, as it would (spoiler alert) every other major cycle.

Now that the foundational period was over, though, and the challenges provided by commercial platforms like Windows and OS X had been renewed, the effort to make GNOME evolve further was not limited to releasing version 2.0; it also meant establishing a roadmap for the future beyond it.

Next week we’re going to dive right back into the development of GNOME, starting with the interregnum period between 1.4 and 2.0, in which our plucky underdogs had finally become mainstream enough to get on Sun and IBM radars, and had to deal with the fact that GNOME was not just a hobby any more, in the episode titled: “On Brand”.



Episode 1.c: Applications

First of all, I’d like to apologise for the lateness of this episode. As you may know, if you follow me on social media, I’ve started a new job as GTK core developer for the GNOME Foundation—yes, I’m actually working at my dream job, thank you very much. Of course this has changed some of the things around my daily schedule, and since I can only record the podcast when ambient noise around my house is not terrible, something had to give. Again, I apologise, and hopefully it won’t happen again.

Over the course of the first chapter of the main narrative of the history of the GNOME project, we have been focusing on the desktop and core development platform produced by GNOME developers, but we did not really spend much time on the applications—except when they were part of the more “commercial” side of things, like Evolution and Nautilus.

Looking back at Miguel’s announcement, though, we can see “a complete set of user friendly applications” in the list of things that the GNOME project would be focusing on. What good are a software development platform and environment, if you can’t use them to create and run applications?

While GIMP and GNOME share a great many things, it’s hard to make the case for the image manipulation program to be part of GNOME; yes: it’s hosted on GNOME infrastructure, and yes: many developers contributed to both projects. Nevertheless, GIMP remains fairly independent, and while it consumes the GNOME platform, it tends to do so in its own way, and under its own direction.

There’s another issue to be considered, when it comes to “GNOME applications”, especially at the very beginning of the project: GNOME was not, and is not, a monolithic entity. There’s no such thing as “GNOME developers”, unless you mean “people writing code under the GNOME umbrella”. Anyone could come along, write an application, and call it “a GNOME application”, assuming they used a copyleft license, GTK for the user interface, and the few other GNOME platform libraries for integrating with things like settings. At the time, code hosting and issue trackers weren’t really a commodity like nowadays—even SourceForge, which is usually thought to have always been available, would become public in 1999, two years after GNOME started. GNOME providing CVS for hosting your code, infrastructure to upload and mirror release archives, and a bug tracker, was a large value proposition for application developers who were already philosophically and technologically aligned with the project. Additionally, if you wanted to write an application there was a strong chance that you had contributed, or you were at least willing to contribute, to the platform itself, given its relative infancy. As we’ve seen in episode 1.4, having commit access to the source code repository meant also having access to all the GNOME modules; the intent was clear: if you’re writing code good enough for your application that it ought to be shared across the platform, you should drive its inclusion in the platform.

As we’ve seen all the way back in episode 1.1, GNOME started off with a few “core” applications, typically utilities for the common use of a workstation desktop. In the 1.0 release, we had the GNOME user interface around Miguel de Icaza’s Midnight Commander file manager; the Electric Eyes image viewer, courtesy of Carsten Haitzler; a set of small utilities, in the “gnome-utils” grab bag; and three text editors: GXedit, gedit, and gnotepad+. I guess this decision to ship all of them was made in case a GNOME user ended up on a desert island, and once saved by a passing ship after 10 years, they would be able to say: “this is the text editor I use daily, this is the text editor I use in the holidays, and that’s the text editor I will never use”.

Alongside this veritable text editing bonanza, we could also find a small PIM suite, with GnomeCal, a calendar application, and GnomeCard, a contacts application; and a spreadsheet, called Gnumeric.

The calendar application was written by Federico Mena in 1998, on a dare from Miguel, in about ten days, and it attempted to replicate the offerings of commercial Unix operating systems, like Solaris. The contacts application was written by Arturo Espinosa pretty much at the same time. GnomeCal and GnomeCard could read and export the standard vCal and vCard formats, respectively, and that allowed integration with existing software on other platforms, as an attempt to “lure away” users from those platforms and towards GNOME.

Gnumeric was the brainchild of Miguel, and the first real attempt at pushing the software platform forward; the original GNOME canvas implementation, based on the Tk canvas, was modified not only to improve the performance, but also to allow writing custom canvas elements in order to have things like graphs and charts. The design of Gnumeric was mostly borrowed from Excel, but right from the start the idea was to ensure that the end result would surpass Excel and its limitations—which was a somewhat tall order for an application developed by volunteers; it clearly demonstrates the will to not just copy commercial products, but to improve on them, and deliver a better experience to users. Gnumeric, additionally, came with a plugin infrastructure that exposed the whole workbook, sheet, and cells to each plugin.

Both the PIM applications and the spreadsheet application integrated with the object model effort, and provided components to let other applications embed or manipulate their contents and data.

While Gnumeric is still active today, 20 years and three major versions later, both GnomeCal and GnomeCard were subsumed into what would become one of the centerpieces of Ximian: Evolution.

GNOME 1.2 remained pretty much similar to 1.0, from an application perspective. Various text editors were moved out of the release, and went along at their own pace, with gedit being the main survivor; Electric Eyes fell into disrepair, and was replaced by the Eye of GNOME as an image viewer. The newly introduced ggv, a GTK-based GUI layer around ghostscript, was the postscript document viewer. Finally, for application developers, a tool called “Glade” was introduced as a companion to every programmer’s favourite text editor. Glade allowed creating a user interface using drag and drop from a palette of components; once you were done with it, it would generate the C code for you—alongside the needed Autotools gunk to build it, if it couldn’t find any. The generated code was limited to specific files, so as long as you didn’t have the unfortunate idea of hand editing your user interface, you could change it through Glade, generate the code, and then hook your own application logic into it.

Many projects of that era started off with generated code, and if you’re especially lucky, you will never have to deal with it, unless, of course, you’re trying to write something like the history of the GNOME project.

For GNOME 1.4 we only see Nautilus as the big change in the release, in terms of applications. Even with an effort to ensure that applications, as well as libraries, exposed components for other applications to reuse, most of the development effort was spent laying down the groundwork of the desktop itself; applications came and went, but were leaf nodes in the graph of dependencies, and as such required less coordination in their development, and fewer formalities when it came to releasing them to the users.

It would be a long time before somebody actually sat down, and decided what kind of applications ought to be part of the GNOME release.

With this episode, we’ve now reached the end of the first chapter of the history of the GNOME project. The second chapter, as I said a few weeks ago, will be on January 17th, as for the next four weeks I’m going to be busy with the end of the year holidays here in London.

Once we’re back, we’re going to have a little bit of a retrospective on the first chapter of the history of GNOME, before plunging directly into the efforts to release GNOME 2.0, and what those entailed.

So, see you next year for Chapter 2 of the History of GNOME.

