Planet Closed Fist

March 21, 2024

Iain Holmes

Update19

Xcode placeholder trick

A trick I found a few months back but always forget to write down. A lot of things are like that: by the time I have time to write something, I’ve forgotten the things I wanted to write down.

Anyway, Xcode placeholders.

Xcode’s placeholder system can be used when you need to copy lines and change one little detail, as in this extremely contrived example:

array.append("string 1")
array.append("a different string")
array.append("string number 3")
array.append("string 4")

You can type

array.append("<#string#>")

and then copy and paste it 4 times; each time you paste, the <#string#> part will be selected, ready for you to enter the new bit.

March 21, 2024 09:15 PM

February 14, 2024

Tomas Frydrych

For the Love of Trees


I have been very fond of trees since being a wee laddie, and over the years have amassed a lot of photographs of them. So I decided to do something with them: For the Love of Trees, a collection of B&W photographs of trees; it's free to download, distributed under a Creative Commons license.

by tf at February 14, 2024 03:40 PM

February 01, 2024

Iain Holmes

Update1

Gentrification of the Internet

February 01, 2024 01:15 PM

January 29, 2024

Iain Holmes

Update1

Neon Lights Shining

Welcome to Hell World has had a Jason Molina list sitting in my tabs for a few days now; it’s a great read and a great list of tracks by a great musician of whom I am very fond. And as someone who (along with Veins Full of Static) ordained him posthumously, I took a week and listened through the music he made in his different guises: Songs: Ohia, Magnolia Electric Co. and under his own name.

Jason Molina’s music feels hard to pigeonhole into a genre. It’s not folk, but it’s not not folk. It’s not country, but it’s not not country. It’s not indie rock, it’s not singer-songwriter, but again it’s not not those things. But he’s someone who should, in my opinion, have been respected as one of the great American songwriters, alongside Dylan, Stipe, Young, Eitzel, to name a few.

I’ve had a great week listening to them closely, reading the lyrics and singing along, and this is a list of some of my favourites.

Hammer Down

(Magnolia Electric Co. - What Comes After The Blues)

Hammer Down is a song that I can just listen to over and over again on repeat. Not the happiest song, but a song about giving up the fight isn’t exactly out of place in the Molina oeuvre. The meat of the song is just this beautiful imagery

I think the stars are just the neon lights
Shining through the dance floor
Shining through the dance floor
Of Heaven on a Saturday night

I want to get that tattooed on my body somewhere, if, y’know, I wasn’t too forgetful or disorganised to make an appointment for a tattoo. The stars are already there from a past life, but it feels right.

Coxcomb Red

(Songs: Ohia - The Lioness)

Coxcomb Red is one of the earliest Songs: Ohia songs I remember hearing, but I don’t know when that would have been; sometime in the early 2000s, I guess. The chorus lines just stuck in my head instantly

Your hair was coxcomb red
Your eyes were viper black

Love & Work: The Lioness Sessions by Songs: Ohia

The Dark Don’t Hide It

(Magnolia Electric Co. - What Comes After The Blues)

The first time I saw Jason play was at the Dirty Three ATP in 2007, when Magnolia Electric Co. were the opening act for the Dirty Three gig on the first night, and I left with The Dark Don’t Hide It in my head. Walking through the main stage area at lunchtime the next day, I discovered them playing a second, unplanned set and got to listen again to that solo at the end.

I saw Jason standing at the stage watching the following acts, chatting to anyone who approached. My everlasting impression of him was how friendly he was. I didn’t talk to him.

Now death is gonna hold us up in the mirror
And say we’re so much alike we must be brothers
See I had a job to do but people like you
Been doing it for me to one another

Hold On Magnolia

(Songs: Ohia - The Magnolia Electric Company)

The Magnolia Electric Company (album, not the band) was, for me at least, the first time that Jason put the music on a par with the vocals, and finally the songs have more filled-out arrangements that perfectly match the lyrics - and ultimately it led the way for Magnolia Electric Co. (the band, not the album).

Hold On Magnolia has a lovely arrangement of strings and slide guitar that really give emotional weight to Jason pleading “Hold on Magnolia” as he seems to wrestle with death once more.

Hold on Magnolia
I hear that station bell ring
You might be holding the last light I see
Before the dark finally gets a hold of me

Magnolia Electric Co. (Deluxe Edition) by Songs: Ohia

The Black Crow

(Songs: Ohia - The Lioness)

Recorded in Scotland with members of Arab Strap involved, and it shows: a minimalist guitar track that builds slowly over the course of 7 minutes and then breaks loose at the 5 minute mark. I’m a sucker for repetition and crescendos.

I’m getting weaker I’m getting thin
I hate how obvious I have been
I’m getting weaker, weaker, weaker

Love & Work: The Lioness Sessions by Songs: Ohia

How To Be Perfect Men

(Songs: Ohia - Axxess & Ace)

People talk about the self-mythologising of Jason Molina, but on record he always felt honest about what he was dealing with, and this song strikes a chord with me, cos god knows I fuck up.

I will get it wrong
Be mine
Till you’re reminded of something better

Axxess & Ace by Songs: Ohia

Blue Factory Flame / Two Blue Lights / Blue Chicago Moon

(Songs: Ohia - Didn’t It Rain)

2002’s Didn’t It Rain finishes with three songs sharing similar themes of isolation, emptiness and loneliness in a broken city and, because they all have the word Blue in the title, they are referred to, by me and no-one else, as The Blue Trilogy. It opens with the line “When I die put my bones in an empty street to remind me of how it used to be.” This feels like a more noir version of Townes Van Zandt’s wish for his disposal in My Proud Mountains, where he wants to be laid down easy on a lonesome hill to look out at his mountains and the sun. Molina conjures up misty rain-washed streets, soot greys and blacks, as the iron ore ships sail past.

The shortest song of the trilogy, Two Blue Lights, keeps the nighttime noir broken-city vibe, the moon and the bus colouring the world blue, along with the line “When the bells ring twelve times in hell the bells ring twelve times in this town as well”.

Blue Chicago Moon (there’s the moon again…) comes “out of the ruins” of the previous two songs and closes the album with a song which is, dare I say it, upbeat and encouraging - which is why I’ve ended this list with these songs. There’s a lot of darkness to Jason Molina’s music - but it’s never an end; it stops short at that critical juncture, the cliff edge, and pulls back.

if the blues are your hunter
then you will come face to face
with that darkness and desolation
and the endless depression
but you are not helpless
and you are not helpless
try to beat it
try to beat it
and live through space’s loneliness
and live through space’s loneliness
you are not helpless
you are not helpless
i’ll help you to try to beat it

Didn’t It Rain (Deluxe Edition) by Songs: Ohia

For the Reverend Jason Molina (1973 - 2013) RIP

January 29, 2024 01:15 PM

January 03, 2024

Iain Holmes

Update18

On Zipping

In the last thrilling installment on mission creep I talked about how I had been distracted by a nearly finished Bandcamp collection downloader. Well, just to shave the yak further, I thought a nice feature would be to unzip the downloaded files into place.

My use case is downloading the music to my NAS: currently I have to download the zips to my local machine, unzip them manually, and upload them by hand, which is a pain.

So I thought: Apple has Compression.framework to handle decompressing zip files from a buffered stream, so I could do it as the file downloads, meaning I don’t need any intermediate storage.

I tried a few experiments but quickly realised that it’s not zip files that Compression.framework can decompress but zlib streams, that is, it handles the compressed data inside the zip file, but not the container format wrapped around the compressed data.
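
To illustrate the distinction, here’s a rough sketch (my illustration, not code from the post; the inflate helper is made up) of Compression.framework’s InputFilter decoding a raw deflate stream — it handles the compressed payload happily, but knows nothing about the zip headers and catalogue wrapped around it:

import Compression
import Foundation

// Decompress a raw deflate stream (Apple's .zlib variant) from an
// in-memory buffer; this is the part Compression.framework *can* do.
func inflate(_ compressed: Data) throws -> Data {
    var offset = 0
    let filter = try InputFilter(.decompress, using: .zlib) { (length: Int) -> Data? in
        // Feed the filter chunks on demand; returning nil signals end of input
        guard offset < compressed.count else { return nil }
        let end = min(offset + length, compressed.count)
        defer { offset = end }
        return compressed.subdata(in: offset ..< end)
    }

    var output = Data()
    while let page = try filter.readData(ofLength: 64 * 1024) {
        output.append(page)
    }
    return output
}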

I took a look and found a load of Swift packages that can handle the container format, but none that could handle it in a buffered fashion, and the only one that hinted it might be possible - Zip Foundation - seemed to require a full file to be present so it could jump around a bit.

Why? Well, it seems that the Zip container format isn’t very conducive to streaming. It has an entry header that describes the data, but it can optionally put the size of the data in an extra section AFTER the data, which makes working out how long the data will be trickier. The format does have an archive catalogue which lists the offsets for all these extra sections, so the way to handle it is to read that catalogue first and then parse all the extra sections. Except they decided to put this catalogue at the end of the file.

Which kinda sucks for trying to do the decompression on the fly.

Anyway, I spent Christmas break reading the Zip format specification and implemented a very very simple unarchiver that takes the async data stream from URLSession and unpacks it as it goes.

The twist is that the files from Bandcamp don’t appear to contain any compressed data, so I don’t even need Compression.framework; it’s just a case of finding the data in the archive and writing it out to disk.
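
For a flavour of the parsing involved, here’s a minimal sketch (my illustration, not the actual Ohia code) of the fixed 30-byte local file header that sits in front of every entry; the field offsets come from the zip specification, and for a stored entry the file data follows immediately after the variable-length file name and extra field:

// The fixed portion of a zip local file header; all fields little-endian.
struct LocalFileHeader {
    static let signature: UInt32 = 0x04034b50

    let compressionMethod: UInt16   // 0 = stored, 8 = deflate
    let compressedSize: UInt32      // may be 0 if a data descriptor follows the data
    let fileNameLength: UInt16
    let extraFieldLength: UInt16

    // Parse from (at least) 30 bytes; fails if the signature doesn't match.
    init?(bytes: [UInt8]) {
        func u16(_ o: Int) -> UInt16 { UInt16(bytes[o]) | UInt16(bytes[o + 1]) << 8 }
        func u32(_ o: Int) -> UInt32 {
            UInt32(bytes[o]) | UInt32(bytes[o + 1]) << 8
                | UInt32(bytes[o + 2]) << 16 | UInt32(bytes[o + 3]) << 24
        }

        guard bytes.count >= 30, u32(0) == Self.signature else { return nil }
        compressionMethod = u16(8)
        compressedSize = u32(18)
        fileNameLength = u16(26)
        extraFieldLength = u16(28)
    }
}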

The code is nearly ready to go into Ohia, and maybe it’ll be finished soon, cos I’ve got a ZX Spectrum Next I want to play with.

January 03, 2024 01:15 PM

December 03, 2023

Iain Holmes

Update17

Mission Creep

Back in September I started talking about a new video mixing application I was writing called Cyclorama and then I stopped. Why? Well, I kind of got distracted. Bandcamp got sold to Songtradr and I wanted a way to get all my purchases downloaded from Bandcamp. I have about 1500 purchases on there, so it’s not something that could be done by hand.

There are scripts to do it, but they didn’t work reliably, possibly due to the collection size, and they didn’t allow for incremental updates, which I wanted so I could download only the things that weren’t already downloaded.

So I started writing a SwiftUI based app to do it, but I don’t like using janky software even if it’s something I wrote for myself, so I started work to “productise” it, with nice error messages, the ability to log out, nice progress bars, all those things.

Then I found out that I can get track listings and URLs for streaming and now it’s growing and becoming more than a simple downloader.

The irony is it’s been able to download the collection for about a month and a half, but I still haven’t.

Anyway the code is at https://github.com/FalseVictories/Ohia if anyone wants to check it out. It needs an icon, but Bing Create doesn’t understand the difference between a macOS icon and an iOS one.

December 03, 2023 01:15 PM

October 25, 2023

Emmanuele Bassi

Introspection’s edge

tl;dr The introspection data for GLib (and its sub-libraries) is now generated by GLib itself instead of gobject-introspection; and in the near future, libgirepository is going to be part of GLib as well. Yes, there’s a circular dependency, but there are plans for dealing with that. Developers of language bindings should reach out to the GLib maintainers for future planning.


GLib-the-project is made of the following components:

  • glib, the base data types and C portability library
  • gmodule, a portable loadable module API
  • gobject, the object oriented run time type system
  • gio, a grab bag of I/O, IPC, file, network, settings, etc

Just like GLib, the gobject-introspection project is actually three components in a trench coat:

  • g-ir-scanner, the Python and flex based parser that generates the XML description of a GObject-based ABI
  • g-ir-compiler, the “compiler” that turns XML into an mmap()-able binary format
  • libgirepository, the C API for loading the binary format, resolving types and symbols, and calling into the underlying C library

Since gobject-introspection depends on GLib for its implementation, the introspection data for GLib has been, until now, generated during the gobject-introspection build. This has proven to be extremely messy:

  • as the introspection data is generated from the source code, gobject-introspection needs to have access to the GLib headers and source files
  • this means building against an installed copy of GLib for the declarations, and the GLib sources for the docblocks
  • to avoid having access to the sources at all times, there is a small script that extracts all the docblocks from a GLib checkout; this script has to be run manually, and its results committed to the gobject-introspection repository

This means that the gobject-introspection maintainers have to manually keep the GLib data in sync; that any changes to GLib’s annotations and documentation typically only land at the end of the development cycle, which also means that any issue with the annotations is almost always discovered too late; and that any and all updates to the GLib introspection data require a gobject-introspection release to end up inside downstream distributions.

The gobject-introspection build can use GLib as a subproject, at the cost of some awful, awful build system messiness, but that does not help anybody working on GLib; and, of course, since gobject-introspection depends on GLib, we cannot build the former as a subproject of the latter.

As part of the push towards moving GLib’s API references towards the use of gi-docgen, GLib now checks for the availability of g-ir-scanner and g-ir-compiler on the installed system, and if they are present, they are used to generate the GLib, GObject, GModule, and GIO introspection data.

A good side benefit of having the actual project providing the ABI be in charge of generating its machine-readable description is that we can now decouple the release cycles of GLib and gobject-introspection; updates to the introspection data of GLib will be released alongside GLib, while the version of gobject-introspection can be updated whenever there’s new API inside libgirepository, instead of requiring a bump at every cycle to match GLib.

On the other hand, though, this change introduces a circular dependency between GLib and gobject-introspection; this is less than ideal, and though circular dependencies are a fact of life for whoever builds and packages complex software, it’d be better to avoid them; this means two options:

  • merging gobject-introspection into GLib
  • making gobject-introspection not depend on GLib

Sadly, the first option isn’t really workable, as it would mean making GLib depend on flex, bison, and CPython in order to build g-ir-scanner, or (worse) rewriting g-ir-scanner in C. There’s also the option of rewriting the C parser in pure Python, but that, too, isn’t exactly strife-free.

The second option is more convoluted, but it has the advantage of being realistically implementable:

  1. move libgirepository into GLib as a new sub-library
  2. move g-ir-compiler into GLib’s tools
  3. copy-paste the basic GLib data types used by the introspection scanner into gobject-introspection

Aside from breaking the circular dependency, this allows language bindings that depend on libgirepository’s API (Python, JavaScript, Perl, etc) to limit their dependency to GLib, since they don’t need to deal with the introspection parser bits; it also affords us the ability to clean up the libgirepository API, provide better documentation, and formalise the file formats.

Speaking of file formats, and to manage expectations: I don’t plan to change the binary data layout; yes, we’re close to saturation when it comes to padding bits and fields, but there hasn’t been a real case in which this has been problematic. The API, on the other hand, really needs to be cleaned up, because it got badly namespaced when we originally thought it could be moved into GLib and then the effort was abandoned. This means that libgirepository will be bumped to 2.0 (matching the rest of GLib’s sub-libraries), and will require some work in the language bindings that consume it. For that, bindings developers should get in touch with GLib maintainers, so we can plan the migration appropriately.

by ebassi at October 25, 2023 08:43 AM

September 20, 2023

Iain Holmes

Update16

Cyclorama

A few years ago when I was doing live gigs, I needed an app that could do simple projections for me and that I could manipulate quickly while on stage. I wrote a very basic application called Cyclorama in C# and Xamarin.Mac that allowed you to have two videos and crossfade between them, and have an optional static image as an overlay. It worked pretty well for what I needed but I felt it could be improved.

When it comes to projections and videos, my style is very much inspired by the experimental film maker and Godspeed You! Black Emperor projectionist, Karl Lemieux, who uses loops of 16mm footage and layers them on top of each other, and uses subtle manipulations and film burns to create striking scenes.

I’ve recreated this style in FCPX and DaVinci Resolve, but neither of those is simple to use, or feasible in a live setting, so I’ve decided to revisit my earlier application and rewrite it, this time in Swift, and recreate this aesthetic rather than just crossfading between videos.

It can create some beautiful images. I’ll write some more in the future about what is happening.

(Image of 10 film burns projected over 8mm footage of a goat from the 1950s)

September 20, 2023 01:15 PM

August 30, 2023

Tomas Frydrych

How to make Sauerkraut


Sauerkraut is an essential part of Czech cuisine, and one that's nigh impossible to substitute for. It's a probiotic but, perhaps more important historically, very rich in vitamin C, and it keeps for a very long time without refrigeration. I had a first go at making sauerkraut some 25 years ago in my early days in Scotland, without understanding the process (and before one could readily find information on the internet), and it didn't work. I had another go last September, and we have eaten over 40kg of the stuff since. Turns out, it's really simple to make if you know how.

Tools

  • A suitable fermentation vessel (more below), about 1l per 800g of chopped cabbage,
  • A basin for mixing (about 3l volume per 1kg of chopped cabbage),
  • Stainless steel potato masher and / or wooden thumper,
  • Clean hands.

The fermentation vessel doesn't have to be anything more than a glass pickle jar. In the old days sauerkraut would have been made in wooden cylindrical barrels with a floating wooden lid, held down by a heavy stone. Nowadays it is possible to buy all sorts of purpose-made, overpriced fermentation vessels that usually promise to keep your cabbage in a sterile environment to justify the outlay -- that's just marketing BS. Fermentation is not a sterile process; it's simply a question of controlling which biological processes take place, and with sauerkraut this is done entirely by salinity, acidity and temperature control.

What we need to achieve for sauerkraut is anaerobic lactic fermentation, and that requires that all of the cabbage is held below the surface of the liquid. When doing a small quantity, a simple option is to use a larger, say 1l, pickle jar, and then a smaller jar or drinking glass that fits snugly through the opening. The smaller jar is then weighed down suitably. I don't use this setup, but you get the idea:

(Photo: a smaller jar pressed through the opening of a larger one to weigh the cabbage down.)

I use a 5l pickle jar with a wooden board weighed down by some stones. The jar has an inner diameter just a touch smaller than double the diameter of the opening, which allows two semicircular boards to be inserted. Obviously, when using this setup, the boards need to be clean and untreated, and the stones must be inert to acid (quartz or such, and as flat as possible).

(Photo: the 5l jar with the semicircular boards and stones.)

Ingredients

  • White cabbage,
  • Salt,
  • Caraway seed (optional)

Note: it is possible to make sauerkraut from red cabbage; it tastes slightly different but, most importantly, it's incredibly messy, you have been warned:

(Photo: the red cabbage version, and the mess it makes.)

The process

  • Sterilise your tools and jars by pouring boiling water over them. If you are using boards and stones, these should be boiled for at least 10-15min. But everything should be allowed to cool down before the cabbage goes in.

  • Peel the outer leaves of the cabbage and discard, wash the remaining head and quarter it, then chop it into thin strips, across the edges; discard the hard core:

(Photo: the quartered and chopped cabbage.)

  • Weigh the chopped cabbage, then put it into the basin. Add 20g of salt per 1kg of cabbage (so, for example, a 3kg batch gets 60g of salt) and, if using, the caraway (about 1/2 teaspoon per 1kg of cabbage, don't overdo it).


  • Stir the salt in using your hands or a wooden spoon, then let it sit for about 10min to allow the salt to dissolve, as this makes it easier to work with.

  • Now thoroughly work the salt into the cabbage; traditionally this would have been done by treading it like wine, but using clean hands is fine. Give the cabbage a good hard squeeze: the objective is to bruise it, so that in combination with the salt it lets water out (bashing it with a potato masher works too). Compact it down, cover it up and let it sit for about 30min.


  • Repeat step 4 until the cabbage lets out enough water. This takes 3-5 hours depending on the cabbage (less when it's younger, longer when it's older). Enough means there is plenty of fluid at the bottom of the basin, and when you squeeze the cabbage down, the liquid covers it up:

(Photo: the squeezed-down cabbage with the liquid covering it.)

If the cabbage is not letting out water, it's because it's not been worked hard enough.

  • Move the cabbage into the jar using a suitable spoon, about an inch at a time, making sure to spoon in plenty of the liquid at the start. Compact it well using either a fist or a suitable tool. The objective is to squeeze out all the air. This is the most critical part of the job: if there is air left in among the cabbage, it will go off; we are aiming for anaerobic fermentation.


  • When all the cabbage is in, press it down with the inner jar, or the wooden boards & stones, or whatever technique you are using. All of the cabbage (and, when using wooden boards and stones, also the boards and stones) must be below the surface of the liquid. Any bits of cabbage that float above need to be removed, to stop it going off.

(Photo: the cabbage pressed below the surface of the liquid.)

(The boards in the above jar were previously used with red cabbage, and are leaching the colour, hence the pink tone.)

  • If you are using the boards and stones system, it is possible there will not be enough of the liquid to cover it all. In that case top it up with a brine (20g salt / 1l); boil the water first, but make sure the solution is cool before it goes in (I make a litre or so a day in advance just in case).

  • Put the jar onto a tray or into a pot to capture any fluid that will overflow once the fermentation starts, and leave it somewhere out of direct sunlight at room temperature (I cover mine with an old black t-shirt, to keep it in the dark, and to keep any stuff from getting in -- note there is no rubber seal on the lid, the jar needs to breathe).

  • The fermentation typically starts in 12-48 hours depending on the cabbage and temperature. Scoop off any of the foam that forms, and should any bits of cabbage float up to the surface, fish them out.

  • At this point the jar is best moved to a cool dark place (I keep mine in the garage). The temperature has an impact on the flavour; I like the winter sauerkraut that fermented at about 5C best, but that said, it will work perfectly fine at room temperature as well.

  • Initially you need to check on the jar every day, removing any foam, and any floating bits. You might have to top up the liquid with brine (as mentioned above) when the temperature drops.

  • The fermentation will slow down after a week or so, and from there on, I only check on it every week or even two. After about 3 weeks the cabbage is fully fermented. At this point the acidity is enough to keep it in good condition for many months without being refrigerated. (While it is best to keep it cool, mine survived the summer heatwave at over 20C without issues.)

  • I find the optimal fermentation time for flavour is about 3 months, so I keep three to four 5l jars on the go at any given time, but you can start eating after about a week.

Taking the Cabbage Out

When taking cabbage out of a large batch, use a slotted spoon to keep as much of the liquid in the original jar as possible. I take about 1l at a time, and keep this in the fridge. If you take too much of the liquid out, you will have to top it up with brine, but this reduces the acidity, which is the main thing that keeps it from going off. Also, when topping it up, always make sure the brine is not warm.

Kahm Yeast

Kahm yeast is a white culture that can form on the top of the liquid. It has a powdery appearance, and is harmless; however, it should be scooped off, as if it's allowed to get out of hand it will spoil the flavour. It is caused by lack of acidity or salinity, and too warm a temperature. I have had it happen three times over the year: the first two were due to using warm brine with not enough salt to top up the jars, and the third time the jar was nearly empty and I added too much brine. If worst comes to worst and the yeast keeps forming, you can drain the cabbage, rinse it, add fresh brine and keep it in the fridge.

It's important to distinguish between between kham yeast and mould -- if it looks like a mould, you have to throw the batch out, but generally the only thing that can go mouldy is any bits of cabbage that float at the surface, which is why these have to be removed promptly.

by tf at August 30, 2023 10:12 AM

August 23, 2023

Emmanuele Bassi

The Mirror

The GObject type system has been serving the GNOME community for more than 20 years. We have based an entire application development platform on top of the features it provides, and the rules that it enforces; we have integrated multiple programming languages on top of that, and in doing so, we expanded the scope of the GNOME platform in a myriad of directions. Unlike GTK, the GObject API hasn’t seen any major change since its introduction: aside from deprecations and little new functionality, the API is exactly the same today as it was when GLib 2.0 was released in March 2002. If you transported a GNOME developer from 2003 to 2023, they would have no problem understanding a newly written GObject class; though, they would likely appreciate the levels of boilerplate reduction, and the performance improvements that have been introduced over the years.

While having a stable API last this long is definitely a positive, it also imposes a burden on maintainers and users, because any change has to be weighted against the possibility of introducing unintended regressions in code that uses undefined, or undocumented, behaviour. There’s a lot of leeway when it comes to playing games with C, and GObject has dark corners everywhere.

The other burden is that any major change to a foundational library like GObject cascades across the entire platform. Releasing GLib 3.0 today would necessitate breaking API in the entirety of the GNOME stack and further beyond; it would require either a hard to execute “flag day”, or an impossibly long transition, reverberating across downstreams for years to come. Both solutions imply amounts of work that are simply not compatible with a volunteer-based project and ecosystem, especially the current one where volunteers of core components are now stretched thin across too many projects.

And yet, we are now at a cross-roads: our foundational code base has reached the point where recruiting new resources capable of affecting change on the project has become increasingly difficult; where any attempt at performance improvement is heavily counterbalanced by the high possibility of introducing world-breaking regressions; and where fixing the safety and ergonomics of idiomatic code requires unspooling twenty years of limitations inherent to the current design.

Something must be done if we want to improve the coding practices, the performance, and the safety of the platform without a complete rewrite.

The Mirror

‘Many things I can command the Mirror to reveal,’ she answered, ‘and to some I can show what they desire to see. But the Mirror will also show things unbidden, and those are often stranger and more profitable than things we wish to behold. What you will see, if you leave the Mirror free to work, I cannot tell. For it shows things that were, and things that are, and things that yet may be. But which it is that he sees, even the wisest cannot always tell. Do you wish to look?’ — Lady Galadriel, “The Lord of the Rings”, Volume 1: The Fellowship of the Ring, Book 2: The Ring Goes South

In order to properly understand what we want to achieve, we need to understand the problem space that the type system is meant to solve, and the constraints upon which the type system was implemented. We do that by holding GObject up to Galadriel’s Mirror, and gazing into its surface.

Things that were

History became legend. Legend became myth. — Lady Galadriel, “The Lord of the Rings: The Fellowship of the Ring”

Before GObject there was GtkObject. It was a simpler time, it was a simpler stack. You added types only for the widgets and objects that related to the UI toolkit, and everything else was C89, with a touch of undefined behaviour, like calling function pointers with any number of arguments. Properties were “arguments”, likes were florps, and the timeline went sideways.

We had class initialisation and instance initialisation functions; properties were stored in a global hash table, but the property multiplexer pair of functions was stored on the type data instead of using the class structure. Types did not have private data: you only had keyed fields. No interfaces, only single inheritance. GtkObject was reference counted, and had an initially “floating” reference, to allow transparent ownership transfer from child to parent container when writing C code, and to make the life of every language binding maintainer miserable in the process. There were weak references attached to an instance that worked by invoking a callback when the instance’s reference count reached zero. Signals operated exactly as they do today: a large hash table of signal information, indexed by an integer.

None of this was thread safe. After all, GTK was not thread safe either, because X11 was not thread safe; and we’re talking about 1997: who even had hardware capable of multi-threading at the time? NPTL wasn’t even a thing, yet.

The introduction of GObject in 2001 changed some of the rules—mainly, around the idea of having dynamic types that could be loaded and unloaded in order to implement plugins. The basic design of the type system, after all, came from Beast, a plugin-heavy audio application, and it was extended to subsume the (mostly) static use cases of GTK. In order to support unloading, the class aspect of the type system was allowed to be cleaned up, but the type data had to be registered and never unloaded; in other words, once a type was registered, it was there forever.

“Arguments” were renamed to properties, and were extended to include more than basic types, provide validations, and notify of changes; the overall design was still using a global hash table to store all the properties across all types. Properties were tied to the GObject type, but the property definition existed as a separate type hierarchy that was designed to validate values, but not manage fields inside a class. Signals were ported wholesale, with minimal changes mainly around the marshalling of values and abstracting closures.

The entire plan was to have GObject as one of the base classes at the root of a specific hierarchy, with all the required functionality for GTK to inherit from for its own GtkObject, while leaving open the possibility of creating other hierarchies, or even other roots with different functionality, for more lightweight objects.

These constraints were entirely intentional; the idea was to be able to port GTK to the new type system, and to an out of tree GLib, during the 1.3 development phase, and minimise the amount of changes necessary to make the transition work not just inside GTK, but inside of GNOME too.

Little by little, the entire GObject layer was ported towards thread safety in the only way that worked without breaking the type system: add global locks around everything; use read-write locks for the type data; lock the access and traversal of the property hash table and of the signals table. The only real world code bases that actively exercised multi-threading support were GStreamer and the GNOME VFS API that was mainly used by Nautilus.

With the 3.0 API, GTK dropped the GtkObject base type: the whole floating reference mechanism was moved to GObject, and a new type was introduced to provide the “initial” floating reference to derived types. Around the same time, a thread-safe version of weak references for GObject appeared as a separate API, which confused the matter even more.

Things that are

Darkness crept back into the forests of the world. Rumour grew of a Shadow in the East, whispers of a Nameless Fear. — Lady Galadriel, “The Lord of the Rings: The Fellowship of the Ring”

Let’s address the elephant in the room: it’s completely immaterial how many lines of code you have to deal with when creating a new type. It’s a one-off cost, and for most cases, it’s a matter of using the existing macros. The declaration and definition macros have the advantage of enforcing a series of best practices, and of keeping the code consistent across multiple projects. If you don’t want to deal with boilerplate when using C, you chose the wrong language to begin with. The existence of excessive API is mainly a requirement to allow other languages to integrate their type system with GObject’s own.

The dynamic part of the type system has gotten progressively less relevant. Yes: you can still create plugins, and those can register types; but classes are never unloaded, just like their type data. There is some attempt at enforcing an order of operations: you cannot just add an interface after a type has been instantiated any more; and while you can add properties and signals after class initialisation, it’s mainly a functionality reserved for specific language bindings to maintain backward compatibility.

Yes, defining properties is boring, and could probably be simplified, but the real cost is not in defining and installing a GParamSpec: it’s in writing the set and get property multiplexers, validating the values, boxing and unboxing the data, and dealing with the different property flags; none of those things can be wrapped in some fancy C preprocessor macros—unless you go into the weeds with X macros.

The other, big cost of properties is their storage inside a separate, global, lock-happy hash table. The main use case of this functionality—adding entirely separate classes of properties with the same semantics as GObject properties, like style properties and child properties in GTK—has completely fallen out of favour, and for good reasons: it cannot be managed by generic code; it cannot be handled by documentation generators without prior knowledge; and, for the same reason, it cannot be introspected.

Even calling these “properties” is kind of a misnomer: they are value validation objects that operate only when using the generic (and less performant) GObject accessor API, something that is constrained to things like UI definition files in GTK, or language bindings. If you use the C accessors for your own GObject type, you’ll have to implement validation yourself; and since idiomatic code will have the generic GObject accessors call the public C API of your type, you get twice the amount of validation for no reason whatsoever.

Signals have mostly been left alone, outside of performance improvements that were hard to achieve within the constraints of the existing implementation; the generic FFI-based closure turned out to be a net performance loss, and we’re trying to walk it back even for D-Bus, which was the main driver for it to land in the first place. Marshallers are now generated with a variadic arguments variant, to reduce the amount of boxing and unboxing of GValue containers. Still, there’s not much left to squeeze out of the old GSignal API.

The atomic nature of the reference counting can be a costly feature, especially for code bases that are by necessity single-threaded; the fact that the reference count field is part of the (somewhat) public API prevents fundamental refactorings, like switching to biased reference counting for faster operations on the same thread that created an instance. The lack of room on GObject also prevents storing the thread ID that owns the instance, which in turn prevents calling the GObjectClass.dispose() and GObjectClass.finalize() virtual functions on the right thread, and requires scheduling the destruction of an object on a separate main context, or locking the contents of an object at a further cost.

Things that yet may be

The quest stands upon the edge of a knife: stray but a little, and it will fail to the ruin of all. Yet hope remains, while the company is true. — Lady Galadriel, “The Lord of the Rings: The Fellowship of the Ring”

Over the years, we have been strictly focusing on GObject: speeding up its internals, figuring out ways to improve property registration and performance, adding new API and features to ensure it behaved more reliably. The type system has also been improved, mainly to streamline its use in idiomatic GObject code bases. Not everything worked: properties are still a problem; weak references and pointers are a mess, with two different APIs that interact badly with GObject; signals still exist on a completely separate plane; GObject is still wildly inefficient when it comes to locking.

The thesis of this strawman is that we have reached the limits of backwards compatibility of GObject, and any attempt at improving it will inevitably lead to more brittle code, rife with potential regressions. The typical answer, in this case, would be to bump the API/ABI of GObject, remove the mistakes of the past, and provide a new idiomatic approach. Sadly, doing so not only would require a level of resources we, as the GLib project stewards, cannot provide, but it would also completely break the entire ecosystem in a way that is not recoverable. Either nobody would port to the new GObject-3.0 API; or the various projects that depend on GObject would inevitably fracture, following whichever API version they can commit to; in the meantime, downstream distributors would suffer the worst effects of the shared platform we call “Linux”.

Between inaction and slow death, and action with catastrophic consequences, there’s the possibility of a third option: what if we stopped trying to emulate Java, and stopped having a single “god” type?

Our type system is flexible enough to support partitioning various responsibilities, and we can defer complexity where it belongs: into faster moving dependencies, which have the benefit of being able to iterate and change at a much higher rate than the foundational library of the platform. What’s the point of shoving every possible feature into the base class, in order to cover increasingly complex use cases across multiple languages, when we can let consumers decide to opt into their own well-defined behaviours? What GObject ought to provide is a set of reliable types that can be combined in expressive ways, and that can be inspected by generic API.

A new, old base type

We already have a derivable type, called GTypeInstance. Typed instances don’t have any memory management: once instantiated, they can only be moved, or freed. All our objects already are typed instances, since GObject inherits from it. Contrary to current common practice, we should move towards using GTypeInstance for our types.

There’s a distinct lack of convenience API for defining typed instances, mostly derived from the fact that GTypeInstance is seen as a form of “escape hatch” for projects to use in order to avoid GObject. In practice, there’s nothing that prevents us from improving the convenience of creating new instantiatable/derivable types, especially if we start using them more often. The verbose API must still exist, to allow language bindings and introspection to handle this kind of types, but just like we made convenience macros for declaring and defining GObject types, we can provide macros for new typed instances, and for setting up a GValue table.
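
To make the contrast concrete, this is roughly what defining a typed instance without GObject involves today, using the existing verbose API — a minimal sketch with a made-up Point type, eliding thread-safe registration:

typedef struct {
  GTypeInstance parent;

  double x, y;
} Point;

static GType
point_get_type (void)
{
  static GType type = 0;

  /* one-off registration of a new fundamental, classed, instantiatable
   * type; a real version would use g_once_init_enter()
   */
  if (type == 0)
    {
      const GTypeInfo info = {
        .class_size = sizeof (GTypeClass),
        .instance_size = sizeof (Point),
      };
      const GTypeFundamentalInfo finfo = {
        .type_flags = G_TYPE_FLAG_CLASSED | G_TYPE_FLAG_INSTANTIATABLE,
      };

      type = g_type_register_fundamental (g_type_fundamental_next (),
                                          "Point", &info, &finfo, 0);
    }

  return type;
}

/* ... */
Point *p = (Point *) g_type_create_instance (point_get_type ());

/* typed instances are not reference counted: they can only be freed */
g_type_free_instance ((GTypeInstance *) p);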

Optional functionality

Typed instances require a wrapper API to free their contents before calling g_type_free_instance(). Nothing prevents us from adding a GFinalizable interface that can be implemented by a GTypeInstance, though: interfaces exist at the type system level, and do not require GObject to work.

typedef struct {
  GTypeInterface g_iface;

  void (* finalize) (GFinalizable *self);
} GFinalizableInterface;

If a typed instance provides an implementation of GFinalizable, then g_type_free_instance() can free the contents of the instance by calling g_finalizable_finalize().

This interface is optional, in case your typed instance just contains simple values, like:

typedef struct {
  GTypeInstance parent;

  bool is_valid;
  double x1, y1;
  double x2, y2;
} Box;

and does not require deallocations outside of the instance block.

A similar interface can be introduced for cloning instances, allowing a copy operation alongside a move:

typedef struct {
  GTypeInterface g_iface;

  GClonable * (* clone) (GClonable *self);
} GClonableInterface;

We could then introduce g_type_instance_clone() as a generic entry point that either uses GClonable, or simply allocates a new instance and calls memcpy() on it, using the size of the instance (and eventual private data) known to the type system.
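
A sketch of how that generic entry point could behave — everything here is hypothetical, including the G_TYPE_CLONABLE macro for the interface above:

/* hypothetical: g_type_instance_clone() does not exist today */
GTypeInstance *
g_type_instance_clone (GTypeInstance *instance)
{
  GType type = G_TYPE_FROM_INSTANCE (instance);

  /* use the type's own copy semantics, if it opted into GClonable... */
  if (g_type_is_a (type, G_TYPE_CLONABLE))
    return (GTypeInstance *) g_clonable_clone ((GClonable *) instance);

  /* ...otherwise fall back to a bitwise copy: the type system already
   * knows the size of the instance block
   */
  GTypeQuery query;
  g_type_query (type, &query);

  GTypeInstance *clone = g_type_create_instance (type);
  memcpy (clone, instance, query.instance_size);

  return clone;
}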

The prior art for this kind of functionality exists in GIO, in the form of the GInitable and GAsyncInitable interfaces; unfortunately, those interfaces require GObject, and they depend on GCancellable and GAsyncResult objects, which prevent us from moving them into the lower level API.

Typed containers and life time management

The main functionality provided by GObject is garbage collection through reference counting: you acquire a (strong) reference when you need to access an instance, and release it when you don’t need the instance any more. If the reference you released was the last one, the instance gets finalized.

Of course, once you introduce strong references you open the door to a veritable bestiary of other types of references:

  • weak references, used to keep a “pointer” to the instance, and get a notification when the last reference drops
  • floating references, used as a C convenience to allow ownership transfer of newly constructed “child” objects to their “parent”
  • toggle references, used by language bindings that acquire a strong reference on an instance they wrap with a native object; when the toggle reference gets triggered it means that the last reference being held is the one on the native wrapper, and the wrapper can be dropped causing the instance to be finalized

All of these types of reference exist inside GObject, but since they were introduced over the years, they are bolted on top of the base class using the keyed data storage, which comes with its own costly locking and ordering; they are also managed through the finalisation code, which means there are re-entrancy issues and undefined ordering behaviours that have routinely cropped up over the years, especially when trying to optimise the construction and destruction phases.

None of this complexity is, strictly speaking, necessary; we don’t care about an instance being reference counted: a “parent” object can move the memory of a “child” typed instance directly into its own code. What we care about is that, whenever other code interacts with ours, we can hand out a reference to that memory, so that ownership is maintained.

Other languages and standard libraries have the same concept: Rust has Rc<T> and Arc<T>, C++ has std::shared_ptr<T>, and so on.

These constructs are not part of a base class: they are wrappers around instances. This means you’re not handing out a reference to an instance: you are handing out a reference to a container, which holds the instance for you. The behaviour of the value is made explicit by the type system, not implicit to the type.

A simple implementation of a typed “reference counted” container would provide us with both strong and weak references:

typedef struct _GRc GRc;
typedef struct _GWeak GWeak;

GRc *g_rc_new (GType data_type, gpointer data);

GRc *g_rc_acquire (GRc *rc);
void g_rc_release (GRc *rc);

gpointer g_rc_get_data (GRc *rc);

GWeak *g_rc_downgrade (GRc *rc);
GRc *g_weak_upgrade (GWeak *weak);

bool g_weak_is_empty (GWeak *weak);
gpointer g_weak_get_data (GWeak *weak);
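
Usage would then look something like this — again, all hypothetical API:

GRc *strong = g_rc_new (GTK_TYPE_WIDGET, widget);
GWeak *weak = g_rc_downgrade (strong);

/* ... time passes, and the strong reference is dropped ... */
g_rc_release (strong);

/* the weak reference observes the data without keeping it alive */
if (g_weak_is_empty (weak))
  g_print ("the widget has been freed\n");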

Alongside this type of container, we could also have a specialisation for atomic reference counted containers; or pinned containers, which guarantee that an object is kept in the same memory location; or re-implement reference counted containers inside each language binding, to ensure that the behaviour is tailored to the memory management of those languages.

Specialised types

Container types introduce the requirement of having the type system understand that an object can be the product of two types: the type of the container, and the type of the data. In order to allow properties, signals, and values to effectively provide introspection of these kinds of container types, we are going to need to introduce “specialised” types:

  • GRc exists as a “generic”, abstract type in the type system
  • any instance of GRc that contains an instance of type A gets a new type in the type system

A basic implementation would look like:

GRc *
g_rc_new (GType data_type, gpointer data)
{
  // Returns an existing GType if something else already
  // has registered the same GRc<T>
  GType rc_type =
    g_generic_type_register_static (G_TYPE_RC, data_type);

  // Instantiates GRc, but gives it the type of
  // GRc<T>; there is only the base GRc class
  // and instance initialization functions, as
  // GRc<T> is not a pure derived type
  GRc *res = (GRc *) g_type_create_instance (rc_type);
  res->data = data;

  return res;
}

Any instance of type GRc<A> satisfies the “is-a” relationship with GRc, but it is not a purely derived type:

GType rc_type =
  ((GTypeInstance *) rc)->g_class->g_type;
g_assert_true (g_type_is_a (rc_type, G_TYPE_RC));

The GRc<A> type does not have a different instance or class size, or its own class and instance initialisation functions; it’s still an instance of the GRc type, with a different GType. The GRc<A> type only exists at run time, as it is the result of the type instantiation; you cannot instantiate a plain GRc, or derive your type from GRc in order to create your own reference counted type, either:

// WRONG
GRc *rc = g_type_create_instance (G_TYPE_RC);

// WRONG
typedef GRc GtkWidget;

You can only use a GRc inside your own instance:

typedef struct {
  // GRc<GtkWidget>
  GRc *parent;
  // GRc<GtkWidget>
  GRc *first_child;
  // GRc<GtkWidget>
  GRc *next_sibling;

  // ...
} GtkWidgetPrivate;

Tuple types

Tuples are generic containers of N values, but right now we don’t have any way of formally declaring them in the type system. A hack is to use arrays of similarly typed values, but with the deprecation of GValueArray—which is a bad type that does not allow reference counting, and does not give you any guarantees anyway—we only have C arrays and pointer types.

Registering a new tuple type would work like a generic type: a base GTuple abstract type as the “parent”, and a number of types:

typedef struct _GTuple GTuple;

GTuple *
g_tuple_new_int (size_t n_elements,
                 int elements[])
{
  GType tuple_type =
    g_tuple_type_register_static (G_TYPE_TUPLE, n_elements, G_TYPE_INT);

  GTuple *res = (GTuple *) g_type_create_instance (tuple_type);
  for (size_t i = 0; i < n_elements; i++)
    g_tuple_add (res, elements[i]);

  return res;
}

We can also create specialised tuple types, like pairs:

typedef struct _GPair GPair;

GPair *
g_pair_new (GType this_type,
            GType that_type,
            ...);

This would give us the ability to standardise our API around fundamental types, and reduce the amount of ad hoc container types that libraries have to define and bindings have to wrap with native constructs.

Sum types

Of course, once we start with specialised types, we end up with sum types:

typedef enum {
  SQUARE,
  RECT,
  CIRCLE,
} ShapeKind;

typedef struct {
  GTypeInstance parent;

  ShapeKind kind;

  union {
    struct { Point origin; float side; };
    struct { Point origin; Size size; };
    struct { Point center; float radius; };
  } shape;
} Shape;

As of right now, discriminated unions don’t have any special handling in the type system: they are generally boxed types, or typed instances, but they require type-specific API to deal with the discriminator field and type. Since we have types for enumerations and instances, we can register them at the same time, and provide offsets for direct access:

GType
g_sum_type_register_static (const char *name,
                            size_t class_size,
                            size_t instance_size,
                            GType tag_enum_type,
                            offset_t tag_field);

This way it’s possible to ask the type system for:

  • the offset of the tag in an instance, for direct access
  • all the possible values of the tag, by inspecting its GEnum type

From then on, we can easily build types like Option and Result:

typedef enum {
  G_RESULT_OK,
  G_RESULT_ERR
} GResultKind;

typedef struct {
  GTypeInstance parent;

  GResultKind type;
  union {
    GValue value;
    GError *error;
  } result;
} GResult;

// ...
g_sum_type_register_static ("GResult",
                            sizeof (GResultClass),
                            sizeof (GResult),
                            G_TYPE_RESULT_KIND,
                            offsetof (GResult, type));

// ...
GResult *
g_result_new_boolean (gboolean value)
{
  GType res_type =
    g_generic_type_register_static (G_TYPE_RESULT,
                                    G_TYPE_BOOLEAN);
  GResult *res =
    (GResult *) g_type_create_instance (res_type);

  /* the GValue in the union must be initialised before use */
  g_value_init (&res->result.value, G_TYPE_BOOLEAN);
  g_value_set_boolean (&res->result.value, value);

  return res;
}

// ...
g_autoptr (GResult) result = obj_finish (task);
switch (g_result_get_kind (result)) {
  case G_RESULT_OK:
    g_print ("Result: %s\n",
      g_result_get_boolean (result)
        ? "true"
        : "false");
    break;

  case G_RESULT_ERR:
    g_printerr ("Error: %s\n",
      g_result_get_error_message (result));
    break;
}

// ...
g_autoptr (GResult) result =
  g_input_stream_read_bytes (stream);
if (g_result_is_error (result)) {
  // ...
} else {
  g_autoptr (GBytes) data = g_result_get_boxed (result);
  // ...
}

Consolidating GLib and GType

Having the type system in a separate shared library did make sense back when GLib was spun off from GTK; after all, GLib was mainly a set of convenient data types for a language that lacked a decent standard library. Additionally, not many C projects were interested in the type system, as it was perceived as a big chunk of functionality in an era where space was at a premium. These days, the smallest environment capable of running GLib code is plenty capable of running the GObject type system as well. The separation between GLib data types and the GObject type system has created data types that are not type safe, and work by copying data, by having run time defined destructor functions, or by storing pointers and assuming everything will be fine. This leads to code duplication between shared libraries, and prevents the use of GLib data types in the public API, lest the introspection information gets lost.

Moving the type system inside GLib would allow us to have properly typed generic container types, like a GVector replacing GArray, GPtrArray, GByteArray, as well as the deprecated GValueArray; or a GMap and a GSet, replacing GHashTable, GSequence, and GtkRBTree. Even the various list models could be assembled on top of these new types, and moved out of GTK.
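
As an illustration, the API of such a typed vector might look like this — a hypothetical sketch, none of these symbols exist:

typedef struct _GVector GVector;

/* a growable array whose element type is known to the type system */
GVector *g_vector_new (GType element_type);

void     g_vector_append (GVector *vec, const GValue *value);
gpointer g_vector_index (GVector *vec, size_t idx);
size_t   g_vector_get_length (GVector *vec);

GType    g_vector_get_element_type (GVector *vec);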

Current consumers of GLib-only API would still have their basic C types, but if they don’t want to link against a slightly bigger shared library that includes GTypeInstance, GTypeInterface, and the newly added generic, tuple, and sum types, then they would probably be better served by projects like c-util instead.

Properties

Instead of bolting properties on top of GParamSpec, we can move their definition into the type system; after all, properties are a fundamental part of a type, so it does not make sense to bind them to the class instantiation. This would also remove the long-standing issue of properties being available for registration long after a class has been initialised; it would give us the chance to ship a utility for inspecting the type system to get all the meta-information on the hierarchy and generating introspection XML without having to compile a small binary.

If we move property registration to the type registration we can also finally move away from multiplexed accessors, and use direct instance field access where applicable:

GPropertyBuilder builder;

g_property_builder_init (&builder,
  G_TYPE_STRING, "name");
// Stop using flags, and use proper setters; since
// there's no use case for unsetting the readability
// flag, we don't even need a boolean argument
g_property_builder_set_readwrite (&builder);
// The offset is used for read and write access...
g_property_builder_set_private_offset (&builder,
  offsetof (GtkWidgetPrivate, name));
// ... unless an accessor function is provided; in
// this case we want setting a property to go through
// a function
g_property_builder_set_setter_func (&builder,
  gtk_widget_set_name);

// Register the property into the type; we return the
// offset of the property into the type node, so we can
// access the property definition with a fast look up
properties[NAME] =
  g_type_add_instance_property (type,
    g_property_builder_end (&builder));

Accessing the property information would then be a case of looking into the type system under a single reader lock, instead of traversing all properties in a glorified globally locked hash table.

Once we have a property registered in the type system, accessing it is a matter of calling API on the GProperty object:

void
gtk_widget_set_name (GtkWidget *widget,
                     const char *name)
{
  GProperty *prop =
    g_type_get_instance_property (GTK_TYPE_WIDGET,
                                  properties[NAME]);

  g_property_set (prop, widget, name);
}

Signals

Moving signal registration into the type system would allow us to subsume the global locking into the type locks; it would also give us the chance to simplify some of the complexity for re-emission and hooks:

GSignalBuilder builder;

g_signal_builder_init (&builder, "insert-text");
g_signal_builder_set_args (&builder, 3,
  (GSignalArg[]) {
    { .name = "text", .gtype = G_TYPE_STRING },
    { .name = "length", .gtype = G_TYPE_SIZE },
    { .name = "position", .gtype = G_TYPE_OFFSET },
  });
g_signal_builder_set_retval (&builder,
  G_TYPE_OFFSET);
g_signal_builder_set_class_offset (&builder,
  offsetof (EditableClass, insert_text));

signals[INSERT_TEXT] =
  g_type_add_class_signal (type,
    g_signal_builder_end (&builder));

By taking the chance of moving signals out of their own namespace, we can also move to a model where each class is responsible for providing the API necessary to connect to and emit signals, as well as providing callback types for each signal. This would allow us to increase type safety, and reduce the reliance on generic API:

typedef offset_t (* EditableInsertText) (Editable *self,
                                         const char *text,
                                         size_t length,
                                         offset_t position);

unsigned long
editable_connect_insert_text (Editable *self,
                              EditableInsertText callback,
                              gpointer user_data,
                              GSignalFlags flags);

offset_t
editable_emit_insert_text (Editable *self,
                           const char *text,
                           size_t length,
                           offset_t position);
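
A call site would then look something like this (hypothetical code, following the sketch above):

static offset_t
on_insert_text (Editable *self,
                const char *text,
                size_t length,
                offset_t position)
{
  /* a type safe handler: no marshalling, no GValue */
  return position + (offset_t) length;
}

// connect with a typed callback instead of g_signal_connect()
editable_connect_insert_text (editable, on_insert_text, NULL, 0);

// emit with typed arguments instead of g_signal_emit()
offset_t pos = editable_emit_insert_text (editable, "hello", 5, 0);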

Extending the type system

Some of the metadata necessary to provide properly typed properties and signals is missing from the type system. For instance, by design, there is no type representing a uint16_t; we are supposed to create a GParamSpec to validate the value of a G_TYPE_INT in order to fit in the 16-bit range. Of course, this leads to excessive run time validation, and relies on C’s own promotion rules for variadic arguments; it also does not work for signals, as those do not use GParamSpec. More importantly, though, the missing connection between C types and GTypes prevents gathering proper introspection information for properties and signal arguments: if we only have the GType we cannot generate the full metadata that can be used by documentation and language bindings, unless we’re willing to lose specificity.
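
To make this concrete, the current idiom for a “16-bit” property is a plain G_TYPE_INT clamped at run time (real GObject API, hypothetical property name):

/* the C field may well be a uint16_t, but the type
 * system only ever sees a full int
 */
properties[PROP_PORT] =
  g_param_spec_int ("port", NULL, NULL,
                    0, G_MAXUINT16, 0,
                    G_PARAM_READWRITE);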

Not only should the type system be able to contain all the standard C types that are now available; we also need it to provide enough information to serialise those types into the introspection data, if we want to be able to generate things like signal API, type safe bindings, or accurate documentation for properties and signal handlers.

Introspection

Introspection exists outside of GObject mainly because of dependencies; the parser, abstract syntax tree, and transformers are written in Python and interface with a low level C tokeniser. Adding a CPython dependency to GObject is too much of a stretch, especially when it comes to bootstrapping a system. While we could keep the dependency optional, and allow building GObject without support for introspection, keeping the code separate is a simpler solution.

Nevertheless, GObject should not ignore introspection. The current reflection API inside GObject should generate data that is compatible with the libgirepository API and with its GIR parser. Currently, gobject-introspection is tasked with generating a small C executable, compiling it, running it to extract metadata from the type system, as well as the properties and signals of a GObject type, and generating XML that can be parsed and included into the larger GIR metadata for the rest of the ABI being introspected. GObject should ship a pre-built binary, instead; it should dlopen the given library or executable, extract all the type information, and emit the introspection data. This would not make gobject-introspection more cross-compilable, but it would simplify its internals and make it easier to distribute. We would not need to know how to compile and run C code from a Python script, for one; a simple executable wrapper around a native copy of the GObject-provided binary would be enough.

Ideally, we could move the girepository API into GObject itself, and allow it to load the binary data compiled out of the XML; language bindings loading the data at run time would then need to depend on GObject instead of an additional library, and we could ship the GIR → typelib compiler directly with GLib, leaving gobject-introspection to deal only with the parsing of C headers, docblocks, and annotations, to generate the XML representation of the C/GObject ABI.

There and back again

And the ship went out into the High Sea and passed on into the West, until at last on a night of rain Frodo smelled a sweet fragrance on the air and heard the sound of singing that came over the water. And then it seemed to him that as in his dream in the house of Bombadil, the grey rain-curtain turned all to silver glass and was rolled back, and he beheld white shores and beyond them a far green country under a swift sunrise. — “The Lord of the Rings”, Volume 3: The Return of the King, Book 6: The End of the Third Age

The hard part of changing a project in a backward compatible way is resisting the temptation of fixing the existing design. Sometimes it’s necessary to backtrack the chain of decisions, and consider the extant code base a dead branch; not because the code is wrong, or riddled with bugs, but because any attempt at doubling down on the same design will inevitably lead to breakage. In this sense, it’s easy to just declare “maintenance bankruptcy”, and start from a new major API version: breaks allow us to fix the implementation, at the cost of adapting to new API. For instance, widgets are still the core of GTK, even after 4 major revisions; we did not rename them to “elements” or “actors”, and we did not change how the windows are structured. You are still supposed to build a tree of widgets, connect callbacks to signals, and let the main event loop run. Porting has been painful because of underlying changes in the graphics stack, or because of portability concerns, but even with the direction change of favouring composition over inheritance, the knowledge on how to use GTK has been transferred from GTK 1 to 4.

We cannot do the same for GObject. Changing how it is implemented implies changing everything that depends on it; it means introducing behavioural changes in subtle, and hard to predict ways. Luckily for us, the underlying type system is still flexible and nimble enough that it can give us the ability to change direction, and implement an entirely different approach to object orientation—one that is more in line with languages like modern C++ and Rust. By following new approaches we can slowly migrate our platform to other languages over time, with a smaller impedance mismatch caused by the current design of our object model. Additionally, by keeping the root of the type system, we maintain the ability to provide a stable C ABI that can be consumed by multiple languages, which is the strong selling point of the GNOME ecosystem.

Why do all of this work, though? Compared to a full API break, this proposal has the advantage of being tractable and realistic; I cannot emphasise enough how little appetite there is for a “GObject 3.0” in the ecosystem. The recent API bump from libsoup2 to libsoup3 has clearly identified that changes deep into the stack end up being too costly an effort: some projects have found it easier to switch to another HTTP library altogether, rather than support two versions of libsoup for a while; other projects have decided to drop compatibility with libsoup2, forcing the hand of every reverse dependency both upstream and downstream. Breaking GObject would end up breaking the ecosystem, with the hope of a “perfect” implementation way down the line and with very few users on one side, and a dead branch used by everybody else on the other.

Of course, the complexity of the change is not going to be trivial, and it will impact things like the introspection metadata and the various language bindings that exist today; some bindings may even require a complete redesign. Nevertheless, by implementing this new object model and leaving GObject alone, we buy ourselves enough time and space to port our software development platform towards a different future.

Maybe this way we will get to save the Shire; and even if we give up some things, or even lose them, we still get to keep what matters.

by ebassi at August 23, 2023 08:23 PM

May 31, 2023

Emmanuele Bassi

Constraints editing

Last year I talked about the newly added support for Apple’s Visual Format Language in Emeus, which allows you to quickly describe layouts using a cross between ASCII art and predicates. For instance, I can use:

H:|-[icon(==256)]-[name_label]-|
H:[surname_label]-|
H:[email_label]-|
H:|-[button(<=icon)]
V:|-[icon(==256)]
V:|-[name_label]-[surname_label]-[email_label]-|
V:[button]-|

and obtain a layout like this one:

Boxes approximate widgets

Thanks to the contribution of my colleague Martin Abente Lahaye, Emeus now supports extensions to the VFL, namely:

  • arithmetic operators for constant and multiplication factors inside predicates, like [button1(button2 * 2 + 16)]
  • explicit attribute references, like [button1(button1.height / 2)]

This allows more expressive layout descriptions, like keeping aspect ratios between UI elements, without having to touch the code base.

Of course, editing VFL descriptions blindly is not what I consider a fun activity, so I took some time to write a simple, primitive editing tool that lets you visualize a layout expressed through VFL constraints:

I warned you that it was primitive and simple

Here’s a couple of videos showing it in action:

At some point, this could lead to a new UI tool to lay out widgets inside Builder and/or Glade.

As of now, I consider Emeus in a stable enough state for other people to experiment with it — I’ll probably make a release soon-ish. The Emeus website is up to date, as is the API reference, and I’m happy to review pull requests and feature requests.

by ebassi at May 31, 2023 02:36 PM

May 30, 2023

Emmanuele Bassi

Configuring portals

One of the things I’ve been recently working on at Igalia is the desktop portals implementation, the middleware layer of API for application and toolkit developers that allows sandboxed applications to interact with the host system. Sandboxing technologies like Flatpak and Snap expose the portal D-Bus interfaces inside the sandbox they manage, to handle user-mediated interactions like opening a file that exists outside of the locations available to the sandboxed process, or talking to privileged components like the compositor to obtain a screenshot.

Outside of allowing dynamic permissions for sandboxed applications, portals act as a vendor-neutral API for applications to target when dealing with Linux as an OS; this is mostly helpful for commercial applications that are not tied to a specific desktop environment, but don’t want to re-implement the layer of system integration from the first principles of POSIX primitives.

The architecture of desktop portals has been described pretty well in a blog post by Peter Hutterer, but to recap:

  • desktop portals are a series of D-Bus interfaces
  • toolkits and applications call methods on those D-Bus interfaces
  • there is a user session daemon called xdg-desktop-portal that provides a service for the D-Bus interfaces
  • xdg-desktop-portal implements some of those interfaces directly
  • for the interfaces that involve user interaction, or interaction with desktop-specific services, we have separate services that are proxied by xdg-desktop-portal; GNOME has xdg-desktop-portal-gnome, KDE has xdg-desktop-portal-kde; Sway and wlroots-based compositors have xdg-desktop-portal-wlr; and so on, and so forth

There’s also xdg-desktop-portal-gtk, which acts a bit as a reference portal implementation, and a shared desktop portal implementation for a lot of GTK-based environments. Ideally, every desktop environment should have their own desktop portal implementation, so that applications using the portal API can be fully integrated with each desktop’s interface guidelines and specialised services.

One thing that is currently messy is the mechanism by which xdg-desktop-portal finds the portal implementations available on the system, and decides which implementation should be used for a specific interface.

Up until the current stable version of xdg-desktop-portal, the configuration worked this way:

  1. each portal implementation (xdg-desktop-portal-gtk, -gnome, -kde, …) ships a ${NAME}.portal file; the file is a simple INI-like desktop entry file (there’s an example after this list) with the following keys:
    • DBusName, which contains the service name of the portal, for instance, org.freedesktop.impl.portal.desktop.gnome for the GNOME portals; this name is used by xdg-desktop-portal to launch the portal implementation
    • Interfaces, which contains a list of D-Bus interfaces under the org.freedesktop.impl.portal.* namespace that are implemented by the desktop-specific portal; xdg-desktop-portal will match the portal implementation with the public facing D-Bus interface internally
    • UseIn, which contains the name of the desktop to be matched with the contents of the $XDG_CURRENT_DESKTOP environment variable
  2. once xdg-desktop-portal starts, it finds all the .portal files in a well-known location and builds a list of portal implementations currently installed in the system, containing all the interfaces they implement as well as their preferred desktop environment
  3. whenever something calls a method on an interface in the org.freedesktop.portal.* namespace, xdg-desktop-portal will check the current desktop using the XDG_CURRENT_DESKTOP environment variable, and look for a portal whose UseIn key matches the current desktop
  4. once there’s a match, xdg-desktop-portal will activate the portal implementation and proxy the calls made on the org.freedesktop.portal interfaces over to the org.freedesktop.impl.portal ones
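
Putting it together, a .portal file looks roughly like this (the interface list here is illustrative, not the complete GNOME one):

[portal]
DBusName=org.freedesktop.impl.portal.desktop.gnome
Interfaces=org.freedesktop.impl.portal.FileChooser;org.freedesktop.impl.portal.Screenshot;
UseIn=gnome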

This works perfectly fine for the average case of a Linux installation with a single session, using a single desktop environment, and a single desktop portal. Where things get messy is the case where you have multiple sessions on the same system, each with its own desktop and portals, or even no portals whatsoever. In a bad scenario, you may get the wrong desktop portal just because the name sorts before the one you’re interested in, so you get the GTK “reference” portals instead of the KDE-specific ones; in the worst case scenario, you may get a stall when launching an application just because the wrong desktop portal is trying to contact a session service that simply does not exist, and you have to wait 30 seconds for a D-Bus timeout.

The problem is that some desktop portal implementations are shared across desktops, or cover only a limited amount of interfaces; a mandatory list of desktop environments is far too coarse a tool to deal with this. Additionally, xdg-desktop-portal has to have enough fallbacks to ensure that, if it cannot find any implementation for the current desktop, it will proxy to the first implementation it can find in order to give a meaningful answer. Finally, since the supported desktops are shipped by the portal themselves, there’s no way to override this information by packagers, admins, or users.

After iterating over the issue, I ended up writing the support for a new configuration file. Instead of having portals say what kind of desktop environment they require, we have desktop environments saying which portal implementations they prefer. Now, each desktop should ship a ${NAME}-portals.conf INI-like desktop entry file listing each interface, and what kind of desktop portal should be used for it; for instance, the GNOME desktop should ship a gnome-portals.conf configuration file that specifies a default for every interface:

[preferred]
default=gnome

On the other hand, you could have a Foo desktop that relies on the GTK portal for everything, except for specific interfaces that are implemented by the “foo” portal:

[preferred]
default=gtk
org.freedesktop.impl.portal.Screenshot=foo
org.freedesktop.impl.portal.Screencast=foo

You could also disable all portals except for a specific interface (and its dependencies):

[preferred]
default=none
org.freedesktop.impl.portal.Account=gtk
org.freedesktop.impl.portal.FileChooser=gtk
org.freedesktop.impl.portal.Lockdown=gtk
org.freedesktop.impl.portal.Settings=gtk

Or, finally, you could disable all portal implementations:

[preferred]
default=none

A nice side effect of this work is that you can configure your own system, by dropping a portals.conf configuration file inside the XDG_CONFIG_HOME/xdg-desktop-portal directory; this should cover all the cases in which people assemble their desktop out of disparate components.
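
For instance, a user running a wlroots-based compositor alongside GTK applications might drop something like this in there (portal names illustrative):

[preferred]
default=gtk
org.freedesktop.impl.portal.Screenshot=wlr
org.freedesktop.impl.portal.Screencast=wlr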

By having desktop environments (or, in a pinch, the user themselves) owning the kind of portals they require, we can avoid messy configurations in the portal implementations, and clarify the intended behaviour to downstream packagers; at the same time, generic portal implementations can be adopted by multiple environments without necessarily having to know which ones upfront.


In a way, the desktop portals project is trying to fulfill the original mission of freedesktop.org’s Cross-desktop Group: a set of API that are not bound to a single environment, and can be used to define “the Linux desktop” as a platform.

Of course, there’s a lot of work involved in creating a vendor-neutral platform API, especially when it comes to designing both the user and the developer experiences; ideally, more people should be involved in this effort, so if you want to contribute to the Linux ecosystem, this is an area where you can make the difference.

by ebassi at May 30, 2023 01:59 PM

May 22, 2023

Emmanuele Bassi

Dream About Flying

as usual — long time, no blog.

my only excuse is that I was busy with other things: new job, new office, holidays… you know, whatever happens between coding. :-)

it’s that time of year again, and we’re nearing another Clutter release — this time it’s a special one, though, as it is 1.0.0, which also means that the API will be frozen for the entire duration of the 1.x branch: only additions and deprecations will be allowed. no worries about stagnation, though — we are already planning for 2.0, even though it’ll take at least a couple of years to get there.

since we’re in the process of finalizing the 1.0 API I thought about writing something about what changed, what was added and what has been removed for good.

let’s start with the Effects API. the Effects were meant to provide a high level API for simple, fire-and-forget animations ((even though people always tried to find new ways to abuse the term “fire-and-forget”)). they were sub-optimal in the memory management — you had to keep around the EffectTemplate, the effects copied the timelines — and they weren’t extensible — writing your own effect would have been impossible without reimplementing the whole machinery. after the experiments done by Øyvind and myself, and after looking at what the high-level languages provided, I implemented a new implicit animation API — all based around a single object, with the most automagic memory management possible:

/* resize the actor in 250 milliseconds using a cubic easing
 * and attach a callback at the end of the animation
 */
 ClutterAnimation *animation =
   clutter_actor_animate (actor, 250, CLUTTER_EASE_IN_CUBIC,
                          "width", 200,
                          "height", 200,
                          "color", &new_color,
                          NULL);
  g_signal_connect (animation, "completed",
                    G_CALLBACK (on_animation_complete),
                    NULL);

this should make a lot of people happy. the easing modes in particular are the same ones shared among various animation frameworks, like tweener and jQuery.

what might make some people slightly less happy is the big API churn that removed both ClutterLabel and ClutterEntry and added ClutterText. the trade-off, though, is clearly in favour of ClutterText, as this is a base class for both editable and non-editable text displays; it supports pointer and keyboard selection, and multi-line as well as single-line editing.

another big change happened in the low level COGL API, with the introduction of vertex buffers — which allow you to efficiently store arrays of vertex attributes; and, more importantly, with the introduction of Materials, which decouple the drawing operations from the fill operations. it also adds support for multi-texturing, colors and other GL features — on both GL and GLES.

Gradients

after unifying Label and Entry, we also decided to unify BehaviourPath and BehaviourBspline; after that we added support for creating paths using SVG-like descriptions and for “replaying” a Path on a cairo_t. well, the Cairo integration is also another feature — clutter-cairo has been deprecated and its functionality moved inside ClutterCairoTexture.

one of the last minute additions has been ClutterClone, an efficient way to clone generic actors without using FBOs — which also supersedes the CloneTexture actor.

the Pango integration has been extended, and the internal Pango API exposed and officially supported — now you can display text using the Pango renderer and glyphs cache inside your own custom actors without using internal/unstable API.

thanks to Johan Dahlin and Owen Taylor, Clutter now generates GObject-Introspection data at compile time, so that runtime language bindings will be ready as soon as 1.0.0 hits the internets.

finally, there’s a ton of bug fixes in how we use GL, how we render text, how we relayout actors, etc.

hope you’ll have fun with Clutter!

by ebassi at May 22, 2023 09:27 AM

quiet strain

as promised during GUADEC, I’m going to blog a bit more about the development of GSK — and now that I have some code, it’s actually easier to do.

so, let’s start from the top, and speak about GDK.

in April 2008 I was in Berlin, enjoying the city, the company, and good food, and incidentally attending the first GTK+ hackfest. those were the days of Project Ridley, and when the plan for GTK+ 3.0 was to release without deprecated symbols and with all the instance structures sealed.

in the long discussions about the issue of a “blessed” canvas library to be used by GTK app developers and by the GNOME project, we ended up discussing the support of the OpenGL API in GDK and GTK+. the original bug had been opened by Owen about 5 years prior, and while we had ancillary libraries like GtkGLExt and GtkGLArea, the integration was a pretty sore point. the consensus at the end of the hackfest was to provide wrappers around the platform-specific bits of OpenGL inside GDK, enough to create a GL context and bind it to a specific GdkWindow, to let people draw with OpenGL commands at the right time in the drawing cycle of GTK+ widgets. the consensus was also that I would look at the bug, as a person that at the time was dealing with OpenGL inside tool kits for his day job.

well, that didn’t really work out because, six years after that hackfest, the bug is still open.

to be fair, the landscape of GTK and GDK has changed a lot since those days. we actually released GTK+ 3.0, and with a lot more features than just deprecations removal; the whole frame cycle is much better, and the paint sequence is reliable and completely different than before. yet, we still have to rely on poorly integrated external libraries to deal with OpenGL.

right after GUADEC, I started hacking on getting the minimal amount of API necessary to create a GL context, and being able to use it to draw on a GTK widget. it turns out that it wasn’t that big of a job to get something on the screen in a semi-reliable way — after all, we already had libraries like GtkGLExt and GtkGLArea living outside of the GTK git repository that did that, even if they had to use deprecated or broken API. the complex part of this work involved being able to draw GL inside the same infrastructure that we currently use for Cairo. we need to be able to synchronise the frame drawing, and we need to be able to blend the contents of the GL area with both content that was drawn before and after, likely with Cairo — otherwise we would not be able to do things like drawing an overlay notification on top of the usual spinning gears, while keeping the background color of the window:

welcome to the world of tomorrow (for values of tomorrow close to 2005)

luckily, thanks to Alex, the amount of changes in the internals of GDK was kept to a minimum, and we can enjoy GL rendering running natively on X11 and Wayland, using GLX or EGL respectively.

on top of the low level API, we have a GtkGLArea widget that renders all the GL commands you submit to it, and it behaves like any other GTK+ widgets.
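
as a taste of the widget-level API, this is roughly what a minimal GtkGLArea user looks like (a sketch against the API in the branch — details may still change):

#include <epoxy/gl.h>
#include <gtk/gtk.h>

static gboolean
on_render (GtkGLArea    *area,
           GdkGLContext *context,
           gpointer      user_data)
{
  /* the GL context has already been made current for us,
   * so we can just issue GL commands
   */
  glClearColor (0.0, 0.0, 0.0, 1.0);
  glClear (GL_COLOR_BUFFER_BIT);

  /* TRUE means we took care of the rendering */
  return TRUE;
}

/* ... inside your window setup code ... */
GtkWidget *gl_area = gtk_gl_area_new ();
g_signal_connect (gl_area, "render", G_CALLBACK (on_render), NULL);
gtk_container_add (GTK_CONTAINER (window), gl_area);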

today, Matthias merged the topic branch into master, which means that, barring disastrous regressions, GTK+ 3.16 will finally have native OpenGL support — and we’ll be one step closer to GSK as well.

right now, there’s still some work to do — namely: examples, performance, documentation, porting to MacOS and Windows — but the API is already fairly solid, so we’d all like to get feedback from the users of libraries like GtkGLExt and GtkGLArea, to see what they need or what we missed. feedback is, as usual, best directed at the gtk-devel mailing list, or on the #gtk+ IRC channel.

by ebassi at May 22, 2023 09:27 AM

Who wrote GTK+ (Reprise)

As I’ve been asked by different people about data from older releases of GTK+, after the previous article on Who Wrote GTK+ 3.18, I ran the git-dm script on every release and generated some more data:

Release Lines added Lines removed Delta Changesets Contributors
2.0 [1] 666495 345348 321147 2503 106
2.2 301943 227762 74181 1026 89
2.4 601707 116402 485305 2118 109
2.6 181478 88050 93428 1421 101
2.8 93734 47609 46125 1155 86
2.10 215734 54757 160977 1614 110
2.12 232831 43172 189659 1966 148
2.14 215151 102888 112263 1952 140
2.16 71335 23272 48063 929 118
2.18 52228 23490 28738 1079 90
2.20 80397 104504 -24107 761 82
2.22 51115 71439 -20324 438 70
2.24 4984 2168 2816 184 37
3.0 [1] 354665 580207 -225542 4792 115
3.2 227778 168616 59162 2435 98
3.4 126934 83313 43621 2201 84
3.6 206620 34965 171655 1011 89
3.8 84693 34826 49867 1105 90
3.10 143711 204684 -60973 1722 111
3.12 86342 54037 32305 1453 92
3.14 130387 144926 -14539 2553 84
3.16 80321 37037 43284 1725 94
3.18* 78997 54614 24383 1638 83

Here you can see the history of the GTK releases, since 2.0.

These numbers are to be taken with a truckload of salt, especially the ones from the 2.x era. During the early 2.x cycle, releases did not follow the GNOME timed release schedule; instead, they were done whenever needed:

Release Date
2.0 March 2002
2.2 December 2002
2.4 March 2004
2.6 December 2004
2.8 August 2005
2.10 July 2006
2.12 September 2007
2.14 September 2008
2.16 March 2009
2.18 September 2009
2.20 March 2010
2.22 September 2010
2.24 January 2011

Starting with 2.14, we settled to the same cycle as GNOME, as it made releasing GNOME and packaging GTK+ on your favourite distribution a lot easier.

This disparity in the length of the development cycles explains why the 2.12 and 2.14 cycles, which lasted a year, represent an anomaly in terms of contributors (148 and 140, respectively) and in terms of absolute lines changed.

The reduced activity between 2.20 and 2.24.0 is easily attributable to the fact that people were working hard on the 2.90 branch that would become 3.0.

In general, once you adjust by release time, it’s easy to see that the number of contributors is pretty much stable at around 90:

The average is 94.5, which means we have a hobbit somewhere in the commit log

Another interesting data point would be to look at the ecosystem of companies spawned around GTK+ and GNOME, and how it has changed over the years — but that’s part of a larger discussion that would probably take more than a couple of blog posts to unpack.

I guess the larger point is that GTK+ is definitely not dying; it’s pretty much being worked on by the same amount of people — which includes long timers as well as newcomers — as it was during the 2.x cycle.


  1. Both 2.0 and 3.0 are not wholly accurate; I used, as a starting point for the changeset period, the previous released branch point; for GTK+ 2.0, I started from the GTK_1_3_1 tag, whereas for GTK+ 3.0 I used the 2.90.0 tag. There are commits preceding both tags, but not enough to skew the results. 

by ebassi at May 22, 2023 09:26 AM

GSK Demystified (II) — Rendering

See the previous article for an introduction to GSK.


In order to render with GSK we need to get acquainted with two classes:

  • GskRenderNode, a single element in the rendering tree
  • GskRenderer, the object that effectively turns the rendering tree into rendering commands

GskRenderNode

The usual way to put things on the screen involves asking the windowing system to give us a memory region, filling it with something, and then asking the windowing system to present it to the graphics hardware, in the hope that everything ends up on the display. This is pretty much how every windowing system works. The only difference lies in that “filling it with something”.

With Cairo you get a surface that represents that memory region, and a (stateful) drawing context; every time you need to draw you set up your state and emit a series of commands. This happens on every frame, starting from the top level window down into every leaf object. At the end of the frame, the content of the window is swapped with the content of the buffer. Every frame is drawn while we’re traversing the widget tree, and we have no control on the rendering outside of the state of the drawing context.

A tree of GTK widgets

With GSK we change this process with a small layer of indirection; every widget, from the top level to the leaves, creates a series of render nodes, small objects that each hold the drawing state for their contents. Each node is, at its simplest, a collection of:

  • a rectangle, representing the region used to draw the contents
  • a transformation matrix, representing the parent-relative set of transformations applied to the contents when drawing
  • the contents of the node

Every frame, thus, is composed of a tree of render nodes.

A tree of GTK widgets and GSK render nodes

The important thing is that the render tree does not draw anything; it describes what to draw (which can be a rasterization generated using Cairo) and how and where to draw it. The actual drawing is deferred to the GskRenderer instance, and will happen only once the tree has been built.

After the rendering is complete we can discard the render tree. Since the rendering is decoupled from the widget state, the widgets will hold all the state across frames — as they already do. Each GskRenderNode instance is, thus, a very simple instance type instead of a full GObject, whose lifetime is determined by the renderer.

GskRenderer

The renderer is the object that turns a render tree into the actual draw commands. At its most basic, it’s a simple compositor, taking the content of each node and its state and blending it on a rendering surface, which then gets pushed to the windowing system. In practice, it’s a tad more complicated than that.

Each top-level has its own renderer instance, as it requires access to windowing system resources, like a GL context. When the frame is started, the renderer will take a render tree and a drawing context, and will proceed to traverse the render tree in order to translate it into actual render commands.

As we want to offload the rendering and blending to the GPU, the GskRenderer instance you’ll most likely get is one that uses OpenGL to perform the rendering. The GL renderer will take the render tree and convert it into a (mostly flat) list of data structures that represent the state to be pushed on the state machine — the blending mode, the shading program, the textures to sample, and the vertex buffer objects and attributes that describe the rendering. This “translation” stage allows the renderer to decide which render nodes should be used and which should be discarded; it also allows us to create, or recycle, all the needed resources when the frame starts, and minimize the state transitions when doing the actual rendering.

Going from here to there

Widgets provided by GTK will automatically start using render nodes instead of rendering directly to a Cairo context.

There are various fallback code paths in place in the existing code, which means that, luckily, we don’t have to break any existing out of tree widget: they will simply draw themselves (and their children) on an implicit render node. If you want to port your custom widgets or containers, on the other hand, you’ll have to remove the GtkWidget::draw virtual function implementation or signal handler you use, and override the GtkWidget::get_render_node() virtual function instead.

Containers simply need to create a render node for their own background, border, or custom drawing; then they will have to retrieve the render node for each of their children. We’ll provide convenience API for that, so the chances of getting something wrong will be, hopefully, reduced to zero.

Leaf widgets can remain unported a bit longer, unless they are composed of multiple rendering elements, in which case they simply need to create a new render node for each element.

I’ll provide more examples of porting widgets in a later article, as soon as the API has stabilized.

by ebassi at May 22, 2023 09:26 AM

GUADEC

Speaking at GUADEC 2016

I’m going to talk about the evolution of GTK+ rendering, from its humble origins of X11 graphics contexts, to Cairo, to GSK. If you are interested in this kind of stuff, you can either attend my presentation on Saturday at 11 in the Grace Room, or you can just find me and have a chat.

I’m also going to stick around during the BoF days — especially for the usual GTK+ team meeting, which will be on the 15th.

See you all in Karlsruhe.

by ebassi at May 22, 2023 09:26 AM

April 22, 2023

Iain Holmes

Update15

Been slowly working on Fieldwork while other things have been going on. Spent a lot of time watching SwiftUI videos so I could do SwiftUI “the right way” and I think I’ve figured it out.

I’m not convinced SwiftUI works well with the MVVM pattern that people want to push on it, at least not the standard way of thinking about MVVM. I see people creating ViewModel classes for each of their views then setting values on it in .onAppear or some similar View method, or the parent class creates a ViewModel class, populates it and then passes it to the View, but then that makes @State awkward because who really owns that object?

But it also doesn’t feel like it works well with the other way people seem to be using it, having one monolithic huge “ViewModel” object that gets passed into every SwiftUI view, because that’s not really a ViewModel, that’s just a global model that we would have worked with in the old C days.

Among other problems I’ve had with using the above patterns is that they make SwiftUI’s previews really, really awkward to develop.

One thing I read on Mastodon from someone is that in SwiftUI the Views themselves are the “view model”, and that makes more sense to me.

So I’ve rewritten a lot of Fieldwork to do away with ViewModels, and instead each class only gets passed in the simple types that it needs.

As an example the InfoView displays the name of the file, the length of the sample, the bitrate, number of channels, etc. Metadata type stuff. Previously it took a Recording class as a parameter, and did some digging through to find the right metadata: the name came from the Recording.metadata, the bitrate and number of frames from Recording.sample.bitrate etc.

This meant to create a preview for this, I needed to create a Recording for the preview, which meant setting up the RecordingService, the FileService and faking lots of things that the preview didn’t need to care about.

The preview doesn’t need to know about Recording class, it just wanted a few strings to display. It doesn’t even need to care if the bitrate is a number, it just wants a string to display.

So now the InfoView just takes 4 string parameters when it is created and it becomes its own ViewModel.

Swinging this back to the source of truth, the source of truth is at the highest level it needs to be. There is a large monolithic class at the very top holding things that views need to know about, but anything that’s needed further down the view hierarchy is just passed in as either a simple type parameter, or a binding to the simple type parameter. No view needs to know about the large monolithic class holding everything together.

And now Fieldwork is fairly easy to understand, and every view can be previewed fairly easily

April 22, 2023 12:00 AM

March 06, 2023

Ross Burton

Building a big-endian Arm system in Yocto

For reasons I won't bore anyone with I needed to build a 32-bit big-endian system with the Yocto Project to test a package, and I thought I'd write the steps down in case I ever need to do it again (or, even more unlikely, someone else needs to do it).

For unsurprising reasons I thought I'd do a big-endian Arm build. So we start by picking the qemuarm machine, which is an Armv7-A processor (Cortex-A15, specifically) in little-endian mode by default.

MACHINE = "qemuarm"

qemuarm.conf requires tune-cortexa15.inc, which then requires arch-armv7ve.inc, and this file defines the base tunes. The default tune is armv7ve; we can make it big-endian by simply adding a b:

DEFAULTTUNE:qemuarm = "armv7veb"

And now we just build an image:

$ MACHINE=qemuarm bitbake core-image-minimal
...
Summary: 4 tasks failed:
  .../poky/meta/recipes-kernel/linux/linux-yocto_6.1.bb:do_package_qa
  .../poky/meta/recipes-graphics/xorg-proto/xorgproto_2022.2.bb:do_configure
  .../poky/meta/recipes-core/glib-2.0/glib-2.0_2.74.5.bb:do_configure
  .../poky/meta/recipes-graphics/wayland/wayland_1.21.0.bb:do_configure

Or not.

There are two failure cases here. First, the kernel:

ERROR: linux-yocto-6.1.9+gitAUTOINC+d7393c5752_ccd3b20fb5-r0 do_package_qa: QA Issue: Endiannes did not match (1, expected 0) in /lib/modules/6.1.9-yocto-standard/kernel/net/ipv4/ah4.ko [arch]

It turns out the kernel needs to be configured specifically to be big or little endian, and the default configuration is, predictably, little endian. There is a bug open to make this automatic but, given that it has been open since 2016, big-endian really is dead. The solution is a quick kernel configuration fragment added to the kernel's SRC_URI:

CONFIG_CPU_BIG_ENDIAN=y
CONFIG_CPU_LITTLE_ENDIAN=n
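
The fragment lives in a .cfg file that gets added to the kernel recipe's SRC_URI; a sketch of the bbappend (file names assumed):

# linux-yocto_%.bbappend
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://big-endian.cfg"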

With this, the kernel builds as expected. The second set of failures are all from Meson, failing to execute a target binary:

../xorgproto-2022.2/meson.build:22:0: ERROR: Executables created by c compiler armeb-poky-linux-gnueabi-gcc [...] are not runnable.

Meson is trying to run the target binaries in a qemu-user that we set up, but the problem here is to save build time we only build the qemu targets that are typically used. This doesn't include usermode big-endian 32-bit Arm, so this target needs enabling:

QEMU_TARGETS:append = " armeb"

Now the image builds successfully, and we discover that indeed gdbm refuses to open a database which was generated on a system with a different endian.

by Ross Burton at March 06, 2023 03:24 PM

February 22, 2023

Emmanuele Bassi

Writing Bindable API, 2023 Edition

First of all, you should go on the gobject-introspection website and read the page on how to write bindable API. What I’m going to write here is going to build upon what’s already documented, or will update the best practices, so if you maintain a GObject/C library, or you’re writing one, you must be familiar with the basics of gobject-introspection. It’s 2023: it’s already bad enough that we’re still writing C libraries; we should at the very least be responsible about it.

A specific note for people maintaining an existing GObject/C library with an API designed before the mainstream establishment of gobject-introspection (basically, anything written prior to 2011): you should really consider writing all new types and entry points with gobject-introspection in mind, and you should also consider phasing out older API and replacing it piecemeal with a bindable one. You should have done this 10 years ago, and I can already hear the objections, but: too bad. Just because you made an effort 10 years ago it doesn’t mean things are frozen in time, and you don’t get to fix things. Maintenance means constantly tending to your code, and that doubly applies if you’re exposing an API to other people.


Let’s take the “how to write bindable API” recommendations, and elaborate them a bit.

Structures with custom memory management

The recommendation is to use GBoxed as a way to specify a copy and a free function, in order to clearly define the memory management semantics of a type.

The important caveat is that boxed types are necessary for:

  • opaque types that can only be heap allocated
  • using a type as a GObject property
  • using a type as an argument or return value for a GObject signal

You don’t need a boxed type for the following cases:

  • your type is an argument or return value for a method, function, or virtual function
  • your type can be placed on the stack, or can be allocated with malloc()/free()

Additionally, starting with gobject-introspection 1.76, you can specify the copy and free function of a type without necessarily registering a boxed type, which leaves boxed types for the things they were created for: signals and properties.

Addendum: object types

Boxed types should only ever be used for plain old data types; if you need inheritance, then the strong recommendation is to use GObject. You can use GTypeInstance, but only if you know what you’re doing; for more information on that, see my old blog post about typed instances.

Functionality only accessible through a C macro

This ought to be fairly uncontroversial. C pre-processor symbols don’t exist at the ABI level, and gobject-introspection is a mechanism to describe a C ABI. Never, ever expose API only through C macros; those are for C developers. C macros can be used to create convenience wrappers, but remember that anything they call must be public API, and that other people will need to re-implement the convenience wrappers themselves, so don’t overdo it. C developers deserve some convenience, but not at the expense of everyone else.

Addendum: inline functions

Static inline functions are also not part of the introspectable ABI of a library, because they cannot be used with dlsym(); you can provide inlined functions for performance reasons, but remember to always provide their non-inlined equivalent.

Direct C structure access for objects

Again, another fairly uncontroversial rule. You shouldn’t be exposing anything inside an instance structure, as doing so makes your API harder to future-proof, and direct access cannot do things like change notification, or memoization.

Always provide accessor functions.
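
As a minimal sketch of what an accessor buys you over direct field access — argument validation and change notification — assuming a hypothetical SomeObject type with a name field:

void
some_object_set_name (SomeObject *self,
                      const char *name)
{
  g_return_if_fail (SOME_IS_OBJECT (self));

  /* no-op if nothing changed */
  if (g_strcmp0 (self->name, name) == 0)
    return;

  g_free (self->name);
  self->name = g_strdup (name);

  /* direct structure access could never notify */
  g_object_notify (G_OBJECT (self), "name");
}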

va_list

Variadic argument functions are mainly a C convenience. Yes, some languages can support them, but it’s a bad idea to have this kind of API exposed as the only way to do things.

Any variadic argument function should have two additional variants:

  • a vector based version, using C arrays (zero terminated, or with an explicit length)
  • a va_list version, to be used when creating wrappers with variadic arguments themselves

The va_list variant is kind of optional, since not many people go around writing variadic argument C wrappers, these days, but at the end of the day you might be going to write an internal function that takes a va_list anyway, so it’s not particularly strange to expose it as part of your public API.

The vector-based variant, on the other hand, is fundamental.

Incidentally, if you’re using variadic arguments as a way to collect similarly typed values, e.g.:

// void
// some_object_method (SomeObject *self,
//                     ...) G_GNUC_NULL_TERMINATED

some_object_method (obj, "foo", "bar", "baz", NULL);

there’s very little difference to using a vector and C99’s compound literals:

// void
// some_object_method (SomeObject *self,
//                     const char *args[])

some_object_method (obj, (const char *[]) {
                      "foo",
                      "bar",
                      "baz",
                      NULL,
                    });

Except that now the compiler will be able to do some basic type check and scream at you if you’re doing something egregiously bad.

Compound literals and designated initialisers also help when dealing with key/value pairs:

typedef struct {
  int column;
  union {
    const char *v_str;
    int v_int;
  } value;
} ColumnValue;

enum {
  COLUMN_NAME,
  COLUMN_AGE,
  N_COLUMNS
};

// void
// some_object_method (SomeObject *self,
//                     size_t n_columns,
//                     const ColumnValue values[])

some_object_method (obj, 2,
  (ColumnValue []) {
    { .column = COLUMN_NAME, .value = { .v_str = "Emmanuele" } },
    { .column = COLUMN_AGE, .value = { .v_int = 42 } },
  });

So you should seriously reconsider the amount of variadic arguments convenience functions you expose.

Multiple out parameters

Using a structured type with an out direction is a good recommendation as a way to both limit the amount of out arguments and provide some future-proofing for your API. It’s easy to expand an opaque pointer type with accessors, whereas adding more out arguments requires an ABI break.

Addendum: inout arguments

Don’t use in-out arguments. Just don’t.

Pass an in argument to the callable for its input, and take an out argument or a return value for the output.
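
In other words, a hypothetical function that modified a string in place is better split into a plain input and a returned output:

// before: ownership of *str is ambiguous in both directions
void some_object_normalize (SomeObject *self, char **str);

// after: the input is borrowed, the output is a new string
// owned by the caller
char *some_object_normalize (SomeObject *self, const char *str);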

Memory management and ownership of inout arguments is incredibly hard to capture with static annotations; it mainly works for scalar values, so:

void
some_object_update_matrix (SomeObject *self,
                           double *xx,
                           double *yy,
                           double *xy,
                           double *yx)

can work with xx, yy, xy, yx as inout arguments, because there’s no ownership transfer; but as soon as you start throwing things in like pointers to structures, or vectors of string, you open yourself to questions like:

  • who allocates the argument when it goes in?
  • who is responsible for freeing the argument when it comes out?
  • what happens if the function frees the argument in the in direction and then re-allocates the out?
  • what happens if the function uses a different allocator than the one used by the caller?
  • what happens if the function has to allocate more memory?
  • what happens if the function modifies the argument and frees memory?

Even if gobject-introspection nailed down the rules, they could not be enforced, or validated, and could lead to leaks or, worse, crashes.

So, once again: don’t use inout arguments. If your API already exposes inout arguments, especially for non-scalar types, consider deprecations and adding new entry points.

Addendum: GValue

Sadly, GValue is one of the most notable cases of inout abuse. The oldest parts of the GNOME stack use GValue in a way that requires inout annotations because they expect the caller to:

  • initialise a GValue with the desired type
  • pass the address of the value
  • let the function fill the value

The caller is then left with calling g_value_unset() in order to free the resources associated with a GValue. This means that you’re passing an initialised value to a callable, the callable will do something to it (which may or may not even entail re-allocating the value) and then you’re going to get it back at the same address.
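
The classic caller-initialises pattern looks like this (using g_object_get_property() as the example):

GValue value = G_VALUE_INIT;

/* the caller picks the type and initialises the value... */
g_value_init (&value, G_TYPE_STRING);

/* ...the callable fills in the value at the same address... */
g_object_get_property (obj, "name", &value);

g_print ("name: %s\n", g_value_get_string (&value));

/* ...and the caller releases the resources at the end */
g_value_unset (&value);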

It would be a lot easier if the API left the job of initialising the GValue to the callee; then functions could annotate the GValue argument with out and caller-allocates=1. This would leave the ownership to the caller, and remove a whole lot of uncertainty.

Various new (comparatively speaking) API allow the caller to pass an uninitialised GValue, and will leave initialisation to the callee, which is how it should be, but this kind of change isn’t always possible in a backward compatible way.

Arrays

You can use three types of C arrays in your API (each sketched after this list):

  • zero-terminated arrays, which are the easiest to use, especially for pointers and strings
  • fixed-size arrays
  • arrays with length arguments
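
A sketch of what each looks like at the C level, with hypothetical functions and the matching introspection annotation in the comments:

// tags: (array zero-terminated=1)
void some_object_set_tags (SomeObject *self,
                           const char * const *tags);

// margins: (array fixed-size=4)
void some_object_set_margins (SomeObject *self,
                              const int margins[4]);

// values: (array length=n_values)
void some_object_set_values (SomeObject *self,
                             const double *values,
                             size_t n_values);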

Addendum: strings and byte arrays

A const char* argument for C strings with a length argument is not an array:

/**
 * some_object_load_data:
 * @self: ...
 * @str: the data to load
 * @len: length of @str in bytes, or -1
 *
 * ...
 */
void
some_object_load_data (SomeObject *self,
                       const char *str,
                       ssize_t len)

Never annotate the str argument with array length=len. Ideally, this kind of function should not exist in the first place. You should always use const char* for NUL-terminated strings, possibly UTF-8 encoded; if you allow embedded NUL characters then use a byte array:

/**
 * some_object_load_data:
 * @self: ...
 * @data: (array length=len) (element-type uint8): the data to load
 * @len: the length of the data in bytes
 *
 * ...
 */
void
some_object_load_data (SomeObject *self,
                       const unsigned char *data,
                       size_t len)

Instead of unsigned char you can also use uint8_t, just to drive the point home.

Yes, it’s slightly nicer to have a single entry point for strings and byte arrays, but that’s just a C convenience: decent languages will have a proper string type, which always comes with a length; and string types are not binary data.

Addendum: GArray, GPtrArray, GByteArray

Whatever you do, however low you feel on the day, whatever particular tragedy befell your family at some point, please: never use GLib array types in your API. Nothing good will ever come of it, and you’ll just spend your days regretting this choice.

Yes: gobject-introspection transparently converts between GLib array types and C types, to the point of allowing you to annotate the contents of the array. The problem is that that information is static, and only exists at the introspection level. There’s nothing that prevents you from putting other random data into a GPtrArray, as long as it’s pointer-sized. There’s nothing that prevents a version of a library from saying that you own the data inside a GArray, and have the next version assign a clear function to the array to avoid leaking it all over the place on error conditions, or when using g_autoptr.

Adding support for GLib array types in the introspection was a well-intentioned mistake that worked in very specific cases—for instance, in a library that is private to an application. Any well-behaved, well-designed general purpose library should not expose this kind of API to its consumers.

You should use GArray, GPtrArray, and GByteArray internally; they are good types, and remove a lot of the pain of dealing with C arrays. Those types should never be exposed at the API boundary: always convert them to C arrays, or wrap them into your own data types, with proper argument validation and ownership rules.

Addendum: GHashTable

What’s worse than a type that contains data with unclear ownership rules decided at run time? A type that contains twice the amount of data with unclear ownership rules decided at run time.

Just like the GLib array types, hash tables should be used but never directly exposed to consumers of an API.

Addendum: GList, GSList, GQueue

See above, re: pain and misery. On top of that, linked lists are a terrible data type that people should rarely, if ever, use in the first place.

Callbacks

Your callbacks should always be in the form of a simple callable with a data argument:

typedef void (* SomeCallback) (SomeObject *obj,
                               gpointer data);

Any function that takes a callback should also take a “user data” argument that will be passed as is to the callback:

// scope: call; the callback data is valid until the
// function returns
void
some_object_do_stuff_immediately (SomeObject *self,
                                  SomeCallback callback,
                                  gpointer data);

// scope: notify; the callback data is valid until the
// notify function gets called
void
some_object_do_stuff_with_a_delay (SomeObject *self,
                                   SomeCallback callback,
                                   gpointer data,
                                   GDestroyNotify notify);

// scope: async; the callback data is valid until the async
// callback is called
void
some_object_do_stuff_but_async (SomeObject *self,
                                GCancellable *cancellable,
                                GAsyncReadyCallback callback,
                                gpointer data);

// not pictured here: scope forever; the data is valid for
// the entirety of the process lifetime

If your function takes more than one callback argument, you should make sure that it also takes a different user data for each callback, and that the lifetime of the callbacks are well defined. The alternative is to use GClosure instead of a simple C function pointer—but that comes at a cost of GValue marshalling, so the recommendation is to stick with one callback per function.

Addendum: the closure annotation

It seems that many people are unclear about the closure annotation.

Whenever you’re describing a function that takes a callback, you should always annotate the callback argument with the argument that contains the user data using the (closure argument) annotation, e.g.

/**
 * some_object_do_stuff_immediately:
 * @self: ...
 * @callback: (scope call) (closure data): the callback
 * @data: the data to be passed to the @callback
 *
 * ...
 */

You should not annotate the data argument with a unary (closure).

The unary (closure) is meant to be used when annotating the callback type:

/**
 * SomeCallback:
 * @self: ...
 * @data: (closure): ...
 *
 * ...
 */
typedef void (* SomeCallback) (SomeObject *self,
                               gpointer data);

Yes, it’s confusing, I know.

Sadly, the introspection parser isn’t very clear about this, but in the future it will emit a warning if it finds a unary closure on anything that isn’t a callback type.

Ideally, you don’t really need to annotate anything when you call your argument user_data, but it does not hurt to be explicit.


A cleaned up version of this blog post will go up on the gobject-introspection website, and we should really have a proper set of best API design practices on the Developer Documentation website by now; nevertheless, I do hope people will actually follow these recommendations at some point, and that they will be prepared for new recommendations in the future. Only dead and unmaintained projects don’t change, after all, and I expect the GNOME stack to last a bit longer than the 25 years it already spans today.

by ebassi at February 22, 2023 01:09 PM

February 20, 2023

Emmanuele Bassi

High Leap

I’ve been working at Endless for two years, now.

I’m incredibly lucky to be working at a great company, with great colleagues, on cool projects, using technologies I love, towards a goal I care deeply about.

We’ve been operating a bit under the radar for a while, but now it’s time to unveil what we’ve been doing — and we’re doing it via a Kickstarter campaign:

The computer for the entire world

The OS for the entire world

Endless

It’s been an honour and a privilege working on this little, huge project for the past two years, and I can’t wait to see what another two years are going to bring us.

by ebassi at February 20, 2023 12:38 AM

Constraints

GUI toolkits have different ways to lay out the elements that compose an application’s UI. You can go from the fixed layout management — somewhat best represented by the old ‘90s Visual tools from Microsoft; to the “springs and struts” model employed by the Apple toolkits until recently; to the “boxes inside boxes inside boxes” model that GTK+ uses to this day. All of these layout policies have their own distinct pros and cons, and it’s not unreasonable to find that many toolkits provide support for more than one policy, in order to cater to more use cases.

For instance, while GTK+ user interfaces are mostly built using nested boxes to control margins, spacing, and alignment of widgets, there’s a sizeable portion of GTK+ developers that end up using GtkFixed or GtkLayout containers because they need fixed positioning of child widgets — until they regret it, because now they have to handle things like reflowing, flipping contents in right-to-left locales, or font size changes.

Additionally, most UI designers do not tend to “think with boxes”, unless it’s for Web pages, and even in that case CSS affords a certain freedom that cannot be replicated in a GUI toolkit. This usually results in engineers translating a UI specification made of ties and relations between UI elements into something that can be expressed with a pile of grids, boxes, bins, and stacks — with all the back and forth, validation, and resources that the translation entails.

It would certainly be easier if we could express a GUI layout in the same set of relationships that can be traced on a piece of paper, a UI design tool, or a design document:

  • this label is at 8px from the leading edge of the box
  • this entry is on the same horizontal line as the label, its leading edge at 12px from the trailing edge of the label
  • the entry has a minimum size of 250px, but can grow to fill the available space
  • there’s a 90px button that sits between the trailing edge of the entry and the trailing edge of the box, with 8px between either edge and itself
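Written down as relations between layout attributes, that spec is a small system of linear constraints, which is exactly what a solver can consume (the variable names here are illustrative):

label.start   = box.start + 8
entry.start   = label.end + 12
entry.width  >= 250
button.width  = 90
button.start  = entry.end + 8
box.end       = button.end + 8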

Sure, all of these constraints can be replaced by a couple of boxes; some packing properties; margins; and minimum preferred sizes. If the design changes, though, like it often does, reconstructing the UI can become arbitrarily hard. This, in turn, leads to pushback to design changes from engineers — and the cost of iterating over a GUI is compounded by technical inertia.

For my daily work at Endless I’ve been interacting with our design team for a while, and trying to get from design specs to applications more quickly, and with less inertia. Having CSS available allowed designers to be more involved in the iterative development process, but the CSS subset that GTK+ implements is not allowed — for eminently good reasons — to change the UI layout. We could go “full Web”, but that comes with a very large set of drawbacks — performance on low end desktop devices, distribution, interaction with system services being just the most glaring ones. A native toolkit is still the preferred target for our platform, so I started looking at ways to improve the lives of UI designers with the tools at our disposal.

Expressing layout through easier to understand relationships between its parts is not a new problem, and as such it does not have new solutions; other platforms, like the Apple operating systems, or Google’s Android, have started to provide this kind of functionality — mostly available through their own IDE and UI building tools, but also available programmatically. It’s even available for platforms like the Web.

What many of these solutions seem to have in common is using more or less the same solving algorithm — Cassowary.

Cassowary is:

an incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities. Constraints may be either requirements or preferences. Client code specifies the constraints to be maintained, and the solver updates the constrained variables to have values that satisfy the constraints.

This makes it particularly suited for user interfaces.

The original implementation of Cassowary was written in 1998, in Java, C++, and Smalltalk; since then, various other re-implementations surfaced: Python, JavaScript, Haskell, slightly-more-modern-C++, etc.

To that collection, I’ve now added my own — written in C/GObject — called Emeus, which provides a GTK+ container and layout manager that uses the Cassowary constraint solving algorithm to compute the allocation of each child.

In spirit, the implementation is pretty simple: you create a new EmeusConstraintLayout widget instance, add a bunch of widgets to it, and then use EmeusConstraint objects to determine the relations between children of the layout:

simple-grid.js (excerpt):
        let button1 = new Gtk.Button({ label: 'Child 1' });
        this._layout.pack(button1, 'child1');
        button1.show();

        let button2 = new Gtk.Button({ label: 'Child 2' });
        this._layout.pack(button2, 'child2');
        button2.show();

        let button3 = new Gtk.Button({ label: 'Child 3' });
        this._layout.pack(button3, 'child3');
        button3.show();

        this._layout.add_constraints([
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.WIDTH,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.WIDTH }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   constant: -8.0 }),
        ]);

A simple grid

This obviously looks like a ton of code, which is why I added the ability to describe constraints inside GtkBuilder XML:

centered.ui (excerpt):
            <constraints>
              <constraint target-object="button_child"
                          target-attr="center-x"
                          relation="eq"
                          source-object="super"
                          source-attr="center-x"
                          strength="required"/>
              <constraint target-object="button_child"
                          target-attr="EMEUS_CONSTRAINT_ATTRIBUTE_CENTER_Y"
                          relation="eq"
                          source-object="super"
                          source-attr="center-y"/>
              <constraint target-object="button_child"
                          target-attr="width"
                          relation="ge"
                          constant="200"
                          strength="EMEUS_CONSTRAINT_STRENGTH_STRONG"/>
            </constraints>

Additionally, I’m writing a small parser for the Visual Format Language used by Apple for their own auto layout implementation — even though it does look like ASCII art of Perl format strings, it’s easy to grasp.
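For reference, the horizontal row from the example earlier in this post would come out roughly like this in Apple’s VFL (a sketch of the syntax, not of Emeus’s still-in-progress parser):

H:|-8-[label]-12-[entry(>=250)]-8-[button(==90)]-8-|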

The overall idea is to prototype UIs on top of this, and then take advantage of GTK+’s new development cycle to introduce something like this and see if we can get people to migrate from GtkFixed/GtkLayout.

by ebassi at February 20, 2023 12:37 AM

Recipes hackfest

The Recipes application started as a celebration of GNOME’s community and history, and it’s grown to be a great showcase for what GNOME is about:

  • design guidelines and attention to detail
  • a software development platform for modern applications
  • new technologies, strongly integrated with the OS
  • people-centered development

Additionally, Recipes has become a place to iterate on design and technology for the rest of the GNOME applications.

Nevertheless, while design patterns, toolkit features, Flatpak, and portals are all part of the development experience, without content provided by the people using Recipes there would not be an application to begin with.

If we look at the work Endless has been doing on its own framework for content-driven applications, there’s a natural fit — which is why I was really happy to attend the Recipes hackfest in Yogyakarta, this week.

Fried Javanese noodles make a healthy breakfast

In the Endless framework we take structured data — like a web page, or a PDF document, or a mix of video and text — and we construct “shards”, which embed both the content, its metadata, and a Xapian database that can be used for querying the data. We take the shards and distribute them through Flatpak as a runtime extension for our applications, which means we can take advantage of Flatpak for shipping updates efficiently.

During the hackfest we talked about how to take advantage of the data model Endless applications use, as well as its distribution model; instead of taking tarballs with the recipe text, the images, and the metadata attached to each, we can create shards that can be mapped to a custom data model. Additionally, we can generate those shards locally when exporting the recipes created by new chefs, and easily re-integrate them with the shared recipe shards — with the possibility, in the future, to have a whole web application that lets you submit new recipes, and the maintainers review them without necessarily going through Matthias’s email. 😉

The data model discussion segued into how to display that data. The Endless framework has the concept of cards, which are context-aware data views; depending on context, they can have more or less details exposed to the user — and all those details are populated from the data model itself. Recipes has custom widgets that do a very similar job, so we talked about how to create a shared layer that can be reused both by Endless applications and by GNOME applications.

Sadly, I don’t remember the name of this soup, only that it had chicken hearts in it, and that Cosimo loved it

At the end of the hackfest we were able to have a proof of concept of Recipes loading the data from a custom shard, and using the Endless framework to display it; translating that into shareable code and libraries that can be used by other projects is the next step of the roadmap.

All of this, of course, will benefit more than just the Recipes application. For instance, we could have a Dictionary application that worked offline, and used Wiktionary as a source, and allowed better queries than just substring matching; we could have applications like Photos and Documents reuse the same UI elements as Recipes for their collection views; Software and Recipes already share a similar “landing page” design (and widgets), which means that Software could also use the “card” UI elements.

There’s lots for everyone to do, but exciting times are ahead!

And after we’re done we can relax by the pool


I’d be remiss if I didn’t thank our hosts at the Amikom university.

Yogyakarta is a great city; I’ve never been in Indonesia before, and I’ve greatly enjoyed my time here. There’s lots to see, and I strongly recommend visiting. I’ve loved the food, and the people’s warmth.

I’d like to thank my employer, Endless, for letting me take some time to attend the hackfest; and the GNOME Foundation, for sponsoring my travel.

The travelling Wilber


Sponsored by the GNOME Foundation

by ebassi at February 20, 2023 12:36 AM

On Vala

It seems I raised a bit of a stink on Twitter last week:

Of course, and with reason, I’ve been called out on this by various people. Luckily, it was on Twitter, so we haven’t seen articles on Slashdot and Phoronix and LWN with headlines like “GNOME developer says Vala is dead and will be removed from all servers for all eternity and you all suck”. At least, I’ve only seen a bunch of comments on Reddit about this, but nobody cares about that particular cesspool of humanity.

Sadly, 140 characters do not leave any room for nuance, so maybe I should probably clarify what I wrote on a venue with no character limit.

First of all, I’d like to apologise to people that felt I was attacking them or their technical choices: it was not my intention, but see above, re: character count. I may have only about 1000 followers on Twitter, but it seems that the network effect is still a bit greater than that, so I should be careful when wording opinions. I’d like to point out that it’s my private Twitter account, and you can only get to what it says if you follow me, or if you follow people who follow me and decide to retweet what I write.

My PSA was intended as a reflection on the state of Vala, and its impact on the GNOME ecosystem in terms of newcomers, from the perspective of a person that used Vala for his own personal projects; recommended Vala to newcomers; and has to deal with the various build issues that arise in GNOME because something broke in Vala or in projects using Vala. If you’re using Vala outside of GNOME, you have two options: either ignore all I’m saying, as it does not really apply to your case; or do a bit of soul searching, and see if what I wrote does indeed apply to you.

First of all, I’d like to qualify my assertion that Vala is a “dead language”. Of course people see activity in the Git repository, see the recent commits and think “the project is still alive”. Recent commits do not tell a complete story.

Let’s look at the project history for the past 10 cycles (roughly 2.5 years). These are the commits for every cycle, broken up into two values: one for the full repository, the other for the whole repository except the vapi directory, which contains the VAPI files for language bindings:

Commits

Aside from the latest cycle, Vala has seen very little activity; the project itself, if we exclude binding updates, has seen less than 100 commits for every cycle — sometimes even far less. The latest cycle is a bit of an outlier, but we can notice a pattern of very little work for two or three cycles, followed by a spike. If we look at the cycle currently in progress, we can already see that the number of commits has decreased back to 55/42, as of this morning.

Commits

Number of commits is just a metric, though; more important is the number of contributors. After all, small, incremental changes may be a good thing in a language — though, spoiler alert: they are usually an indication of a series of larger issues, and we’ll come to that point later.

These are the number of developers over the same range of cycles, again split between committers to the full repository and to the full repository minus the vapi directory:

Developers

As you can see, the number of authors of changes is mostly stable, but still low. If we have few people that actively commit to the repository it means we have few people that can review a patch. It means patches linger longer and longer, while reviewers go through their queues; it means that contributors get discouraged; and, since nobody is paid to work full time on Vala, it means that any interruption caused by paid jobs will be a bottleneck on the project itself.

These concerns are not unique to a programming language: they exist for every volunteer-driven free and open source project. Programming languages, though, like core libraries, are problematic because any bottleneck causes ripple effects. You can take any stalled project you depend on, and vendor it into your own, but if that happens to the programming language you’re using, then you’re pretty much screwed.

For these reasons, we should also look at how well-distributed the workload in Vala is, i.e. what percentage of the work is done by the authors of those commits; the results are not encouraging. Over that range of cycles, only two developers routinely crossed 5% of the commits:

  • Rico Tzschichholz
  • Jürg Billeter

And Rico has been the only one to consistently author >50% of the commits. This means there’s only one person dealing with the project on a day to day basis.

As the maintainer of a project who basically had to do all the work, I cannot even begin to tell you how soul-crushing that can become. You get burned out, and you feel responsible for everyone using your code, and then you get burned out some more. I honestly don’t want Rico to burn out, and you shouldn’t, either.

So, let’s go into unfair territory. These are the commits for Rust — the compiler and standard library:

Rust

These are the commits for Go — the compiler and base library:

Go

These are the commits for Vala — both compiler and bindings:

Vala

These are the number of commits over the past year. Both languages are younger than Vala, have more tools than Vala, and are more used than Vala. Of course, it’s completely unfair to compare them, but those numbers should give you a sense of scale, of what is the current high bar for a successful programming language these days. Vala is a niche language, after all; it’s heavily piggy-backing on the GNOME community because it transpiles to C and needs a standard library and an ecosystem like the one GNOME provides. I never expected Vala to rise to the level of mindshare that Go and Rust currently occupy.

Nevertheless, we need to draw some conclusions about the current state of Vala — starting from this thread, perhaps, as it best encapsulates the issues the project is facing.

Vala, as a project, is limping along. There aren’t enough developers to actively effect change on the project; there aren’t enough developers to work on ancillary tooling — like build system integration, debugging and profiling tools, documentation. Saying that “Vala compiles to C so you can use tools meant for C” is comically missing the point, and it’s effectively like saying that “C compiles to binary code, so you can disassemble a program if you want to debug it”. Being able to inspect the language using tools native to the language is a powerful thing; if you have to do the name mangling in your head in order to set a breakpoint in GDB you are elevating the barrier of contributions way above the head of many newcomers.

Being able to effect change means also being able to introduce change effectively and without fear. This means things like continuous integration and a full test suite heavily geared towards regression testing. The test suite in Vala is made of 210 units, for a total of 5000 lines of code; the code base of Vala (vala AST, codegen, C code emitter, and the compiler) is nearly 75 thousand lines of code. There is no continuous integration, outside of the one that GNOME Continuous performs when building Vala, or the one GNOME developers perform when using jhbuild. Regressions are found after days or weeks, because developers of projects using Vala update their compiler and suddenly their projects cease to build.

I don’t want to minimise the enormous amount of work that every Vala contributor brought to the project; they are heroes, all of them, and they deserve as much credit and praise as we can give. The idea of a project-oriented, community-oriented programming language has been vindicated many times over, in the past 5 years.

If I scared you, or incensed you, then you can still blame me, and my lack of tact. You can still call me an asshole, and you can think that I’m completely uncool. What I do hope, though, is that this blog post pushes you into action. Either to contribute to Vala, or to re-new your commitment to it, so that we can look at my words in 5 years and say “boy, was Emmanuele wrong”; or to look at alternatives, and explore new venues in order to make GNOME (and the larger free software ecosystem) better.

by ebassi at February 20, 2023 12:36 AM

codes of conduct

a discussion on guadec-list about adopting a code of conduct for GUADEC prompted me to write down some thoughts about the issue.

Photo credit: Jonathan Thorne, CC by-nc-2.0

to be perfectly honest, I never thought an anti-harassment policy would be a controversial issue at all in 2014, after the rate of adoption of codes of conduct, and of anti-harassment policies, at conventions and conferences all over the world. there have been high profile cases, and speakers as well as attendees have finally started to stand up, and publicly state that they won’t attend a convention or a conference (even if sponsored) if the organizers do not put in place these kinds of documents.

GNOME, as a community, was if not the first, one of the first high profile free software foundations to define and implement a code of conduct. I’ll actually come back to that later, but that was a point of pride for me.

yet, I have to admit that my heart sank a bit for every email in the discussion on guadec-list, especially because they were from members of our own community.

as I said, GNOME already has a code of conduct, pitiful and neutered as it may be, and it applies to every venue of communication we have: mailing list, our web servers, and also our conferences. to have and implement a code of conduct is not a per-GUADEC-edition, local-team-only decision to take — and how do I know that? because it’s the board that approved the code of conduct for the mailing lists, IRC, and web servers, and it was not the moderation team, or the IRC operators, or the system administrators that took this decision.

I do understand that we like our conferences like we like our software: free-as-in-speech, and interesting to work on. that does not imply that we should just assume bad stuff won’t happen, or that people will automatically find the right person to help them because somebody else decided to be a jerk. to be fair, our software is full of well-defined rules for redistribution, and our conferences should be equally well-defined when it comes to acceptable behaviour and responsible people to contact. why do we do that? because it helps to have a clear set of rules, and people responsible for enforcing them, in case abuse happens.

I honestly have zero patience for the people saying that «everyone can be offended by something, thus we shouldn’t do anything». the usual, trite, argument is that having an anti-harassment policy will provide a “chilling effect” on attendees; it should also not be necessary to have these policies, because we trust our community to be composed of good people, and these policies automatically assume that everyone will be misbehaving. those are both ridiculous positions, even if they are sadly fairly common in the free and open source community at large. they are based on a fairly obvious misunderstanding; the code of conduct is not a sword for preventing people from misbehaving: it’s a shield for people being the object of discrimination and harassment.

Photo credit: Jenn and Tony Bot, CC by-nc-2.0

I am lucky enough, and privileged enough, that I have not been discriminated against for who I am, what I like, who I like, what I do, or how I do it. others are not in such a position, and they attend GUADEC. we want them to attend GUADEC, because they are our next contributor, our next user, our next tester, our next designer, our next bugsquad member, our next person submitting a documentation patch. I don’t want them to be placed in a position where they balk at the idea of participating at GUADEC because they don’t feel safe enough, because they aren’t part of our community yet. you want to talk about a chilling effect? that is the chilling effect. I have no concerns for members of our community: I know that most of them can actually behave like actual human beings in a social context. that knowledge comes from 10+ years in this community. I don’t expect, and I’d be foolish to do so, that new people that have not been at GUADEC yet, or have been newly introduced to our community, also possess that knowledge.

Photo credit: diffendale, CC by-nc-sa 2.0

a final thought: I actually want a better code of conduct for GNOME’s online services as well, one that is clear on responsibility and consequences, because our current one is a defanged travesty, which was implemented to be the lowest common denominator possible. it does not require responsibility for enforcing it, and it does not provide accountability for actually respecting it. it is, for all intents and purposes, like not having one. it was probably thought of as a good compromise eight years ago, but it clearly is not enough any more, and it makes GNOME look bad. changing the code of conduct is a topic for the new board, one that I expect will be handled this year; I’ll make sure to prod them. ;-)


I’d like to thank Marina and Karen for reviewing the draft of this article and for their suggestions.

more information

  1. How will our Code of Conduct improve our harassment handling?
  2. Code of Conduct
  3. Codes of Conduct 101 + FAQ
  4. Conference anti-harassment
  5. My New Convention Harassment Policy
  6. Convention Harassment Policy Follow-Up

by ebassi at February 20, 2023 12:34 AM

Dream Road

Right at the moment I’m writing this blog post, the Endless Kickstarter campaign page looks like this:

With 26 days to spare

I’m incredibly humbled and proud. Thank you all so much for your support and your help in bringing Endless to the world.

The campaign goes on, though; we added various new perks, including:

  • the option to donate an Endless computer to Habitat for Humanity or Funsepa, two charities that are involved in housing and education projects in developing countries
  • the full package — computer, carabiner, mug, and t-shirt; this one ships everywhere in the world, while we’re still working out the kinks of international delivery of the merch

Again, thank you all for your support.

by ebassi at February 20, 2023 12:34 AM

Berlin DX Hackfest / Day 3

the third, and last day of the DX hackfest opened with a quick recap as to what people have been working on in the past couple of days.

we had a nice lunch nearby, and then we went back to the Endocode office to tackle the biggest topic: a road map for GTK+.

we made good progress on all the items, and we have a fairly clear idea of who is going to work on what. sadly, my optimism on GProperty landing soon did not survive a discussion with Ryan; it turns out that there are many more layers of yak to be shaved, though we kinda agreed on the assumption that there is, in fact, a yak underneath all those layers.

to be fair, the work on GProperty enabled a lot of the optimizations of GObject: property notifications, bulk installation of properties, and the private instance data reorganization of last year are just examples. both Ryan and I agreed that we should not increase the cost for callers of property setters — which right now would require asking the GProperty instance to the class of the instance that we’re modifying, which implies taking locks and other unpleasant stuff. luckily, we do have access to private class data, and with a few minor modifications we can use that private data to store the properties; thus, getting the properties of a class can be achieved with simple pointer offsets and dereferences, without locks being involved.

I’ll start working on this very soon, and hopefully we’ll be able to revisit the issue at GUADEC, in time for the next development cycle of GLib.

in the meantime, I kept hacking on my little helper library that provides data types for canvases — and about which I’ll blog soon — as well as figuring out what’s missing from the initial code drop of the GTK+ scene graph that will be ready to be shown by the time GUADEC 2014 rolls around.

I’m flying back home on Saturday, so this is the last full day in Berlin for me. it was a pleasure to be here, and I’d really like to thank Endocode for generously giving us access to their office; Chris Kühl, for being a gracious and mindful host; and the GNOME Foundation, for sponsoring attendance to all these fine people and contributors, and me.

Sponsored by the GNOME Foundation

by ebassi at February 20, 2023 12:34 AM

GUADEC 2014

like many fellow GNOME developers I will be in Strasbourg for GUADEC 2014.

on Monday morning, I will give a talk on the GTK+ Scene Graph Tool Kit, or GSK for short, but you should make sure to attend the many interesting talks that we have planned for you this year.

see you at GUADEC!

by ebassi at February 20, 2023 12:33 AM

GUADEC 2014 talk notes

I put the notes of the GSK talk I gave at GUADEC 2014 online; I believe there should be a video coming soon as well.

the notes are available on this very website.

by ebassi at February 20, 2023 12:33 AM

Berlin DX Hackfest / Day 2

the second day of the hackfest had a pretty big discussion on the roadmap for GTK+.

thanks to Matthias Clasen, we had a list of things to discuss prior to the start of the hackfest, even if Matthias himself would not be present:

  • filling the gaps between the GNOME HIG and the GTK+ API needed to implement it
  • a better cross-platform story for tool kit maintainers and application developers
  • touch support
  • scene graph to replace Clutter
  • documentation
  • improving the relationship of the tool kit with Glade
  • required clean ups for GTK+ 4

during the afternoon we managed to go through the first bullet point of the list, but we made really good progress on it, and we managed to assign each sub-issue to a prospective owner that is going to be in charge of it.

hopefully, we’re going to go through the other points during the rest of the hackfest much more quickly.

Sponsored by the GNOME Foundation

by ebassi at February 20, 2023 12:33 AM

Berlin DX Hackfest / Day 1

we had a fairly productive first day here, at the Endocode offices in Berlin. everyone is pretty excited about working on the overall experience for developers on the GNOME platform.

at first, we decided what to tackle in the next three days, and drafted a rough schedule. the hackfest then broke down into two main groups: the first tackled GObject models for the benefit of GTK+ widgets acting as views; the second worked on the developer documentation available on developer.gnome.org.

I decided to stay on the sidelines for the day, and worked on a small utility library that I’m going to use in the development of GSK, the GTK+ scene graph API that will replace Clutter in the near future; I’m going to do a proper blog post on both things later this week. I’ve also worked a bit on my old nemesis, GProperty. I have really high hopes that after three years of back and forth we’re going to finally land it in GLib, and let people have a better, easier, and more efficient way to define and use GObject properties.

In the evening we went to the Berlin GNOME beers along with the local GNOME community; it’s been a great evening, and we met both familiar faces and new ones.

I’d like to thank Endocode for kindly giving us access to their office in order to host the hackfest, as well as the GNOME Foundation for sponsoring the travel and attendance of many talented members of the GNOME community.

Sponsored by the GNOME Foundation

by ebassi at February 20, 2023 12:33 AM

A little testing

Years ago I started writing Graphene as a small library of 3D transformation-related math types to be used by GTK (and possibly Clutter, even if that didn’t pan out until Georges started working on the Clutter fork inside Mutter).

Graphene’s only requirement is a C99 compiler and a decent toolchain capable of either taking SSE builtins or supporting vectorization on appropriately aligned types. This means that, unless you decide to enable the GObject types for each Graphene type, Graphene doesn’t really need GLib types or API—except that’s a bit of a lie.

As I wanted to test what I was doing, Graphene has an optional build time dependency on GLib for its test suite; the library itself may not use anything from GLib, but if you want to build and run the test suite then you need to have GLib installed.

This build time dependency makes testing Graphene on Windows a lot more complicated than it ought to be. For instance, I need to install a ton of packages when using the MSYS2 toolchain on the CI instance on AppVeyor, which takes roughly 6 minutes each for the 32bit and the 64bit builds; and I can’t build the test suite at all when using MSVC, because then I’d have to download and build GLib as well—and just to access the GTest API, which I don’t even like.


What’s wrong with GTest

GTest is kind of problematic—outside of Google hijacking the name of the API for their own testing framework, which makes looking for it a pain. GTest is a lot more complicated than a small unit testing API needs to be, for starters; it was originally written to be used with a specific harness, gtester, in order to generate a very brief HTML report using gtester-report, including some timing information on each unit—except that gtester is now deprecated because the build system gunk to make it work was terrible to deal with. So, we pretty much told everyone to stop bothering, add a --tap argument when calling every test binary, and use the TAP harness in Autotools.

Of course, this means that the testing framework now has a completely useless output format, and with it, a bunch of default behaviours driven by said useless output format, and we’re still deciding if we should break backward compatibility to ensure that the supported output format has a sane default behaviour.

On top of that, GTest piggybacks on GLib’s own assertion mechanism, which has two major downsides:

  • it can be disabled at compile time by defining G_DISABLE_ASSERT before including glib.h, which, surprise, people tend to use when releasing; thus, you can’t run tests on builds that would most benefit from a test suite
  • it literally abort()s the test unit, which breaks any test harness in existence that does not expect things to SIGABRT midway through a test suite—which includes GLib’s own deprecated gtester harness

To solve the first problem we added a lot of wrappers around g_assert(), like g_assert_true() and g_assert_no_error(), that won’t be disabled depending on your build options and thus won’t break your test suite—and if your test suite is still using g_assert(), you’re strongly encouraged to port to the newer API. The second issue is still standing, and makes running GTest-based test suite under any harness a pain, but especially under a TAP harness, which requires listing the amount of tests you’ve run, or that you’re planning to run.
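To make the first point concrete, here is a minimal sketch of a test unit using the wrappers that survive G_DISABLE_ASSERT (the test path and checks are made up for illustration):

#include <glib.h>

static void
test_greeting (void)
{
  /* plain g_assert() would be compiled out if G_DISABLE_ASSERT is
   * defined; the wrappers below are always checked, even in release
   * builds
   */
  g_assert_true (g_str_has_prefix ("hello world", "hello"));
  g_assert_cmpstr ("hello", !=, "goodbye");
}

int
main (int argc, char *argv[])
{
  g_test_init (&argc, &argv, NULL);
  g_test_add_func ("/example/greeting", test_greeting);
  return g_test_run ();
}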

The remaining issues of GTest are the convoluted way to add tests using a unique path; the bizarre pattern matching API for warnings and errors; the whole sub-process API that relaunches the test binary and calls a single test unit in order to allow it to assert safely and capture its output. It’s very much the GLib test suite, except when it tries to use non-GLib API internally, like the command line option parser, or its own logging primitives; it’s also sorely lacking in the GObject/GIO side of things, so you can’t use standard API to create a mock GObject type, or a mock GFile.

If you want to contribute to GLib, then working on improving the GTest API would be a good investment of your time; since my project does not depend on GLib, though, I had the chance of starting with a clean slate.


A clean slate

For the last couple of years I’ve been playing off and on with a small test framework API, mostly inspired by BDD frameworks like Mocha and Jasmine. Behaviour Driven Development is kind of a buzzword, like test driven development, but I particularly like the idea of describing a test suite in terms of specifications and expectations: you specify what a piece of code does, and you match results to your expectations.

The API for describing the test suites is modelled on natural language (assuming your language is English, sadly):

  describe("your data type", function() {
    it("does something", () => {
      expect(doSomething()).toBe(true);
    });
    it("can greet you", () => {
      let greeting = getHelloWorld();
      expect(greeting).not.toBe("Goodbye World");
    });
  });

Of course, C is more verbose than JavaScript, but we can adopt a similar mechanism:

static void
something (void)
{
  expect ("doSomething",
    bool_value (do_something ()),
    to_be, true,
    NULL);
}

static void
greet (void)
{
  const char *greeting = get_hello_world ();

  expect ("getHelloWorld",
    string_value (greeting),
    not, to_be, "Goodbye World",
    NULL);
}

static void
type_suite (void)
{
  it ("does something", do_something);
  it ("can greet you", greet);
}


/* ...and in the test binary’s entry point: */
  describe ("your data type", type_suite);

If only C11 got blocks from Clang, this would look a lot less clunky.

The value wrappers are also necessary, because C is only type safe as long as every type you have is an integer.

Since we’re good C citizens, we should namespace the API, which requires naming this library—let’s call it µTest, in a fit of unoriginality.

One of the nice bits of Mocha and Jasmine is the output of running a test suite:

$ ./tests/general 

  General
    contains at least a spec with an expectation
       a is true
       a is not false

      2 passing (219.00 µs)

    can contain multiple specs
       str contains 'hello'
       str contains 'world'
       contains all fragments

      3 passing (145.00 µs)

    should be skipped
      - skip this test

      0 passing (31.00 µs)
      1 skipped


Total
5 passing (810.00 µs)
1 skipped

Or, with colors:

Using colors means immediately taking this more seriously

The colours go automatically away if you redirect the output to something that is not a TTY, so your logs won’t be messed up by escape sequences.

If you have a test harness, then you can use the MUTEST_OUTPUT environment variable to control the output; for instance, if you’re using TAP you’ll get:

$ MUTEST_OUTPUT=tap ./tests/general
# General
# contains at least a spec with an expectation
ok 1 a is true
ok 2 a is not false
# can contain multiple specs
ok 3 str contains 'hello'
ok 4 str contains 'world'
ok 5 contains all fragments
# should be skipped
ok 6 # skip: skip this test
1..6

Which can be passed through to prove to get:

$ MUTEST_OUTPUT=tap prove ./tests/general
./tests/general .. ok
All tests successful.
Files=1, Tests=6,  0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
Result: PASS

I’m planning to add some additional output formatters, like JSON and XML.


Using µTest

Ideally, µTest should be used as a sub-module or a Meson sub-project of your own project; if you’re using it as a sub-project, you can tell Meson to build a static library that won’t get installed on your system, e.g.:

mutest_dep = dependency('mutest-1',
  fallback: [ 'mutest', 'mutest_dep' ],
  default_options: ['static=true'],
  required: false,
  disabler: true,
)

# Or, if you're using Meson < 0.49.0
mutest_dep = dependency('mutest-1', required: false)
if not mutest_dep.found()
  mutest = subproject('mutest',
    default_options: [ 'static=true', ],
    required: false,
  )

  if mutest.found()
    mutest_dep = mutest.get_variable('mutest_dep')
  else
    mutest_dep = disabler()
  endif
endif

Then you can make the tests conditional on mutest_dep.found().
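For instance, something along these lines (the target and file names are made up):

if mutest_dep.found()
  test('general',
    executable('general', 'tests/general.c',
      dependencies: mutest_dep,
    ),
  )
endif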

µTest is kind of experimental, and I’m still breaking its API in places, as a result of documenting it and trying it out, by porting the Graphene test suite to it. There’s still a bunch of API that I’d like to land, like custom matchers/formatters for complex data types, and a decent way to skip a specification or a whole suite; plus, as I said above, some additional formatted output.

If you have feedback, feel free to open an issue—or a pull request wink wink nudge nudge.

by ebassi at February 20, 2023 12:32 AM

More little testing

Back in March, I wrote about µTest, a Behavior-Driven Development testing API for C libraries, and that I was planning to use it to replace the GLib testing API in Graphene.

As I was busy with other things in GTK, it took me a while to get back to µTest—especially because I needed some time to set up a development environment on Windows in order to port µTest there. I managed to find some time over various weekends and evenings, and ended up fixing a couple of small issues here and there, to the point that I could run µTest’s own test suite on my Windows 10 box, and then get the CI build job I have on Appveyor to succeed as well.

Setting up MSYS2 was the most time consuming bit, really

While at it, I also cleaned up the API and properly documented it.

Since depending on gtk-doc would defeat the purpose, and since I honestly dislike Doxygen, I was looking for a way to write the API reference and publish it as HTML. As luck would have it, I remembered a mention on Twitter about Markdeep, a self-contained bit of JavaScript capable of turning a Markdown document into a half decent HTML page client side. Coupled with GitHub pages, I ended up with a fairly decent online API reference that also works offline, falls back to a Markdown document when not running through JavaScript, and can get fixed via pull requests.

Now that µTest is in a decent state, I ported the Graphene test suite over to it, and now I can run it on Windows using MSVC—and MSYS2, as soon as the issue with GCC gets fixed upstream. This means that, hopefully, we won’t have regressions on Windows in the future.

The µTest API is small enough, now, that I don’t plan major changes; I don’t want to commit to full API stability just yet, but I think we’re getting close to a first stable release soon; definitely before Graphene 1.10 gets released.

In case you think this could be useful for you: feedback, in the form of issues and pull requests, is welcome.

by ebassi at February 20, 2023 12:32 AM

Fair Weather Friends

Today I released libgweather-3.90.0, the first developers snapshot of GWeather 4:

Behold! A project logo

This release is mostly meant to be used as a target for porting existing code to the new API, and verifying that everything works as it should.

The major changes from GWeather-3.0 are:

  • the GTK3 widgets have gone to a farm upstate, so you’ll have to write your own UI for searching locations
  • GWeatherLocation is a GObject type, so you can use it with GListModel and friends, which should help with the point above
  • the deprecated API has been removed
  • the API that will be part of GWeather 4.0 will be stable, and regular API/ABI stability guarantees will apply

If you are using libgweather in your application, you should head over to the migration guide and check out what changed.

There are still things that need to be cleaned up in the GWeather API, for instance:

  • GWeatherTimezone parses the tzdata file directly, which comes with its own set of issues, like having to track whether the time zone database has changed or not; we should use GTimeZone instead, but the API provided by the two types does not match entirely. I need to check the current users of that API, and if possible, just drop the whole type.
  • GWeatherInfo has a bunch of getter functions that return the bare values and that can fail, and additional getter functions that always return a formatted string, and cannot fail; it’s not a great API.
  • GWeatherLocation returns the localised names by default, and has additional getters for the “English” (really: POSIX C locale) names.

If you encounter issues when porting, please: file an issue on GitLab.

by ebassi at February 20, 2023 12:31 AM

February 15, 2023

Iain Holmes

Update14

A change of direction, mostly because I’m disappointed my Sam Coupé platform scroller was a bust - I’ve returned to an application I was writing near the end of last year.

Fieldwork, a field recording organiser / editor. Think Lightroom for audio. It currently looks like this:

Fieldwork main UI

The main window is SwiftUI, the editor display is AppKit, and it’s written in a mix of Swift, ObjC and C, although I’d like to reduce the amount of ObjC over time and leave it as a simple layer that Swift can call to access the sample data.

It’s using a lot of the code from my previous sample editor project Marlin. It can handle very large files, and operate on them very quickly, and at the moment I’m working out how to integrate it all with a SwiftUI declarative UI.

Plans for the future include writing a UIKit version of the editor display and seeing if it runs on iOS/iPadOS.

February 15, 2023 01:15 PM

February 12, 2023

Iain Holmes

Update13

Got a player sprite displayed on the screen, and moving left and right. I made a simplified single screen map with 128 tiles on it, and with the character moving around while it redraws every frame, it is unbelievably slow. Starting to rethink this, because having it scroll and draw sprites at the same time is just going to be impossible.

Which is a shame; I had wanted to make a Super Mario type scrolling platformer, but I may have to fall back on a static screen platformer.

February 12, 2023 01:15 PM

January 29, 2023

Iain Holmes

Update12

I put a poll up on Mastodon for what bit I should do next and the results were for character movement. So before I start planning on how to do that, I thought I’d tidy up some of the graphics code a bit. The tiles and the character sprites were all using the same tilesheet, so I split them out, and now that they’re generated separately I don’t have to shift the tile data. That saves about 1.5k memory, and slightly reduces the t-count for drawing the tiles. Every little helps. I also filled the background colour in the correct colour, so no more pink.

Now, onto character movement. Guess the first thing to do is get my PS4 controller working with Sim Coupé.

January 29, 2023 01:15 PM

January 27, 2023

Iain Holmes

Update11

The final touch to the scrolling code, for the moment, was to move the map data and the drawing code into the graphics page so I didn’t have to switch pages and change stack twice for every tile. Now it changes to the graphics page when it wants to draw the map, and back to the main page once the map is drawn. This is about 150t less per tile, so it’s worked out as a pretty big performance boost.

And here it is. It doesn’t tear on the screen, just in the gif; thanks, Giphy Capture.

Scrolling Map 2 pixels per frame

Next I think I’ll tackle either animation or character movement

January 27, 2023 01:15 PM

January 25, 2023

Iain Holmes

Update10

Victory. Solved the memory corruption bug.

The problem was in the interrupt handling code, which I borrowed, along with most of the rest of the basic setup code to get things going, from Howard Price’s Flappy Bird clone.

The interrupt handlers are stored as a linked list of three values: the line number for a line interrupt, the address of the handler, and the address of the next entry in the list. As I’m not using the line interrupts (yet) this list is a single entry that points back to the start of itself.
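In memory, that single entry looks roughly like this (a sketch; the labels are made up, the layout follows the description above):

int.list:       defb 0              ; line number, only used for line interrupts
                defw handler        ; address of the interrupt handler
                defw int.list       ; address of the next entry: points back at itself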

At initialisation int.reset is set to the start of this linked list. After every interrupt the list moves to the next entry, setting two memory locations: int.jump, to the address that contains the address of the jump handler in the list; and int.next, to the address that contains the address of the next entry in the list. Because there’s only one entry in this list these values never change. I wonder if the extra level of indirection here is going to be important later.

Archer saying Thanks, Freddy Foreshadowing

The problematic code in question is this routine

	@check_int:											; Make sure this is the frame interrupt
	int.status: ld a,0
				bit FrameIntBit,a
				jp z,@+save_regs
		@reset_int_manager:								; If not, reset to start properly next time
				ld hl,(int.reset)
				ld (int.next),hl
				ret

What this routine does is load the status value into a (the 0 has been replaced at runtime by the actual value) and check whether the status is a frame interrupt. If it is not, it resets int.next to point at the start of the interrupt handler linked list, to go back to the start. But that’s not what int.next is supposed to contain: int.next is supposed to contain the address of the memory containing the address, not the address itself.

So the correct thing is to set int.next to int.reset + 3

        @reset_int_manager:
                ld hl,(int.reset)
                FOR 3, inc hl           ; skip the line number byte and the handler address
                ld (int.next),hl

The only question I have is why bit FrameIntBit,a is failing and putting us into the @reset_int_manager code in the first place. There are no non-frame interrupts set, and the status value is FF, which from my vague reading of the Sam Coupé development manual would be invalid. So, I dunno. It works now and that’s all that matters.

The hardest part of it was finding a simple way to reproduce the bug: at one point the reproduction steps were to add a breakpoint and skip it 157 times, but even then there were still thousands of instructions to step through before the bug triggered. Once I worked out the bug was happening inside the interrupt handler, I added a frame interrupt breakpoint and could step through the code there, and saw funny things happening after the @reset_int_manager code ran. Then lots more staring, trying to work out how the indirection worked.

I could delete this code and hardcode it all, but I like the flexibility of having different interrupt handlers available if I need them. Now that I have scrolling working again, I can move on to something more interesting.

January 25, 2023 01:15 PM

January 22, 2023

Iain Holmes

Update9

Been tracking down a weird memory corruption bug related to the interrupts. Because of the use of pop/push to copy the tiles from memory to the screen, I’ve disabled interrupts while that is happening. This shouldn’t really be necessary as the interrupts use their own stack, but if I don’t disable interrupts then memory corruption appears on the screen, which is confusing due to the separate stack.

In the working version, if I disable interrupts once for the whole map drawing then it works fine; if I only disable interrupts around the code that manipulates the stack for each tile, then the interrupt handler eventually gets messed up and jumps into invalid memory.

Been getting to know the SimCoupe debugger much better, but haven’t tracked the bug down yet.

January 22, 2023 01:15 PM

January 06, 2023

Ross Burton

PySnooper and BitBake

Yesterday I discovered PySnooper, which describes itself as "a poor man's debugger":

Your story: You're trying to figure out why your Python code isn't doing what you think it should be doing. You'd love to use a full-fledged debugger with breakpoints and watches, but you can't be bothered to set one up right now.

I know that guy! Especially when I'm debugging some Python code in a BitBake class or recipe and attaching a debugger is even more annoying than usual. I've previously written a tiny class to start an rpdb session as needed, but I don't get on with pdb for some reason.

The example makes it look pretty awesome for quick debugging:

Source path:... example.py
Starting var:.. number = 6
11:46:07.482187 call         4 def number_to_bits(number):
11:46:07.482561 line         5     if number:
11:46:07.482655 line         6         bits = []
New var:....... bits = []
11:46:07.482732 line         7         while number:
11:46:07.482830 line         8             number, remainder = divmod(number, 2)
Modified var:.. number = 3
New var:....... remainder = 0
11:46:07.482907 line         9             bits.insert(0, remainder)
Modified var:.. bits = [0]
11:46:07.483028 line         7         while number:
11:46:07.483130 line         8             number, remainder = divmod(number, 2)
Modified var:.. number = 1
Modified var:.. remainder = 1
11:46:07.483208 line         9             bits.insert(0, remainder)
Modified var:.. bits = [1, 0]
11:46:07.483323 line         7         while number:
11:46:07.483419 line         8             number, remainder = divmod(number, 2)
Modified var:.. number = 0
11:46:07.483497 line         9             bits.insert(0, remainder)
Modified var:.. bits = [1, 1, 0]
11:46:07.483593 line         7         while number:
11:46:07.483697 line        10         return bits
11:46:07.483773 return      10         return bits
Return value:.. [1, 1, 0]
Elapsed time: 00:00:00.001749

So here's my thirty second explainer on how to use PySnooper with BitBake. First, we need to install it:

$ pip3 install pysnooper

Then you can just import pysnooper and decorate functions to get them annotated at runtime:

import pysnooper

@pysnooper.snoop()
def some_function():
    ...

That's the theory, but anyone who has tried throwing print("here") messages into classes or recipes knows this doesn't work. They execute in a child process which doesn't have standard output connected to the console, but luckily the snoop function can instead write the messages to a filename or stream or callable, which lets us glue PySnooper to BitBake's logging:

import pysnooper

@pysnooper.snoop(bb.plain)
def some_function():
    ...

As a working example, I added the annotation to get_source_date_epoch() in meta/lib/oe/reproducible.py:

import pysnooper

@pysnooper.snoop(bb.plain)
def get_source_date_epoch(d, sourcedir):
    return (
        get_source_date_epoch_from_git(d, sourcedir) or
        get_source_date_epoch_from_youngest_file(d, sourcedir) or
        fixed_source_date_epoch(d)
    )

And now when we start BitBake, we get to see the output:

Source path:... /home/ross/Yocto/poky/meta/lib/oe/reproducible.py
Starting var:.. d = <bb.data_smart.DataSmart object at 0xffff9e30dcf0>
Starting var:.. sourcedir = '/yocto/ross/build/tmp/work-shared/llvm-project-source-15.0.6-r0/git'
10:56:57.198016 call       156 def get_source_date_epoch(d, sourcedir):
10:56:57.199750 line       158         get_source_date_epoch_from_git(d, sourcedir) or
10:56:57.341387 line       157     return (
10:56:57.341978 return     157     return (
Return value:.. 1669716358
Elapsed time: 00:00:00.144763

Useful!

The default log depth is 1 so you don't see inside functions, but that can be changed when decorating. You can also wrap smaller code blocks using with blocks.
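For example, depth=2 also traces one level into the functions being called (the function here is illustrative):

import pysnooper

@pysnooper.snoop(bb.plain, depth=2)
def some_function():
    ...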

The biggest catch is remembering that BitBake classes and recipes are not Python, they just have Python blocks in them, so you can't decorate a function inside a class or recipe. In this case you'll need to use a with block.
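As a sketch of what that looks like inside a recipe (the task name and variable are made up for illustration):

python do_snoop_example() {
    import pysnooper

    # Functions defined here can't be decorated, but a with block
    # traces the statements it encloses just the same
    with pysnooper.snoop(bb.plain):
        pn = d.getVar('PN')
        bb.plain('PN is %s' % pn)
}

addtask snoop_example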

This looks like a very useful tool and I look forward to using it next time I'm tearing my increasingly greying hair out.

NP: Charcoal, Brambles

by Ross Burton at January 06, 2023 11:02 AM

December 02, 2022

Emmanuele Bassi

On PyGObject

Okay, I can’t believe I have to do this again.

This time, let’s leave the Hellsite out of it, and let’s try to be nuanced from the start. I’d like to avoid getting grief online.

The current state of the Python bindings for GObject-based libraries is making it really hard to recommend using Python as a language for developing GTK and GNOME applications.

PyGObject is currently undermaintained, even after the heroic efforts of Christoph Reiter to keep the fires burning through the long night. The Python community needs more people to work on the bindings, if we want Python to be a first class citizen of the ecosystem.

There’s a lot to do, and not nearly enough people left to do it.

Case study: typed instances

Yes, thou shalt use GObject should be the law of the land; but there are legitimate reasons to use typed instances, and GTK 4 has a few of them:

  • GdkEvent, for input events
  • GskRenderNode, for render nodes
  • GtkExpression, for expressions

At this very moment, it is impossible to use the types above from Python. PyGObject will literally error out if you try to do so. There are technical reasons why that was a reasonable choice 15+ years ago, but most language bindings written since then can handle typed instances just fine. In fact, PyGObject does handle them, since GParamSpec is a GTypeInstance; of course, that’s because PyGObject has some ad hoc code for them.
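
As a small illustration, a minimal snippet like this works today precisely because of that ad hoc code (the property names are generic placeholders):

from gi.repository import GObject

# GParamSpec is a GTypeInstance, yet PyGObject handles it fine
# thanks to its special-cased support:
pspec = GObject.param_spec_boolean("active", "Active", "Whether active",
                                   False, GObject.ParamFlags.READWRITE)
print(pspec.name)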

Dealing with events and render nodes is not so important in GTK4; but not having access to the expressions API makes writing list widgets incredibly more complicated, requiring you to set up everything through UI definition files and never modify the objects programmatically.

Case study: constructing and disposing

While most of the API in PyGObject is built through introspection, the base wrapper for the GObject class is still very much written in CPython. This requires, among other things, wiring the class’s virtual functions manually, and figuring out the interactions between Python, GObject, and the type system wrappers. This means Python types that inherit from GObject don’t have automatic access to the GObjectClass.constructed and GObjectClass.dispose virtual functions. Normally, this would not be an issue, but modern GObject-based libraries have started to depend on being able to control construction and destruction sequences.

For instance, it is necessary for any type that inherits from GtkWidget to ensure that all its child widgets are disposed manually. While that was possible through the “destroy” signal in GTK3, in GTK4 the signal was removed and everything should go through the GObjectClass.dispose virtual function. Since PyGObject does not allow overriding or implementing that virtual function, your Python class cannot inherit from GtkWidget, but must inherit from ancillary classes like GtkBox or AdwBin. That’s even more relevant for the disposal of resources created through the composite template API. Theoretically, using Gtk.Template would take care of this, but since we cannot plug the Python code into the underlying CPython wrapper, we’re stuck.
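
A minimal sketch of that workaround, with illustrative class and child names:

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk

# Inheriting from Gtk.Box rather than Gtk.Widget: a direct Gtk.Widget
# subclass would need to unparent its children in dispose(), which
# PyGObject currently cannot override.
class PlayerRow(Gtk.Box):
    def __init__(self):
        super().__init__(orientation=Gtk.Orientation.HORIZONTAL)
        self.append(Gtk.Label(label="Now playing"))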

Case study: documentation, examples, and tutorials

While I have been trying to write Python examples for the GNOME developers documentation, I cannot write everything by myself. Plus, I can’t convince Google to only link to what I write. The result is that searching for “Python” and “GNOME” or “GTK” will inevitably lead people to the GTK3 tutorials and references.

The fragmentation of the documentation is also an issue. The PyGObject website is off to readthedocs.org, which is understandable for a Python project; then we have the GTK3 tutorial, which hasn’t been updated in a while, and for which there are no plans to have a GTK 4 version.

Additionally, the Python API reference is currently stuck on GTK3, with Python references for GTK4 and friends off on the side.

It would be great to unify the reference sites, and possibly have them under the GNOME infrastructure; but at this point, I’d settle for having a single place to find everything, like GJS did.

What can you do?

Pick up PyGObject. Learn how it works, and how the GObject and CPython API interact.

Write Python overrides to ensure that API like GTK and GIO are nice to use with idiomatic Python.

Write tests for PyGObject.

Write documentation.

Port existing tutorials, examples, and demos to GTK4.

Help Christoph out when it comes to triaging issues, and reviewing merge requests.

Join GNOME Python on Matrix, or the #gnome-python channel on Libera.

Ask questions and provide answers on Discourse.

What happens if nobody does anything?

Right now, we’re in maintenance mode. Things work because of inertia, and because nobody is really pushing the bindings outside of their existing functionality.

Let’s be positive, for a change, and assume that people will show up. They did for Vala when I ranted about it five years ago after a particularly frustrating week dealing with constant build failures in GNOME, so maybe the magic will happen again.

If people do not show up, though, what will likely happen is that Python will just fall by the wayside inside GNOME. Python developers won’t use GTK to write GUI applications; existing Python applications targeting GNOME will either wither and die, or get ported to other languages. The Python bindings themselves may stop working with newer versions of Python, which will inevitably lead downstream distributors to jettison the bindings themselves.

We have been through this dance with the C# bindings and Mono, and the GNOME and Mono communities are all the poorer for it, so I’d like to avoid losing another community of talented developers. History does not need to repeat itself.

by ebassi at December 02, 2022 06:56 PM

September 10, 2022

Tomas Frydrych

The Emperor's New Clothes

My father was a bibliophile. His passion for books started in his childhood, and I recall some of the early books he kept on his bookshelf: nearly complete works of Julius Verne, books by Jack London. Each of them neatly wrapped in either cream or blue packing paper, a handwritten label on the front, positioned with a millimetre precision, and inside an ex libris stamp in the shape of a pocket watch. The love of books remained with him all of his life, indeed during one of the last lucid conversations I had with him he was complaining about all the new books he acquired but didn’t have time to read yet.

But out of all of his books one sticks firmly in mind. It was a slim volume, no more than half a centimetre thick, a glossy grey cover with a cloth spine binding. Its title was ‘Emperor’s New Clothes’ and it was in English.

Back then when it first appeared I was only just learning to read Czech, so a foreign book felt somewhat mysterious (my very first encounter with the pesky ‘the’!), but I was drawn to the copious illustrations in vivid colours that were, I think, photographs of a marionette set.

I can see now that it was really a children’s book; I suspect my father bought it to practise his English, self-taught from the lessons broadcast by the Voice of America, which he taped so he could work through them properly. My wife says his English was very decent, spoken without any trace of that hard to shake Central European accent; I would not know, he steadfastly refused to speak it in my presence till the day he died, but it was certainly good enough to take me through the story.

I was reminded of that book this week. Back then, the school age me found the core premise of it laughably implausible. The grownup me cannot but smile at the childish naiveté.

by tf at September 10, 2022 03:07 PM

September 04, 2022

Tomas Frydrych

On the Death of an Apparatchik

Let’s be clear on one thing: in the good old USSR decent people didn’t rise through the Communist Party ranks to the Politburo, they were sent to the Siberian gulags. The Politburo was the cesspit of the Soviet system — this is the basic lens through which the Gorbachev legacy must be viewed. To remove it, as the various UK commentators are invariably doing just now, is to engage in gross revisionism of history.

The difference between Gorbachev and those who preceded him was simply that by his time the Soviet economy had collapsed, not least thanks to nearly 70 years of replacing expertise with ideology, and the arms race. The skin deep liberalisation that Gorbachev introduced was merely a last ditch effort to prop up the old system. The USSR of the 1980s could no longer afford to have 500,000 soldiers in Central Europe, to wage war in Afghanistan, or to keep up with the US military budget. That’s all. Those who ascribe some sort of moral high ground to Gorbachev should look no further than his attitude to the Solidarity movement in Poland. The most we can say of Gorbachev is he was the least bad of a very bad lot.

by tf at September 04, 2022 04:24 PM

June 16, 2022

Emmanuele Bassi

Amberol

In the beginning…

In 1997, I downloaded my first MP3 file. It was linked on a website, and all I had was a 56k modem, so it took me ages to download the nearly 4 megabytes of 128 kbit/s music goodness. Before that file magically appeared on my hard drive, if we exclude a brief dalliance with MOD files, the only music I had on my computer came either in MIDI or in WAV format.

In the nearly 25 years that have passed since that seminal moment, my music collection has steadily increased in size — to the point that I cannot comfortably keep it in my laptop’s internal storage without cutting into the available space for other stuff and without taking ages when copying it to new machines; and if I had to upload it to a cloud service, I’d end up paying monthly storage fees that would definitely not make me happy. Plus, I like being able to listen to my music without having a network connection — say, when I’m travelling. For these reasons, I have my music collection on a dedicated USB3 drive and on various 128 GB SD cards that I use when travelling, to avoid bumping around a spinning rust drive.

In order to listen to that first MP3 file, I also had to download a music player, and back in 1997 there was this little software called Winamp, which apparently really whipped the llama’s ass. Around that same time I was also dual-booting between Windows and Linux, and, obviously, Linux had its own Winamp clone called x11amp. This means that, since late 1997, I’ve also tested more or less all mainstream, GTK-based Linux music players—xmms, beep, xmms2, Rhythmbox, Muine, Banshee, Lollypop, GNOME Music—and various less mainstream/non-GTK ones—shout out to ma boi mpg123. I also used iTunes on macOS and Windows, but I don’t speak of that.

Turns out that, with the very special exception of Muine, I can’t stand any of them. They are all fairly inefficient when it comes to managing my music collection; or they are barely maintained; or (but, most often, and) they are just iTunes clones—as if cloning iTunes was a worthy goal for anything remotely connected to music, computing, or even human progress in general.

I did enjoy using Banshee, up to a point; it wasn’t overly offensive to my eyes and pointing devices, and had the advantage of being able to minimise its UI without getting in the way. It just bitrotted with the rest of the GNOME 2 platform even before GNOME bumped major version, and it still wasn’t as good as Muine.

A detour: managing a music collection

I’d like to preface this detour with a disclaimer: I am not talking about specific applications; specific technologies/libraries; or specific platforms. Any resemblance to real projects, existing or abandoned, is purely coincidental. Seriously.

Most music management software is, I feel, predicated on the fallacy that the majority of people don’t bother organising their files, and are thus willing to accept a flat storage with complex views built at run time on top of that; while simultaneously being willing to spend a disproportionate amount of time classifying those files—without, of course, using a hierarchical structure. This is a fundamental misunderstanding of human nature.

By way of an example: if we perceive the Universe in a techno-mazdeist struggle between a πνεῦμα which creates fool-proof tools for users; and a φύσις, which creates more and more adept fools; then we can easily see that, for the entirety of history until now, the pneuma has been kicked squarely in the nuts by the physis. In other words: any design or implementation that does not take into account human nature in that particular problem space is bound to fail.

While documents might benefit from additional relations that are not simply inferred by their type or location on the file system, media files do not really have the same constraints. Especially stuff like music or videos. All the tracks of an album are in the same place not because I decided that, but because the artist or the music producers willed it that way; all the episodes of a series are in the same place because of course they are, and they are divided by season because that’s how TV series work; all the episodes of a podcast are in the same place for the same reason, maybe divided by year, or by season. If that structure already exists, then what’s the point of flattening it and then trying to recreate it every time out of thin air with a database query?

The end result of constructing a UI that is just a view on top of a database is that your UI will be indistinguishable from a database design and management tool; which is why all music management software looks very much like Microsoft Access from circa 1997 onwards. Of course you can dress it up however you like, by adding fancy views of album covers, but at the end of the day it’s just an Excel spreadsheet that occasionally plays music.

Another side effect of writing a database that contains the metadata of a bunch of files is that you’ll end up changing the database instead of changing the files; you could write the changes to the files, but reconciling the files with the database is a hard problem, and it also assumes you have read-write access to those files. Now that you have locked your users into your own database, switching to a new application becomes harder, unless your users enjoy figuring out what they changed over time.

A few years ago, before backing up everything in three separate storages, I had a catastrophic failure on my primary music hard drive; after recovering most of my data, I realised that a lot of the changes I made in the early years weren’t written out to music files, but were stored in some random SQLite database somewhere. I am still recovering from that particular disaster.

I want my music player to have read-only access to my music. I don’t want anything that isn’t me writing to it. I also don’t want to re-index my whole music collection just because I fixed the metadata of one album, and I don’t want to lose all my changes when I find a better music player.

Another detour: non-local media

Yes, yes: everyone listens to streamed media these days, because media (and software) companies are speed-running Adam Smith’s The Wealth of Nations and have just arrived at the bit about rentier economy. After all, why should they want to get paid once for something, when media conglomerates can “reap where they never sowed, and demand a rent even for its natural produce”.

You know what streaming services don’t like? Custom, third party clients that they can’t control, can’t use for metrics, and can’t use to serve people ads.

You know what cloud services that offer to host music don’t like? Duplicate storage, and serving files that may potentially infringe the IP of a very litigious industry. Plus, of course, third party clients that they can’t use to serve you ads, as that’s how they can operate at all, because this is the Darkest Timeline, and adtech is the modern Moloch to which we must sacrifice as many lives as we can.

You may have a music player that streams somebody’s music collection, or even yours if you can accept the remote service making a mess of it, but you’re always a bad IPO or a bad quarterly revenue report away from losing access to everything.

Writing a music player for fun and no profit

For the past few years I’ve been meaning to put some time into writing a music player, mostly for my own amusement; I also had the idea of using this project to learn the Rust programming language. In 2015 I was looking for a way to read the metadata of music files with Rust, but since I couldn’t find anything decent, I ended up writing the Rust bindings for taglib. I kept noodling at this side project for the following years, but I was mostly hitting the limits of GTK3 when it came to dealing with my music collection; every single iteration of the user interface ended up with a GtkTreeView and a replica of iTunes 1.0.

In the meantime, though, the Rust ecosystem got exponentially better, with lots of crates dedicated to parsing music file metadata; GTK4 got released with new list widgets; libadwaita is available to take care of nice UI layouts; and the Rust bindings for GTK have become one of the most well curated and maintained projects in the language bindings ecosystem.

Another few things that happened in the meantime: a pandemic, a year of unemployment, and zero conferences, all of which pushed me to streaming my free and open source software contributions on Twitch, as a way to break the isolation.

So, after spending the first couple of months of 2022 on writing the beginners tutorial for the GNOME developer documentation website, in March I began writing Amberol, a local-only music player that has no plans of becoming more than that.

Desktop mode

Amberol’s scope sits in the same grand tradition of Winamp, and while its UI started off as a Muine rip off—down to the same key shortcuts—it has evolved into something that more closely resembles the music player I have on my phone.

Mobile mode

Amberol’s explicit goal is to let me play music on my desktop the same way I typically do when I am using my phone, which is: shuffling all the songs in my music collection; or, alternatively, listening to all the songs in an album or from an artist from start to finish.

Amberol’s explicit non goals are:

  • managing your music collection
  • figuring out your music metadata
  • building playlists
  • accessing external services for stuff like cover art, song lyrics, or the artist’s Wikipedia page

The actual main feature of this application is that it has forced me to figure out how to deal with GStreamer after 15 years.

I did try to write this application in a way that reflects the latest best practices of GTK4:

  • model objects
  • custom view widgets
  • composite widgets using templates
  • property bindings/expressions to couple model/state to its view/representation (see the sketch after this list)
  • actions and actionable widgets
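
By way of illustration (the binding API is the same introspected GObject API in every language), here is a minimal property binding sketch in Python; the model object is hypothetical:

import gi
gi.require_version("Gtk", "4.0")
from gi.repository import GObject, Gtk

# Keep a label's text in sync with a (hypothetical) model object's
# "title" property; SYNC_CREATE pushes the current value immediately.
label = Gtk.Label()
model.bind_property("title", label, "label",
                    GObject.BindingFlags.SYNC_CREATE)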

The ability to rely on libadwaita has allowed me to implement the recoloring of the main window without having to deal with breakage coming from rando style sheets:

The main thing I did not expect was how much of a good fit Rust was in all of this. The GTK bindings are top notch, and constantly improving; the type system has helped me much more than it has hindered me, a poor programmer whose mind has been twisted by nearly two decades of C. Good idiomatic practices for GTK are entirely within the same ballpark of idiomatic practices for Rust, especially for application development.

On the tooling side, Builder has been incredibly helpful in letting me concentrate on the project—starting from the basic template for a GNOME application in Rust, to dealing with the build system; from the Flatpak manifest, to running the application under a debugger. My work was basically ready to be submitted to Flathub from day one. I did have some challenges with the AppData validation, mostly caused by appstream-util’s undocumented validation rules, but luckily it’s entirely possible to remove the validation after you deal with the basic manifest.

All in all, I am definitely happy with the results of basically two months of hacking and refactoring, mostly off and on (and with two weeks of COVID in the middle).

by ebassi at June 16, 2022 12:46 PM

April 19, 2022

Chris Lord

WebKit frame delivery

Part of my work on WebKit at Igalia is keeping an eye on performance, especially when it concerns embedded devices. In my experience, there tend to be two types of changes that cause the biggest performance gains – either complete rewrites (e.g. WebRender for Firefox) or tiny, low-hanging fruit that get overlooked because the big picture is so multi-faceted and complex (e.g. making unnecessary buffer copies, removing an errant sleep(5) call). I would say that it’s actually the latter kind of bug that’s harder to diagnose and fix, even though the code produced at the end of it tends to be much smaller. What I’m writing about here falls firmly into the latter group.

For a good while now, Alejandro García has been producing internal performance reports for WPE WebKit. A group of us will gather and do some basic testing of WPE WebKit on the same hardware – a Raspberry Pi 3B+. This involves running things like MotionMark and going to popular sites like YouTube and Google Maps and noting how well they work. We do this periodically and it’s a great help in tracking down regressions or obvious, user-facing issues. Another part of it is monitoring things like memory and CPU usage during this testing. Alex noted that we have a lot of idle CPU time during benchmarks, and at the same time our benchmark results fall markedly behind Chromium. Some of this is expected, Chromium has a more advanced rendering architecture that better makes use of the GPU, but a well-written benchmark should certainly be close to saturating the CPU and we certainly have CPU time to spare to improve our results. Based on this idea, Alex crafted a patch that would pre-emptively render frames (this isn’t quite what it did, but for the sake of brevity, that’s how I’m going to describe it). This showed significant improvements in MotionMark results and proved at the very least that it was legitimately possible to improve our performance in this synthetic test without any large changes.

This patch couldn’t land as it was due to concerns with it subtly changing the behaviour of how requestAnimationFrame callbacks work, but the idea was sound and definitely pointed towards an area where there may be a relatively small change that could have a large effect. This laid the groundwork for us to dig deeper and debug what was really happening here. Being a fan of computer graphics and video games, frame delivery/cadence is quite a hot topic in that area. I had the idea that something was unusual, but the code that gets a frame to the screen in WebKit is spread across many classes (and multiple threads) and isn’t so easy to reason about. On the other hand, it isn’t so hard to write a tool to analyse it from the client side and this would also give us a handy comparison with other browsers too.

So I went ahead and wrote a tool that would help us analyse exactly how frames are delivered from the client side, including under load, and the results were illuminating. The tool visualises the time between frames, the time between requesting a frame and receiving the corresponding callback and the time passed between performance.now() called at the start of a callback and the timestamp received as the callback parameter. To simulate ‘load’, it uses performance.now() and waits until the given amount of time has elapsed since the start of the callback before returning control to the main loop. If you want to sustain a 60fps update, you have about 16ms to finish whatever you’re doing in your requestAnimationFrame callback. If you exceed that, you won’t be able to maintain 60fps. Here’s Firefox under a 20ms load:

Firefox frame-times under a 20ms rendering load

And here’s Chrome under the same parameters:

Chrome frame-times under a 20ms rendering load

Now here’s WPE WebKit, using the Wayland backend, without any patches:

WebKit/WPE Wayland under a 20ms rendering load

One of these graphs does not look like the others, right? We can immediately see that when a frame exceeds the 16.67ms/60fps rendering budget in WebKit/WPE under the Wayland backend, it hard drops to 30fps. Other browsers don’t wait for a vsync to kick off rendering work and so are able to achieve frame-rates between 30fps and 60fps, when measured over multiple frames (all of these browsers are locked to the screen refresh, there is no screen tearing present). The other noticeable thing in these is that the green line is missing on the WebKit test – this shows that the timestamp delivered to the frame callback is exactly the same as performance.now(), whereas the timestamps in both Chrome and Firefox appear to be the time of the last vsync. This makes sense from an animation point of view and would mean that animations that are timed using this timestamp would move at a rate that’s more consistent with the screen refresh when under load.

What I’m omitting from this write-up is the gradual development of this tool, the subtly different behaviour of the various WebKit backends and the huge amount of testing and instrumentation that was required to reach these conclusions. I also wrote another tool to visualise the WebKit render pipeline, which greatly aided in writing potential patches to fix these issues, but perhaps that’s the topic of another blog post for another day.

In summary, there are two identified bugs, or at least, major differences in behaviour with other browsers here, both of which affect both fluidity and synthetic test performance. I’m not too concerned about the latter, but that’s a hard sell to a potential client that’s pointing at concrete numbers that say WebKit is significantly worse than some competing option. The first bug is that if a frame goes over budget and we miss a screen refresh (a vsync signal), we wait for the next one before kicking off rendering again. This is what causes the hard drop from 60fps to 30fps. As it concerns Linux, this only affects the Wayland WPE backend because that’s the only backend that implements vsync signals fully, so this doesn’t affect GTK or other WPE backends. The second bug, which is less of a bug, as a reading of the spec (steps 9 and 11.10) would certainly indicate that WebKit is doing the correct thing here, is that the timestamp given to requestAnimationFrame callbacks is the current time and not the vsync time as it is in other browsers (which makes more sense for timing animations). I have patches for both of these issues and they’re tracked in bug 233312 and bug 238999.

With these two issues fixed, this is what the graph looks like.

Patched WebKit/WPE with a 20ms rendering load

And another nice consequence is that MotionMark 1.2 has gone from:

WebKit/WPE MotionMark 1.2 results

to:

Patched WebKit/WPE MotionMark 1.2 results

Much better 🙂

No ETA on these patches landing; perhaps I’ve drawn some incorrect conclusions, or done something in a way that won’t work long term, or is wrong in some fashion that I’m not aware of yet. Also, this will most affect users of WPE/Wayland, so don’t get too excited if you’re using GNOME Web or similar. Fingers crossed though! A huge thanks to Alejandro García who worked with me on this and did an awful lot of debugging and testing (as well as the original patch that inspired this work).

by Chris Lord at April 19, 2022 12:08 PM

January 06, 2022

Emmanuele Bassi

GNOME Keyring Kills Babies

I’ve just downloaded the gnome-keyring-manager application, written for the GNOME Love effort, in order to see its new UI; along the way, I decided to have a look at the gnome-keyring library.

The more I look at gnome-keyring.h, the more I regret having had breakfast this morning.

Using gpointers all over the place, no GObjects, no signals and properties, basically impossible to bind for use from languages other than C. It looks like someone took each point I dislike in the current GNOME-VFS implementation in order to build a library.

It’s utter madness, and a lack of proper software engineering, that drives this library; and its authors really did propose it for inclusion in the Developer Platform? For Sauron’s sake, do not include it!

I know it’s meant to be a private library, but please: we should try to enforce a certain coding policy for GNOME libraries, even private ones like this; as it is, gnome-keyring is something I’d rather not touch with a ten-foot pole.

by ebassi at January 06, 2022 11:15 PM

December 28, 2021

Tomas Frydrych

Scotland and Sitka

I see that Chris Packham and others are making waves about the amount of Sitka being planted in Scotland. Funny that. There is nothing new here. I raised this issue years back when the Scottish Government first published its Draft Climate Change Plan (2016?), and various environmental groups and outdoor influencers were praising its tree planting objectives without bothering to scrutinise the numbers (simply putting the tree planting numbers alongside the timber production targets from a different section of the DCCP made it clear that most of the new trees were going to be Sitka). At the time I got some fairly condescending responses to those concerns from some who should have known better.

But here is the thing, Sitka might not be doing much for the romantic images of Scotland as a would-be wilderness, but we need it. Timber is one of the keys to dealing with climate change, as we need to wean ourselves off our dependency on concrete, which accounts for something like 15% of worldwide CO2 emissions (the estimates vary somewhat, but it’s a lot). In Britain we happen to import most of our timber, and not all of it is grown sustainably elsewhere. We should really be self sufficient, and Sitka is perhaps the most productive source of construction timber.

The other thing people don’t seem to get is that natural forests do not provide for long term carbon capture simply because the natural life cycle of a tree is carbon neutral. As the stump said to the seedling, ‘remember, from CO2 thou came and into CO2 thou shall return’. Planting new forests gives us some initial carbon capture, but that trails off as the forest matures into its regular life cycle. To provide ongoing capture requires removing the wood and locking the carbon in the form of timber.

That is not to say we don’t need (many) more native trees, and ultimately established mature woodlands, but we need those because in our latitude they are key to biodiversity, and the loss of biodiversity is as much of an existential threat to us as climate change. The real problem in Scotland is not that we are planting too much Sitka, but simply that we do not plant enough trees. The Scottish Government’s tree planting numbers are not ambitious enough, we could easily be aiming for at least double that, with a better balance between timber and native woodlands.

(A bigger issue is that the Scottish Government’s environmental policies are reduced largely to cutting down CO2 emissions, they haven’t cottoned on yet that retaining biodiversity matters at least as much, and, unfortunately, the actions needed to address the latter are at times at odds with the former. Just now the Scottish Government is obsessed with the expansion of renewable energy installations, and, as it happens, windfarms and forests don’t mix, so every new windfarm built is a forest not planted. This environmental reductionism is going to hurt us badly in the not so distant future.)

by tf at December 28, 2021 10:20 AM

October 15, 2021

Emmanuele Bassi

GWeather next

tl;dr Libgweather, the small GNOME library that queries weather services, is getting a major version bump to allow applications using it to be ported to GTK4.

In the beginning, there was a weather applet in the GNOME panel. It had a bunch of code that poked at a couple of websites to get the weather information for a given airport or weather observation stations, and shipped with a list of locations and their nearest METAR code.

In 2007, the relevant code was moved to its own separate repository, so that other applications and system settings could reuse the same code as the panel applet: the libgweather library was born. Aside from the basic weather information and location objects, libgweather also had a couple of widgets: one for selecting a location (with autocompletion), and one for selecting a timezone using a location.

Since libgweather was still very much an ad hoc library for a handful of applications, there was no explicit API and ABI stability guarantee made by its maintainers; in fact, in order to use it, you had to “opt in” with a specific C pre-processor symbol.

Time passed, and a few more applications appeared during the initial GNOME 3 cycles—like Weather, followed by Clocks a month later. Most of the consumers of libgweather were actually going through a language binding, which meant they were not really “opting into” the API through the explicit pre-processor symbol; it also meant that changes in the API and ABI could end up being found only after a libgweather release, instead of during a development cycle. Of course, back then, we only had a single CI/CD pipeline for the whole project, with far too little granularity and far too wide scope. Still, the GWeather consumers were few and far between, and the API was not stabilised.

Fast forward to now.

The core GNOME applications using GWeather are in the process of being ported to GTK4, but GWeather still ships with two GTK3 widgets. Since you cannot have GTK3 and GTK4 types in the same process, this requires either porting GWeather to GTK4 or dropping the widgets. As it turns out, the widgets are not really shared across applications using libgweather, and all of them have also been redesigned or are using the libadwaita/GTK4 port as a chance to refresh their overall appearances. This makes our life a little bit easier, as we can drop the widgets without really losing any actual functionality that people do care about.

For GNOME 42, the plan for libgweather is:

  • bump up the API version to 4.0, and ensure parallel installability with the older libgweather-3; this requires renaming things like the pkg-config file and the settings schema, alongside the shared library
  • drop the GTK widgets, and some old API that hasn’t been working in years, like getting the radar image animation
  • stabilise the API, and turn libgweather into a proper library, with the usual API and ABI stability guarantees (deprecations and new symbols added only during development cycles, no changes/removals until the following major API bump)
  • make it easier to use libgweather objects with GListModel-based API
  • document the API properly
  • clean up the internals from various years of inconsistent coding style and practices

I’m also going through the issues imported from Bugzilla and closing the ones that have long since been fixed.

In the meantime, the old libgweather-3 API is going to be frozen, for the tools that still use it and won’t be ported to GTK4 any time soon.

For more information, you can read:

If you’re using libgweather, I strongly recommend you use the 40.0 release or build from the libgweather-3 branch until you plan to port to GTK4.

If you’re distributing libgweather, I recommend you package the new libgweather under a new name, given that it’s parallel installable with the old one; my recommendation is to use libgweather4 or libgweather-4 as the name of the package.

by ebassi at October 15, 2021 09:54 AM

October 06, 2021

Emmanuele Bassi

More documentation changes

It’s been nearly a month since I’ve talked about gi-docgen, my little tool to generate API references from introspection data. In between my blog post and now, a few things have changed:

  • the generated API reference has had a few improvements, most notably the use of summaries in all index pages
  • all inheritable types now show the properties, signals, and methods inherited from their ancestors and from the implemented interfaces; this should hopefully make the reference much more useful for newcomers to GTK
  • we allow cross-linking between dependent namespaces; this is done using an optional URL map, with links re-written on page load. Websites hosting the API reference would need only to provide an urlmap.js file to rewrite those links, instead of doing things like parsing the HTML and changing the href attribute of every link cough library-web cough
  • we parse custom GIR attributes to provide better cross-linking between methods, properties, and signals.
  • we generate an index file with all the possible end-points, and a dictionary of terms that can be used for searching; the terms are stemmed using the Porter stemming algorithm
  • the default template will let you search using the generated index; the search supports scoping, so using method:show widget will look for all the symbols in which the term show appears in method descriptions, alongside the widget term
  • we also generate a DevHelp file, so theoretically DevHelp can load up the API references built by gi-docgen; there is still work to be done there, but thanks to the help of Jan Tojnar, it’s not entirely hopeless

Thanks to all these changes, both Pango and GTK have switched from gtk-doc to gi-docgen for their API references in their respective main development branches.

Now, here’s the part where it gets complicated.

Using gi-docgen

Quick reminder: the first and foremost use case for gi-docgen is GTK (and some of its dependencies). If it works for you, I’m happy, but I will not go out of my way to make your use case work—especially if it comes at the expense of Job #1, i.e. generating the API reference for GTK.

Since gi-docgen is currently a slightly moving target, I strongly recommend using it as a Meson subproject. I also strongly recommend vendoring it inside your release tarballs, using:

meson dist --include-subprojects

when generating the distribution archive. Do not try and depend on an installed copy of gi-docgen.

Additionally, it’s possible to include the gi-docgen API reference into the Meson tarball by using a dist script. The API reference will be re-generated when building, but it can be extracted from the tarball, like in the good old gtk-doc-on-Autotools days.

Publishing your API reference

The tool we use to generate developer.gnome.org, library-web, is unmaintained and, quite frankly, fairly broken. It is a Python2 script that got increasingly more complicated without actually getting more reliable; it got progressively more broken once we started having more than two GTK modules, and then it got severely broken once we started using Meson and CMake, instead of Autotools. These days, you’ll be lucky to get your API reference uploaded to developer.gnome.org (as a separate archive), and you can definitely forget about cross-linking, because the tool will most likely get things wrong in its quest to restyle any HTML it finds, and then fix the references to what it thinks is the correct place:

The support for Doxygen (which is used by the C++ bindings) is minimal, and it ended up breaking a few times. Switching away from gtk-doc to gi-docgen is basically the death knell for the whole thing:

  • first of all, it cannot match the documentation module with the configuration for it, because gi-docgen does not have the concept of a “documentation module”; at most, it has a project configuration file.
  • additionally, we really don’t want library-web messing about with the generated HTML, especially if the end result breaks stuff.

So, the current solution is to try and make library-web detect if we’re using gi-docgen, by looking for toml and toml.in files in the release archive, and then upload various files as they are. It’s a bad and fragile stopgap solution, but it’s the best we can do without breaking everything in even more terrible ways.

For GNOME 41 my plan is to sidestep the whole thing, and send library-web to a farm upstate. We’re going to use gnome-build-meta to build the API references of the projects we have in our SDK, and then publish them according to the SDK version.

My recommendation for library authors, in any case, is to build the API reference for the development branch of their project as part of their CI, and then publish it to the GitLab pages space. For instance:

This way, you’ll always have access to the latest documentation.

Sadly, we can’t have per-branch references, because GitLab pages are nuked every time a branch gets built; for that, we’d have to upload the artifacts somewhere else, like an S3 bucket.


Things are going to get better in the near future, after 10 years of stagnation; sadly, this means we’re living in Interesting Times, so I ask of you to please be patient while we transition towards a new and improved way to document our platform.

by ebassi at October 06, 2021 11:54 PM

September 21, 2021

Emmanuele Bassi

Properties, introspection, and you

It is a truth universally acknowledged, that a GObject class in possession of a property, must be in want of an accessor function.

The main issue with that statement is that it’s really hard to pair the GObject property with the accessor functions that set the property’s value, and retrieve it.

From a documentation perspective, tools might not establish any relation (gtk-doc), or they might require some additional annotation to do so (gi-docgen); but at the introspection level there’s nothing in the XML or the binary data that lets you go from a property name to a setter, or a getter, function. At least, until now.

GObject-introspection 1.70, released alongside GLib 2.70 and GNOME 41, introduced various annotations for both properties and methods that let you go from one to the other; additionally, new API was added to libgirepository to allow bindings to dynamic languages to establish that relation at run time.

Annotations

If you have a property, and you document it as you should, you’ll have something like this:

/**
 * YourWidget:your-property:
 *
 * A property that does something amazing.
 */

If you want to associate the setter and getter functions to this property, all you need to do is add the following identifier annotations to it:

/**
 * YourWidget:your-property: (setter set_your_property) (getter get_your_property)
 *
 * A property that does something amazing.
 */

The (setter) and (getter) annotations take the name of the method that is used to set, and get, the property, respectively. The method name is relative to the type, so you should not pass the C symbol.

On the accessor methods side, you have two additional annotations:

/**
 * your_widget_set_your_property: (set-property your-property)
 * @self: your widget
 * @value: the value to set
 *
 * Sets the given value for your property.
 */

and:

/**
 * your_widget_get_your_property: (get-property your-property)
 * @self: your widget
 *
 * Retrieves the value of your property.
 *
 * Returns: the value of the property
 */

Heuristics

Of course, you’re now tempted to go and add those annotations to all your properties and related accessors. Before you do that, though, you should know that the introspection scanner will try and match properties and accessors by itself, using appropriate heuristics:

  • if your object type has a writable, non-construct-only property, and a method that is called set_<property>, then the property will have a setter and the method will be matched to the property
  • if your object type has a readable property, and a method that is called get_<property>, then the property will have a getter and the method will be matched to the property
  • additionally, if the property is read-only and the property type is boolean, the scanner will look at a method that has the same name as the property as well; this is meant to catch getters like gtk_widget_has_focus(), which accesses the read-only property has-focus

API

All of the above ends up in the introspection XML, which is used by documentation tools and code generators. Bindings for dynamic languages using libgirepository can also access this information at run time, by using the API in GIPropertyInfo to retrieve the setter and getter function information for a property; and the API in GIFunctionInfo to retrieve the property being set.

Future

Ideally, with this information, language bindings should be able to call the accessor functions instead of going through the generic g_object_set_property() and g_object_get_property() API, except as a fallback. This should speed up the property access in various cases. Additionally, bindings could decide to stop exposing C accessors, and only expose the property, in order to make the API more idiomatic.
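
To make that concrete, here is a hypothetical sketch reusing the your-property example from above (widget and some_value are placeholders):

# Generic path: goes through g_object_set_property()/g_object_get_property()
widget.props.your_property = some_value
value = widget.props.your_property

# With the (setter)/(getter) annotations, a binding could transparently
# route the same access through the C accessors instead:
widget.set_your_property(some_value)
value = widget.get_your_property()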

On the documentation side, this will ensure that tools like gi-docgen will be able to bind the properties and their accessors more reliably, without requiring extra attributes.

And one more thing

One thing that did not make it in time for the 1.70 release, but will land early in the next development cycle for gobject-introspection, is the validation for properties. Language bindings don’t really like it when the C API exposes properties that have the same name as methods and virtual functions; we already have a validation pass ready to land, so expect warnings in the near future.

Another feature that will land early in the cycle is the (emitter) annotation, which will bind a method emitting a signal with the signal name. This is a feature taken from Vala’s metadata, and should improve the quality of life of people using introspection data with Vala, as well as removing the need for another attribute in gi-docgen.

Finally, if you maintain a language binding: please look at !204, and make sure you’re not calling g_assert_not_reached() or g_error() when encountering a new scope type. The forever scope cannot land if it breaks every single binding in existence.

by ebassi at September 21, 2021 08:23 PM

August 26, 2021

Emmanuele Bassi

Publishing your documentation

The main function of library-web, the tool that published the API reference of the various GNOME libraries, was to take release archives and put their contents in a location that would be visible to a web server. In 2006, this was the apex of automation, of course. These days? Not so much.

Since library-web is going the way of the Dodo, and we do have better ways to automate the build and publishing of files with GitLab, how do we replace library-web in 2021? The answer is, unsurprisingly: continuous integration pipelines.

I will assume that you’re already building—and testing—your library using GitLab’s CI; if you aren’t, then you have bigger problems than just publishing your API.

So, let’s start with these preconditions:

  • your project is hosted on a GitLab instance with CI runners and Pages available
  • your project is built with Meson
  • your project generates its API reference with gi-docgen or gtk-doc

If your project doesn’t satisfy these preconditions you might want to work on doing so; alternatively, you can implement your own CI pipeline.

Let’s start with a simple job template:

# Expected variables:
# PROJECT_DEPS: the dependencies for your own project
# MESON_VERSION: the version of Meson you depend on
# MESON_EXTRA_FLAGS: additional Meson setup options
#   you wish to pass to the configuration phase
# DOCS_FLAGS: the Meson setup option for enabling the
#   documentation, if any
# DOCS_PATH: the path of the generated reference,
#   relative to the build root
.gidocgen-build:
  image: fedora:latest
  before_script:
    - export PATH="$HOME/.local/bin:$PATH"
    - >
      dnf install -y
      python3
      python3-pip
      python3-wheel
      gobject-introspection-devel
      graphviz
      ninja-build
      redhat-rpm-config
    - dnf install -y ${PROJECT_DEPS}
    - >
      pip3 install
      meson==${MESON_VERSION}
      gi-docgen
      jinja2
      Markdown
      markupsafe
      pygments
      toml
      typogrify
  script:
    - meson setup ${MESON_EXTRA_FLAGS} ${DOCS_FLAGS} _docs .
    - meson compile -C _docs
    - |
      pushd "_docs/${DOCS_PATH}" > /dev/null 
      tar cfJ ${CI_PROJECT_NAME}-docs.tar.xz .
      popd > /dev/null
    - mv _docs/${DOCS_PATH}/${CI_PROJECT_NAME}-docs.tar.xz .
  artifacts:
    when: always
    name: 'Documentation'
    expose_as: 'Download the API reference'
    paths:
      - ${CI_PROJECT_NAME}-docs.tar.xz

This CI template will:

  • download all the required dependencies for building the API reference using gi-docgen
  • build your project, including the API reference
  • create an archive with the API reference
  • store the archive as a CI artefact that you can easily download

Incidentally, by adding a meson test -C _docs to the script section, you can easily test your build as well; and if you have a test() target in your build that runs gi-docgen check, then you can verify that your documentation is always complete.

Now, all you have to do is create your own CI job that inherits from the template inside its own stage. I will use JSON-GLib as a reference:

stages:
  - docs

api-reference:
  stage: docs
  extends: .gidocgen-build
  needs: []
  variables:
    MESON_VERSION: 0.55.3
    DOCS_FLAGS: -Dgtk_doc=true
    PROJECT_DEPS:
      gcc
      git
      gettext
      glib2-devel
    DOCS_PATH: docs/json-glib-1.0

“What about gtk-doc!”, I hear from the back of the room. Well, fear not, because there’s a similar template you can use if you’re still using gtk-doc in your project:

# Expected variables:
# PROJECT_DEPS: the dependencies for your own project
# MESON_VERSION: the version of Meson you depend on
# MESON_EXTRA_FLAGS: additional Meson setup options you
#   wish to pass to the configuration phase
# DOCS_FLAGS: the Meson setup option for enabling the
#   documentation, if any
# DOCS_TARGET: the Meson target for building the
#   documentation, if any
# DOCS_PATH: the path of the generated reference,
#   relative to the build root
.gtkdoc-build:
  image: fedora:latest
  before_script:
    - export PATH="$HOME/.local/bin:$PATH"
    - >
      dnf install -y
      python3
      python3-pip
      python3-wheel
      gtk-doc
      ninja-build
      redhat-rpm-config
    - dnf install -y ${PROJECT_DEPS}
    - pip3 install  meson==${MESON_VERSION}
  script:
    - meson setup ${MESON_EXTRA_FLAGS} ${DOCS_FLAGS} _docs .
    # This is exceedingly annoying, but sadly it's how
    # gtk-doc works in Meson
    - ninja -C _docs ${DOCS_TARGET}
    - |
      pushd "_docs/${DOCS_PATH}" > /dev/null 
      tar cfJ ${CI_PROJECT_NAME}-docs.tar.xz .
      popd > /dev/null
    - mv _docs/${DOCS_PATH}/${CI_PROJECT_NAME}-docs.tar.xz .
  artifacts:
    when: always
    name: 'Documentation'
    expose_as: 'Download the API reference'
    paths:
      - ${CI_PROJECT_NAME}-docs.tar.xz

And now you can use extends: .gtkdoc-build in your api-reference job.

Of course, this is just half of the job: the actual goal is to publish the documentation using GitLab’s Pages. For that, you will need another CI job in your pipeline, this time using the deploy stage:

stages:
  - docs
  - deploy

# ... the api-reference job goes here...

pages:
  stage: deploy
  needs: ['api-reference']
  script:
    - mkdir public && cd public
    - tar xfJ ../${CI_PROJECT_NAME}-docs.tar.xz
  artifacts:
    paths:
      - public
  only:
    - master
    - main

Now, once you push to your main development branch, your API reference will be built by your CI pipeline, and the results published in your project’s Pages space—like JSON-GLib.

The CI pipeline and GitLab Pages are also useful for building complex, static websites presenting multiple versions of the documentation; or presenting multiple libraries. An example of the former is libadwaita’s website, while an example of the latter is the GTK documentation website. I’ll write a blog post about them another time.


Given that the CI templates are pretty generic, I’m working on adding them into the GNOME ci-templates repository, so you will be able to use something like:

include: 'https://gitlab.gnome.org/GNOME/citemplates/raw/HEAD/docs/gidocgen.yml'

or:

include: 'https://gitlab.gnome.org/GNOME/citemplates/raw/HEAD/docs/gtkdoc.yml'

without having to copy-paste the template in your own .gitlab-ci.yml file.


The obvious limitation of this approach is that you will need to depend on the latest version of Fedora to build your project. Sadly, we cannot use Flatpak and the GNOME run time images for this, mainly because we are building libraries, not applications; and because extracting files out of a Flatpak build after it has completed isn’t entirely trivial. Another side effect is that if you bump up the dependencies of your project to something on the bleeding edge and currently not packaged on the latest stable Fedora, you will need to have it included as a Meson sub-project. Of course, you should already be doing that, so it’s a minor downside.

Ideally, if GNOME built actual run time images for the SDK, we could install gtk-doc, gi-docgen, and all their dependencies into the SDK itself, and avoid depending on a real Linux distribution for the libraries in our platform.

by ebassi at August 26, 2021 12:01 AM

August 05, 2021

Chris Lord

OffscreenCanvas update

Hold up, a blog post before a year’s up? I’d best slow down, don’t want to over-strain myself 🙂 So, a year ago, OffscreenCanvas was starting to become usable but was missing some key features, such as asynchronous updates and text-related functions. I’m pleased to say that, at least for Linux, it’s been complete for quite a while now! It’s still going to be a while, I think, before this is a truly usable feature in every browser. Gecko support is still forthcoming, support for non-Linux WebKit is still off by default and I find it can be a little unstable in Chrome… But the potential is huge, and there are now double the number of independent, mostly-complete implementations that prove it’s a workable concept.

Something I find I’m guilty of, and I think that a lot of systems programmers tend to be guilty of, is working on a feature but not using that feature. With that in mind, I’ve been spending some time in the last couple of weeks to try and bring together demos and information on the various features that the WebKit team at Igalia has been working on. To that end, I’ve written a little OffscreenCanvas demo. It should work in any browser, but is a bit pointless if you don’t have OffscreenCanvas, so maybe spin up Chrome or a canary build of Epiphany.

OffscreenCanvas fractal renderer demo, running in GNOME Web Canary

Those of us old-skool computer types probably remember running fractal renderers back on our old home computers, whatever they may have been (PC for me, but I’ve seen similar demos on Amigas, C64s, Amstrad CPCs, etc.). They would take minutes to render a whole screen. Of course, with today’s computing power, they are much faster to render, but they still aren’t cheap by any stretch of the imagination. We’re talking 100s of millions of operations to render a full-HD frame. Running on the CPU on a single thread, this is still something that isn’t really real-time, at least implemented naively in JavaScript. This makes it a nice demonstration of what OffscreenCanvas, and really, Worker threads allow you to do without too much fuss.

The demo, for which you can look at my awful code, splits that rendering into 64 tiles and gives each tile to the first available Worker in a pool of rendering threads (different parts of the fractal are much more expensive to render than others, so it makes sense to use a work queue, rather than just shoot them all off distributed evenly amongst however many Workers you’re using). Toggle one of the animation options (palette cycling looks nice) and you’ll get a frame-rate counter in the top-right, where you can see the impact on performance that adding Workers can have. In Chrome, I can hit 60fps on this 40-core Xeon machine, rendering at 1080p. Just using a single worker, I barely reach 1fps (my frame-rates aren’t quite as good in WebKit, I expect because of some extra copying – there are some low-hanging fruit around OffscreenCanvas/ImageBitmap and serialisation when it comes to optimisation). If you don’t have an OffscreenCanvas-capable browser (or a monster PC), I’ve recorded a little demonstration too.

The important thing in this demo is not so much that we can render fractals fast (this is probably much, much faster to do using WebGL and shaders), but how easy it is to massively speed up a naive implementation with relatively little thought. Google Maps is great, but even on this machine I can get it to occasionally chug and hitch – OffscreenCanvas would allow this to be entirely fluid with no hitches. This becomes even more important on less powerful machines. It’s a neat technology and one I’m pleased to have had the opportunity to work on. I look forward to seeing it used in the wild in the future.

by Chris Lord at August 05, 2021 03:33 PM

August 02, 2021

Emmanuele Bassi

Documenting GNOME for developers

You may have just now noticed that the GNOME developers documentation website has changed after 15 years. You may also have noticed that it contains drastically less content than it used to. Before you pick up torches and pitchforks, let me give you a short tl;dr of the changes:

  • Yes, this is entirely intentional
  • Yes, I know that stuff has been moved
  • Yes, I know that old URLs don’t work
  • Yes, some redirections will be put in place
  • No, we can’t go back

So let’s recap a bit the state of the developers documentation website in 2021, for those who weren’t in attendance at my GUADEC 2021 presentation:

  • library-web is a Python application, which started as a Summer of Code project in 2006, whose job was to take Autotools release tarballs, explode them, fiddle with their contents, and then publish files on the gnome.org infrastructure.
  • library-web relies heavily on Autotools and gtk-doc.
  • library-web does a lot of pre-processing of the documentation to rewrite links and CSS from the HTML files it receives.
  • library-web is very much a locally sourced, organic, artisanal pile of hacks that revolve very much around the GNOME infrastructure from around 2007-2009.
  • library-web is incredibly hard to test locally, even when running inside a container, and the logging is virtually non-existent.
  • library-web is still running on Python 2.
  • library-web is entirely unmaintained.

That should cover the infrastructure side of things. Now let’s look at the content.

The developers documentation is divided in four sections:

  • a platform overview
  • the Human Interface guidelines
  • guides and tutorials
  • API references

The platform overview is slightly out of date; the design team has been reviewing the HIG and using a new documentation format; the guides and tutorials still include things like GTK1 and GTK2 content, how to port GNOME 2 applications to GNOME 3, or how to write a Metacity theme.

This leaves us with the API references, which are a grab bag of miscellaneous things, listed by version numbers. Outside of the C API documentation, the only other references hosted on developer.gnome.org are the C++ bindings—which, incidentally, use Doxygen and, when they aren’t broken by library-web messing about with the HTML, have their own franken-style mash-up of gtkmm.org and developer.gnome.org.

Why didn’t I know about this?

If you’re asking this question, allow me to be blunt for a second: the reason you never noticed that the developers documentation website was broken is that you never actually experienced it for its intended use case. Most likely, you just looked in a couple of well-known places and never ventured outside of those; or you are a maintainer, and you literally never cared how things worked (or didn’t work) after you uploaded a release tarball somewhere. Like all infrastructure, it was somebody else’s problem.

I completely understand that we’re all volunteers, and that things that work can be ignored because everyone has more important things to think about.

Sadly, things change: we don’t use Autotools (that much) any more, which means release archives no longer contain the generated documentation; this means library-web cannot be updated, unless somebody modifies the configuration to look for a separate documentation tarball that the maintainer has to generate manually and upload to a magic location on the gnome.org file server—this has happened for GTK4 and GLib for the past two years.

Projects change the way they lay out the documentation, or gtk-doc changes something, and that causes library-web to stop extracting the right files; you can look at the ATK reference for the past year and a half for an example.

Projects bump up their API version, and now the cross-referencing gets broken, like the GTK3 pages linking to GDK2 types.

Finally, projects decide to change how their documentation is generated, which means that library-web has no idea how to extract the HTML files, or how to fiddle with them.

If you’re still using Autotools and gtk-doc, haven’t done an API bump in 15 years, and all you care about is copying a release archive to the gnome.org infrastructure, I’m sure all of this will come as a surprise, and I’m sorry you’re just now being confronted with a completely broken infrastructure. Sadly, the infrastructure was broken for everybody else long before this point.

What did you do?

I tried to make library-web deal with the changes in our infrastructure. I personally built and uploaded multiple versions of the documentation for GLib (three different archives for each release) for a year and a half; I configured library-web to add more “extra tarball” locations for various projects; I tried making library-web understand the new layout of various projects; I even tried making library-web publish the gi-docgen references used by GTK, Pango, and other projects.

Sadly, every change broke something else—and I’m not just talking about the horrors of the code base. As library-web is responsible for determining the structure of the documentation, any change to how the documentation is handled leads to broken URLs, broken links, or broken redirections.

The entire castle of cards needed to go.

Which brings us to the plan.

What are you going to do?

Well, the first step has been taken: the new developer.gnome.org website does not use library-web. The content has been refreshed, and more content is on the way.

Again, this leaves the API references. For those, there are two things that need to happen—and are planned for GNOME 41:

  1. all the libraries that are part of the GNOME SDK run time, built by gnome-build-meta, must also build their documentation, which will be published as part of the org.gnome.Sdk.Docs extension; the contents of the extension will also be published online.
  2. every library that is hosted on gnome.org infrastructure should publish its documentation through its CI pipeline; for that, I’m working on a CI template file and image that should take care of the easy projects, and will act as a model for projects that are more complicated.

I’m happy to guide maintainers to deal with that, and I’m also happy to open merge requests on various projects.

In the meantime, the old documentation is still available as a static snapshot, and the sysadmins are going to set up some redirections to bridge us from the old platform to the new—and hopefully we’ll soon be able to redirect to each project’s GitLab pages.

Can we go back, please?

Sadly, since nobody has ever bothered picking up the developers documentation when it was still possible to incrementally fix it, going back to a broken infrastructure isn’t going to help anybody.

We also cannot keep the old developer.gnome.org around and add a new one, of course; then we’d have two websites: one broken, unmaintained, and linked all over the place, and a new one that nobody knows exists.

The only way is forward, for better or worse.

What about Devhelp?

Some of you may have noticed that I picked up the maintenance of Devhelp, and landed a few fixes to ensure that it can read the GTK4 documentation. Outside of some visual refresh for the UI, I am also working on making it load the contents of the org.gnome.Sdk.Docs run time extension, which means it’ll be able to load all the core API references. Ideally, we’re also going to see a port to GTK4 and libadwaita, as soon as WebKitGTK for GTK4 is more widely available.

by ebassi at August 02, 2021 09:32 AM

July 28, 2021

Tomas Frydrych

The BBC's 'Riding the North Coast 500'

The Beeb has a photographic piece on cycling the NC500. Unfortunately, these serene images, featuring four cyclists riding quiet scenic roads, are completely misleading as to the reality of cycling on the NC500.

I have just spent two weeks cycling on a section of the route in the North West, and the simple truth is the creation of the NC500 has turned what once might have been one of the best cycle touring destinations in the UK into a busy, unpleasant thoroughfare.

The two-lane sections of the route are particularly awful to cycle on. These roads in the NW frequently lend themselves to driving at speeds well above the speed limit, even though the winding and undulating nature of the road often means very limited visibility. Performance car racing along the NC500 is rampant, and Police Scotland seem unperturbed (I did not see a single police car on the open road). As a cyclist you will be repeatedly passed by cars travelling at idiotic speeds, often passing very close.

On the single track roads the traffic usually moves a bit slower, and you at least have the option of preventing close passes by systematically riding in the primary position. But traffic jams are common as the traffic bunches up where passing places are far apart, and this makes all of the drivers just that bit more inconsiderate. Again and again you will run into drivers who will not wait at passing places for you, expecting you to dismount and step off the road. The worst offenders in this regard are the large camper vans, even though these vehicles on their own barely fit the narrow tarmac lanes, and it is literally impossible for a cyclist to squeeze past them.

I have always wanted to do a proper bike tour of the NW, but after those couple of weeks this July I have largely lost the appetite for it; perhaps during the off-season months it still might be a nice trip.

I have made a formal complaint to the BBC over the above piece: the selection of images, presented without any commentary, misrepresents what the NC500 is like, and I'd like to see a bit more balanced coverage of it.

by tf at July 28, 2021 02:11 PM

Emmanuele Bassi

Final Types

The type system at the base of our platform, GType, has various kinds of derivability:

  • simple derivability, where you’re allowed to create your derived version of an existing type, but you cannot derive your type any further;
  • deep derivability, where you’re allowed to derive types from other types;

An example of the first kind is any type inheriting from GBoxed, whereas an example of the second kind is anything that inherits from GTypeInstance, like GObject.

Additionally, any derivable type can be marked as abstract; an abstract type cannot be instantiated, but you can create your own derived type which may or may not be “concrete”. Looking at the GType reference documentation, you’ll notice various macros and flags that exist to implement this functionality—including macros that were introduced to cut down the boilerplate necessary to declare and define new types.

The G_DECLARE_* family of macros, though, introduced a new concept in the type system: a “final” type. Final types are leaf nodes in the type hierarchy: they can be instantiated, but they cannot be derived any further. GTK 4 makes use of this kind of types to nudge developers towards composition, instead of inheritance. The main problem is that the concept of a “final” type is entirely orthogonal to the type system; there’s no way to programmatically know that a type is “final”—unless you have access to the introspection data and start playing with heuristics about symbol visibility. This means that language bindings are unable to know without human intervention if a type can actually be inherited from or not.

In GLib 2.70 we finally plugged the hole in the type system, and we introduced the G_TYPE_FLAG_FINAL flag. Types defined as “final” cannot be derived any further: as soon as you attempt to register your new type that inherits from a “final” type, you’ll get a warning at run time. There are macros available that will let you define final types, as well.

Thanks to the “final” flag, we can also include this information in the introspection data; this will allow language bindings to warn you if you attempt to inherit from a “final” type, likely using language-native tools, instead of getting a run time warning.

If you are using G_DECLARE_FINAL_TYPE in your code, you should bump up your GObject dependency to 2.70, and switch your implementation from G_DEFINE_TYPE and friends to G_DEFINE_FINAL_TYPE.
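For illustration, here’s a minimal sketch of a final type, using a hypothetical MyObject (this is not code from any particular project):

/* my-object.h */
#include <glib-object.h>

#define MY_TYPE_OBJECT (my_object_get_type ())
G_DECLARE_FINAL_TYPE (MyObject, my_object, MY, OBJECT, GObject)

/* my-object.c */
struct _MyObject
{
  GObject parent_instance;
};

/* G_DEFINE_FINAL_TYPE (new in GObject 2.70) registers the type with
 * G_TYPE_FLAG_FINAL, so attempts to derive from MyObject are diagnosed
 * at run time instead of silently succeeding */
G_DEFINE_FINAL_TYPE (MyObject, my_object, G_TYPE_OBJECT)

static void
my_object_class_init (MyObjectClass *klass)
{
}

static void
my_object_init (MyObject *self)
{
}

Note that G_DECLARE_FINAL_TYPE already defines the class structure for you, which is why only the instance structure appears in the C file.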

by ebassi at July 28, 2021 10:02 AM

June 10, 2021

Ross Burton

Faster image transfer across the network with zsync

Those of us involved in building operating system images using tools such as OpenEmbedded/Yocto Project or Buildroot don't always have a powerful build machine under our desk or in the same building on gigabit Ethernet. Our build machine may be in the cloud, or in another office over a VPN running over a slow residential ADSL connection. In these scenarios, repeatedly downloading gigabyte-sized images for local testing can get very tedious.

There are some interesting solutions if you use Yocto: you could expose the shared state over the network and recreate the image locally, which, if the configurations are the same, will result in no local compilation. However, this isn't feasible if your local machine isn't running Linux, or if you just want to download the image without any other complications. This is where zsync is useful.

zsync is a tool similar to rsync but optimised for transferring single large files across the network. The server generates metadata containing the chunk information, and then shares both the image and the metadata over HTTP. The client can then use any existing local file as a seed file to speed up downloading the remote file.

On the server, run zsyncmake on the file to be transferred to generate the .zsync metadata. You can also pass -z to tell it to compress the file first, if it isn't already compressed.

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic

$ zsyncmake -z core-image-minimal-*.wic

$ ls -lh core-image-minimal-*.wic*
-rw-r--r-- 1 ross ross 421M Jun 10 13:44 core-image-minimal-fvp-base-20210610124230.rootfs.wic
-rw-r--r-- 1 ross ross  53M Jun 10 13:45 core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz
-rw-r--r-- 1 ross ross 4.7K Jun 10 13:45 core-image-minimal-fvp-base-20210610124230.rootfs.wic.zsync

Here we have ~420MB of disk image, which compressed down to a mere 53MB, plus just ~5KB of metadata. This image compressed very well as the raw image is largely empty space, but for the purposes of this example we can ignore that.

The zsync client downloads over HTTP but has some non-trivial requirements (notably, the server must support HTTP range requests), so you can't just use any HTTP server; specifically, my go-to dumb server (Python's integrated http.server) isn't sufficient. If you want a hassle-free server then the Node.js package http-server works nicely, or any other proper server will work. However you choose to do it, share both the .zsync and .wic.gz files.

$ npm install -g http-server
$ http-server -p 8080 /path/to/images

Now you can use the zsync client to download the images. Sadly zsync isn't actually magical, so the first download will still need to download the full file:

$ zsync http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.zsync
No relevent local data found - I will be downloading the whole file.
downloading from http://buildmachine:8080/core-image-minimal-fvp-base-20210610124230.rootfs.wic.gz:
#################### 100.0% 7359.7 kBps DONE

verifying download...checksum matches OK
used 0 local, fetched 55208393

However, subsequent downloads will be a lot faster as only the differences will be fetched. Say I decide that core-image-minimal is too, well, minimal, and build core-image-sato, which is a full X.org stack instead of just busybox. After building the image and metadata we now have a ~700MB image:

-rw-r--r-- 1 ross ross 729M Jun 10 14:17 core-image-sato-fvp-base-20210610125939.rootfs.wic
-rw-r--r-- 1 ross ross 118M Jun 10 14:18 core-image-sato-fvp-base-20210610125939.rootfs.wic.gz
-rw-r--r-- 1 ross ross 2.2M Jun 10 14:19 core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync

Normally we'd have to download the full 730MB, but with zsync we can just fetch the differences. By telling the client to use the existing core-image-minimal as a seed file, we can fetch the new core-image-sato:

$ zsync -i core-image-minimal-fvp-base-20210610124230.rootfs.wic  http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.zsync
reading seed file core-image-minimal-fvp-base-20210610124230.rootfs.wic
core-image-minimal-fvp-base-20210610124230.rootfs.wic. Target 70.5% complete.
downloading from http://buildmachine:8080/core-image-sato-fvp-base-20210610125939.rootfs.wic.gz:
#################### 100.0% 10071.8 kBps DONE     

verifying download...checksum matches OK
used 538800128 local, fetched 70972961

By using the seed file, zsync determined that it already has 70% of the file on disk, and downloaded just the remaining chunks.

For incremental builds the differences can be very small when using the Yocto Project, as, thanks to the reproducible builds effort, there are no spurious changes (such as embedded timestamps or non-deterministic compilation) on recompiles.

Now, obviously I don't recommend doing all of this by hand. For Yocto Project users, as of right now there is a patch queued for meta-openembedded adding a recipe for zsync-curl, and a patch queued for openembedded-core to add zsync and gzsync image conversion types (for IMAGE_FSTYPES, for example wic.gzsync) to generate the metadata automatically. Bring your own HTTP server and you can fetch without further effort.
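Once those patches land, enabling this should be a one-line change in your image configuration. As a sketch, and assuming the conversion type names stay as proposed in the queued patches:

# local.conf: emit a gzip-compressed wic image plus its zsync metadata
IMAGE_FSTYPES += "wic.gzsync"

The metadata generation then happens as part of the normal image build, ready to be served.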

by Ross Burton at June 10, 2021 03:28 PM

May 31, 2021

Tomas Frydrych

New Year Reflection

New Year Reflection

The year has begun splendidly: the sun is shining, the sky is blue, the hills around are capped in snow, the reservoir part frozen, and under my wheels is the satisfying crunch of hard ice. I turn onto the minor single track road that takes me over the hills. Here in the shade of the frost-decorated evergreens it’s noticeably colder. I leave behind a stuck 4x4, pedalling out of the trees into the sun, grinning. At this moment there is nowhere else I’d rather be, nothing else I’d rather be doing.

Deciding to get a road bike again, after some 25+ year pause, was one of the best decisions of 2020, and today is the best thirty miles I have done on this bike yet. Here in the relative solitude among sheep and cows, the hum of my tyres echoing the hum of the wind turbines on the hills, as you do on a New Year’s Day, I reminisce about the years (and bikes) gone by.

My first bike was a birthday present, I think the year before I started school. It had 22" wheels, a coaster brake, the frame was of the girl’s type and it was pink, but it was mine. My parents didn’t believe in stabilisers. That day Dad simply took me up to the top of the gentle hill that was our street and said ‘I am going to hold you by the saddle’ ... and then let go of me. And that was that.

My second bike came a few years later. A standard 700c, alas also of the girl’s type; this time I did mind, but at least it was blue (it was only years later I came to understand how hard bikes were to get in 1970s Czechoslovakia: plenty were made, but the majority were exported).

During those years, the bike was primarily a means to an end, which came into its own during the summer holidays: a way to get between the grannies and aunties, to the forest hunting for mushrooms, to the river for a swim. But my real interests lay elsewhere, and on foot.

That changed in the year of my 20th birthday. The autumn before, I was diagnosed with early onset arthritis in my knees and was told to forget about sport for the rest of my life, except for swimming and cycling. It wasn’t what I wanted to hear, but the pain was unbearably real. And so that January I scoured the For Sale ads until I found a ‘real’ bike I could afford.

I took three buses across the city, expecting to cycle back. The seller wasn’t in, but his wife took me to a barn where the bike was kept—neglected, long unused, the pedals not even attached. I knew nil about bikes at the time, but all in all I thought it could be fixed up, and so the bike and me took three buses back home.

As I realised later, the frame came from a track bike that someone (not too gently) spread to accommodate a 6-speed cassette; this resulted in the wheels not being entirely aligned, but that didn’t bother me too much. The frame had super short chain stays and, to match, the fork had zero rake, so that my toes overlapped the front wheel by more than an inch—that took some getting used to. But the whole thing was stupidly light compared to even the top end bikes that could be bought in a shop.

I spent that winter fixing it up—hand painted it with white enamel paint, fitted hooded brake levers, nice bar tape, new skinny tubulars, new pedals, and, with particular pride, a new aluminium crankset. Then I saved up for a proper pair of cycling shoes; they gave me a few scary moments at traffic lights to start with, but I got the hang of the toe clip straps in the end.

As I said before, I knew nothing about bikes at this point, so I didn’t know tubulars had to be glued onto the rims. Fortunately, there was enough glue residue left on the rims from the original tyres to keep me in one piece until a friend set me right.

That bike took me to new places and friendships, provided a means of keeping sane, of escaping bleak industrial landscapes; I rode it on my first date with Linda. Then in my late twenties my knee problems went away, and I turned back to my old passions, and to mountain bikes.

Over the years that passed I had forgotten what a simple pleasure there is in riding a road bike, but like my early bikes, this bike is more than anything else a means to an end, a bike ‘to ride to somewhere’ rather than just ‘to ride’. It’s about maintaining sanity in a world where a car is suddenly not a viable option: an old fashioned steel frame, mudguards, a rack to take my cameras and sandwiches, a flask of coffee. And just now it’s time for a cup.

[Just found this in drafts folder :-)]

by tf at May 31, 2021 09:09 AM

April 24, 2021

Tomas Frydrych

Return the National Parks to the Tribes

This piece in the Atlantic is well worth a read, cutting right through the cosy myth of 19th-century conservationism. There are lessons there for present-day Scotland, for sure.

by tf at April 24, 2021 08:21 AM

April 18, 2021

Tomas Frydrych

A Photographer’s Quest for Purpose

A Photographer’s Quest for Purpose

I rarely enter photographic competitions, but the Scottish Landscape Photographer of the Year is an important point in my photographic calendar: Scottish landscape is my primary photographic interest and it’s always worthwhile to be able to place my own work in the wider context of other people's imagery.

While the competition has been of a very high standard in the four years I have been taking part, the images that have come through to the final this year seem particularly strong. I have much enjoyed browsing through the galleries, and there are a few images there that have really made me pause and reflect on the beauty of this wee land of ours.

But, Friday a week ago, as I was browsing the landscape and treescape galleries, I was struck not simply by what is in these images, but also, and perhaps more so, by what is not. By the absence of the very hallmarks of contemporary Scottish landscape: the wind turbines, the hydro schemes, the electricity lines; the dirt roads, the detritus of industrial scale clear felling. After all, in today’s Scotland there is hardly a vista where at least some of these signs of human intrusion would not be visible.

Now, I quite understand why these man-made objects are largely absent from contemporary Scottish landscapes. They are not the things we seek to see, they are not the bits of Scotland we feel emotionally attached to, nor are they what the tourists come to Scotland for. And of course, not least for the reasons just stated, it’s bloody hard to take an engaging photo of a wind farm, never mind the visual mess left behind by forestry operations.

This year, two of my images made it into the second round, and while none of them has made it any further, the Birth of a Storm will be part of the SLPOTY exhibition. This image is a quintessential Scottish landscape: a loch, a well-known hill, some clouds. It was an awkward image to print, one which challenged my darkroom technique, and I have really enjoyed the process. But on an emotional level this image doesn’t mean much to me, I have taken better images of the same subject, and images like that abound.

The other image, called Scotland’s Bright Future?, captures the pylons of the Beauly to Denny electricity line. It’s not an image I felt thrilled taking, nor is it a subject matter I’d necessarily want to see on my living room wall. Yet, I felt strangely compelled to print it at A3 size, and I felt compelled to include it among the 15 images of my SLPOTY portfolio. I admit I was much surprised it made it into the second round, and not at all surprised it didn’t make it any further. It’s not that good; I have taken better images this year. Yet, for me this is an image with a high emotional charge, an image that expresses something of my growing angst for this land of ours.

If we as landscape photographers systematically choose not to capture these human intrusions, are we not guilty of perpetuating a mythical landscape of Bonnie Scotland that has not existed for some time now? Does such a myth not contribute to our society’s blasé attitude to this land of ours? Is the purpose of landscape photography to simply entertain, or should it also be asking probing questions about what lies out there?

These questions are not new for me. I have touched elsewhere on my discovery of the ‘other Scotland’, the one that doesn’t make it into tourist guidebooks, and if anything the last year has made these questions more pressing. Being stuck for a year exploring timber plantations and wind farms has not been without value: I have a much better idea of what hides behind the ‘renewables’ moniker, and what an explosive growth of them would mean for the land. And I am left with much more doubt that the nation as a whole, and those in power in particular, understand or care.

But perhaps a more balanced and realistic landscape photography can make a degree of difference?

by tf at April 18, 2021 02:27 PM