Planet Closed Fist

May 19, 2017

Emmanuele Bassi

Further experiments in Meson

Meson is definitely getting more traction in GNOME (and other projects), with many components adding support for it in parallel to autotools, or outright switching to it. There are still bugs, here and there, and we definitely need to improve build environments — like Continuous — to support Meson out of the box, but all in all I’m really happy about not having to deal with autotools any more, as well as being able to build the G* stack much more quickly when doing continuous integration.

Now that GTK+ has added Meson support, though, it’s time to go through the dependency chain in order to clean up and speed up the build in the lower bits of our stack. After an aborted attempt at porting GdkPixbuf, I decided to port Pango.

All in all, Pango proved to be an easy win; it took me about one day to port from Autotools to Meson, and most of it was mechanical translation from weird autoconf/automake incantations that should have been removed years ago1. Most of the remaining bits were:

  • ensuring that both Autotools and Meson would build the same DSOs, with the same symbols
  • generating the same introspection data and documentation
  • installing tests and data in the appropriate locations

Thanks to the ever vigilant eye of Nirbheek Chauhan, and thanks to the new Meson reference, I was also able to make the Meson build slightly more idiomatic than a straight 1:1 port would have produced.

The results are a full Meson build that takes about the same time as ./ to run:

*                         * meson
  real        0m11.149s                 real          0m2.525s
  user        0m8.153s                  user          0m1.609s
  sys         0m2.363s                  sys           0m1.206s

* make -j$(($(nproc) + 2))            * ninja
  real        0m9.186s                  real          0m3.387s
  user        0m16.295s                 user          0m6.887s
  sys         0m5.337s                  sys           0m1.318s


* autotools                           * meson + ninja
  real        0m27.669s                 real          0m5.772s
  user        0m45.622s                 user          0m8.465s
  sys         0m10.698s                 sys           0m2.357s

Not bad for a day’s worth of work.

My plan would be to merge this in the master branch pretty soon; I also have a branch that drops Autotools entirely but that can wait a cycle, as far as I’m concerned.

Now comes the hard part: porting libraries like GdkPixbuf, ATK, gobject-introspection, and GLib to Meson. There’s already a GLib port, courtesy of Centricular, but it needs further testing; GdkPixbuf is pretty terrible, since it’s a really old library; I don’t expect ATK and GObject introspection to be complicated, but the latter has a non-recursive Make layout that is full of bees.

It would be nice to get to GUADEC and have the whole G* stack build with Meson and Ninja. If you want to help out, reach out in #gtk+, on IRC or on Matrix.

  1. The Windows support still checks for GCC 2.x or 3.x flags, for instance. 

by ebassi at May 19, 2017 05:20 PM

February 23, 2017

Chris Lord

Machine Learning Speech Recognition

Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.

Project DeepSpeech

So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.
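For reference, WER is the word-level edit distance between the reference transcript and the engine's output, divided by the number of words in the reference. A minimal sketch of the metric (not the project's actual evaluation code):

```python
# Word Error Rate: Levenshtein distance over words, divided by the
# length of the reference transcript. A toy implementation.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion out of six reference words
```

A 10% WER roughly means one word in ten is inserted, deleted, or substituted relative to the reference.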

You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.

The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straightforward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.

Getting Involved

We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need one or more powerful GPUs with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.

One of the fairly intractable problems about machine learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time for training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area where you can very easily help. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.

Let’s say you don’t have those resources (and very few do): what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as, unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that reduces the size or the processor requirements of training and using the model is very valuable. Even small optimisations can save a lot of time when you start talking about days’ worth of training.
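To make the quantisation idea concrete, here is a toy 8-bit linear quantisation of a weight vector (not the TensorFlow quantisation machinery, just the underlying principle):

```python
# Toy 8-bit linear quantisation: map floats to 0..255 plus a scale and
# offset, then reconstruct. Real schemes are more subtle, but the
# consistency requirement is the same: results before and after should
# match within one quantisation step.
def quantize(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0  # avoid a zero scale for constant inputs
    return [round((w - lo) / scale) for w in weights], scale, lo

def dequantize(q, scale, lo):
    return [v * scale + lo for v in q]

w = [-0.5, 0.0, 0.25, 1.0]
q, scale, lo = quantize(w)
error = max(abs(a - b) for a, b in zip(w, dequantize(q, scale, lo)))
print(error <= scale / 2 + 1e-12)  # True: off by at most half a quantisation step
```

Each weight now fits in one byte instead of a 32-bit float, at the cost of a bounded reconstruction error.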

Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment. One written in Python that takes advantage of TensorFlow serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.

And Finally

Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.

by Chris Lord at February 23, 2017 04:55 PM

February 13, 2017

Emmanuele Bassi

On Vala

It seems I raised a bit of a stink on Twitter last week:

Of course, and with reason, I’ve been called out on this by various people. Luckily, it was on Twitter, so we haven’t seen articles on Slashdot and Phoronix and LWN with headlines like “GNOME developer says Vala is dead and will be removed from all servers for all eternity and you all suck”. At least, I’ve only seen a bunch of comments on Reddit about this, but nobody cares about that particular cesspool of humanity.

Sadly, 140 characters do not leave any room for nuance, so maybe I should probably clarify what I wrote on a venue with no character limit.

First of all, I’d like to apologise to people that felt I was attacking them or their technical choices: it was not my intention, but see above, re: character count. I may have only about 1000 followers on Twitter, but it seems that the network effect is still a bit greater than that, so I should be careful when wording opinions. I’d like to point out that it’s my private Twitter account, and you can only get to what it says if you follow me, or if you follow people who follow me and decide to retweet what I write.

My PSA was intended as a reflection on the state of Vala, and its impact on the GNOME ecosystem in terms of newcomers, from the perspective of a person that used Vala for his own personal projects; recommended Vala to newcomers; and has to deal with the various build issues that arise in GNOME because something broke in Vala or in projects using Vala. If you’re using Vala outside of GNOME, you have two options: either ignore all I’m saying, as it does not really apply to your case; or do a bit of soul searching, and see if what I wrote does indeed apply to you.

First of all, I’d like to qualify my assertion that Vala is a “dead language”. Of course people see activity in the Git repository, see the recent commits and think “the project is still alive”. Recent commits do not tell a complete story.

Let’s look at the project history for the past 10 cycles (roughly 2.5 years). These are the commits for every cycle, broken into two values: one for the full repository, the other for the whole repository except the vapi directory, which contains the VAPI files for language bindings:


Aside from the latest cycle, Vala has seen very little activity; the project itself, if we exclude binding updates, has seen less than 100 commits for every cycle — sometimes even far less. The latest cycle is a bit of an outlier, but we can notice a pattern of very little work for two/three cycles, followed by a spike. If we look at the currently in progress cycle, we can already see that the number of commits has decreased back to 55/42, as of this morning.
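Counts like these can be gathered with git’s pathspec exclusion; a small sketch of the method (the repository path and tag names are illustrative):

```python
# Count commits in a release cycle, with and without the vapi directory,
# using git's pathspec exclusion. Repository path and tags illustrative.
import subprocess

def count_commits(repo, rev_range, exclude=None):
    cmd = ['git', '-C', repo, 'rev-list', '--count', rev_range]
    if exclude is not None:
        # Limit to everything except the excluded directory
        cmd += ['--', '.', ':!' + exclude]
    return int(subprocess.check_output(cmd))

# e.g. count_commits('vala', '0.34.0..0.36.0')            -> full repository
#      count_commits('vala', '0.34.0..0.36.0', 'vapi')    -> excluding vapi/
```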


Number of commits is just a metric, though; more important is the number of contributors. After all, small, incremental changes may be a good thing in a language — though, spoiler alert: they are usually an indication of a series of larger issues, and we’ll come to that point later.

These are the number of developers over the same range of cycles, again split between committers to the full repository and to the full repository minus the vapi directory:


As you can see, the number of authors of changes is mostly stable, but still low. If we have few people that actively commit to the repository it means we have few people that can review a patch. It means patches linger longer and longer, while reviewers go through their queues; it means that contributors get discouraged; and, since nobody is paid to work full time on Vala, it means that any interruption caused by paid jobs will be a bottleneck on the project itself.

These concerns are not unique of a programming language: they exist for every volunteer-driven free and open source project. Programming languages, though, like core libraries, are problematic because any bottleneck causes ripple effects. You can take any stalled project you depend on, and vendor it into your own, but if that happens to the programming language you’re using, then you’re pretty much screwed.

For these reasons, we should also look at how well-distributed the workload in Vala is, i.e. what percentage of the work is done by the authors of those commits; the results are not encouraging. Over that range of cycles, only two developers routinely crossed 5% of the commits:

  • Rico Tzschichholz
  • Jürg Billeter

And Rico has been the only one to consistently author >50% of the commits. This means there’s only one person dealing with the project on a day to day basis.

As the maintainer of a project who basically had to do all the work, I cannot even begin to tell you how soul-crushing that can become. You get burned out, and you feel responsible for everyone using your code, and then you get burned out some more. I honestly don’t want Rico to burn out, and you shouldn’t, either.

So, let’s go into unfair territory. These are the commits for Rust — the compiler and standard library:


These are the commits for Go — the compiler and base library:


These are the commits for Vala — both compiler and bindings:


These are the number of commits over the past year. Both languages are younger than Vala, have more tools than Vala, and are more used than Vala. Of course, it’s completely unfair to compare them, but those numbers should give you a sense of scale, of the current high bar for a successful programming language these days. Vala is a niche language, after all; it’s heavily piggy-backing on the GNOME community because it transpiles to C and needs a standard library and an ecosystem like the one GNOME provides. I never expected Vala to rise to the level of mindshare that Go and Rust currently occupy.

Nevertheless, we need to draw some conclusions about the current state of Vala — starting from this thread, perhaps, as it best encapsulates the issues the project is facing.

Vala, as a project, is limping along. There aren’t enough developers to actively effect change on the project; there aren’t enough developers to work on ancillary tooling — like build system integration, debugging and profiling tools, documentation. Saying that “Vala compiles to C so you can use tools meant for C” is comically missing the point, and it’s effectively like saying that “C compiles to binary code, so you can disassemble a program if you want to debug it”. Being able to inspect the language using tools native to the language is a powerful thing; if you have to do the name mangling in your head in order to set a breakpoint in GDB you are elevating the barrier of contributions way above the head of many newcomers.

Being able to effect change means also being able to introduce change effectively and without fear. This means things like continuous integration and a full test suite heavily geared towards regression testing. The test suite in Vala is made of 210 units, for a total of 5000 lines of code; the code base of Vala (vala AST, codegen, C code emitter, and the compiler) is nearly 75 thousand lines of code. There is no continuous integration, outside of the one that GNOME Continuous performs when building Vala, or the one GNOME developers perform when using jhbuild. Regressions are found after days or weeks, because developers of projects using Vala update their compiler and suddenly their projects cease to build.

I don’t want to minimise the enormous amount of work that every Vala contributor brought to the project; they are heroes, all of them, and they deserve as much credit and praise as we can give. The idea of a project-oriented, community-oriented programming language has been vindicated many times over, in the past 5 years.

If I scared you, or incensed you, then you can still blame me, and my lack of tact. You can still call me an asshole, and you can think that I’m completely uncool. What I do hope, though, is that this blog post pushes you into action. Either to contribute to Vala, or to renew your commitment to it, so that we can look at my words in 5 years and say “boy, was Emmanuele wrong”; or to look at alternatives, and explore new venues in order to make GNOME (and the larger free software ecosystem) better.

by ebassi at February 13, 2017 01:12 PM

February 11, 2017

Emmanuele Bassi


Epoxy

Epoxy is a small library that GTK+, and other projects, use in order to access the OpenGL API in a somewhat sane fashion, hiding all the awful bits of craziness that actually need to happen because apparently somebody dosed the water supply at SGI with large quantities of LSD in the mid-‘90s, or something.

As an added advantage, Epoxy is also portable on different platforms, which is a plus for GTK+.

Since I’ve started using Meson for my personal (and some work-related) projects as well, I’ve been on the lookout for adding Meson build rules to other free and open source software projects, in order to improve both their build time and portability, and to improve Meson itself.

As a small, portable project, Epoxy sounded like a good candidate for the port of its build system from autotools to Meson.

To the Bat Build Machine!


Since you may be interested just in the numbers: building Epoxy with Meson on my four-core Kaby Lake Core i7 with an NVMe SSD takes about 45% less time than building it with autotools.

A fairly good fraction of the autotools time is spent going through the autogen and configure phases, because neither is parallelised, and both create a ton of shell invocations.

Conversely, Meson’s configuration phase is incredibly fast; the whole Meson build of Epoxy fits in the same time the autogen and configure scripts take to complete their run.


Epoxy is a simple library, which means it does not need a hugely complicated build system set up; it does have some interesting deviations, though, which made the porting an interesting challenge.

For instance, on Linux and similar operating systems Epoxy uses pkg-config to find things like the EGL availability and the X11 headers and libraries; on Windows, though, it relies on finding the opengl32 shared or static library object itself. This means that we get something straightforward in the former case, like:

# Optional dependencies
gl_dep = dependency('gl', required: false)
egl_dep = dependency('egl', required: false)

and something slightly less straightforward in the latter case:

if host_system == 'windows'
  # Required dependencies on Windows
  opengl32_dep = cc.find_library('opengl32', required: true)
  gdi32_dep = cc.find_library('gdi32', required: true)
endif

And, still, this is miles better than what you have to deal with when using autotools.

Let’s take a messy thing in autotools, like checking whether or not the compiler supports a set of arguments; usually, this involves some m4 macro that’s either part of autoconf-archive or some additional repository, like the xorg macros. Meson handles this in a much better way, out of the box:

# Use different flags depending on the compiler
if cc.get_id() == 'msvc'
  test_cflags = [
    # MSVC-specific warning flags go here
  ]
elif cc.get_id() == 'gcc'
  test_cflags = [
    # GCC-specific warning flags go here
  ]
else
  test_cflags = [ ]
endif

common_cflags = []
foreach cflag: test_cflags
  if cc.has_argument(cflag)
    common_cflags += [ cflag ]
  endif
endforeach
In terms of speed, the configuration step could be made even faster by parallelising the compiler argument checks; right now, Meson has to do them all in a series, but nothing except some additional parsing effort would prevent Meson from running the whole set of checks in parallel, and gather the results at the end.

Generating code

In order to use the GL entry points without linking against libGL or libGLES* Epoxy takes the XML description of the API from the Khronos repository and generates the code that ends up being compiled by using a Python script to parse the XML and generating header and source files.
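The principle looks roughly like this; the registry snippet below is a tiny stand-in for the real gl.xml, and the output format is simplified, so treat it as a sketch rather than Epoxy’s actual generator:

```python
# Sketch of the code-generation idea: walk the Khronos XML registry and
# emit one dispatch declaration per GL command. The XML here is a tiny
# stand-in for the real gl.xml; the output format is simplified.
import xml.etree.ElementTree as ET

registry = ET.fromstring("""
<registry>
  <commands>
    <command><proto>void <name>glFlush</name></proto></command>
    <command><proto>void <name>glClear</name></proto></command>
  </commands>
</registry>
""")

decls = [
    'extern void (*epoxy_{0})(void);'.format(cmd.find('proto/name').text)
    for cmd in registry.iter('command')
]
print('\n'.join(decls))
```

The real registry also carries parameter types, API versions, and extension information, which the generator uses to build the dispatch tables.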

Additionally, and unlike most libraries in the G* stack, Epoxy stores its public headers inside a separate directory from its sources:

├── cross
├── doc
├── include
│   └── epoxy
├── registry
├── src
└── test

The autotools build has a script under src/ create both the source and the header file for each XML description at the same time, using a rule processed when recursing inside the src directory, and proceeds to put the generated header under $(top_builddir)/include/epoxy, and the generated source under $(top_builddir)/src. Each code generation rule in the Makefile manually creates the include/epoxy directory under the build root to make up for the parallel dispatch of each rule.

Meson makes it harder to do this kind of spooky-action-at-a-distance build, so we need to generate the headers in one pass, and the source in another. This is a bit of a let down, to be honest, and yet a build that invokes the generator script twice for each API description file is still faster under Ninja than a build with the single invocation under Make.

There are still issues in this step that are being addressed by the Meson developers; for instance, right now we have to use a custom target for each generated header and source separately instead of declaring a generator and calling it multiple times. Hopefully, this will be fixed fairly soon.
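In practice that means one custom_target() per output; a sketch of the shape (the file and variable names here are illustrative, not the actual build rules):

```meson
# One custom_target() per generated file: the generator script is run
# once for the header and once for the source. Names are illustrative.
gl_header = custom_target('gl-header',
  input: 'registry/gl.xml',
  output: 'gl_generated.h',
  command: [ python, gen_script, '--header', '--out', '@OUTPUT@', '@INPUT@' ])

gl_source = custom_target('gl-source',
  input: 'registry/gl.xml',
  output: 'gl_generated.c',
  command: [ python, gen_script, '--source', '--out', '@OUTPUT@', '@INPUT@' ])
```

With a generator() this could be a single declaration invoked once per XML file, producing both outputs.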


Documentation

Epoxy has a very small footprint, in terms of API, but it still benefits from having some documentation on its use. I decided to generate the API reference using Doxygen, as Epoxy is not a G* library and does not need the additional features of gtk-doc. Sadly, Doxygen’s default style is absolutely terrible; it would be great if somebody could fix it to make it look half as good as what gtk-doc produces out of the box.

Cross-compilation and native builds

Now we get into “interesting” territory.

Epoxy is portable; it works on Linux and *BSD systems, on macOS, and on Windows, on both Intel and ARM architectures.

Making it run on Unix-like systems is not at all complicated. When it comes to Windows, though, things get weird fast.

Meson uses cross files to determine the environment and toolchain of the host machine, i.e. the machine where the result of the build will eventually run. These are simple text files with key/value pairs that you can either keep in a separate repository, in case you want to share among projects; or you can keep them in your own project’s repository, especially if you want to easily set up continuous integration of cross-compilation builds.

Each toolchain has its own; for instance, this is the description of a cross compilation done on Fedora with MingW:

[binaries]
c = '/usr/bin/x86_64-w64-mingw32-gcc'
cpp = '/usr/bin/x86_64-w64-mingw32-cpp'
ar = '/usr/bin/x86_64-w64-mingw32-ar'
strip = '/usr/bin/x86_64-w64-mingw32-strip'
pkgconfig = '/usr/bin/x86_64-w64-mingw32-pkg-config'
exe_wrapper = 'wine'

This section tells Meson where the binaries of the MingW toolchain are; the exe_wrapper key is useful to run the tests under Wine, in this case.

The cross file also has an additional section for things like special compiler and linker flags:

[properties]
root = '/usr/x86_64-w64-mingw32/sys-root/mingw'
c_args = [ '-pipe', '-Wp,-D_FORTIFY_SOURCE=2', '-fexceptions', '--param=ssp-buffer-size=4', '-I/usr/x86_64-w64-mingw32/sys-root/mingw/include' ]
c_link_args = [ '-L/usr/x86_64-w64-mingw32/sys-root/mingw/lib' ]

These values are taken from the equivalent bits that Fedora provides in their MingW RPMs.

Luckily, the tool that generates the headers and source files is written in Python, so we don’t need an additional layer of complexity: no generator has to be built and run on one platform and architecture just to produce files that are then compiled and run on another.

Continuous Integration

Of course, any decent process of porting, these days, should deal with continuous integration. CI gives us confidence as to whether or not any change whatsoever we make actually works — and not just on our own computer, and our own environment.

Since Epoxy is hosted on GitHub, the quickest way to deal with continuous integration is to use TravisCI, for Linux and macOS; and Appveyor for Windows.

The requirements for Meson are just Python3 and Ninja; Epoxy also requires Python 2.7, for the dispatch generation script, and the shared libraries for GL and the native API needed to create a GL context (GLX, EGL, or WGL); it also optionally needs the X11 libraries and headers and Xvfb for running the test suite.

Since Travis offers an older version of Ubuntu LTS as its base system, we cannot build Epoxy with Meson; additionally, running the test suite is a crapshoot because the Mesa version is hopelessly out of date and will either cause most of the tests to be skipped or, worse, make them segfault. To sidestep this particular issue, I’ve prepared a Docker image with its own harness, and I use it as the containerised environment for Travis.

On Appveyor, thanks to the contribution of Thomas Marrinan we just need to download Python3, Python2, and Ninja, and build everything inside its own root; as an added bonus, Appveyor allows us to take the build artefacts when building from a tag, and shoving them into a zip file that gets deployed to the release page on GitHub.


Most of this work has been done off and on over a couple of months; the rough Meson build conversion was done last December, with the cross-compilation and native builds taking up the last bit of work.

Since Eric does not have any more spare time to devote to Epoxy, he was kind enough to give me access to the original repository, and I’ve tried to reduce the amount of open pull requests and issues there.

I’ve also released version 1.4.0 and I plan to do a 1.4.1 release soon-ish, now that I’m positive Epoxy works on Windows.

I’d like to thank:

  • Eric Anholt, for writing Epoxy and helping out when I needed a hand with it
  • Jussi Pakkanen and Nirbheek Chauhan, for writing Meson and for helping me out with my dumb questions on #mesonbuild
  • Thomas Marrinan, for working on the Appveyor integration and testing Epoxy builds on Windows
  • Yaron Cohen-Tal, for maintaining Epoxy in the interim

by ebassi at February 11, 2017 01:34 AM

January 11, 2017

Emmanuele Bassi

Constraints editing

Last year I talked about the newly added support for Apple’s Visual Format Language in Emeus, which allows you to quickly describe layouts using a cross between ASCII art and predicates. For instance, I can use:


and obtain a layout like this one:

Boxes approximate widgets

Thanks to the contribution of my colleague Martin Abente Lahaye, now Emeus supports extensions to the VFL, namely:

  • arithmetic operators for constant and multiplication factors inside predicates, like [button1(button2 * 2 + 16)]
  • explicit attribute references, like [button1(button1.height / 2)]

This allows more expressive layout descriptions, like keeping aspect ratios between UI elements, without having to touch the code base.
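For example, an illustrative VFL description (not taken from a real Emeus UI) that keeps button1 twice as wide as button2, plus a fixed padding, and ties button1’s height to its own width:

```
H:|-[button1(button2 * 2 + 16)]-[button2]-|
V:|-[button1(button1.width / 2)]-|
```

The second line uses an explicit attribute reference to give button1 a fixed 2:1 aspect ratio.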

Of course, editing VFL descriptions blindly is not what I consider a fun activity, so I took some time to write a simple, primitive editing tool that lets you visualize a layout expressed through VFL constraints:

I warned you that it was primitive and simple

Here’s a couple of videos showing it in action:

At some point, this could lead to a new UI tool to lay out widgets inside Builder and/or Glade.

As of now, I consider Emeus in a stable enough state for other people to experiment with it — I’ll probably make a release soon-ish. The Emeus website is up to date, as is the API reference, and I’m happy to review pull requests and feature requests.

by ebassi at January 11, 2017 02:30 PM

December 17, 2016

Emmanuele Bassi

Laptop review

Dell XPS 13 (Developer Edition 2016)

After three and a half years with my trusty mid-2013 MacBook Air, I decided to get a new personal laptop. To be fair, my Air could have probably lasted another 12-18 months, even though its 8GB of RAM and Haswell Core i7 were starting to get pretty old for system development. The reason why I couldn’t keep using it reliably was that the SSD had already started showing SMART errors in January, and I already had to reset it and re-install from scratch once. Refurbishing the SSD out of warranty is still an option, if I decided to fork over a fair chunk of money and could live without a laptop for about a month1.

After getting recommendations for the previous XPS iterations by various other free software developers and Linux users, I waited until the new, Kaby Lake based model was available in the EU and ordered one. After struggling a bit with Dell’s website, I managed to get an XPS 13 with a US keyboard layout2 — which took about two weeks from order to delivery.

The hardware out of the box experience is pretty neat, with a nice, clean box; very Apple-like. The software’s first boot experience could be better, to say the least. Since I chose the Developer Edition, I got Ubuntu as the main OS instead of Windows, and I have been thoroughly underwhelmed by the effort spent by Dell and Canonical in polishing the software side of things. As soon as you boot the laptop, you’re greeted with an abstract video playing while the system does something. The video playback is not skippable, and does not have volume controls, so I got to “experience” it at full blast out of the speakers.

Ubuntu’s first boot experience UI to configure the machine is rudimentary, at best, and not really polished; it’s the installer UI without the actual installation bits, but it clearly hasn’t been refined for the HiDPI screen. The color scheme has progressively gotten worse over the years; while all other OSes are trying to convey a theme of lightness using soft tones, the dark grey, purple, and dark orange tones used by Ubuntu make the whole UI seem heavier and oppressive.

After that, you get into Unity, and no matter how many times I try it, I still cannot enjoy using it. I also realized why various people coming from Ubuntu complain about the GNOME theme being too heavy on the whitespace: the Ubuntu default theme is super-compressed, with controls hugging together so closely that they almost seem to overlap. There is barely any affordance for the pointer, let alone for interacting through the touchscreen.

All in all, I resisted half a day on it, mostly to see what was the state of stock Ubuntu after many years of Fedora3. After that, I downloaded a Fedora 25 USB image and re-installed from scratch.

Sadly, I still have to report that Anaconda doesn’t shine at all. Luckily, I didn’t have to deal with dual booting, so I only needed to interact with the installer just enough to tell it to use the stock on-disk layout and create the root user. Nevertheless, figuring out how to split my /home volume and encrypt it required going through the partitioning step three times, because I couldn’t for the life of me understand how to commit to the layout I wanted.

After that, I was greeted by GNOME’s first boot experience — which is definitely more polished than Ubuntu’s, but it’s still a bit too “functional” and plain.

Fedora recognised the whole hardware platform out of the box: wifi, bluetooth, webcam, HiDPI screen. On the power management side, I was able to wring out about 8 hours of work (compilation, editing, web browsing, and a couple of Google hangouts) while on wifi, without having to plug in the AC.

Coming from years of Apple laptops, I was especially skeptical of the quality of the touchpad, but I have to say I was pleasantly surprised by its accuracy and feedback. It’s not MacBook-level, but it’s definitely the closest anyone has ever been to that slice of fried gold.

The only letdowns I can find are the webcam and the power brick. The camera is placed at the bottom-left of the panel, which makes for very dramatic angles when doing video calls, and requires that you never type if you don’t want your fingers to be in the way. The power brick has its own proprietary connector; there’s a USB-C port, though, so there may be provisions for powering the laptop through it.

The good

  • Fully supported hardware (Fedora 25)
  • Excellent battery life
  • Nice keyboard
  • Very good touchpad

The bad

  • The position of the webcam
  • Yet another power brick with custom connector I have to lug around

Lenovo Yoga

Thanks to my employer I now have a work laptop as well, in the shape of a Lenovo Yoga 900. I honestly crossed off Lenovo as a vendor after the vast amounts of stupidity they imposed on their clients — and that was after I decided to stop buying ThinkPad-branded laptops, given their declining build quality and bad technical choices. Nevertheless, you don’t look a gift horse in the mouth.

The out of the box experience of the Yoga is very much on par with the one I had with the XPS, which is to say: fairly Apple-like.

The Yoga 900 is a fairly well-made machine. It’s an Intel Skylake platform, with a nice screen and good components. The screen can fold and turn the whole thing into a “tablet”, except that the keyboard faces downward, so it’s weird to handle in that mode. Plus, a 13” tablet is a pretty big thing to carry around. On the other hand, folding the laptop into a “tent” and using an external keyboard and pointing device is a nice twist on the whole “home office” approach. The webcam is, thankfully, centered and placed at the top of the panel — something that Lenovo has apparently changed in the 910 model, when they realised that folding the laptop would put the webcam at the bottom of the panel.

On the software side, the first boot experience into Windows 10 was definitely less than stellar. The Lenovo FBE software was not HiDPI-aware, which posed interesting challenges to the user interaction. This is something that a simple bit of QA would have found out, but apparently QA is too much to ask when dealing with a £1000 laptop. Luckily, I had to deal with that only inasmuch as I needed to get and install the latest firmware updates before installing Linux on the machine. Again, I went for Fedora.

As in the case of the Dell XPS, Fedora recognised all components of the hardware platform out of the box. Even screen rotation and folding work out of the box — though it can still get into inconsistent states when you move the laptop around, so I recommend keeping screen rotation locked until you actually need it.

On the power management side, I was impressed by how well the sleep states conserve battery power; I’m able to leave the Yoga suspended for a week and still have power on resume. The power brick has a weird USB-like connector to the laptop which makes me wonder what on earth were Lenovo engineers thinking; on the other hand, the adapter has a USB port which means you can charge it from a battery pack or from a USB adapter as well. There’s also a USB-C port, but I still haven’t tested if I can put power through it.

The keyboard is probably the biggest letdown; the travel distance and feel of the keys are definitely not up to par with the Dell XPS, or with Apple keyboards. The 900 has an additional column of navigation keys on the right edge that invariably messes up my finger memory — though it seems that the 910 has moved them to Function key combinations.5 The power button is on the right side of the laptop, which makes for unintended suspend/resume cycles when trying to plug in the headphones, or when moving the laptop. The touchpad is, sadly, very much lacking, with ghost tap events that forced me to disable the middle-click emulation everywhere4.

The good

  • Fully supported hardware (Fedora 25)
  • Solid build
  • Nice flip action
  • Excellent power management

The bad

  • Keyboard is a toy
  • Touchpad is a pale imitation of a good pointing device

  1. Which may still happen, all things considered; I really like the Air as a travel laptop. 

  2. After almost a decade with US layouts I find the UK layout inferior to the point of inconvenience. 

  3. On my desktop machine/gaming rig I dual boot between Windows 10 and Ubuntu GNOME, mostly because of the nVidia GPU and Steam. 

  4. That also increased my hatred of the middle-click-to-paste-selection easter egg a thousandfold, and I already hated the damned thing so much that my rage burned with the intensity of a million suns. 

  5. Additionally, the keyboard layout is UK — see note 2 above. 

by ebassi at December 17, 2016 12:00 AM

November 01, 2016

Emmanuele Bassi

Constraints (reprise)

After the first article on Emeus various people expressed interest in the internals of the library, so I decided to talk a bit about what makes it work.

Generally, you can think about constraints as linear equations:

view1.attr1 = view2.attr2 × multiplier + constant

You take the value of attr2 on the widget view2, multiply it by a multiplier, add a constant, and apply the result to the attribute attr1 on the widget view1. You don’t even need view2.attr2; for instance:

view1.attr1 = constant

is a perfectly valid constraint.
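Viewed as code, a constraint is just an affine function of a single source value. A minimal sketch follows — the function name and the centering example are illustrative, not the Emeus API:

```javascript
// view1.attr1 = view2.attr2 × multiplier + constant, as a plain function.
function applyConstraint(sourceValue, multiplier = 1.0, constant = 0.0) {
  return sourceValue * multiplier + constant;
}

// Example: placing a 100px-wide child at the center of a 400px-wide parent,
// i.e. child.start = parent.width × 0.5 - 50:
console.log(applyConstraint(400, 0.5, -50)); // 150

// A constant-only constraint, like view1.attr1 = constant:
console.log(applyConstraint(0, 0.0, 180)); // 180
```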

You also don’t need to use an equality; these two constraints:

view1.width ≥ 180
view1.width ≤ 250

specify that the width of view1 must be in the [ 180, 250 ] range, extremes included.


A layout, then, is just a pile of linear equations that describe the relations between each element. So, if we have a simple grid:

| super                                      |
|  +----------------+   +-----------------+  |
|  |     child1     |   |     child2      |  |
|  |                |   |                 |  |
|  +----------------+   +-----------------+  |
|                                            |
|  +--------------------------------------+  |
|  |               child3                 |  |
|  |                                      |  |
|  +--------------------------------------+  |
|                                            |

We can describe each edge’s position and size using constraints. It’s important to note that there’s an implicit “reading” order that makes it easier to write constraints; in this case, we start from left to right, and from top to bottom. Generally speaking, it’s possible to describe constraints in any order, but the Cassowary solving algorithm is geared towards the “reading” order above.

Each layout has some implicit constraints already available. For instance, the “trailing” edge is equal to the leading edge plus the width; the bottom edge is equal to the top edge plus the height; the center point is equal to the width or height, divided by two, plus the leading or top edge. These constraints help solve the layout, as well as provide additional values to other constraints.
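These implicit relations are simple arithmetic; as plain functions (a sketch, not actual API):

```javascript
// Implicit constraints a layout can derive for each element (sketch).
const trailingEdge = (leading, width) => leading + width;
const bottomEdge = (top, height) => top + height;
const centerX = (leading, width) => leading + width / 2;
const centerY = (top, height) => top + height / 2;

console.log(trailingEdge(10, 80)); // 90
console.log(centerX(10, 80));      // 50
```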

So, let’s start.

From the first row:

  • the leading edge of the super container is the same as the leading edge of child1, minus a padding
  • the trailing edge of child1 is the same as the leading edge of child2, minus a padding
  • the trailing edge of child2 is the same as the trailing edge of the super container, minus a padding
  • the width of child1 is the same as the width of child2

From the second row:

  • the leading edge of the super container is the same as the leading edge of child3, minus a padding
  • the trailing edge of child3 is the same as the trailing edge of the super container, minus a padding

From the first column:

  • the top edge of the super container is the same as the top edge of child1, minus a padding
  • the bottom edge of child1 is the same as the top edge of child3, minus a padding
  • the bottom edge of the super container is the same as the bottom edge of child3, minus a padding
  • the height of child3 is the same as the height of child1

From the second column:

  • the top edge of the super container is the same as the top edge of the child2, minus a padding
  • the bottom edge of child2 is the same as the top edge of child3, minus a padding
  • the bottom edge of the super container is the same as the bottom edge of child3, minus a padding
  • the height of child3 is the same as the height of child2

As you can see, there are some redundancies; these are necessary to ensure that the layout is fully resolved, though obviously some properties of the elements of the layout implicitly eliminate some results. For instance, if child3’s height is the same as child1’s, and child1 lies on the same row as child2 and is an axis-aligned rectangle, then it immediately follows that child3 must have the same height as child2 as well. It’s important to note that, from the solver’s perspective, there are only values, not boxes, and you could use the solver with any kind of geometric shape; only the constraints give us the information on what those shapes should be. It’s also easier to start from a fully constrained layout and then remove constraints, than to start from a loosely constrained layout and add constraints until it’s stable.


From the text description we can now get into a system of equations:

  • super.start = child1.start - padding
  • child1.end = child2.start - padding
  • child2.end = super.end - padding
  • child1.width = child2.width
  • super.start = child3.start - padding
  • child3.end = super.end - padding
  • super.top = child1.top - padding
  • child1.bottom = child3.top - padding
  • child3.bottom = super.bottom - padding
  • child3.height = child1.height
  • super.top = child2.top - padding
  • child2.bottom = child3.top - padding
  • child3.height = child2.height
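The system above resolves by straightforward substitution. A sketch in JavaScript, assuming a hypothetical 400×300 container with 8px padding and a 12px gap between children:

```javascript
const padding = 8, gap = 12;
const superBox = { start: 0, end: 400, top: 0, bottom: 300 };

// First row: child1.start = super.start + padding; child2.end = super.end - padding;
// child1.end = child2.start - gap; child1.width = child2.width.
const child1Start = superBox.start + padding;            // 8
const child2End = superBox.end - padding;                // 392
const child1Width = (child2End - child1Start - gap) / 2; // 186
const child2Width = child1Width;

// Second row: child3 spans the full width, minus the paddings.
const child3Width = (superBox.end - padding) - (superBox.start + padding); // 384

// First column: child1.top = super.top + padding; child3.bottom = super.bottom - padding;
// child1.bottom = child3.top - gap; child3.height = child1.height.
const child1Top = superBox.top + padding;                  // 8
const child3Bottom = superBox.bottom - padding;            // 292
const child1Height = (child3Bottom - child1Top - gap) / 2; // 136

console.log(child1Width, child3Width, child1Height); // 186 384 136
```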

Apple, in its infinite wisdom and foresight, decided that this form is still too verbose. After looking at the Perl format man page for far too long, Apple engineers came up with the Visual Format Language, or VFL for short.

Using VFL, the constraints above become:
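A sketch of how the system above might read in VFL — the 8 and 12 metrics are hypothetical, and the exact spelling is illustrative rather than copied from Emeus:

```
H:|-8-[child1(==child2)]-12-[child2]-8-|
H:|-8-[child3]-8-|
V:|-8-[child1]-12-[child3(==child1)]-8-|
V:|-8-[child2]-12-[child3(==child2)]-8-|
```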


Emeus, incidentally, ships with a simple utility that can take a set of VFL format strings and generate GtkBuilder descriptions that you can embed into your templates.


We’ve used a fair amount of constraints, or four lines of fairly cryptic ASCII art, to describe what is basically a non-generic GtkGrid with two equally sized horizontal cells on the first row, and a single cell with a column span of two; compared to the common layout managers inside GTK+, this does not seem like a great trade-off.

Except that we can describe any other layout without necessarily having to pack widgets inside boxes, with margins and spacing and alignment rules; we also don’t have to change the hierarchy of the boxes if we want to change the layout. For instance, let’s say that we want child3 to have a different horizontal padding, and a minimum and maximum width; we just need to change the constraints involved in that row:
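As a hedged VFL sketch, with hypothetical values for the new padding and the width range, the changed row might become:

```
H:|-16-[child3(>=200,<=300)]-16-|
```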


Additionally, we now want to decouple child1 and child3 heights, and make child1 a fixed height item:
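In VFL, that could be sketched as follows — the 48px fixed height is hypothetical:

```
V:|-8-[child1(==48)]-12-[child3]-8-|
```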


And make the height of child3 move within a range of values:
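Again as a hedged VFL sketch, with a hypothetical range:

```
V:|-8-[child2]-12-[child3(>=100,<=250)]-8-|
```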


For all these cases we’d have to add intermediate boxes in between our children and the parent container — with all the issues of theming and updating things like GtkBuilder XML descriptions that come with that.


The truth is, though, that describing layouts in terms of constraints is another case of software engineering your way out of talking with designers; it’s great to start talking about incremental simplex solvers, and systems of linear equations, and ASCII art to describe your layouts, but it doesn’t make UI designers really happy. They can deal with it, and having a declarative language to describe constraints is more helpful than parachuting them into an IDE with a Swiss army knife and a can of beans, but I wouldn’t recommend it as a solid approach to developer experience.

Havoc wrote a great article on how layout management API doesn’t necessarily have to suck:

  • we can come up with a better, descriptive API that does not make engineers and designers cringe in different ways
  • we should have support from our tools, in order to manipulate constraints and UI elements
  • we should be able to combine boxes (which are easy to style) and constraints (which are easy to lay out) together in a natural and flexible way

Improving layout management should be a goal in the development of GTK+ 4.0, so feel free to jump in and help out.

by ebassi at November 01, 2016 05:07 PM

October 17, 2016

Emmanuele Bassi


GUI toolkits have different ways to lay out the elements that compose an application’s UI. You can go from the fixed layout management — somewhat best represented by the old ‘90s Visual tools from Microsoft; to the “springs and struts” model employed by the Apple toolkits until recently; to the “boxes inside boxes inside boxes” model that GTK+ uses to this day. All of these layout policies have their own distinct pros and cons, and it’s not unreasonable to find that many toolkits provide support for more than one policy, in order to cater to more use cases.

For instance, while GTK+ user interfaces are mostly built using nested boxes to control margins, spacing, and alignment of widgets, there’s a sizeable portion of GTK+ developers that end up using GtkFixed or GtkLayout containers because they need fixed positioning of child widgets — until they regret it, because now they have to handle things like reflowing, flipping contents in right-to-left locales, or font size changes.

Additionally, most UI designers do not tend to “think with boxes”, unless it’s for Web pages, and even in that case CSS affords a certain freedom that cannot be replicated in a GUI toolkit. This usually results in engineers translating a UI specification made of ties and relations between UI elements into something that can be expressed with a pile of grids, boxes, bins, and stacks — with all the back and forth, validation, and resources that the translation entails.

It would certainly be easier if we could express a GUI layout in the same set of relationships that can be traced on a piece of paper, a UI design tool, or a design document:

  • this label is at 8px from the leading edge of the box
  • this entry is on the same horizontal line as the label, its leading edge at 12px from the trailing edge of the label
  • the entry has a minimum size of 250px, but can grow to fill the available space
  • there’s a 90px button that sits between the trailing edge of the entry and the trailing edge of the box, with 8px between either edges and itself
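Those four relationships could, for instance, be collapsed into a single line of a constraint DSL such as Apple’s Visual Format Language — a hypothetical sketch, with illustrative widget names:

```
H:|-8-[label]-12-[entry(>=250)]-8-[button(==90)]-8-|
```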

Sure, all of these constraints can be replaced by a couple of boxes; some packing properties; margins; and minimum preferred sizes. If the design changes, though, like it often does, reconstructing the UI can become arbitrarily hard. This, in turn, leads to pushback to design changes from engineers — and the cost of iterating over a GUI is compounded by technical inertia.

For my daily work at Endless I’ve been interacting with our design team for a while, and trying to get from design specs to applications more quickly, and with less inertia. Having CSS available allowed designers to be more involved in the iterative development process, but the CSS subset that GTK+ implements is not allowed — for eminently good reasons — to change the UI layout. We could go “full Web”, but that comes with a very large set of drawbacks — performance on low end desktop devices, distribution, interaction with system services being just the most glaring ones. A native toolkit is still the preferred target for our platform, so I started looking at ways to improve the lives of UI designers with the tools at our disposal.

Expressing layout through easier to understand relationships between its parts is not a new problem, and as such it does not have new solutions; other platforms, like the Apple operating systems, or Google’s Android, have started to provide this kind of functionality — mostly available through their own IDE and UI building tools, but also available programmatically. It’s even available for platforms like the Web.

What many of these solutions seem to have in common is using more or less the same solving algorithm — Cassowary.

Cassowary is:

an incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities. Constraints may be either requirements or preferences. Client code specifies the constraints to be maintained, and the solver updates the constrained variables to have values that satisfy the constraints.

This makes it particularly suited for user interfaces.

The original implementation of Cassowary was written in 1998, in Java, C++, and Smalltalk; since then, various other re-implementations surfaced: Python, JavaScript, Haskell, slightly-more-modern-C++, etc.

To that collection, I’ve now added my own — written in C/GObject — called Emeus, which provides a GTK+ container and layout manager that uses the Cassowary constraint solving algorithm to compute the allocation of each child.

In spirit, the implementation is pretty simple: you create a new EmeusConstraintLayout widget instance, add a bunch of widgets to it, and then use EmeusConstraint objects to determine the relations between children of the layout:

simple-grid.js [Lines 89-170]
        let button1 = new Gtk.Button({ label: 'Child 1' });
        this._layout.pack(button1, 'child1');

        let button2 = new Gtk.Button({ label: 'Child 2' });
        this._layout.pack(button2, 'child2');

        let button3 = new Gtk.Button({ label: 'Child 3' });
        this._layout.pack(button3, 'child3');

        // note: the add_constraints() wrapper around this list was dropped
        // from the excerpt; the method name is assumed
        this._layout.add_constraints([
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.WIDTH,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.WIDTH }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   constant: -8.0 }),
        ]);

A simple grid

This obviously looks like a ton of code, which is why I added the ability to describe constraints inside GtkBuilder XML:

centered.ui [Lines 28-45]
              <constraint target-object="button_child"
              <constraint target-object="button_child"
              <constraint target-object="button_child"
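The attributes of the constraint elements above were truncated in the excerpt; a sketch of what a full element might look like — the attribute names and values here are assumptions based on the programmatic API, not verified against the Emeus GtkBuilder schema:

```xml
<constraints>
  <constraint target-object="button_child" target-attr="center-x"
              relation="eq"
              source-object="super" source-attr="center-x"/>
  <constraint target-object="button_child" target-attr="center-y"
              relation="eq"
              source-object="super" source-attr="center-y"/>
  <constraint target-object="button_child" target-attr="width"
              relation="ge"
              constant="200"/>
</constraints>
```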

Additionally, I’m writing a small parser for the Visual Format Language used by Apple for their own auto layout implementation — even though it does look like ASCII art of Perl format strings, it’s easy to grasp.

The overall idea is to prototype UIs on top of this, and then take advantage of GTK+’s new development cycle to introduce something like this and see if we can get people to migrate from GtkFixed/GtkLayout.

by ebassi at October 17, 2016 05:30 PM

September 21, 2016

Emmanuele Bassi

Who wrote GTK+ and more

I’ve just posted on the GTK+ development blog the latest article in the “who writes GTK+” series. Now that we have a proper development blog, this is the kind of content that should be present there instead of my personal blog.

If you’re not following the GTK+ development blog, you really should do so now. Additionally, you should follow @GTKToolkit on Twitter and, if you’re into niche social platforms, a kernel developer, or a Linus groupie, there’s the GTK+ page on Google Plus.

Additionally, if you’re contributing to GTK+, or using it, you should consider writing an article for the developers blog; just contact me on IRC (I’m ebassi), or send me an email, if you have an idea about something you worked on, a new feature, some new API, or the internals of the platform.

by ebassi at September 21, 2016 10:19 AM

August 27, 2016

Emmanuele Bassi

GSK Demystified (III) — Interlude

See the the tag for the GSK demystified series for the other articles in the series.

There have been multiple reports after GUADEC about the state of GSK, so let’s recap a bit by upholding the long-standing tradition of using a FAQ format as a rhetorical device.

Q: Is GSK going to be merged in time for 3.22?

A: Short answer: no.

Long-ish answer: landing a rewrite of how GTK+ renders its widgets near the end of a stable API cycle, right when applications that tend to eschew GTK+ itself for rendering their content — like Firefox or LibreOffice — have finally been ported to GTK+ 3 after many years, seemed a bit sadistic on our part.

Additionally, GSK still has some performance issues when it comes to large or constantly updating UIs; try running, for instance, gtk3-widget-factory on HiDPI using the wip/ebassi/gsk-renderer branch, and marvel at the 10 fps we currently achieve.

Q: Aside from performance, are there any other concerns?

A: Performance is pretty much the biggest concern we found. We need to reduce the amount of rasterizations we perform with Cairo, and we need better ways to cache and reuse those rasterizations across frames; we really want all buttons with the same CSS state and size to be rasterized once, for instance, and just drawn multiple times in their right place. The same applies to things like icons. Caching text runs and glyphs would also be a nice win.

The nice bit is that, with a fully retained render tree, now we can actually do this.

The API seems to have survived contact with the widget drawing code inside GTK+, so it’s a matter of deciding how much we need to provide in terms of convenience API for out-of-tree widgets and containers. The fallback code is in place, right now, which means that porting widgets can proceed at its own pace.

There are a few bugs in the rendering code, like blend modes; and I still want to have filters like blur and color conversions in the GskRenderNode API.

Finally, there’s still the open question of the mid-level scene graph API, or GskLayer, that will replace Clutter and Clutter-GTK; the prototype is roughly done, but things like animations are not set in stone due to lack of users.

Q: Is there a plan for merging GSK?

A: Yes, we do have a plan.

The merge window mostly hinges on when we’re going to start with a new development cycle for the next API, but we decided that as soon as the window opens, GSK will land. Ideally we want to ensure that, by the time 4.0 rolls around, there won’t be any users of GtkWidget::draw left inside GNOME, so we’ll be able to deprecate its use, and applications targeting the new stable API will be able to port away from it.

Having a faster, more featureful, and more optimized rendering pipeline inside GTK+ is a pretty good new feature for the next API cycle, and we think that the port is not going to be problematic, given the amount of fallback code paths in place.

Additionally, by the time we release GTK+ 4.0, we’ll have a more battle-tested API to replace Clutter and Clutter-GTK, allowing applications to drop a dependency.

Q: How can I help?

A: If you’re a maintainer of a GTK+ library or application, or if you want to help out the development of GTK+ itself, then you can pick up my GSK development branch, fork it off, and look at porting widgets and containers. I’m particularly interested in widgets using complex drawing operations. See where the API is too bothersome, and look for patterns we can wrap into convenience API provided by GTK+ itself. For instance, the various gtk_render_* family of functions are a prime candidate for being replaced by equivalent functions that return a GskRenderNode instead.

Testing is also welcome; for instance, look at missing widgets or fragments of rendering.

Hopefully, the answers above should have provided enough context for the current state of GSK.

The next time, we’ll return to design and implementation notes of the API itself.

by ebassi at August 27, 2016 09:28 AM

August 26, 2016

Ross Burton

So long Wordpress, thanks for all the exploits

I've been meaning to move my incredibly occasional blog away from Wordpress for a long time, considering that I rarely use my blog and it's a massive attack surface. But there's always more important things to do, so I never did.

Then in the space of ten days I received two messages from my web host, one that they'd discovered a spam bot running on my account, and after that was cleared and passwords reset another that they discovered a password cracker.

Clearly I needed to do something. A little research led me to Pelican, which ticks my "programming language I can read" (Python), "maintained", and "actually works" boxes. A few evenings of fiddling later and I just deleted both Wordpress and Pyblosxom from my host, so hopefully that's the end of the exploits.

No doubt there's some links that are now dead, all the comments have disappeared, and the theme needs a little tweaking still, but that is all relatively minor stuff. I promise to blog more, too.

by Ross Burton at August 26, 2016 10:10 PM

August 18, 2016

Emmanuele Bassi


Writing this from my home office, while drinking the first coffee of the day.

Once again, GUADEC has come and gone.

Once again, it was impeccably organized by so many wonderful volunteers.

Once again, I feel my batteries recharged.

Once again, I’ve had so many productive conversations.

Once again, I’ve had many chances to laugh.

Once again, I’ve met both new and long since friends.

Once again, I’ve visited a new city, with interesting life, food, drinks, and locations.

Once again, thanks to everybody who worked very hard to make this GUADEC happen and be the success it was.

Once again, we return.

by ebassi at August 18, 2016 08:00 AM

August 10, 2016

Emmanuele Bassi


Speaking at GUADEC 2016

I’m going to talk about the evolution of GTK+ rendering, from its humble origins of X11 graphics contexts, to Cairo, to GSK. If you are interested in this kind of stuff, you can either attend my presentation on Saturday at 11 in the Grace Room, or you can just find me and have a chat.

I’m also going to stick around during the BoF days — especially for the usual GTK+ team meeting, which will be on the 15th.

See you all in Karlsruhe.

by ebassi at August 10, 2016 02:40 PM

GSK Demystified (II) — Rendering

See the previous article for an introduction to GSK.

In order to render with GSK we need to get acquainted with two classes:

  • GskRenderNode, a single element in the rendering tree
  • GskRenderer, the object that effectively turns the rendering tree into rendering commands


The usual way to put things on the screen involves asking the windowing system to give us a memory region, filling it with something, and then asking the windowing system to present it to the graphics hardware, in the hope that everything ends up on the display. This is pretty much how every windowing system works. The only difference lies in that “filling it with something”.

With Cairo you get a surface that represents that memory region, and a (stateful) drawing context; every time you need to draw you set up your state and emit a series of commands. This happens on every frame, starting from the top level window down into every leaf object. At the end of the frame, the content of the window is swapped with the content of the buffer. Every frame is drawn while we’re traversing the widget tree, and we have no control on the rendering outside of the state of the drawing context.

A tree of GTK widgets

With GSK we change this process with a small layer of indirection; every widget, from the top level to the leaves, creates a series of render nodes, small objects that each hold the drawing state for their contents. Each node is, at its simplest, a collection of:

  • a rectangle, representing the region used to draw the contents
  • a transformation matrix, representing the parent-relative set of transformations applied to the contents when drawing
  • the contents of the node

Every frame, thus, is composed of a tree of render nodes.
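To make that concrete, here is a tiny Python model of such a tree — purely illustrative, not the actual GSK API: each node bundles the three pieces listed above, plus its children.

```python
from dataclasses import dataclass, field

@dataclass
class RenderNode:
    """Illustrative stand-in for a render node: a bounds rectangle,
    a parent-relative transform, some content, and child nodes."""
    bounds: tuple                            # (x, y, width, height)
    transform: tuple = (1, 0, 0, 1, 0, 0)    # 2D affine (a, b, c, d, tx, ty)
    content: object = None                   # e.g. a Cairo rasterization
    children: list = field(default_factory=list)

    def add_child(self, child):
        self.children.append(child)
        return child

# One frame: a toplevel node with a node per child widget
window = RenderNode(bounds=(0, 0, 800, 600), content='background')
header = window.add_child(RenderNode((0, 0, 800, 40), content='headerbar'))
body = window.add_child(RenderNode((0, 40, 800, 560), content='text view'))

# Nothing has been drawn yet: the tree only *describes* the frame
```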

A tree of GTK widgets and GSK render nodes

The important thing is that the render tree does not draw anything; it describes what to draw (which can be a rasterization generated using Cairo) and how and where to draw it. The actual drawing is deferred to the GskRenderer instance, and will happen only once the tree has been built.

After the rendering is complete we can discard the render tree. Since the rendering is decoupled from the widget state, the widgets will hold all the state across frames — as they already do. Each GskRenderNode instance is, thus, a very simple instance type instead of a full GObject, whose lifetime is determined by the renderer.


The renderer is the object that turns a render tree into the actual draw commands. At its most basic, it’s a simple compositor, taking the content of each node and its state and blending it on a rendering surface, which then gets pushed to the windowing system. In practice, it’s a tad more complicated than that.

Each top-level has its own renderer instance, as it requires access to windowing system resources, like a GL context. When the frame is started, the renderer will take a render tree and a drawing context, and will proceed to traverse the render tree in order to translate it into actual render commands.

As we want to offload the rendering and blending to the GPU, the GskRenderer instance you’ll most likely get is one that uses OpenGL to perform the rendering. The GL renderer will take the render tree and convert it into a (mostly flat) list of data structures that represent the state to be pushed on the state machine — the blending mode, the shading program, the textures to sample, and the vertex buffer objects and attributes that describe the rendering. This “translation” stage allows the renderer to decide which render nodes should be used and which should be discarded; it also allows us to create, or recycle, all the needed resources when the frame starts, and minimize the state transitions when doing the actual rendering.
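Conceptually, that translation stage is a depth-first walk of the tree that composes parent-relative state and emits a flat list of commands, which can then be sorted and batched to minimize state changes. A hypothetical Python sketch of the idea (none of these names are GSK API):

```python
def flatten(node, parent_offset=(0, 0), ops=None):
    """Walk a render tree depth-first, composing the parent-relative
    offsets and emitting one flat draw command per visible node."""
    if ops is None:
        ops = []
    x = parent_offset[0] + node['offset'][0]
    y = parent_offset[1] + node['offset'][1]
    if node.get('content') is not None:        # discard empty nodes
        ops.append(('draw', node['content'], (x, y)))
    for child in node.get('children', []):
        flatten(child, (x, y), ops)
    return ops

tree = {
    'offset': (0, 0), 'content': 'window',
    'children': [
        {'offset': (0, 0), 'content': 'headerbar'},
        {'offset': (0, 40), 'content': None,   # pure container, discarded
         'children': [{'offset': (10, 10), 'content': 'label'}]},
    ],
}

ops = flatten(tree)
```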

Going from here to there

Widgets provided by GTK will automatically start using render nodes instead of rendering directly to a Cairo context.

There are various fallback code paths in place in the existing code, which means that, luckily, we don’t have to break any existing out of tree widget: they will simply draw themselves (and their children) on an implicit render node. If you want to port your custom widgets or containers, on the other hand, you’ll have to remove the GtkWidget::draw virtual function implementation or signal handler you use, and override the GtkWidget::get_render_node() virtual function instead.

Containers simply need to create a render node for their own background, border, or custom drawing; then they will have to retrieve the render node for each of their children. We’ll provide convenience API for that, so the chances of getting something wrong will be, hopefully, reduced to zero.

Leaf widgets can remain unported a bit longer, unless they are composed of multiple rendering elements, in which case they simply need to create a new render node for each element.

I’ll provide more examples of porting widgets in a later article, as soon as the API has stabilized.

by ebassi at August 10, 2016 02:20 PM

July 05, 2016

Emmanuele Bassi

GSK Demystified (I) — A GSK primer

Last month I published an article on how GTK+ draws widgets on the toolkit development blog. The article should give you some background on the current state of what GTK does when something asks it to draw what you see on the screen — so it’s probably a good idea to read that first, and then come back here. Don’t worry, I’ll wait…

Welcome back! Now that we’re on the same page… What I didn’t say in that article is that most of it happens on your CPU, rather than on your GPU — except the very last step, when the compositor takes the contents of each window and pushes them to the GPU, likely via the 3D pipeline provided by your windowing system, to composite them into what you’ll likely see on your screen.

The goal for GUI toolkits, for the past few years, has been to take advantage of the GPU programmable pipeline as much as possible, as it allows us to use the right hardware for the job, while keeping your CPU free to work on the application logic — or simply powered down, sparing polar bears from squeezing onto an ever-shrinking sheet of arctic ice. It also improves the separation of jobs inside the toolkit, with the potential of splitting up the work across multiple CPU cores.

As toolkit developers, we currently have only one major API for talking to the GPU, programming it, and using it to put the contents of a window on the screen, and that’s OpenGL.

You may think: well, we use Cairo; Cairo has support for an OpenGL device. Just enable that, and we’re good to go, right? and you wouldn’t be entirely wrong — except that you really don’t want to use the OpenGL Cairo device in production, as it’s both a poor fit for the Cairo drawing model and basically unmaintained. Also, Cairo is pretty much 2D only, and while you can fake some 3D transformations with it, it’s definitely not up to the task of implementing the full CSS transformation specification.

Using OpenGL to generate pixel-perfect results is complicated, and in some cases it just goes against the expectations of the GPU itself: reading back data; minuscule fragments and tessellations; tons of state changes — those are all pretty much no-go areas when dealing with a GPU.

On the other hand, we really want to stop relying so much on the CPU for drawing; leaving your cores idle allows them to go into low power states, preserving them and improving your battery life; additionally, any cycle that is not spent inside the toolkit is a cycle available to your application logic.

As you may know from the past few years, I’ve been working on writing a new API that lets GTK offload to the GPU what currently happens on the CPU; it’s called GSK — short for GTK Scene Kit — and it’s meant to achieve two things:

  • render the contents of a GTK application more efficiently
  • provide a scene graph API to both the toolkit and applications

With these two goals in mind, I want to give a quick overview on how GSK works, and at which point we are in the development.

As GSK is meant to serve two purposes it makes sense to have two separate layers of API. This is a design decision that solidified after various discussions at GUADEC 2015. As such, it required a fair amount of rework of the existing code base, but very much for the better.

At the lowest level we have:

  • GskRenderNode, which is used to describe a tree of textures, blend modes, filters, and transformations; this tree is easily converted into render operations for graphics APIs like Cairo and OpenGL — and, in the near future, Vulkan.
  • GskRenderer, an object that takes a tree of GskRenderNode instances that describes the contents of a frame, and renders it on a given GdkDrawingContext.

Every time you wish to render something, you build a tree of render nodes; specify their content; set up their transformations, opacity, and blending; and, finally, you pass the tree to the renderer. After that, the renderer owns the render nodes tree, so you can safely discard it after each frame.

On top of this lower level API we can implement both the higher level scene graph API based on GskLayer that I presented at GUADEC; and GTK+ itself, which allows us to avoid reimplementing GTK+ widgets in terms of GSK layers.

I’m going to talk about GskRenderer and GskRenderNode more in depth in a future blog post, but if you’re looking for some form of prior art, you can check the ClutterPaintNode API in Clutter.

Widgets in GTK+ would not really be required to use render nodes: ideally, we want to get to a future where widgets are small, composable units whose appearance can be described using CSS; while we build towards that future, though, we can incrementally transition from the current immediate mode rendering model to a more structured tree of rendering operations that can be reordered and optimized for the target graphics layer.

Additionally, by sharing the same rendering model between the more complex widget API and the more freeform layers one, we only have to care about optimizing a single set of operations.

You can check the current progress of my work in the gsk-renderer branch of the GTK+ repository.

by ebassi at July 05, 2016 10:11 AM

June 15, 2016

Emmanuele Bassi

Long term support for GTK+

Dear Morten,

A belief that achieving stability can be done after most of the paid contributors have run off to play with new toys is delusional. The record does not support it.

The record (in terms of commit history) seems to not support your position — as much as you think everyone else is “delusional” about it, the commit log does not really lie.

The 2.24.0 release was cut in January, 2011 — five and half years ago. No new features, no new API. Precisely what would happen with the new release plan, except that the new plan would also give a much better cadence to this behaviour.

Since then, the 2.24 branch — i.e. the “feature frozen” branch — has seen 873 commits (as of this afternoon, London time), and 30 additional releases.

Turns out that people are being paid to maintain feature-frozen branches because that’s where the “boring” bits are — security issues, stability bugs, etc. Volunteers are much more interested in getting the latest and greatest feature that probably does not interest you now, but may be requested by your users in two years.

Isn’t this what you asked for multiple times? A “long term support” release that gives you time to port your application to a stable API that has seen most of its bugs and uncertainty already squashed?

by ebassi at June 15, 2016 05:24 PM

June 08, 2016

Emmanuele Bassi

Experiments in Meson

Last GUADEC I attended Jussi Pakkanen’s talk about his build system, Meson; if you weren’t there, I strongly recommend you watch the recording. I left the talk impressed, and I wanted to give Meson a try. Cue 9 months later, and a really nice blog post from Nirbheek on how Centricular is porting GStreamer from autotools to Meson, and I decided to spend some evening/weekend time on learning Meson.

I decided to use the simplest project I maintain, the one with the minimal amount of dependencies and with a fairly clean autotools set up — i.e. Graphene.

Graphene has very little overhead in terms of build system by itself; all it needs are:

  • a way to check for compiler flags
  • a way to check for the existence of headers and types
  • a way to check for platform-specific extensions, like SSE or NEON

Additionally, it needs a way to generate documentation and introspection data, but those are mostly hidden in weird incantations provided by other projects, like gtk-doc and gobject-introspection, so most of the complexity is hidden from the maintainer (and user) point of view.
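For comparison, the first three requirements each map to a compiler-object call in Meson; this is a sketch with made-up variable and macro names, not Graphene’s actual build file:

```meson
cc = meson.get_compiler('c')

# Compiler flags: keep only the ones this compiler understands
test_cflags = []
foreach cflag: [ '-Wall', '-Wpointer-arith' ]
  if cc.has_argument(cflag)
    test_cflags += cflag
  endif
endforeach

# Headers and types
conf = configuration_data()
if cc.has_header('stdbool.h')
  conf.set('HAVE_STDBOOL_H', 1)
endif

# Platform-specific extensions, e.g. SSE2 intrinsics
sse_check = '#include <emmintrin.h>\nint main () { _mm_setzero_pd (); return 0; }'
if cc.compiles(sse_check)
  conf.set('GRAPHENE_HAS_SSE', 1)
endif
```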

Armed with little more than the Meson documentation wiki and the GStreamer port as an example, I set off towards the shining new future of a small, sane, fast build system.

The Good

Meson uses its own build files, so I didn’t have to drop the autotools set up while working on the Meson one. Once I’m sure that the results are the same, I’ll be able to remove the various autotools files and leave just the Meson ones.

Graphene generates two header files during its configuration process:

  • a config.h header file, for internal use; we use this file to check if a specific feature or header is available while building Graphene itself
  • a graphene-config.h header file, for public use; we expose this file to Graphene users for build time detection of platform features

While the autotools code that generates config.h is pretty much hidden from the developer’s perspective, with autoconf creating a template file for you by pre-parsing the build files, the part of the build system that generates graphene-config.h is a mess of shell script, cacheable variables for cross-compilation, and random m4 escaping rules. Meson, on the other hand, treats both files exactly the same way: you create a configuration object, set variables on it, and then generate the header file from it — with or without a template file as an input:

# Internal configuration header
configure_file(input: 'config.h.meson',
               output: 'config.h',
               configuration: conf)

# External configuration header
configure_file(input: 'graphene-config.h.meson',
               output: 'graphene-config.h',
               configuration: graphene_conf,
               install: true,
               install_dir: 'lib/graphene-1.0/include')

While explicit is better than implicit, at least most of the time, having things taken care of for you avoids the boring bits and, more importantly, avoids getting the boring bits wrong. If I had a quid for every broken invocation of the introspection scanner I’ve ever seen or had to fix, I’d probably retire on a very small island. In Meson, this is taken care of by a function in the gnome module:


    # Build introspection only if we enabled building GObject types
    build_gir = build_gobject
    if build_gobject and get_option('enable-introspection')
      gir = find_program('g-ir-scanner', required: false)
      build_gir = gir.found() and not meson.is_cross_build()
    endif

    if build_gir
      gir_extra_args = [
        '--identifier-filter-cmd=' + meson.source_root() + '/src/',
        '-I' + meson.source_root() + '/src',
        '-I' + meson.build_root() + '/src',
      ]

      # 'gnome' is the imported gnome module; 'libgraphene' is the
      # library target built above
      gnome.generate_gir(libgraphene,
                         sources: headers + sources,
                         namespace: 'Graphene',
                         nsversion: graphene_api_version,
                         identifier_prefix: 'Graphene',
                         symbol_prefix: 'graphene',
                         export_packages: 'graphene-gobject-1.0',
                         includes: [ 'GObject-2.0' ],
                         install: true,
                         extra_args: gir_extra_args)
    endif

Meson generates Ninja rules by default, and it’s really fast at that. I can get a fully configured Graphene build set up in less than a couple of seconds. On top of that, Ninja is incredibly fast. The whole build of Graphene takes less than 5 seconds — and I’m counting building the tests and benchmarks, something that I had to move to be on demand for the autotools set up because they added a noticeable delay to the build. Now I always know if I’ve just screwed up the build, and not just when I run make check.

Jussi is a very good maintainer, helpful and attentive at issues reported to his project, and quick at reviewing patches. The terms for contributing to Meson are fairly standard, and the barrier for entry is very low. For a project like a build system, which interacts and enables other projects, this is a very important thing.

The Ugly

As I said, Meson has some interesting automagic handling of the boring bits of building software, like the introspection data. But there are other boring bits that do not have convenience wrappers, and thus you end up with overly verbose sections in your Meson files — and while it’s definitely harder to get those wrong, compared to autoconf or automake, it can still happen.

Even in the case of automagic handling, though, there are cases when you have to deal with some of the magic escaping from under the rug. Generally it’s not hard to understand what’s missing or what’s necessary, but it can be a bit daunting when you’re just staring at a Python exception barfed on your terminal.

The documentation is kept in a wiki, which is generally fine for keeping it up to date; but it’s hard to search — as all wikis are — and hard to visually scan. I’ve lost count of the times I had to search for all the methods on the meson built-in object, and I never remember which page I have to search for, or in.

The inheritance chain for some objects is mentioned in passing, but it’s hard to track; which methods does the test object have? What kind of arguments does the compiler.compiles() method have? Are they positional or named?

The syntax and API reference documentation should probably be generated from the code base, and look more like an API reference than a wiki.

Examples are hard to come by. I looked at the GStreamer port, but I also had to start looking at Meson’s own test suite.

Modules are all in tree, at least for the time being. This means that if I want to add an ad hoc module for a whole complex project like, say, GNOME, I’d have to submit it to upstream. Yeah, I know: bad example, Meson already has a GNOME module; but the concept still applies.

Meson does not do dist tarballs. I’ve already heard people being skeptical about this point, but I personally don’t care that much. I can generate a tarball from a Git tag, and while it won’t be self-hosting, it’s already enough to get a distro going. Seriously, though: building from a Git tag is a better option than building from a tarball, in 2016.
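For what it’s worth, the “tarball from a Git tag” workflow is a single git archive invocation; the sketch below builds a throwaway repository first so it can be run anywhere (the repo and tag names are made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
echo 'hello' > README
git add README
git -c user.name=demo -c user.email=demo@example.com commit -q -m 'initial'
git tag 1.0

# The actual "dist" step: archive a tag, with the usual top-level prefix
git archive --format=tar.gz --prefix=demo-1.0/ -o demo-1.0.tar.gz 1.0
tar tzf demo-1.0.tar.gz
```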

The Bad

The shocking twist is that nothing stands out as “bad”. Mostly, it’s just ugly stuff — caused either by missing convenience functionality that will, by necessity, appear once people start using Meson more; or by the mere fact that all build systems are inherently ugly.

On the other hand, there’s some badness in the tooling around project building. For instance, Travis-CI does not support it, mostly because they use an ancient version of Ubuntu LTS as the base environment. Jhbuild does not have a Meson/Ninja build module, so we’ll have to write that one; same thing for GNOME Builder. While we wait, having a dummy configure script or a dummy Makefile would probably help.

These are not bad things per se, but they definitely block further adoption.


I think Meson has great potential, and I’d love to start using it more for my projects. If you’re looking for a better, faster, and more understandable build system then you should grab Meson and explore it.

by ebassi at June 08, 2016 11:00 PM

June 01, 2016

Chris Lord

Open Source Speech Recognition

I’m currently working on the Vaani project at Mozilla, and part of my work on that allows me to do some exploration around the topic of speech recognition and speech assistants. After looking at some of the commercial offerings available, I thought that if we were going to do some kind of add-on API, we’d be best off aping the Amazon Alexa skills JS API. Amazon Echo appears to be doing quite well and people have written a number of skills with their API. There isn’t really any alternative right now, but I actually happen to think their API is quite well thought out and concise, and maps well to the sort of data structures you need to do reliable speech recognition.

So skipping forward a bit, I decided to prototype with Node.js and some existing open source projects to implement an offline version of the Alexa skills JS API. Today it’s gotten to the point where it’s actually usable (for certain values of usable) and I’ve just spent the last 5 minutes asking it to tell me Knock-Knock jokes, so rather than waste any more time on that, I thought I’d write this about it instead. If you want to try it out, check out this repository and run npm install in the usual way. You’ll need pocketsphinx installed for that to succeed (install sphinxbase and pocketsphinx from github), and you’ll need espeak installed and some skills for it to do anything interesting, so check out the Alexa sample skills and sym-link the ‘samples’ directory as a directory called ‘skills’ in your ferris checkout directory. After that, just run the included example file with node and talk to it via your default recording device (hint: say ‘launch wise guy’).

Hopefully someone else finds this useful – I’ll be using this as a base to prototype further voice experiments, and I’ll likely be extending the Alexa API further in non-standard ways. What was quite neat about all this was just how easy it all was. The Alexa API is extremely well documented, Node.js is also extremely well documented and just as easy to use, and there are tons of libraries (of varying quality…) to do what you need to do. The only real stumbling block was pocketsphinx’s lack of documentation (there’s no documentation at all for the Node bindings and the C API documentation is pretty sparse, to say the least), but thankfully other members of my team are much more familiar with this codebase than I am and I could lean on them for support.

I’m reasonably impressed with the state of lightweight open source voice recognition. This is easily good enough to be useful if you can limit the scope of what you need to recognise, and I find the Alexa API is a great way of doing that. I’d be interested to know how close the internal implementation is to how I’ve gone about it if anyone has that insider knowledge.

by Chris Lord at June 01, 2016 04:54 PM

May 16, 2016

Emmanuele Bassi

Reviving the GTK development blog

The GTK+ project has a development blog.

I know it may come as a shock to many of you, and you’d be completely justified in thinking that I just made that link up — but the truth is, the GTK+ project has had a development blog for a long while.

Sadly, the blog hasn’t been updated in five years — mostly around the time 3.0 was released, and the GTK+ website was revamped; even before that, the blog was mostly used for release announcements, which do not make for very interesting content.

Like many free and open source software projects, GTK+ has various venues of interaction between its contributors and its users; mailing lists, personal blogs, IRC, Stack Overflow, reddit, and many, many other channels. In this continuum of discussions it’s both easy to get lost and to lose the sense of having said things before — after all, if I repeat something at least three times a week on three different websites for three years, how can people still not know about it? Some users will always look at catching up after three years, because their projects live on very different schedules from the GTK release one; others will try to look for official channels, even if the free and open source software landscape has fragmented to such a degree that any venue can be made “official” by the simple fact of having a contributor on it; others again will look at the API reference for any source of truth, forgetting, possibly, that if everything went into the API reference then it would cease to be useful as a reference.

The GTK+ development blog is not meant to be the only source for truth, or the only “official” channel; it’s meant to be a place for interesting content regarding the project, for developers using GTK+ or considering to use it; a place that acts as a hub to let interested people discover what’s up with GTK+ itself but that don’t want to subscribe to the commits list or join IRC.

From an editorial standpoint, I’d like the GTK+ development blog to be open to contribution from people contributing to GTK+; using GTK+; and newcomers to the GTK+ code base and their experiences. What’s a cool GTK+ feature that you worked on? How did GTK+ help you in writing your application or environment? How did you find contributing to GTK+ for the first time? If you want to write an article for the GTK+ blog talking about this, then feel free to reach out to me with an outline, and I’ll be happy to help you.

In the meantime, the first post in the This Week in GTK+ series has gone up; you’ll get a new post about it every Monday, and if you want to raise awareness on something that happened during a week, feel free to point it out on the wiki.

by ebassi at May 16, 2016 06:38 PM

May 05, 2016

Emmanuele Bassi

Who wrote GTK+ 3.20

Last time I tried to dispel the notion that GTK+ is dead or dying. Others have also chimed in, and it seems that we’re picking up the pace into making GTK a more modern, more useful driving force into the Linux desktop ecosystem.

Let’s see how much has changed in the six months of the 3.20 development cycle.

Once again, to gather the data, I’ve used the most excellent git-dm tool that Jonathan Corbet wrote for the “Who wrote the Linux kernel” columns for LWN. As usual, I’ve purposefully skipped the commits dealing with translations, to avoid messing up the statistics.

You should look at my previous article as a comparison point.


For the 3.20 cycle, the numbers are:

Version     Lines added   Lines removed   Delta   Contributors
GLib 2.48         20597            7544   13053             55
GTK+ 3.20        158427          117823   40604             81

More or less stable in terms of contributors, but as you can see the number of lines added and removed has doubled. This is definitely the result of the changes in the CSS machinery that have (finally) brought it to a stable as well as more featureful state.



Of the 55 developers that contributed the 271 changesets of GLib during the 3.20 development cycle, the most active are:

Name Per changeset Name Per changed lines
Ignacio Casal Quinteiro 56 (20.7%) Ignacio Casal Quinteiro 8530 (39.7%)
Philip Withnall 42 (15.5%) Philip Withnall 5402 (25.1%)
Allison Ryan Lortie 27 (10.0%) Matthias Clasen 3228 (15.0%)
Chun-wei Fan 22 (8.1%) Chun-wei Fan 1440 (6.7%)
Matthias Clasen 18 (6.6%) Allison Ryan Lortie 1338 (6.2%)
Dan Winship 9 (3.3%) Javier Jardón 565 (2.6%)
Mikhail Zabaluev 7 (2.6%) Iain Lane 149 (0.7%)
Marc-André Lureau 6 (2.2%) Ruslan Izhbulatov 147 (0.7%)
Ruslan Izhbulatov 6 (2.2%) Dan Winship 95 (0.4%)
Rico Tzschichholz 6 (2.2%) Lars Uebernickel 79 (0.4%)
Xavier Claessens 6 (2.2%) Xavier Claessens 74 (0.3%)
Emmanuele Bassi 5 (1.8%) Christian Hergert 71 (0.3%)
Iain Lane 4 (1.5%) Mikhail Zabaluev 48 (0.2%)
Lars Uebernickel 3 (1.1%) Rico Tzschichholz 45 (0.2%)
Sébastien Wilmet 3 (1.1%) Daiki Ueno 42 (0.2%)
Simon McVittie 3 (1.1%) Simon McVittie 27 (0.1%)
Javier Jardón 3 (1.1%) Emmanuele Bassi 25 (0.1%)
Christian Hergert 3 (1.1%) Robert Ancell 23 (0.1%)
coypu 2 (0.7%) Marc-André Lureau 14 (0.1%)
Sebastian Geiger 2 (0.7%) Jan de Groot 14 (0.1%)

Ignacio has been hard at work, helped by Ruslan and Fan, in making 2.48 the best GLib release ever in terms of supporting Windows — both for cross and native compilation, using autotools and the Microsoft Visual C compiler suite. If you can build an application for Windows as reliably as you can on Linux, it’s because of their work.


For GTK+, on the other hand, the most active of the 81 contributors are:

Name Per changeset Name Per changed lines
Matthias Clasen 1220 (43.7%) Matthias Clasen 78960 (41.1%)
Benjamin Otte 472 (16.9%) Benjamin Otte 35975 (18.7%)
Lapo Calamandrei 203 (7.3%) Lapo Calamandrei 35352 (18.4%)
Cosimo Cecchi 167 (6.0%) Cosimo Cecchi 10408 (5.4%)
Carlos Garnacho 147 (5.3%) Jakub Steiner 6927 (3.6%)
Timm Bäder 107 (3.8%) Carlos Garnacho 5334 (2.8%)
Emmanuele Bassi 41 (1.5%) Alexander Larsson 3128 (1.6%)
Paolo Borelli 39 (1.4%) Chun-wei Fan 2394 (1.2%)
Ruslan Izhbulatov 29 (1.0%) Paolo Borelli 1771 (0.9%)
Carlos Soriano 28 (1.0%) Ruslan Izhbulatov 1635 (0.9%)
Jakub Steiner 26 (0.9%) Timm Bäder 1326 (0.7%)
Olivier Fourdan 26 (0.9%) Takao Fujiwara 1269 (0.7%)
Jonas Ådahl 23 (0.8%) Jonas Ådahl 1243 (0.6%)
Chun-wei Fan 22 (0.8%) Emmanuele Bassi 885 (0.5%)
Piotr Drąg 18 (0.6%) Olivier Fourdan 646 (0.3%)
Ray Strode 18 (0.6%) Ray Strode 570 (0.3%)
Ignacio Casal Quinteiro 16 (0.6%) Sébastien Wilmet 494 (0.3%)
William Hua 16 (0.6%) Carlos Soriano 427 (0.2%)
Alexander Larsson 14 (0.5%) Ignacio Casal Quinteiro 333 (0.2%)
Christoph Reiter 10 (0.4%) William Hua 321 (0.2%)

Benjamin has worked on the new CSS gadget internal API; Matthias, Cosimo, and Timm have worked on porting existing widgets to it, in order to validate the API. Lapo and Jakub have worked on updating Adwaita and the other in tree themes to the new style declarations.

Carlos Soriano has worked on the widgets shared between the file chooser dialog and Nautilus.

Carlos Garnacho has worked on the input layer in GDK, in order to make it behave correctly under the new world order of Wayland; and speaking of Wayland, Carlos, Jonas, and Olivier have worked really hard to implement all the missing features in the Wayland backend, as well as the fallout of the Wayland switch when it comes to window sizing and positioning.



Affiliation Per changeset Affiliation Per lines Affiliation Per contributor (total 55)
(Unknown) 136 (50.2%) (Unknown) 10942 (50.9%) (Unknown) 35 (60.3%)
Collabora 49 (18.1%) Collabora 5491 (25.6%) Red Hat 9 (15.5%)
Canonical 41 (15.1%) Red Hat 3398 (15.8%) Canonical 5 (8.6%)
Red Hat 36 (13.3%) Canonical 1612 (7.5%) Collabora 4 (6.9%)
Endless 6 (2.2%) Endless 34 (0.2%) Endless 2 (3.4%)
Centricular 1 (0.4%) Centricular 4 (0.0%) Centricular 1 (1.7%)
Intel 1 (0.4%) Intel 2 (0.0%) Intel 1 (1.7%)
Novell 1 (0.4%) Novell 1 (0.0%) Novell 1 (1.7%)

As usual, GLib is a little bit more diverse, in terms of employers, because of its versatility and use in various platforms.


Affiliation Per changeset Affiliation Per lines Affiliation Per contributor (total 81)
Red Hat 1940 (69.5%) Red Hat 131833 (68.7%) (Unknown) 63 (75.9%)
(Unknown) 796 (28.5%) (Unknown) 59204 (30.8%) Red Hat 15 (18.1%)
Endless 41 (1.5%) Endless 885 (0.5%) Canonical 4 (4.8%)
Canonical 13 (0.5%) Canonical 104 (0.1%) Endless 1 (1.2%)

Not many changes in these tables, but if your company uses the GNOME core platform and you wish to have a voice in where the platform goes, you should really consider contributing employee time to work upstream.

It is also very important to note that, while Red Hat still retains the majority of commits, the vast majority of committers are unaffiliated.


The command line I used for gitdm is:

git log \
 --numstat \
 -M $START..$END | \
 gitdm -r '.*(?<!po)$' -l 20 -u -n

For GLib, I started from commit 37fcab17 which contains the version bump to 2.47, and ended on the 2.48.0 tag.

For GTK+, I started from commit 2f0d4b68 which contains the first new API of the 3.19 cycle and precedes the version bump, and ended on the 3.20.0 tag.

The only changes to the gitdm stock configuration are the addition of a couple of email/name/employer association; I can publish them on request.

by ebassi at May 05, 2016 11:00 PM

May 03, 2016

Damien Lespiau

Testing for pending migrations in Django

DB migration support has been added in Django 1.7+, superseding South. More specifically, it's possible to automatically generate migration steps when one or more changes in the application models are detected. Definitely a nice feature!

I've written a small generic unit test that one should be able to drop into the tests directory of any Django project and that checks there are no pending migrations, i.e. that the models are correctly in sync with the migrations declared in the application. Handy to check that nobody has forgotten to git add the migration file, or that an innocent-looking model change doesn't need a migration step generated. Enjoy!

See the code on djangosnippets or as a github gist!

by Damien Lespiau at May 03, 2016 05:10 PM

March 08, 2016

Chris Lord

State of Embedding in Gecko

Following up from my last post, I’ve had some time to research and assess the current state of embedding Gecko. This post will serve as a (likely incomplete) assessment of where we are today, and what I think the sensible path forward would be. Please note that these are my personal opinions and not those of Mozilla. Mozilla are gracious enough to employ me, but I don’t yet get to decide on our direction 😉

The TL;DR: there are no first-class Gecko embedding solutions as of writing.

EmbedLite (aka IPCLite)

EmbedLite is an interesting solution for embedding Gecko that relies on e10s (Electrolysis, Gecko’s out-of-process feature code-name) and OMTC (Off-Main-Thread Compositing). From what I can tell, the embedding app creates a new platform-specific compositor object that attaches to a window, and with e10s, a separate process is spawned to handle the brunt of the work (rendering the site, running JS, handling events, etc.). The existing widget API is exposed via IPC, which allows you to synthesise events, handle navigation, etc. This builds using the xulrunner application target, which unfortunately no longer exists. This project was last synced with Gecko on April 2nd 2015 (the day before my birthday!).

The most interesting thing about this project is how much code it reuses in the tree, and how little modification is required to support it (almost none – most of the changes are entirely reasonable, even outside of an embedding context). That we haven’t supported this effort seems insane to me, especially as it’s been shipping for a while as the basis for the browser in the (now defunct?) Jolla smartphone.

Building this was a pain; on Fedora 22 I was not able to get the desktop Qt build to compile, even after some effort, but I was able to compile the desktop Gtk build (trivial patches required). Unfortunately, there's no support code provided for the Gtk version, and I don't think it's worth my time implementing that, given that this is essentially a dead project. A huge shame that we missed this opportunity; this would have been a good base for a lightweight, relatively easily maintained embedding solution. The quality of the work done on this seems quite high to me, after a brief examination.


Spidernode

Spidernode is a port of Node.js that uses Gecko’s ‘spidermonkey’ JavaScript engine instead of Chrome’s V8. Not really a Gecko embedding solution, but certainly something worth exploring as a way to enable more people to use Mozilla technology. Being a much smaller project, of much more limited scope, I had no issues building and testing this.

Node.js using spidermonkey ought to provide some interesting advantages over a V8-based Node. Namely, modern language features, asm.js (though I suppose this will soon be supplanted by WebAssembly) and speed. Spidernode is unfortunately unmaintained since early 2012, but I thought it would be interesting to do a simple performance test. Using the (very flawed) technique detailed here, I ran a few quick tests to compare an old copy of Node I had installed (~0.12), current stable Node (4.3.2) and this very old (~0.5) Spidermonkey-based Node. Spidermonkey-based Node was consistently over 3x faster than both old and current Node (which varied very little in performance). I don’t think you can really draw any conclusions from this, other than that it’s an avenue worth exploring.

Many new projects are prototyped (and indeed, fully developed) in Node.js these days; particularly Internet-Of-Things projects. If there’s the potential for these projects to run faster, unchanged, this seems like a worthy project to me. Even forgetting about the advantages of better language support. It’s sad to me that we’re experimenting with IoT projects here at Mozilla and so many of these experiments don’t promote our technology at all. This may be an irrational response, however.


GeckoView

GeckoView is the only currently maintained embedding solution for Gecko, and is Android-only. GeckoView is an Android project, split out of Firefox for Android and using the same interfaces with Gecko. It provides an embeddable widget that can be used instead of the system-provided WebView. This is not a first-class project from what I can tell; there are many bugs and many missing features, as its use outside of Firefox for Android is not considered a priority. Due to this dependency, however, one would assume that at least GeckoView will see updates for the foreseeable future.

I’d experimented with this in the past, specifically with this project that uses GeckoView with Cordova. I found then that the experience wasn’t great, due to the huge size of the GeckoView library and the numerous bugs, but this was a while ago and YMMV. Some of those bugs were down to GeckoView not using the shared APZC, a bug which has since been fixed, at least for Nightly builds. The situation may be better now than it was then.

The Future

This post is built on the premise that embedding Gecko is a worthwhile pursuit. Others may disagree about this. I’ll point to my previous post to list some of the numerous opportunities we missed, partly because we don’t have an embedding story, but I’m going to conjecture as to what some of our next missed opportunities might be.

IoT is generating a lot of buzz at the moment. I’m dubious that there’s much decent consumer use of IoT, at least that people will get excited about as opposed to property developers, but if I could predict trends, I’d have likely retired rich already. Let’s assume that consumer IoT will take off, beyond internet-connected thermostats (which are actually pretty great) and metered utility boxes (which I would quite like). These devices are mostly bespoke hardware running random bits and bobs, but an emerging trend seems to be Node.js usage. It might be important for Mozilla to provide an easily deployed out-of-the-box solution here. As our market share diminishes, so does our test-bed and contribution base for our (currently rather excellent) JavaScript engine. While we don’t have an issue here at the moment, if we find that a huge influx of diverse, resource-constrained devices starts running V8 and only V8, we may eventually find it hard to compete. It could easily be argued that it isn’t important for our solution to be based on our technology, but I would argue that if we have to start employing a considerable amount of people with no knowledge of our platform, our platform will suffer. By providing a licensed out-of-the-box solution, we could also enforce that any client-side interface remain network-accessible and cross-browser compatible.

A less tenuous example, let’s talk about VR. VR is also looking like it might finally break out into the mid/high-end consumer realm this year, with heavy investment from Facebook (via Oculus), Valve/HTC (SteamVR/Vive), Sony (Playstation VR), Microsoft (HoloLens), Samsung (GearVR) and others. Mozilla are rightly investing in WebVR, but I think the real end-goal for VR is an integrated device with no tether (certainly Microsoft and Samsung seem to agree with me here). So there may well be a new class of device on the horizon, with new kinds of browsers and ways of experiencing and integrating the web. Can we afford to not let people experiment with our technology here? I love Mozilla, but I have serious doubts that the next big thing in VR is going to come from us. That there’s no supported way of embedding Gecko worries me for future classes of device like this.

In-vehicle information/entertainment systems are possibly something that will become more of the norm, now that similar devices have become such commodity. Interestingly, the current big desktop and mobile players have very little presence here, and (mostly awful) bespoke solutions are rife. Again, can we afford to make our technology inaccessible to the people that are experimenting in this area? Is having just a good desktop browser enough? Can we really say that’s going to remain how people access the internet for the next 10 years? Probably, but I wouldn’t want to bet everything on that.

A plan

If we want an embedding solution, I think the best way to go about it is to start from Firefox for Android. Due to the way Android used to require its applications to interface with native code, Firefox for Android is already organised in such a way that it is basically an embedding API (thus GeckoView). From this point, I think we should make some of the interfaces slightly more generic and remove the JNI dependency from the Gecko-side of the code. Firefox for Android would be the main consumer of this API and would guarantee that it’s maintained. We should allow for it to be built on Linux, Mac and Windows and provide the absolute minimum harness necessary to allow for it to be tested. We would make no guarantees about API or ABI. Externally to the Gecko tree, I would suggest that we start, and that the community maintain, a CEF-compatible library, at least at the API level, that would be a Tier-3 project, much like Firefox OS now is. This, to me, seems like the minimal-effort and most useful way of allowing embeddable Gecko.

In addition, I think we should spend some effort in maintaining a fork of Node.js LTS that uses spidermonkey. If we can promise modern language features and better performance, I expect there’s a user-base that would be interested in this. If there isn’t, fair enough, but I don’t think current experiments have had enough backing to ascertain this.

I think that both of these projects are important, so that we can enable people outside of Mozilla to innovate using our technology, and by osmosis, become educated about our mission and hopefully spread our ideals. Other organisations will do their utmost to establish a monopoly in any new emerging market, and I think it’s a shame that we have such a powerful and comprehensive technology platform and we aren’t enabling other people to use it in more diverse situations.

This post is some insightful further reading on roughly the same topic.

by Chris Lord at March 08, 2016 05:22 PM

February 24, 2016

Chris Lord

The case for an embeddable Gecko

Strap yourself in, this is a long post. It should be easy to skim, but the history may be interesting to some. I would like to make the point that, for a web rendering engine, being embeddable is a huge opportunity; that Gecko not being easily embeddable has meant we’ve missed several opportunities over the last few years; and that it would still be advantageous to make Gecko embeddable.


Embedding Gecko means making it easy to use Gecko as a rendering engine in an arbitrary 3rd party application on any supported platform, and maintaining that support. An embeddable Gecko should make very few constraints on the embedding application and should not include unnecessary resources.


  • A 3rd party browser with a native UI
  • A game’s embedded user manual
  • OAuth authentication UI
  • A web application
  • ???


It’s hard to predict what the next technology trend will be, but there’s a strong likelihood it’ll involve the web, and there’s a possibility it may not come from a company/group/individual with an existing web rendering engine or particular allegiance. It’s important for the health of the web and for Mozilla’s continued existence that there be multiple implementations of web standards, and that there be real competition and a balanced share of users of the various available engines.

Many technologies have emerged over the last decade or so that have incorporated web rendering or web technologies that could have leveraged Gecko;

(2007) iPhone: Instead of using an existing engine, Apple forked KHTML in 2002 and eventually created WebKit. They did investigate Gecko as an alternative, but forking another engine with a cleaner code-base ended up being a more viable route. Several rival companies were also interested in and investing in embeddable Gecko (primarily Nokia and Intel). WebKit would go on to be one of the core pieces of the first iPhone release, which included a better mobile browser than had ever been seen previously.

(2008) Chrome: Google released a WebKit-based browser that would eventually go on to eat a large part of Firefox’s user base. Chrome was initially praised for its speed and light-weightedness, but much of that was down to its multi-process architecture, something made possible by WebKit having a well thought-out embedding capability and API.

(2008) Android: Android used WebKit for its built-in browser and later for its built-in web-view. In recent times, it has switched to Chromium, showing they aren’t averse to switching the platform to a different/better technology, and that a better embedding story can benefit a platform (Android’s built-in web view can now be updated outside of the main OS, and this may well partly be thanks to Chromium’s embedding architecture). Given the quality of Android’s initial WebKit browser and WebView (which was, frankly, awful until later revisions of Android Honeycomb, and arguably remained awful until they switched to Chromium), it’s not much of a leap to think they may have considered Gecko were it easily available.

(2009) WebOS: Nothing came of this in the end, but it perhaps signalled the direction of things to come. WebOS survived and went on to be the core of LG’s Smart TV, one of the very few real competitors in that market. Perhaps if Gecko was readily available at this point, we would have had a large head start on FirefoxOS?

(2009) Samsung Smart TV: Also available in various other guises since 2007, Samsung’s Smart TV is certainly the most popular smart TV platform currently available. It appears Samsung built this from scratch in-house, but it includes many open-source projects. It’s highly likely that they would have considered a Gecko-based browser if it were possible and available.

(2011) PhantomJS: PhantomJS is a headless, scriptable browser, useful for testing site behaviour and performance. It’s used by several large companies, including Twitter, LinkedIn and Netflix. Had Gecko been more easily embeddable, such a product may well have been based on Gecko and the benefits of that would be many sites that use PhantomJS for testing perhaps having better rendering and performance characteristics on Gecko-based browsers. The demand for a Gecko-based alternative is high enough that a similar project, SlimerJS, based on Gecko was developed and released in 2013. Due to Gecko’s embedding deficiencies though, SlimerJS is not truly headless.

(2011) WIMM One: The first truly capable smart-watch, which generated a large buzz when initially released. WIMM was based on a highly-customised version of Android, and ran software that was compatible with Android, iOS and BlackBerryOS. Although it never progressed past the development kit stage, WIMM was bought by Google in 2012. It is highly likely that WIMM’s work forms the base of the Android Wear platform, released in 2014. Had something like WebOS been open, available and based on Gecko, it’s not outside the realm of possibility that this could have been Gecko based.

(2013) Blink: Google decide to fork WebKit to better build for their own uses. Blink/Chromium quickly becomes the favoured rendering engine for embedding. Google were not afraid to introduce possible incompatibility with WebKit, but also realised that embedding is an important feature to maintain.

(2014) Android Wear: Android specialised to run on watch hardware. Smart watches have yet to take off, and possibly never will (though Pebble seem to be doing alright, and every major consumer tech product company has launched one), but this is yet another area where Gecko/Mozilla have no presence. FirefoxOS may have lead us to have an easy presence in this area, but has now been largely discontinued.

(2014) Atom/Electron: Github open-sources and makes available its web-based text editor, which it built on a home-grown platform of Node.JS and Chromium, which it later called Electron. Since then, several large and very successful projects have been built on top of it, including Slack and Visual Studio Code. It’s highly likely that such diverse use of Chromium feeds back into its testing and development, making it a more robust and performant engine, and importantly, more widely used.

(2016) Brave: Former Mozilla co-founder and CTO heads a company that makes a new browser with the selling point of blocking ads and tracking by default, and doing as much as possible to protect user privacy and agency without breaking the web. Said browser is based off of Chromium, and on iOS, is a fork of Mozilla’s own WebKit-based Firefox browser. Brendan says they started based off of Gecko, but switched because it wasn’t capable of doing what they needed (due to an immature embedding API).

Current state of affairs

Chromium and V8 represent the state-of-the-art embeddable web rendering engine and JavaScript engine, and have wide and varied use across many platforms. This helps reinforce Chrome’s behaviour as the de-facto standard and gradually eats away at the market share of competing engines.

WebKit is the only viable alternative for an embeddable web rendering engine and is still quite commonly used, but is generally viewed as a less up-to-date and less performant engine vs. Chromium/Blink.

Spidermonkey is generally considered to be a very nice JavaScript engine with great support for new EcmaScript features and generally great performance, but due to a rapidly changing API/ABI, doesn’t challenge V8 in terms of its use in embedded environments. Node.js is likely the largest user of embeddable V8, and is favoured even by Mozilla employees for JavaScript-based systems development.

Gecko has limited embedding capability that is not well-documented, not well-maintained and not heavily invested in. I say this with the utmost respect for those who are working on it; this is an observation and a criticism of Mozilla’s priorities as an organisation. We have at various points in history had embedding APIs/capabilities, but we have either dropped them (gtkmozembed) or let them bit-rot (IPCLite). We do currently have an embedding widget for Android that is very limited in capability when compared to the default system WebView.


It’s not too late. It’s incredibly hard to predict where technology is going, year-to-year. It was hard to predict, prior to the iPhone, that Nokia would so spectacularly fall from the top of the market. It was hard to predict when Android was released that it would ever overtake iOS, or even more surprisingly, rival it in quality (hard, but not impossible). It was hard to predict that WebOS would form the basis of a major competing Smart TV several years later. I think the examples of our missed opportunities are also good evidence that opening yourself up to as much opportunity as possible is a good indicator of future success.

If we want to form the basis of the next big thing, it’s not enough to be experimenting in new areas. We need to enable other people to experiment in new areas using our technology. Even the largest of companies have difficulty predicting the future, or taking charge of it. This is why it’s important that we make easily-embeddable Gecko a reality, and I plead with the powers that be that we make this higher priority than it has been in the past.

by Chris Lord at February 24, 2016 06:10 PM

February 15, 2016

Damien Lespiau

Augmenting mailing-lists with Patchwork - Another try

The mailing-list problem

Many software projects use mailing-lists, which usually means mailman, not only for discussions around that project, but also for code contributions. A lot of open source projects work that way, including the one I interact with the most, the Linux kernel. A contributor sends patches to a mailing list, these days using git send-email, and waits for feedback or for his/her patches to be picked up for inclusion if fortunate enough.

Problem is, mailing-lists are awful for code contribution.

A few of the issues at hand:
  • Dealing with patches and emails can be daunting for new contributors,
  • There's no feedback that someone will look into the patch at some point,
  • There's no tracking of which patch has been processed (eg. included into the tree). A shocking number of patches are just dropped as a direct consequence,
  • There's no way to add metadata to a submission. For instance, we can't assign a reviewer from a pool of people working on the project. As a result, review is only working thanks to the good will of people. It's not necessarily a bad thing, but it doesn't work in a corporate environment with deadlines,
  • Mailing-lists are all or nothing: one subscribes to the activity of the full project, but may only care about following the progress of a couple of patches,
  • There's no structure at all actually, it's all just emails,
  • No easy way to hook continuous integration testing,
  • The tools are really bad any time they need to interact with the mailing-list: try to send a patch as a reply to a review comment, addressing it. It starts with going to look at the headers of the review email to copy/paste its Message-ID, followed by an arcane incantation:
    $ git send-email --to=<mailing-list> --cc=<reviewer> \
    --in-reply-to=<reviewer-mail-message-id> \
    --reroll-count 2 -1 HEAD~2

Alternative to mailing-lists

Before mentioning Patchwork, it's worth saying that a project can simply decide to switch to something other than a mailing-list to handle code contributions. To name a few: Gerrit, Phabricator, Github, Gitlab, Crucible.

However, there can be some friction preventing the adoption of those tools. People have built their own workflows around mailing-lists for years and it's somewhat difficult to adopt anything else overnight. Projects can be big with no clear way to make decisions, so sticking to mailing-lists can just be the result of inertia.

The alternatives also have problems of their own and there's no clear winner, nothing like how git took over the world.


So, the path of least resistance is to keep mailing-lists. Jeremy Kerr had the idea to augment mailing-lists with a tool that would track the activity there and build a database of patches and their status (new, reviewed, merged, dropped, ...). Patchwork was born.

Here are some Patchwork instances in the wild:

The KMS and DRI Linux subsystems are using to host their mailing-lists, which includes the i915 Intel driver, project I've been contributing to since 2012. We have an instance of Patchwork there, and, while somewhat useful, the tool fell short of what we really wanted to do with our code contribution process.

Patches are welcome!

So, it was time to do something about the situation, and I started improving Patchwork to address some of the problems outlined above. Given enough time, it's possible to help on all fronts.

The code can be found on github, along with the current list of issues and enhancements we have thought about. I also maintain an instance for the graphics team at Intel, and for any project that would like to give it a try.

Design, Design, Design

First things first, we improved how Patchwork looks and feels. Belén, of OpenEmbedded/Yocto fame, has very graciously spent some of her time to rethink how the interaction should behave.

Before, ...

... and after!

There is still a lot of work remaining to roll out the new design and the new interaction model on all of Patchwork. A glimpse of what that interaction looks like so far:


One thing was clear from the start: I didn't want to have Patches as the main object tracked, but Series, a collection of patches. Typically, developing a  new feature requires more than one patch, especially with the kernel where it's customary to write a lot of orthogonal smaller commits rather than a big (and often all over the place) one. Single isolated commits, like a small bug fix, are treated as a series of one patch.

But that's not all. Series actually evolve over time as the developer answers review comments and the patch-set matures. Patchwork also tracks that evolution, creating several Revisions for the same series. This colour management series from Lionel shows that history tracking (beware, this is not the final design!).

I have started documenting what Patchwork can understand. Two ways can be used to trigger the creation of a new revision: sending a revised patch as a reply to the reviewer email or resending the full series with a similar cover letter subject.

There are many ambiguous cases, and some other cases not handled yet, one of them being sending a series as a reply to another series. That can be quite confusing for the patch submitter, but the documented flows should work.


Next is dusting off Patchwork's XML-RPC API. I wanted to be able to use the same API from both the web pages and git-pw, a command line client.

This new API is close to complete enough to replace the XML-RPC one and already offers a few more features (eg. testing integration). I've also been carefully documenting it.


Rob Clark had been asking for years for better integration with git from Patchwork's command line tool, especially sharing its configuration file. There are also a number of git "plugins" that have appeared to bridge git with various tools, like git-bz or git-phab.

Patchwork now has its own git-pw, using the REST API. Again, more work is needed to get it into an acceptable shape, but it can already be quite handy, for instance to apply a full series in one go:

$ git pw apply -s 122
Applying series: DP refactoring v2 (rev 1)
Applying: drm/i915: Don't pass *DP around to link training functions
Applying: drm/i915: Split write of pattern to DP reg from intel_dp_set_link_train
Applying: drm/i915: Call get_adjust_train() from clock recovery and channel eq
Applying: drm/i915: Move register write into intel_dp_set_signal_levels()
Applying: drm/i915: Move generic link training code to a separate file

Testing Integration

This is what kept my busy the last couple of months: How to integrate patches sent to a mailing-list with Continuous Integration systems. The flow I came up with is not very complicated but a picture always helps:

Hooking tests to Patchwork

Patchwork is exposing an API so mailing-lists are completely abstracted from systems using that API. Both retrieving the series/patches to test and sending back test results is done through HTTP. That makes testing systems fairly easy to write.
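In pseudo-form, a testing system is then just an HTTP client. The host name and endpoint paths below are made-up placeholders, not Patchwork's documented REST API:

```python
# Hypothetical sketch of a testing client talking to Patchwork over
# HTTP; the BASE url and endpoint paths are assumptions for illustration.
import json
import urllib.request

BASE = "https://patchwork.example.org/api/1.0"


def series_url(series_id):
    return "%s/series/%d/" % (BASE, series_id)


def fetch_series(series_id):
    """Retrieve a series (and its patches) to test."""
    with urllib.request.urlopen(series_url(series_id)) as resp:
        return json.load(resp)


def post_test_result(series_id, state, summary):
    """Report a result back, e.g. state in ('success', 'warning', 'failure')."""
    payload = json.dumps({"test_name": "basic-acceptance",
                          "state": state,
                          "summary": summary}).encode("utf-8")
    req = urllib.request.Request(series_url(series_id) + "test-results/",
                                 data=payload,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)
```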

Tomi Sarvela hooked our test-suite, intel-gpu-tools, to patches sent to intel-gfx and we're now gating patch acceptance to the kernel driver with the result of that testing.

Of course, it's not that easy. In our case, we've accumulated some technical debt in both the driver and the test suite, which means it will take time to beat both into being a fully reliable go/no-go signal. People have been actively looking at improving the situation though (thanks!) and I have hope we can reach that reliability sooner rather than later.

As a few words of caution about the above, I'd like to remind everyone that the devil always is in the details:
  • We've restricted the automated testing to a subset of the tests we have (Basic Acceptance Tests aka BATs) to provide a quick answer to developers, but also because some of our tests aren't well bounded,
  • We have no idea how much code coverage that subset really exercises, playing with the kernel gcov support would be interesting for sure,
  • We definitely don't deal with the variety of display sinks (panels and monitors) that are present in the wild.
This means we won't catch all the i915 regressions. Time will definitely improve things as we connect more devices to the testing system and fix our tests and driver.

Anyway, let's leave i915-specific details for another time. A last thing about this testing integration is that Patchwork can be configured to send emails back to the submitter/mailing-list with some test results. As an example, I've written an integration that will tell people to fix their patches without needing a reviewer to do it. I know, living in the future.

For more in-depth documentation about continuous testing with Patchwork, see the testing section of the manual.

What's next?

This blog post is long enough as is, so let's finish with the list of things I'd like to be in an acceptable state before I happily tag a first version:
  • Series support without known bugs
  • REST API and git pw able to replace XML-RPC and pwclient
  • Series, Patches and Bundles web pages ported to the REST API and the new filter/action interaction.
  • CI integration
  • Patch and Series life cycle redesigned with more automatic state changes (i.e. when someone gives a Reviewed-by tag, the patch state should change to reviewed)
There are plenty of other exciting ideas captured in the github issues for when this is done.


by Damien Lespiau at February 15, 2016 06:12 PM

Continuous Testing with Patchwork

As promised in the post introducing my recent work on Patchwork, I've written some more in-depth documentation to explain how to hook testing to Patchwork. I've also realized that a blog post might not be the best place to put that documentation and opted to put it in the proper manual:

Happy reading!

by Damien Lespiau at February 15, 2016 06:01 PM

February 13, 2016

Hylke Bons

Film developing setup that fits your backpack

It’s lots of fun developing your own black & white film. Here’s the setup I’ve been using. My goals were to keep costs down and to have a simple, compact setup that’s easy to use.

Developing tank and reel ~ £ 22

This is the main cost and you want to make it a good one. You can shop around for a second-hand one for much less.

Thermometer ~ £ 4

To make sure the solutions are at the right temperature. A glass spirit thermometer also provides a means of stirring.

Developer ~ £ 5

A 120 mL bottle of Rodinal develops about 20 rolls of film at a 1+25 dilution. You can double the dilution to 1+50 for 40 rolls; that’s just 12 pence per roll! This stuff lasts forever if you store it in darkness and air tight. Rodinal is a "one shot" developer, so you toss out your dilution after use.
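The arithmetic behind that per-roll figure, using the prices quoted here:

```python
# Checking the quoted Rodinal cost per roll (numbers from this post).
bottle_price_gbp = 5.00              # 120 mL bottle of Rodinal, ~ £5
rolls_at_1_25 = 20                   # rolls developed at 1+25
rolls_at_1_50 = rolls_at_1_25 * 2    # doubling the dilution doubles the rolls

pence_per_roll = bottle_price_gbp * 100 / rolls_at_1_50
print(pence_per_roll)  # 12.5, i.e. "just 12 pence per roll"
```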

Fixer ~ £ 3

Fixer dilution can be reused many times, so store it after use. One liter of a 1+5 dilution fixes 17 rolls of film.

To check if your fixer dilution is still good: take a piece of cut off film leader and put it in small cup filled with fixer. If the film becomes transparent after a few minutes the fixer is still good to use.

Measuring jug ~ £ 3

To mix chemicals in. Get one with a spout for easy pouring.

Spout bags ~ £ 2

These keep air out compared to using bottles, so your chemicals will last longer. They save space too. Label them well, you don’t want to mess up!

Funnel ~ £ 1

One with a small mouth, so it fits the spout bags easily when you need to pour chemicals back.

Syringe ~ £ 1

To measure the amount of developer. Around 10 to 20 mL volume will do. Make sure to get one with 1 mL marks for more accurate measuring, and a blunt needle to easily extract from the spout bag.

Common household items

You probably already have these: a clothes peg, for hanging your developed film to dry. And a pair of scissors, to remove the film from the cannister and to cut the film into strips after drying.

Developed Ilford HP5+ film

Total ~ £ 41

As you can see, it’s only a small investment. After developing a few rolls the equipment has paid for itself, compared to sending your rolls off for processing. There’s something special about seeing your images appear on a film for the first time that’s well worth it. Like magic. :)

by Hylke Bons at February 13, 2016 07:34 PM

Lockee to the rescue

Using public computers can be a huge privacy and security risk. There’s no way you can tell who may be spying on you using key loggers or other evil software.

Some friends and family don’t see the problem at all, and use any computer to log in to personal accounts. I actually found myself not being able to recommend an easy solution here. So I decided to build a service that I hope will help remove the need to sign in to sensitive services in some cases at least.


You want to use the printer at your local library to print an e-ticket. As you’re on a public computer, you really don’t want to log in to your personal email account to fetch the document, for security reasons. You’re not too bothered about your personal information on the ticket, but typing in your login details on a public computer is a cause for concern.

This is a use case I have every now and then, and I’m sure there are many other similar situations where you have to log in to a service to get some kind of file, but you don’t really want to.

Existing storage services

There are temporary file storage solutions on the internet, but most of them give out links that are long and hard to remember, ask for an email address to send the links to, are public, or suffer some combination of these problems. Also, you have no idea what will happen to your data.

USB drives can help sometimes, but you may not always have one handy, it might get infected, and it’s easy to forget once plugged in.

Lockee to the rescue

Lockee is a small service that temporarily hosts files for you. Seen those luggage lockers at the railway station? It’s like that, but for files.

A Lockee locker

It allows you to create temporary file lockers, with easy to remember URLs (you can name your locker anything you want). Lockers are protected using passphrases, so your file isn’t out in the open.

Files are encrypted and decrypted in the browser, there’s no record of their real content on the server side. There’s no tracking of anything either, and lockers are automatically emptied after 24 hours.

Give it a go

I’m hosting an instance of Lockee on The source is also available if you’d like to run your own instance or contribute.

by Hylke Bons at February 13, 2016 04:00 PM

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option, or you prefer not to use them, especially if you’re a Free and Open Source project and/or Linux distribution.

Here are some tips to improve your project’s download page flow. It’s based on confusing things I frequently run into when trying to download a FOSS project, and that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are “-gui”, “-qt”, “gtk-”, and “py-”; they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why anyone should care. This sounds like stating the obvious, but this information is often buried under other, less important information, like which programming language and/or free software license is used. Make this section prominent on the website, and go easy on the buzzwords.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platform and architecture

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to distinguish between 32-bit and 64-bit systems; however, they do a bad job of it. AMD is not the only company making 64-bit processors anymore, and “x86” doesn’t even mention “32-bit”.
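
Autodetection needs surprisingly little. As a sketch, here's how a hypothetical helper could map a machine's OS and architecture to a friendly label (a real download page would inspect the browser's User-Agent instead; all names here are invented):

```python
# Sketch: map the local OS/architecture to a human-friendly download label,
# preferring explicit "32-bit"/"64-bit" over labels like "amd64" or "x86".
import platform

def suggested_build():
    system = {"Linux": "Linux", "Darwin": "Mac", "Windows": "Windows"}.get(
        platform.system(), platform.system())
    machine = platform.machine().lower()
    bits = "64-bit" if machine in ("x86_64", "amd64", "arm64", "aarch64") \
        else "32-bit"
    return "%s (%s)" % (system, bits)

print(suggested_build())   # e.g. "Linux (64-bit)"
```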

Release dates

Timestamps are a good way to find out if a project is actively maintained; you can’t (usually) tell from a version number when the software was released. Use human-friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort by time and clearly mark which is the latest version.

File sizes

Again, keep it human readable. I’ve seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers or use thousands separators when numbers are large to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.
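
As a sketch, a tiny formatter along these lines (1024-based here, but as noted, MB vs MiB isn't worth a debate) produces the readable form:

```python
def human_size(num_bytes):
    """Round a raw byte count to something readable (1024-based here;
    as the article says, MB vs MiB is not worth a debate)."""
    size = float(num_bytes)
    for unit in ("bytes", "kB", "MB", "GB", "TB"):
        if size < 1024 or unit == "TB":
            return "%.0f %s" % (size, unit)
        size /= 1024.0

print(human_size(209715200))   # "200 MB" rather than "209715200 bytes"
```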

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t just dump strings of random-looking characters on the page. Educate, or get out of the way.
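
Until browsers automate it, the manual check amounts to something like this sketch (the file name and the published digest are placeholders):

```python
# Sketch of the manual integrity check a download page asks users to do:
# hash the downloaded file and compare against the published digest.
import hashlib

def sha256_of(path):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# published = "9f86d081..."  # hex digest copied from the download page
# ok = sha256_of("download.tar.gz") == published
```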

Keep in mind search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signature checks rather pointless, and could even give a false sense of security.

Compression formats

Again something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, especially on Linux, we’re presented with a choice of compression formats that hardly differ in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here, though, is not to burden people with irrelevant choices and clutter up the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time required to pick one is probably going to outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

by Hylke Bons at February 13, 2016 04:00 PM

San Francisco impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

by Hylke Bons at February 13, 2016 04:00 PM

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

by Hylke Bons at February 13, 2016 04:00 PM

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt, that have worked for me and my style of taking pictures, and that I wish I had known earlier on.

Keep it light

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear, I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn with manual lenses

Try some old manual lenses to learn with. Not only are they cheap and able to produce excellent image quality, they’re also a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and playing around with them. The fact they’re all manual makes you realise more quickly how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the screen at the back. I find them much more engaging and fun to use than fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.

Get out there

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor as it is about shooting. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware of.

Share your work

Don’t forget to have a place to actually post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.


I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

by Hylke Bons at February 13, 2016 04:00 PM

February 07, 2016

Damien Lespiau

libpneu first import

Wow, it’s definitely hard to keep a decent pace at posting news on this blog. Nevertheless, a first import of libpneu has reached my public git repository. libpneu is an effort to make a tracing library that I could use in every single project I start. Basically, you put tracing points in your programs and libpneu prints them whenever you need to know what is happening. Different backends can be used to display traces and debug messages, from printing them to stdout to sending them over a UDP socket. More about libpneu in a few days/weeks!
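
The idea of trace points routed to pluggable backends can be sketched like this (all names here are invented for illustration; this is not libpneu's actual API, which is in C anyway):

```python
# Sketch of a tracer with pluggable backends: trace points go through a
# Tracer, which fans messages out to stdout, a UDP socket, etc.
import socket
import sys
import time

class StdoutBackend:
    def emit(self, message):
        sys.stdout.write(message + "\n")

class UdpBackend:
    def __init__(self, host, port):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def emit(self, message):
        self.sock.sendto(message.encode(), self.addr)

class Tracer:
    def __init__(self, *backends):
        self.backends = backends

    def trace(self, fmt, *args):
        message = "[%.3f] %s" % (time.time(), fmt % args)
        for backend in self.backends:
            backend.emit(message)

tracer = Tracer(StdoutBackend())
tracer.trace("loaded %d plugins", 3)
```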

A small screenshot to better understand what it does:

by Damien Lespiau ( at February 07, 2016 02:12 PM

ADV: ADV is a Dependency Viewer

A few months ago I wrote a small script to draw a dependency graph between the object files of a library (the original idea is from Lionel Landwerlin). You'll need an archive of your library for the tool to be able to look for the needed pieces. Let's have a look at a sample of its output to understand what it does. I ran it against the HEAD of clutter.

A view of the clutter library

This graph was generated with the following (tred is part of graphviz to do transitive reductions on graphs):

$ clutter/.libs/libclutter-glx-0.9.a | tred | dot -Tsvg > clutter.svg

You can provide more than one library to the tool:

./ ../clutter/clutter/.libs/libclutter-glx-0.9.a \
../glib-2.18.4/glib/.libs/libglib-2.0.a \
../glib-2.18.4/gobject/.libs/libgobject-2.0.a \
| tred | dot -Tsvg > clutter-glib-gobject-boxed.svg

What you can do with this:
  • trim down your library by removing the object files you don't need and that are leaves in the graph. This was actually the reason behind the script, and it proved useful,
  • get an overview of a library,
  • make part of a library optional more easily.

To make the script work you'll need graphviz, python, ar and nm (you can provide a cross compiler prefix with --cross-prefix).
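
The core trick such a tool relies on can be sketched: nm reports which symbols an object file defines and which it merely references, and an edge exists from A to B when A uses a symbol defined in B. A minimal illustration using canned nm-style input rather than actually running ar and nm (object and symbol names are made up):

```python
# Build object-file dependency edges from nm-style symbol listings:
# "U sym" means the object references sym, anything else defines it.
from collections import defaultdict

def object_edges(nm_output):
    """nm_output: {object_name: list of 'T symbol' / 'U symbol' lines}."""
    defined_in = {}
    undefined = defaultdict(set)
    for obj, lines in nm_output.items():
        for line in lines:
            kind, symbol = line.split()
            if kind == "U":
                undefined[obj].add(symbol)
            else:  # T, D, B, ...: symbol is defined in this object
                defined_in[symbol] = obj
    edges = set()
    for obj, symbols in undefined.items():
        for symbol in symbols:
            target = defined_in.get(symbol)
            if target and target != obj:
                edges.add((obj, target))
    return edges

sample = {
    "clutter-actor.o": ["T clutter_actor_paint", "U cogl_rectangle"],
    "cogl-primitives.o": ["T cogl_rectangle"],
}
print(object_edges(sample))  # {('clutter-actor.o', 'cogl-primitives.o')}
```

Feeding the resulting edges to graphviz (as the script does, via tred and dot) is then just a matter of printing them in dot syntax.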

Interested? clone it! (or look at the code)

$ git clone git://

by Damien Lespiau ( at February 07, 2016 02:11 PM

shave: making the autotools output sane

updated: Automake 1.11 has been released with "silent rules" support, a feature that supersedes the hack that shave is. If you can depend on automake 1.11, please consider using its silent rules rather than shave.
updated: add some gtk-doc info
updated: CXX support thanks to Tommi Komulainen


Fed up with endless screens of libtool/automake output? Fed up with having to resort to -Werror to see warnings in your code? Then shave might be for you. shave transforms the messy output of autotools into a pretty Kbuild-like one (Kbuild is the Linux build system). It's composed of a m4 macro and 2 small shell scripts and it's available in a git repository.
git clone git://
Hopefully, in a few minutes, you should be able to see your project compile like this:
$ make
Making all in foo
Making all in internal
CC internal-file0.o
CC lib-file0.o
CC lib-file1.o
Making all in tools
CC tool0-tool0.o
LINK tool0
Just like Kbuild, shave supports outputting the underlying commands using:
$ make V=1

Usage

  • Put the two shell scripts, shave.in and shave-libtool.in, in the directory of your choice (it can be at the root of your autotooled project).
  • add shave and shave-libtool to AC_CONFIG_FILES
  • add shave.m4 either in acinclude.m4 or your macro directory
  • add a call to SHAVE_INIT just before AC_CONFIG_FILES/AC_OUTPUT. SHAVE_INIT takes one argument, the directory where shave and shave-libtool are.

Custom rules

Sometimes you have custom Makefile rules, e.g. to generate a small header, run glib-mkenums or glib-genmarshal. It would be nice to output a pretty 'GEN' line. That's quite easy actually: just add a few (portable!) lines at the top of your Makefile.am:
V         = @
Q = $(V:1=)
QUIET_GEN = $(Q:@=@echo ' GEN '$@;)
and then it's just a matter of prepending $(QUIET_GEN) to the rule creating the file:
lib-file2.h: Makefile
$(QUIET_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h

gtk-doc + shave

gtk-doc + shave + libtool 1.x (2.x is fine) is known to have a small issue; a patch is available. Meanwhile I suggest adding a few lines to your script.
sed -e 's#) --mode=compile#) --tag=CC --mode=compile#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make
sed -e 's#) --mode=link#) --tag=CC --mode=link#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make

dolt + shave

It's possible to use dolt in conjunction with shave with a surprisingly small patch to dolt.

Real world example: Clutter

$ make
GEN   stamp-clutter-marshal.h
GEN   clutter-marshal.c
GEN   stamp-clutter-enum-types.h
Making all in cogl
Making all in common
CC    cogl-util.o
CC    cogl-bitmap.o
CC    cogl-bitmap-fallback.o
CC    cogl-primitives.o
CC    cogl-bitmap-pixbuf.o
CC    cogl-clip-stack.o
CC    cogl-fixed.o
CC    cogl-color.o
cogl-color.c: In function ‘cogl_set_source_color4ub’:
cogl-color.c:141: warning: implicit declaration of function ‘cogl_set_source_color’
CC    cogl-vertex-buffer.o
CC    cogl-matrix.o
CC    cogl-material.o

Eh! now we can see a warning there!


This is a first release; shave has not been widely tested, i.e. it may not work for you!
  • test it with a wider range of automake/libtool versions
  • shave won't work without AC_CONFIG_HEADERS due to shell quoting problems
  • see what can be done for make install/dist (they are prettier thanks to make -s, but we probably miss a few actions)
  • there is a '-s' hardcoded in MAKEFLAGS; I have to find a way to make it more flexible

by Damien Lespiau ( at February 07, 2016 02:08 PM


A few concerns have been raised about shave, namely not being able to debug build failures in an automated environment as easily as before, and users giving useless bug reports of failed builds.

One crucial thing to realize is that, even when compiling with make V=1, everything that was not echoed was not shown (MAKEFLAGS=-s).

Thus, I've made a few changes:
  • Add CXX support (yes, that's unrelated, but the question was raised, thanks to Tommi Komulainen for the initial patch),
  • add a --enable-shave option to the configure script,
  • make the Good Old Behaviour the default one,
  • as a side effect, the V and Q variables are now defined in the m4 macro; please remove them from your Makefile.am files.

The rationale for the last point can be summarized as follows:
  • the default behaviour is as portable as before (for non-GNU make, that is), which is not the case if shave is enabled by default,
  • you can still add --enable-shave to your script; bootstrapping your project from a SCM will then enable shave, and that's cool!
  • don't break tools that were relying on automake's output.

Grab the latest version! (git://

by Damien Lespiau ( at February 07, 2016 02:06 PM

Still some hair left

I've been asked to give more input on make V=1 vs. --disable-shave, so here it is: once again, before shipping your package with shave enabled by default, there is something crucial to understand: make V=1 (when having configured your package with --enable-shave) is NOT equivalent to no shave at all (i.e. --disable-shave). This is because the shave m4 macro sets MAKEFLAGS=-s in every single Makefile. This means that make won't print the commands as it used to, and that the only way to print something on the screen is to echo it. That's precisely what the shave wrappers do: they echo the CC/CXX and LIBTOOL commands when V=1. So, in short, custom rules and a few automake commands won't be displayed with make V=1.

That said, it's possible to craft a rule that would display the command with shave enabled and make V=1. The following rule:
lib-file2.h: Makefile
$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h
would become:
lib-file2.h: Makefile
@cmd='echo "#define FOO_DEFINE 0xbabe" > lib-file2.h'; \
if test x"$$V" = x1; then echo $$cmd; fi
$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h
which is quite ugly, to say the least. (If you find a smarter way, please enlighten me!)

On the development side, shave is slowly becoming more mature:
  • Thanks to Jan Schmidt, shave works with non-GNU sed and with echo implementations that do not support -n. It now works on Solaris, and hopefully on BSDs and various Unixes as well (not tested though).
  • SHAVE_INIT has a new, optional, parameter which empowers the programmer to define shave's default behaviour (when ./configure is run without any shave-related option): either enable or disable. I.e. SHAVE_INIT([autotools], [enable]) will instruct shave to find its wrapper scripts in the autotools directory and that running ./configure will actually enable the beast. SHAVE_INIT without any parameters means that the wrapper scripts are in $top_builddir and that ./configure will not enable shave without the --enable-shave option.
  • however, shave has been reported to fail miserably with scratchbox.

by Damien Lespiau ( at February 07, 2016 02:06 PM

Per project .vimrc

My natural C indentation style is basically kernel-like and my ~/.vimrc reflects that. Unfortunately I have to hack on GNUish-style projects and I really don't want to edit my ~/.vimrc every single time I switch between different indentation styles.

Modelines are evil.

To solve that terrible issue, vim can use per directory configuration files. To enable that neat feature only two little lines are needed in your ~/.vimrc:
set exrc   " enable per-directory .vimrc files
set secure " disable unsafe commands in local .vimrc files
Then it's just a matter of writing a per project .vimrc like this one:
set tabstop=8
set softtabstop=2
set shiftwidth=2
set expandtab
set cinoptions=>4,n-2,{2,^-2,:0,=2,g0,h2,t0,+2,(0,u0,w1,m1
You can find help with the wonderful cinoptions variable in the Vim documentation. As sane people open files from the project's root directory, this works like a charm. Makefiles are special anyway; for them you really should add an autocmd in your ~/.vimrc.
" add list lcs=tab:>-,trail:x for tab/trailing space visuals
autocmd BufEnter ?akefile* set noet ts=8 sw=8 nocindent

by Damien Lespiau ( at February 07, 2016 02:05 PM

Blending two RGBA 5551 layers

I've just stumbled across a small piece of code, written a year and a half ago, that blends two 512x512 RGBA 5551 images. It was originally written for a (good!) GIS, so the piece of code blends roads with rivers (and displays the result in a GdkPixbuf). The only interesting thing is that it uses some MMX, SSE2 and rdtsc instructions. You can have a look at the code in its git repository.
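
For the curious, here is roughly what the format involves, minus the MMX/SSE2: each 16-bit pixel packs 5 bits per colour channel plus a 1-bit alpha, so blending degenerates into a per-pixel select. This sketch assumes the common R-in-high-bits, A-in-LSB layout; the original code may pack differently:

```python
# RGBA 5551: 16 bits per pixel, 5 bits each for R/G/B and 1 bit of alpha.
def unpack_5551(pixel):
    r = (pixel >> 11) & 0x1f
    g = (pixel >> 6) & 0x1f
    b = (pixel >> 1) & 0x1f
    a = pixel & 0x01
    return r, g, b, a

def blend_5551(top, bottom):
    """With 1-bit alpha, blending is a select: top wins where its A bit is set."""
    return top if (top & 0x01) else bottom

road = (31 << 11) | (0 << 6) | (0 << 1) | 1    # opaque red pixel
river = (0 << 11) | (0 << 6) | (31 << 1) | 0   # transparent blue pixel
print(unpack_5551(blend_5551(road, river)))    # (31, 0, 0, 1)
```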

by Damien Lespiau ( at February 07, 2016 02:04 PM

Cogl + JS = Love

Played a bit with Gjs and Cogl this weekend and ended up rewriting Clutter's test-cogl-primitives in JavaScript. In the unlikely case someone is interested in trying it, you'll need a patch to support arrays of floats as arguments in introspected functions and another small patch to add introspection annotations for a few Cogl symbols. As usual, you can grab the code in its git repository:

by Damien Lespiau ( at February 07, 2016 02:03 PM

Using the glib.py and gobject.py GDB scripts

Some time ago, Alexander Larsson blogged about using gdb python macros when debugging Glib and GObject projects. I've wanted to try them for ages, so I spent part of the week-end looking at what you can do with the new python-enabled GDB. Result: quite a lot of neat stuff!

Let's start by making the scripts that now come with glib work on stock gdb 7.0 and 7.1 (i.e. not the archer branch that contains more of the python work). If those two scripts don't work for you yet (because your distribution is not packaging them, or is packaging a stock gdb 7.0 or 7.1), here are a few hints you can follow:
  • glib's GDB macros rely on GDB's auto-load feature, i.e. every time GDB loads a library your program uses, it looks for a corresponding python script to execute:
open("/lib/", O_RDONLY)
open("/usr/lib/debug/lib/", O_RDONLY)
open("/usr/share/gdb/auto-load/lib/", O_RDONLY)
Some distributions have decided not to ship glib's and gobject's auto-load helpers; if you are in that case, you'll need to load glib.py and gobject.py by hand. For that purpose I've added a small python command in my ~/.gdbinit:
import os.path
import sys
import gdb

# Update module path.
dir = os.path.join(os.path.expanduser("~"), ".gdb")
if not dir in sys.path:
    sys.path.insert(0, dir)

class RegisterCommand (gdb.Command):
    """Register GLib and GObject modules"""

    def __init__ (self):
        super (RegisterCommand, self).__init__ ("gregister",
                                                gdb.COMMAND_NONE)

    def invoke (self, arg, from_tty):
        objects = gdb.objfiles ()
        for object in objects:
            if object.filename.find ("libglib-2.0") != -1:
                from glib import register
                register (object)
            elif object.filename.find ("libgobject-2.0") != -1:
                from gobject import register
                register (object)

RegisterCommand ()
What I do is put glib.py and gobject.py in a ~/.gdb directory, and not forget to call gregister inside GDB (once gdb has loaded glib and gobject)
  • The scripts that are inside glib's repository were written with the archer branch of gdb (which brings all the python stuff). Unfortunately stock GDB (7.0 and 7.1) does not have everything the archer gdb has. I have a couple of patches to fix that in the queue. Meanwhile you can grab them in my survival kit repository. This will disable the back trace filters as they are still not in stock GDB.

You're all set! It's time to enjoy pretty printing and gforeach. Hopefully people will join the fun at some point and add more GDB python macro goodness, both inside glib and in other projects (for instance, a ClutterActor could print its name).
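
For instance, a pretty-printer for such an actor only needs an object exposing to_string(); this sketch uses invented field names, and runs on a plain dict so it can be tried outside gdb (in real use, val would be a gdb.Value and the class would be registered with gdb's pretty-printer machinery):

```python
# Minimal gdb pretty-printer shape: gdb calls to_string() on the printer
# it gets back for a value. Field names here are hypothetical.
class ClutterActorPrinter:
    def __init__(self, val):
        self.val = val  # a gdb.Value in real use; indexing works the same

    def to_string(self):
        return 'ClutterActor "%s"' % self.val["name"]

print(ClutterActorPrinter({"name": "stage"}).to_string())
```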
int main (int argc, char **argv)
{
  GList *glist = NULL;

  glist = g_list_append (glist, "first");
  glist = g_list_append (glist, "second");

  return breeeaaak_oooon_meeeee ();
}
(gdb) b breeeaaak_oooon_meeeee
Breakpoint 1 at 0x80484b7: file glib.c, line 9.
(gdb) r
Starting program: /home/damien/src/test-gdb/glib
Breakpoint 1, breeeaaak_oooon_meeeee () at glib.c:9
9        return 0;
(gdb) gregister
(gdb) gforeach s in glistp: print ((char *)$s)
No symbol "glistp" in current context.
(gdb) gforeach s in glist: print ((char *)$s)
$2 = 0x80485d0 "first"
$3 = 0x80485d6 "second"

by Damien Lespiau ( at February 07, 2016 02:02 PM

Learning how to draw

I can't draw. I've never been able to. Yet, for some reason, I decided to give it a serious try and buy a book to guide me on that journey (listening to advice from pippin, yeah I know, crazy). The first step was, like a pilgrim walking to a sacred place, to go and buy some art supplies, which turned out to be a really enjoyable experience.

The first thing you have to do is take a snapshot of your skills before reading more of the book, to be able to do a "before/after" comparison. I thought it would be quite hard, but was surprised that the result was all right, by my low standards anyway. You have to do 3 drawings: a self-portrait, looking at yourself in a mirror; a person/character drawn from memory, without visual help; and your hand.

The next exercise is there to make you realize that you'll have to forget everything you know and re-learn how to see in order to draw. It's about copying a drawing upside down, curve by curve, without associating any meaning to what you are doing. The result is quite surprising, as you can see on the left. Now it's a matter of learning how to do that without resorting to the upside-down trick.

It's only the beginning of a long journey, so many things can go wrong, but worth giving it a try!

by Damien Lespiau ( at February 07, 2016 01:58 PM

The GStreamer conference from a Clutter point of view

Two weeks ago I attended the first GStreamer conference, and it was great. I won't talk about the 1.0 plan that seems to be taking shape and looks really good, but just about what struck me the most: Happy Clutter Stories and a Tale To Be Told to your manager.

Let's move on to the Clutter stories. There was a surprising number of people mixing GStreamer and Clutter, two talks especially:
  • Florent Thiery, founder of Ubicast, talked about one of their products: a portable recording system with quite a bit of bling (it records the slides, does movement detection with OpenCV, RoI, ...). The system was used to record the talks on the main track. Now, what was of particular interest to me is that the UI to control the system is entirely written with Clutter and python. They have built a whole toolkit on top of Clutter in python, called candies/touchwizard, and written their UI with it, cooool.
  • A very impressive talk from the Tandberg (now Cisco) guys about their Movi software: video conferencing at its finest. It uses GStreamer extensively, and Clutter for its UI (on Windows!). They said that about 150,000 copies of Movi are deployed in the wild. Patches from Ole André Vadla Ravnås and Haakon Sporsheim have been flowing into Clutter and Clutter-gst (win32 support).

As a side note, Fluendo talked about their open source, Intel-funded GStreamer codecs for the Intel CE3100/CE4100. This platform's specifics are supported natively by Clutter (./configure --with-flavour=cex100) using the native EGL winsys called "GDL" and evdev events coming from the kernel. More on this later :p

A very interesting point about those success stories is that companies and engineers are working with open source software to build their applications, sometimes with parts heavily covered by patents, while contributing back to the ecosystem that allowed them to build those applications in the first place. Contributing is done at many levels: patches, of course, but also feedback on the libraries/platform (e.g. input for GStreamer 1.0). And guess what? It works! To me, that's exactly how the GNOME platform should be used to build proprietary applications: build on top and contribute back to consolidate the libraries. I'd go as far as saying that contributing upstream is the best way to share code inside the same big corporation. Such companies are always very bad at cooperating between divisions.

by Damien Lespiau ( at February 07, 2016 01:55 PM

A simple transition effect with Clutter

When doing something with graphics, you first need an idea (granted, as with pretty much everything else). In this case, it's a simple transition that I saw somewhere a long time ago and wanted to reproduce with Clutter.

The code is available in a branch of a media explorer I'm currently working on. A few bullet points to follow the code:
  • The effect needs a "screenshot" of a Clutter scene to play with, so you first create a subclass of ClutterOffscreenEffect, as it does the work of redirecting the painting of a subtree of actors into an offscreen buffer that you can reuse to texture the rectangles you'll be animating in the effect. This subclass has a "progress" property to control the animation.
  • Then actually compute the coordinates of the grid cells, both in screen space and in texture space. To be able to use cogl_rectangles_with_texture_coords() to try to limit the number of GL calls (and/or batching by the Cogl journal), and to ease the animation of the cells fading out, I decided to store the diagonals of the rectangles in a 1D array, so that the following grid:

    is stored as the following array:
  • ::paint_target() looks at the "progress" property, animates those grid cells accordingly, and draws them. priv->rects is the array storing the initial rectangles, priv->animated_rects the animated ones, and priv->chunks stores the start and duration of each diagonal's animation, along with an (index, length) tuple that references the diagonal's rectangles in priv->rects and priv->animated_rects.
Some more details:
  • in the ::paint_target() function, you can special case when the progress is 0.0 (paint the whole FBO instead of the textured grid) and 1.0 (don't do anything),
  • Clutter does not currently allow you to just rerun the effect when you animate a property of an offscreen effect, for instance. This means that animating the "progress" property on the effect queues a redraw on the actor that ends up in the offscreen buffer, to trigger the effect's ::paint_target() again. A branch from Neil allows queueing a "rerun" on the effect to avoid having to do that,
  • The code has some limitations right now (e.g. n_columns must be equal to n_rows), but they are easily fixable. Once done, it makes sense to try to push the effect to Mx.
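
The diagonal storage described above can be sketched as follows: walking an n-by-n grid anti-diagonal by anti-diagonal keeps each diagonal contiguous in the 1D array, so every diagonal is addressable as an (index, length) chunk (a hypothetical helper for illustration, not the branch's actual code):

```python
# Enumerate grid cells diagonal by diagonal, recording an (index, length)
# chunk per diagonal, so "fade out one diagonal after another" becomes a
# simple slice of the 1D array.
def diagonals(n):
    order = []    # (row, col) cells, diagonal by diagonal
    chunks = []   # (start index in `order`, length) per diagonal
    for d in range(2 * n - 1):
        start = len(order)
        for row in range(max(0, d - n + 1), min(d, n - 1) + 1):
            order.append((row, d - row))
        chunks.append((start, len(order) - start))
    return order, chunks

order, chunks = diagonals(3)
print(order)   # 9 cells, grouped by anti-diagonal
print(chunks)  # [(0, 1), (1, 2), (3, 3), (6, 2), (8, 1)]
```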

by Damien Lespiau ( at February 07, 2016 01:53 PM

Clutter on Android: first results

With the release of Android 2.3, there's a decent way to integrate native applications with the NativeActivity class, an EGL library, and some C API to expose events, the main loop, etc. So, how about porting Clutter to it now that it looks actually feasible? After a few days of work, the first results are there, and they're quite promising!

There's still a fairly large number of items in my TODO before being happy with the state of this work, the most prominent items are:
  • Get a clean-up pass done to have something upstreamable; this includes finishing the event integration (it receives events but does not yet forward them to Clutter),
  • Come up with a plan to manage the application life cycle and handle the case when Android destroys the EGL surface you were using (probably by having the app save its state, and properly tearing down Clutter),
  • While you probably have the droid font installed in /system/fonts, this is not part of the advertised NDK interface. The safest choice is to embed the font you want to use with your application. Unfortunately fontconfig + freetype + pango + compressed assets in your Android package don't work really well together. Maybe solve it at the Pango level with a custom "direct" fontmap implementation that would let you register fonts from files easily?
  • What to do with text entries? Show the soft keyboard? Is it an Mx or a Clutter problem? What happens to the GL surface in that case?
  • Better test the GMainLoop/ALooper main loop integration (esp. adding and removing file descriptors),
  • All the libraries that Clutter depends on are linked into a single big .so, which is the Android NDK application (~5 MB, ~1.7 MB compressed in the .apk). That size can be dramatically reduced, sometimes at the expense of changes that will break the current API/ABI, but hell, you'll be statically linking anyway,
  • Provide "prebuilt libraries", ie. pre-compiled libraries that makes it easy to just use Clutter to build applications.

by Damien Lespiau at February 07, 2016 01:38 PM

git commit --fixup and git rebase -i --autosquash

It's not unusual to need to fix up previous commits when working on a branch or during the review phase. Until now I used a regular commit with some special marker to remember which commit to squash it with, and then git rebase -i to reorder the patches and squash the fixup commits with their corresponding "parent" commits.

Turns out, git can handle quite a few of those manual manipulations for you. git commit --fixup <commit> allows you to commit work, marking it as a fixup of a previous commit. git rebase -i --autosquash will then present the usual git rebase -i screen but with the fixup commits moved just after their parents and ready to be squashed without any extra manipulation.

For instance, I had a couple of changes to a commit buried 100 patches away from HEAD (yes, a big topic branch!):
$ git diff
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 29f3813..08ea851 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2695,6 +2695,11 @@ static void skylake_update_primary_plane(struct drm_crtc *crtc,

intel_fb = to_intel_framebuffer(fb);
obj = intel_fb->obj;
+ /*
+ * The stride is expressed either as a multiple of 64 bytes chunks for
+ * linear buffers or in number of tiles for tiled buffers.
+ */
switch (obj->tiling_mode) {
case I915_TILING_NONE:
stride = fb->pitches[0] >> 6;
@@ -2707,7 +2712,6 @@ static void skylake_update_primary_plane(struct drm_crtc *crtc,


I915_WRITE(PLANE_CTL(pipe, 0), plane_ctl);
And I wanted to squash those changes with commit 2021785
$ git commit -a --fixup 2021785
git will then go ahead and create a new commit with the subject taken from the referenced commit and prefixed with fixup!
commit d2d278ffbe87d232369b028d0c9ee9e6ecd0ba20
Author: Damien Lespiau <>
Date: Sat Sep 20 11:09:15 2014 +0100

fixup! drm/i915/skl: Implement thew new update_plane() for primary planes
Then when using the interactive rebase with autosquash:
$ git rebase -i --autosquash drm-intel/drm-intel-nightly
The fixup commit is moved to just after the referenced commit, ready to be squashed:
pick 2021785 drm/i915/skl: Implement thew new update_plane() for primary planes
fixup d2d278ff fixup! drm/i915/skl: Implement thew new update_plane() for primary planes
Validating the proposed changes (in my case, by quitting vim) will squash the fixup commits. Definitely what I'll be using from now on!

Oh, and there's a config option to have git rebase automatically autosquash if there are some fixup commits:
$ git config --global rebase.autosquash true
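To see the whole flow end to end, here's a self-contained sketch in a throwaway repository (file names and commit messages are made up; setting GIT_SEQUENCE_EDITOR=true accepts the proposed todo list as-is, which makes the interactive rebase run non-interactively):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name Demo

echo first > file.txt
git add file.txt && git commit -q -m "Add file.txt"
target=$(git rev-parse HEAD)

echo second > other.txt
git add other.txt && git commit -q -m "Add other.txt"

# A late change that really belongs in the first commit
echo "first, fixed" > file.txt
git commit -q -a --fixup "$target"

# Squash the fixup! commit into its target without manual reordering
GIT_SEQUENCE_EDITOR=true git rebase -i --autosquash --root
git log --oneline
```

After the rebase, the history is back to two commits, with the fix folded into "Add file.txt".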

by Damien Lespiau at February 07, 2016 12:53 PM

February 06, 2016

Damien Lespiau

A simple autotool template

Every now and then, you feel a big urge to start hacking on a small thingy and need to create Makefiles for it. Turns out that the autotools aren't that intrusive when we are talking about small programs, and you can do a reasonable job with a few lines. First, the configure.ac file:
# autoconf
AC_INIT([fart], [0.0.1], [])

# automake
AM_INIT_AUTOMAKE([1.11 -Wall foreign no-define])

# Check for programs
AC_PROG_CC

# Check for header files

# Warning flags
WARN_CFLAGS="-Wall -Wshadow -Wcast-align -Wno-uninitialized \
             -Wno-strict-aliasing -Wempty-body -Wformat -Wformat-security \
             -Winit-self -Wdeclaration-after-statement -Wvla"
AC_SUBST([WARN_CFLAGS])

PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.24])

AC_CONFIG_FILES([Makefile])
AC_OUTPUT

and then the Makefile.am:

ACLOCAL_AMFLAGS = ${ACLOCAL_FLAGS}

bin_PROGRAMS = fart

fart_SOURCES = fart.c
fart_CFLAGS = $(WARN_CFLAGS) $(GLIB_CFLAGS)
fart_LDADD = $(GLIB_LIBS)
After that, it's just a matter of running autoreconf
$ autoreconf -i
and you are all set!
So, what do you get for this amount of lines?
  • The usual set of automake targets (handy! "make tags" is so underused!) and bonus features (out-of-tree builds, extra rules to reconfigure/rebuild the Makefiles on changes in configure.ac, ...)
  • Autoconf/automake kept discreet (auxiliary files out of the way, silent rules, automake for non-GNU projects)
  • Some decent warning flags (tweak to your liking!)
  • autoreconf cooperating with aclocal thanks to ACLOCAL_AMFLAGS, and coping with non-standard locations for system m4 macros
I'll maintain a git tree to help bootstrap my next small hacks, feel free to use it as well!

by Damien Lespiau at February 06, 2016 11:59 PM

Extracting part of files with sed

For reference for my future self, a few handy sed commands. Let's consider this file:
$ cat test-sed
First line
Second line
--
Another line
Last line
We can extract the lines from the start of the file to the marker by deleting the rest:
$ sed '/--/,$d' test-sed 
First line
Second line
a,b is the range that the command (here d, for delete) applies to; a and b can be, among others, line numbers, regular expressions, or $ for the end of the file. We can also extract the lines from the marker to the end of the file with:
$ sed -n '/--/,$p' test-sed
--
Another line
Last line
This one is slightly more complicated. By default sed prints every line it receives as input; '-n' tells sed not to do that. The rest of the expression p(rint)s the lines between the -- marker and the end of the file.
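The same a,b addressing also works with two regular expressions, which is handy for extracting a section between two markers (the BEGIN/END markers and the demo file below are made up):

```shell
# Build a small demo file with a marked section in the middle
printf '%s\n' 'skip me' 'BEGIN' 'keep 1' 'keep 2' 'END' 'skip me too' > demo.txt

# Print the section, marker lines included
sed -n '/BEGIN/,/END/p' demo.txt

# Same section, with the marker lines themselves stripped
sed -n '/BEGIN/,/END/{/BEGIN/d;/END/d;p;}' demo.txt
```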
That's all folks!

by Damien Lespiau at February 06, 2016 11:55 PM

A git pre-commit hook to check the year of copyright notices

Like every year, touching a source file means you also need to update the year of the copyright notice you should have at the top of the file. I always end up forgetting about them; this is where a git pre-commit hook is ultra-useful, so I wrote one:
#!/bin/sh
# Check if copyright statements include the current year
files=`git diff --cached --name-only`
year=`date +"%Y"`

for f in $files; do
    head -10 $f | grep -i copyright >/dev/null 2>&1 || continue

    if ! grep -i -e "copyright.*$year" $f >/dev/null 2>&1; then
        missing_copyright_files="$missing_copyright_files $f"
    fi
done

if [ -n "$missing_copyright_files" ]; then
    echo "$year is missing in the copyright notice of the following files:"
    for f in $missing_copyright_files; do
        echo "    $f"
    done
    exit 1
fi
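To try the hook out, here's a smoke test in a throwaway repository (it installs a trimmed-down copy of the hook; the file name and copyright line are made up):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name Demo

# A condensed version of the hook, enough for this test
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
year=$(date +%Y)
for f in $(git diff --cached --name-only); do
    head -10 "$f" | grep -qi copyright || continue
    grep -qi "copyright.*$year" "$f" || { echo "stale copyright: $f"; exit 1; }
done
EOF
chmod +x .git/hooks/pre-commit

# A file whose copyright notice is stuck in the past
echo "/* Copyright 2010 Someone */" > old.c
git add old.c
git commit -q -m "stale" && echo "accepted" || echo "rejected"
```

The commit is rejected until the notice mentions the current year.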
Hope this helps!

by Damien Lespiau at February 06, 2016 06:35 PM

Working on more than one line with sed's 'N' command

Yesterday I was asked to help solve a small sed problem. Consider a file like this one (don't look too closely at the engineering of the defined elements):
$ cat input
<key>key1</key>
<string>value1</string>
<key>key2</key>
<string>value2</string>
The problem was: how to change value1 to VALUE! without touching value2? The catch is that you can't blindly execute a s command matching <string>.*</string>.
Sed maintains a buffer called the "pattern space" and processes commands on this buffer. From the GNU sed manual:
sed operates by performing the following cycle on each line of input: first, sed reads one line from the input stream, removes any trailing newline, and places it in the pattern space. Then commands are executed; each command can have an address associated to it: addresses are a kind of condition code, and a command is only executed if the condition is verified before the command is to be executed.

When the end of the script [(list of sed commands)] is reached, unless the -n option is in use, the contents of pattern space are printed out to the output stream, adding back the trailing newline if it was removed. Then the next cycle starts for the next input line.
So the idea is to: first, use a /pattern/ address to select the right <key> line; then append the next line to the pattern space (with the N command); and finally run a s command on the buffer now containing both lines:
And so we end up with:
$ sed -e '/<key>key1<\/key>/{N;s#<string>.*<\/string>#<string>VALUE!<\/string>#;}' < input
<key>key1</key>
<string>VALUE!</string>
<key>key2</key>
<string>value2</string>

by Damien Lespiau at February 06, 2016 06:33 PM

HDMI stereo 3D & KMS

If everything goes according to plan, KMS in linux 3.13 should have stereo 3D support. Should one be interested in scanning out a stereo frame buffer to a 3D capable HDMI sink, here's a rough description of how those modes are exposed to user space and how to use them.

A reader not well acquainted with the DRM sub-system and its mode setting API (aka Kernel Mode Setting, KMS) could start by watching the first part of Laurent Pinchart's Anatomy of an Embedded KMS Driver or read David Herrmann's heavily documented mode setting example code.

Stereo modes work by sending a left eye and right eye picture per frame to the monitor. It's then up to the monitor to use those 2 pictures to display a 3D frame and the technology there varies.

There are different ways to organise the 2 pictures inside a bigger frame buffer. For HDMI, those layouts are described in the HDMI 1.4 specification. Provided you give them your contact details, it's possible to download the stereo 3D part of the HDMI 1.4 spec from the HDMI website.

As one inevitably knows, modes supported by a monitor can be retrieved out of the KMS connector object in the form of drmModeModeInfo structures (when using libdrm; it's also possible to write your own wrappers around the KMS ioctls, should you want to):
typedef struct _drmModeModeInfo {
uint32_t clock;
uint16_t hdisplay, hsync_start, hsync_end, htotal, hskew;
uint16_t vdisplay, vsync_start, vsync_end, vtotal, vscan;

uint32_t vrefresh;

uint32_t flags;
uint32_t type;
char name[...];
} drmModeModeInfo, *drmModeModeInfoPtr;
To keep existing software blissfully unaware of those modes, a DRM client interested in having stereo modes listed starts by telling the kernel to expose them:
drmSetClientCap(drm_fd, DRM_CLIENT_CAP_STEREO_3D, 1);
Stereo modes use the flags field to advertise which layout the mode requires:
uint32_t layout = mode->flags & DRM_MODE_FLAG_3D_MASK;
This will give you a non-zero value when the mode is a stereo mode, one of the DRM_MODE_FLAG_3D_* values from drm_mode.h: frame packing, field alternative, line alternative, side by side (full), L-depth, L-depth + graphics, top and bottom, or side by side (half).
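The layout can also be decoded by hand with a bit of shell arithmetic (the bit values below are mirrored from the kernel's drm_mode.h, where the 3D layout lives in bits 14-18 of the flags; double-check them against your headers):

```shell
DRM_MODE_FLAG_3D_MASK=$(( 0x1f << 14 ))

flags=$(( 0x1c005 ))   # an example flags value, as reported by testdisplay below
layout=$(( (flags & DRM_MODE_FLAG_3D_MASK) >> 14 ))

case $layout in
    0) echo "2D mode" ;;
    1) echo "frame packing" ;;
    4) echo "side by side (full)" ;;
    7) echo "top and bottom" ;;
    8) echo "side by side (half)" ;;
    *) echo "other stereo layout ($layout)" ;;
esac
```

For 0x1c005 this prints "top and bottom", which matches the (3D:TB) annotation in the testdisplay listing further down.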
User space is then responsible for choosing which stereo mode to use and for preparing a buffer that matches the size and left/right placement requirements of that layout. For instance, when choosing side by side (half), the frame buffer is the same size as its 2D equivalent (that is, hdisplay x vdisplay), with the left and right images sub-sampled by 2 horizontally:

Side by Side (half)

Other modes need a bigger buffer than hdisplay x vdisplay. This is the case with frame packing, where each eye gets the full 2D resolution, with the two images separated by the number of vblank lines:

Frame Packing

Of course, anything can be used to draw into the stereo frame buffer, including OpenGL. Further work should enable Mesa to directly render into such buffers, say with the EGL/gbm winsys for a wayland compositor to use.

Wipe Out using Frame Packing on the PS3

Behind the scenes, the kernel's job is to parse the EDID to discover which stereo modes the HDMI sink supports and, once user space asks for a stereo mode, to send infoframes (metadata sent during the vblank interval) carrying the information about which 3D mode is being sent.

A good place to start for anyone wanting to use this API is testdisplay, part of the Intel GPU tools test suite. testdisplay can list the available modes with:
$ sudo ./tests/testdisplay -3 -i
name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot flags type clock
[0] 1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x48 148500
[1] 1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x40 148352
[2] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74250
[3] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[4] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74176
[5] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74176 (3D:SBSH)
[6] 1920x1080 50 1920 2448 2492 2640 1080 1084 1089 1125 0x5 0x40 148500
[7] 1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x15 0x40 74250
[8] 1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[9] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x5 0x40 74250
[10] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)
[11] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x4005 0x40 74250 (3D:FP)
To test a specific mode:
$ sudo ./tests/testdisplay -3 -o 17,10
1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)
To cycle through all the supported stereo modes:
$ sudo ./tests/testdisplay -3
testdisplay uses cairo to compose the final frame buffer from two separate left and right test images.

by Damien Lespiau at February 06, 2016 06:28 PM

Working in a separate prefix

I've been surprised in the past to discover that even some seasoned engineers didn't know how to use the autotools prefix feature. A sign they've been lucky enough and didn't have to deal with Autotools too much. Here's my attempt to provide some introduction to ./configure --prefix.

Working with or in "a separate prefix" means working with libraries and binaries (well, anything produced by 'make install' in an autotooled project, really) installed in a different directory than the system-wide ones (/usr, or even /usr/local, which can become quite messy). It is the preferred way to hack on a full stack without polluting your base distribution and has several advantages:
  • One can hack on the whole stack without fearing to break the desktop environment you're currently running if something goes wrong,
  • More often than not, one needs a relatively recent library that your distribution doesn't ship with (say a recent libdrm). When working with the dependencies in a prefix, it's just a matter of recompiling it.

Let's take an example to make the discussion easier:
  • We want to compile libdrm and intel-gpu-tools (because intel-gpu-tools needs a more recent libdrm than the one shipped by your distribution),
  •  We want to use the ~/gfx directory for our work,
  • git trees will be cloned in ~/gfx/sources,
  • ~/gfx/install is chosen as the prefix.

First, let's clone the needed git repositories:
$ mkdir -p ~/gfx/sources ~/gfx/install
$ cd ~/gfx/sources
$ git clone git:// libdrm
$ git clone git://
Then you need to source a script that will set up your environment with a few variables telling the system to use the prefix (both at run-time and compile-time). A minimal version of that script for our example is (I store my per-project setup scripts to source at the root of the project, in our case ~/gfx):
$ cat ~/gfx/setup-env
export PROJECT=~/gfx
export PATH=$PROJECT/install/bin:$PATH
export LD_LIBRARY_PATH=$PROJECT/install/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$PROJECT/install/lib/pkgconfig:$PKG_CONFIG_PATH
export ACLOCAL_FLAGS="-I $PROJECT/install/share/aclocal $ACLOCAL_FLAGS"
$ source ~/gfx/setup-env
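A quick way to convince yourself that PKG_CONFIG_PATH does its job: drop a dummy .pc file into a throwaway prefix and ask pkg-config about it ("fakelib" and its contents are made up for this sketch):

```shell
prefix=$(mktemp -d)
mkdir -p "$prefix/lib/pkgconfig"

# A dummy pkg-config file standing in for an installed library
cat > "$prefix/lib/pkgconfig/fakelib.pc" <<EOF
prefix=$prefix
libdir=\${prefix}/lib
includedir=\${prefix}/include

Name: fakelib
Description: A dummy module standing in for libdrm
Version: 1.2.3
Libs: -L\${libdir} -lfake
Cflags: -I\${includedir}
EOF

export PKG_CONFIG_PATH="$prefix/lib/pkgconfig:$PKG_CONFIG_PATH"
pkg-config --modversion fakelib
```

If the prefix is picked up correctly, this prints the version from the .pc file; the same mechanism is what lets configure scripts find the libdrm you installed in ~/gfx/install.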
Then it's time to compile libdrm, telling the configure script that we want to install it in our prefix:
$ cd ~/gfx/sources/libdrm
$ ./autogen.sh --prefix=/home/damien/gfx/install
$ make
$ make install
Note that you don't need to run "sudo make install" since we'll be installing into our prefix directory, which is writable by the current user.

Now it's time to compile i-g-t:
$ cd ~/gfx/sources/intel-gpu-tools
$ ./autogen.sh --prefix=/home/damien/gfx/install
$ make
$ make install
The configure script may complain about missing dependencies (e.g. cairo, SWIG, ...). There are different ways to solve those:
  • For dependencies not directly linked with the graphics stack (like SWIG), it's recommended to use the development package provided by the distribution
  • For old enough dependencies that don't change very often (like cairo) you can use the distribution development package or compile them in your prefix
  • For dependencies more recent than your distribution ones, you need to install them in the chosen prefix.

by Damien Lespiau at February 06, 2016 12:27 PM

September 15, 2015

Emmanuele Bassi

Who wrote GTK+ (Reprise)

As I’ve been asked by different people about data from older releases of GTK+, after the previous article on Who Wrote GTK+ 3.18, I ran the git-dm script on every release and generated some more data:

Release Lines added Lines removed Delta Changesets Contributors
2.0¹ 666495 345348 321147 2503 106
2.2 301943 227762 74181 1026 89
2.4 601707 116402 485305 2118 109
2.6 181478 88050 93428 1421 101
2.8 93734 47609 46125 1155 86
2.10 215734 54757 160977 1614 110
2.12 232831 43172 189659 1966 148
2.14 215151 102888 112263 1952 140
2.16 71335 23272 48063 929 118
2.18 52228 23490 28738 1079 90
2.20 80397 104504 -24107 761 82
2.22 51115 71439 -20324 438 70
2.24 4984 2168 2816 184 37
3.0¹ 354665 580207 -225542 4792 115
3.2 227778 168616 59162 2435 98
3.4 126934 83313 43621 2201 84
3.6 206620 34965 171655 1011 89
3.8 84693 34826 49867 1105 90
3.10 143711 204684 -60973 1722 111
3.12 86342 54037 32305 1453 92
3.14 130387 144926 -14539 2553 84
3.16 80321 37037 43284 1725 94
3.18* 78997 54614 24383 1638 83

Here you can see the history of the GTK releases, since 2.0.

These numbers are to be taken with a truckload of salt, especially the ones from the 2.x era. During the early 2.x cycle, releases did not follow the GNOME timed release schedule; instead, they were done whenever needed:

Release Date
2.0 March 2002
2.2 December 2002
2.4 March 2004
2.6 December 2004
2.8 August 2005
2.10 July 2006
2.12 September 2007
2.14 September 2008
2.16 March 2009
2.18 September 2009
2.20 March 2010
2.22 September 2010
2.24 January 2011

Starting with 2.14, we settled to the same cycle as GNOME, as it made releasing GNOME and packaging GTK+ on your favourite distribution a lot easier.

This disparity in the length of the development cycles explains why the 2.12 and 2.14 cycles, which lasted a year, represent an anomaly in terms of contributors (148 and 140, respectively) and in terms of absolute lines changed.

The reduced activity between 2.20 and 2.24.0 is easily attributable to the fact that people were working hard on the 2.90 branch that would become 3.0.

In general, once you adjust by release time, it’s easy to see that the number of contributors is pretty much stable at around 90:

The average is 94.5, which means we have a hobbit somewhere in the commit log

Another interesting data point would be to look at the ecosystem of companies spawned around GTK+ and GNOME, and how it has changed over the years — but that’s part of a larger discussion that would probably take more than a couple of blog posts to unpack.

I guess the larger point is that GTK+ is definitely not dying; it’s pretty much being worked on by the same number of people — which includes long timers as well as newcomers — as it was during the 2.x cycle.

  1. Both 2.0 and 3.0 are not wholly accurate; I used, as a starting point for the changeset period, the previous released branch point; for GTK+ 2.0, I started from the GTK_1_3_1 tag, whereas for GTK+ 3.0 I used the 2.90.0 tag. There are commits preceding both tags, but not enough to skew the results. 

by ebassi at September 15, 2015 11:00 PM

September 14, 2015

Emmanuele Bassi

Who wrote GTK+ 3.18

It’s common “knowledge” in the Internet Peanut Gallery that GTK+ is “dead” or “dying” — I assume in the same sense that NetCraft certified that BSD is dead. It’d be (and, in point of fact, it is) easy to dismiss these rumors; it’s not like they come with actual numbers and trends, because the gods of old never mentioned the requirement for comments on the Internet to be cogent, let alone factually true, when they laid down the various RFCs.

On the other hand, not having an actual answer is a bit of a self-serving argument on the side of people, like me, that actually contribute to GTK+ and to the GNOME core platform; it allows a bit of leeway when we ask for help, contributions, or simply for decent bug reports — “we don’t have enough resources, so if you want your pet bug fixed, or feature implemented, you’ll have to help us help you”. Not that having actual numbers would change that; after all, resources are not infinite, and there are a ton of bugs that need to be fixed — including ones that require a time machine.

As much as I wanted to dispel the rumors about the impending death of GTK+, my goal was definitely to provide numbers on how much gets done every six months in the core GNOME platform, thus honoring the people that work hard on it. Having numbers also allows us to quantify what kind of help we need, so that, the next time somebody complains that we don’t work hard enough on fixing ALL the bugs, at least we’ll have a handy way to point out how much we work already.

To gather the data, I’ve used the most excellent git-dm tool that Jonathan Corbet wrote for the “Who wrote the Linux kernel” columns for LWN.


To provide a baseline, this is how the development activity looked like during the GNOME 3.14 and 3.16 cycles, i.e. during the past year:

Version Lines added Lines removed Delta Contributors
GLib 2.42 17195 9934 7261 61
GLib 2.44 12504 2240 10264 49
GTK+ 3.14 130387 144945 -14558 84
GTK+ 3.16 80321 37043 43278 94

Note: the numbers do not include the translation work; as much as translations are an important part of our stack, they tend to skew any statistic, given the sheer size of lines touched.

The 3.14 cycle is an outlier for GTK+ because of the move of Adwaita in tree, as well as the GTK+ Inspector.

For the 3.18 cycle, the numbers up to September 15th are:

Version Lines added Lines removed Delta Contributors
GLib 2.46 19763 12437 7326 50
GTK+ 3.18 78676 54508 24168 83

As you can see, the numbers are mostly stable, in terms of code changes and number of developers.


Of the 50 developers that contributed the 355 changesets of GLib during the 3.18 cycle, the most active are:

Name Per changeset Name Per changed lines
Matthias Clasen 89 (25.9%) Руслан Ижбулатов 7337 (27.6%)
Philip Withnall 50 (14.5%) Ryan Lortie 5709 (21.5%)
Ryan Lortie 31 (9.0%) Chun-wei Fan 3426 (12.9%)
Dan Winship 29 (8.4%) Matthias Clasen 2881 (10.8%)
Simon McVittie 19 (5.5%) Philip Withnall 1729 (6.5%)
Chun-wei Fan 14 (4.1%) Dan Winship 1590 (6.0%)
Руслан Ижбулатов 11 (3.2%) Simon McVittie 867 (3.3%)
Mikhail Zabaluev 8 (2.3%) Alexander Larsson 648 (2.4%)
Ting-Wei Lan 8 (2.3%) Paolo Borelli 588 (2.2%)
Garrett Regier 6 (1.7%) Patrick Griffis 358 (1.3%)
Alexander Larsson 5 (1.5%) Janusz Lewandowski 313 (1.2%)
Michael Catanzaro 5 (1.5%) Mikhail Zabaluev 256 (1.0%)
Emmanuele Bassi 5 (1.5%) Iain Lane 130 (0.5%)
Christophe Fergeau 5 (1.5%) Garrett Regier 128 (0.5%)
Paolo Borelli 5 (1.5%) Michael Catanzaro 106 (0.4%)
Piotr Drąg 4 (1.2%) Richard Hughes 97 (0.4%)
Kalev Lember 4 (1.2%) Emmanuele Bassi 70 (0.3%)
Iain Lane 4 (1.2%) Xavier Claessens 48 (0.2%)
Patrick Griffis 4 (1.2%) Ross Lagerwall 35 (0.1%)
Rico Tzschichholz 3 (0.9%) Ting-Wei Lan 32 (0.1%)

Руслан Ижбулатов has been working on the Windows support, ensuring that the library and test suites work correctly there. Chun-wei Fan has been fixing the project files for building GLib (and GTK+, as well as a lot of libraries in the GNOME stack) with Microsoft Visual Studio and the Microsoft Visual C Compiler. Philip Withnall has been hard at work on the API reference and GObject tutorial, incorporating the feedback he got from clients at Collabora. Dan Winship and Michael Catanzaro have been working on the certificate API inside GIO, even though the bulk of the work has been going on inside the external glib-networking module. Simon McVittie has been working on GDBus; on the testing API; and has been incorporating patches coming from the Debian project.

For GTK+, on the other hand, the most active of the 83 contributors are:

Name Per changeset Name Per changed lines
Matthias Clasen 811 (49.8%) Matthias Clasen 37393 (37.3%)
Benjamin Otte 184 (11.3%) Chun-wei Fan 22644 (22.6%)
Carlos Garnacho 107 (6.6%) Benjamin Otte 10991 (11.0%)
Cosimo Cecchi 40 (2.5%) Jakub Steiner 4762 (4.7%)
Jakub Steiner 37 (2.3%) Georges Basile Stavracas Neto 3879 (3.9%)
Lapo Calamandrei 35 (2.1%) Carlos Soriano 3827 (3.8%)
Emmanuele Bassi 33 (2.0%) Lapo Calamandrei 3208 (3.2%)
Carlos Soriano 30 (1.8%) Carlos Garnacho 2690 (2.7%)
Timm Bäder 29 (1.8%) Руслан Ижбулатов 1480 (1.5%)
Chun-wei Fan 24 (1.5%) Alexander Larsson 1001 (1.0%)
William Hua 24 (1.5%) William Hua 947 (0.9%)
Alexander Larsson 23 (1.4%) Cosimo Cecchi 704 (0.7%)
Georges Basile Stavracas Neto 23 (1.4%) Paolo Borelli 671 (0.7%)
Jonas Ådahl 19 (1.2%) Jasper St. Pierre 627 (0.6%)
Christian Hergert 17 (1.0%) Christian Hergert 592 (0.6%)
Piotr Drąg 17 (1.0%) Sebastien Lafargue 570 (0.6%)
Paolo Borelli 17 (1.0%) Emmanuele Bassi 556 (0.6%)
Christoph Reiter 14 (0.9%) Jonas Ådahl 543 (0.5%)
Руслан Ижбулатов 13 (0.8%) Christoph Reiter 488 (0.5%)
Jasper St. Pierre 11 (0.7%) Ryan Lortie 424 (0.4%)

While Benjamin is hard at work at improving the correctness and performance of the style machinery inside GTK+, Jakub and Lapo are constantly trying to find ways to make Adwaita and the High Constrast themes push the boundaries of the same style machinery. Carlos Soriano and Georges Basile Stavracas Neto have been working on the components of the file selection dialog following the new designs from Allan Day, for the Google Summer of Code; the code is going to be shared between GTK+ and Nautilus, to improve consistency between Nautilus and the GtkFileChooser widget, and keep bugs to a minimum. Carlos Garnacho has worked on input — mostly the support for touchpad on Wayland. Also on Wayland, Jonas Ådahl has been working on bug fixing and feature parity in GDK between Wayland and X11.

From a company perspective, Red Hat still dominates, as it employs many of the more prolific contributors; nevertheless, it’s important to note that a larger number of developers are unaffiliated, or contribute to GLib and GTK+ in their own time:


Affiliation Per changeset Affiliation Per lines Affiliation Per contributor (total 52)
Red Hat 138 (40.1%) (Unknown) 12892 (48.5%) (Unknown) 32 (61.5%)
(Unknown) 95 (27.6%) Canonical 5794 (21.8%) Red Hat 10 (19.2%)
Collabora 65 (18.9%) Red Hat 5222 (19.6%) Canonical 3 (5.8%)
Canonical 37 (10.8%) Collabora 2574 (9.7%) Collabora 2 (3.8%)
Endless 6 (1.7%) Endless 72 (0.3%) Endless 2 (3.8%)
Centricular 2 (0.6%) Centricular 28 (0.1%) Centricular 2 (3.8%)


Affiliation Per changeset Affiliation Per lines Affiliation Per contributor (total 85)
Red Hat 1230 (75.5%) Red Hat 61725 (61.6%) (Unknown) 58 (68.2%)
(Unknown) 340 (20.9%) (Unknown) 36845 (36.7%) Red Hat 19 (22.4%)
Endless 43 (2.6%) Endless 1181 (1.2%) Canonical 3 (3.5%)
Canonical 13 (0.8%) Canonical 491 (0.5%) Endless 2 (2.4%)
Collabora 2 (0.1%) Collabora 19 (0.0%) Collabora 2 (2.4%)
Intel 1 (0.1%) Intel 2 (0.0%) Intel 1 (1.2%)


One of the most obvious conclusions that I can draw from these numbers is that GLib and GTK+ are definitely capable of retaining existing contributors — you just need to look at the names in the top committers and check how many GUADECs they have attended; what’s less obvious is the capacity of acquiring and retaining new contributors. For the latter, the Summer of Code and Outreachy programs are definitely a great resource. Carlos and Georges have been working their way down the stack at an impressive speed, and are now responsible for core functionality.

In terms of contributions, I think the code base has long since reached a point where it cannot be increased without also increasing the number of stable contributors. This is not a bad thing, per se; GLib and GTK+ are not the Linux kernel; we cannot add widgets like the kernel adds drivers, or file systems. It is mostly clear, though, that new functionality must come with new people that take responsibility for it, or at the expense of deprecating old API.

by ebassi at September 14, 2015 11:00 PM

May 29, 2015

Emmanuele Bassi

PyClutter Reborn

Back when I started hacking on Clutter, the Python bindings were the first thing that I was tasked to write. In order to make them happen, I had to understand the intent of the library, and fix up the API to be binding-friendly. It was another time — a time before introspection made possible to have run-time bindings that automatically acquired support for new API once the underlying shared library changed. I had to learn the CPython API; the code generation utilities that turned C header files into defs files, and defs files into C code; I also had to fix those utilities, whenever they lied about what they did, or whenever they were pretty much ad hoc code supporting GTK+ quirks; and, finally, I had to understand the build system used by PyGTK that ate all this stuff on one end, blended it all, and spat out a Python C module on the other.

For a long while, the Python bindings for Clutter have been based on that particular brand of insanity; people were able to switch to the introspection-based ones and drop PyClutter entirely, but the API was not as nice to use, and you’d be missing out on some niceties provided by both Python and Clutter.

In 2011, Bastian Winkler started porting the whole infrastructure to the overrides mechanism provided by PyGObject; instead of just loading the Clutter introspection data, you’d load a pure Python module that wrapped specific bits of the exposed API, and made it nicer to use.

For various reasons — mostly lack of time on my side and lack of testing — that work has been bitrotting in a branch of the Git repository. Well, no more. The master branch of the PyClutter repository now provides introspection overrides similar to the ones for GTK+. I also added a bunch of examples, ported from their C equivalents, to show the idiomatic use of Clutter in a Python context.

I’ll probably do a release, but I’d be happy if somebody wanted to pick up the bindings and run with them — I’m not much of a Python programmer myself.


by ebassi at May 29, 2015 11:00 AM

April 24, 2015

Chris Lord

Web Navigation Transitions

Wow, so it’s been over a year since I last blogged. Lots has happened in that time, but I suppose that’s a subject for another post. I’d like to write a bit about something I’ve been working on for the last week or so. You may have seen Google’s proposal for navigation transitions, and if not, I suggest reading the spec and watching the demonstration. This is something that I’ve thought about for a while previously, but never put into words. After reading Google’s proposal, I fear that it’s quite complex both to implement and to author, so this pushed me both to document my idea, and to implement a proof-of-concept.

I think Google’s proposal is based on Android’s Activity Transitions, and due to Android UI’s very different display model, I don’t think this maps well to the web. Just my opinion though, and I’d be interested in hearing peoples’ thoughts. What follows is my alternative proposal. If you like, you can just jump straight to a demo, or view the source. Note that the demo currently only works in Gecko-based browsers – this is mostly because I suck, but also because other browsers have slightly inscrutable behaviour when it comes to adding stylesheets to a document. This is likely fixable, patches are most welcome.

 Navigation Transitions specification proposal


This proposal suggests an API that allows transitions to be performed between page navigations using only CSS. The API is intended to be flexible enough to allow animations on different pages to run in synchronisation, and to allow selection of particular transition states, without it being necessary to interject with JavaScript.

Proposed API

Navigation transitions will be specified within a specialised stylesheet. These stylesheets will be included in the document as new link rel types. Transitions can be specified for entering and exiting the document. When the document is ready to transition, these stylesheets will be applied for the specified duration, after which they will stop applying.

Example syntax:

<link rel="transition-enter" duration="0.25s" href="URI" />
<link rel="transition-exit" duration="0.25s" href="URI" />

When navigating to a new page, the current page’s ‘transition-exit’ stylesheet will be referenced, and the new page’s ‘transition-enter’ stylesheet will be referenced.

When navigation operates in a backwards direction (the user pressing the back button in browser chrome, or navigation initiated from JavaScript via manipulation of the location or history objects), animations will be run in reverse. That is, the current page’s ‘transition-enter’ stylesheet will be referenced and its animations run in reverse, and the old page’s ‘transition-exit’ stylesheet will be referenced and its animations also run in reverse.


Anne van Kesteren suggests that forcing this to be a separate stylesheet and putting the duration information in the tag is not desirable, and that it would be nicer to expose this as a media query, with the duration information available in an @-rule. Something like this:

@viewport {
  navigate-away-duration: 500ms;
}

@media (navigate-away) {
  /* ... */
}
I think this would indeed be nicer, though I think the exact naming might need some work.


When a navigation is initiated, the old page will stay at its current position and the new page will be overlaid over the old page, but hidden. Once the new page has finished loading, it will be unhidden, and the old page’s ‘transition-exit’ and the new page’s ‘transition-enter’ stylesheets will be applied, each for its specified duration.

When navigating backwards, the CSS animations timeline will be reversed. This will have the effect of modifying the meaning of animation-direction like so:

Forwards          | Backwards
normal            | reverse
reverse           | normal
alternate         | alternate-reverse
alternate-reverse | alternate

and this will also alter the start time of the animation, depending on the declared total duration of the transition. For example, if a navigation stylesheet is declared to last 0.5s and an animation has a duration of 0.25s, when navigating backwards, that animation will effectively have an animation-delay of 0.25s and run in reverse. Similarly, if it already had an animation-delay of 0.1s, the animation-delay going backwards would become 0.15s, to reflect the time when the animation would have ended.
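The timing rule above can be sketched as a small helper (my own illustration, not part of the proposal's text): reversing a transition of length total turns an animation that ran over [delay, delay + duration] into one delayed by total - (delay + duration), alongside the animation-direction mapping from the table.

```javascript
// Reversed-timing rule from the text: going backwards, an animation
// must end where it used to start, so its new delay is the time left
// after it would have finished in the forwards direction.
function reversedDelay(total, duration, delay = 0) {
  return total - (delay + duration);
}

// Forwards -> backwards mapping of animation-direction from the table.
const reversedDirection = {
  'normal': 'reverse',
  'reverse': 'normal',
  'alternate': 'alternate-reverse',
  'alternate-reverse': 'alternate',
};

console.log(reversedDelay(0.5, 0.25));       // 0.25, as in the example
console.log(reversedDelay(0.5, 0.25, 0.1));  // ≈ 0.15
```

This reproduces both worked numbers from the paragraph above: a 0.25s animation in a 0.5s transition gains a 0.25s delay when reversed, and an existing 0.1s delay becomes 0.15s.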

Layer ordering will also be reversed when navigating backwards, that is, the page being navigated from will appear on top of the page being navigated backwards to.


When a transition starts, a ‘navigation-transition-start’ NavigationTransitionEvent will be fired on the destination page. When this event is fired, the document will have had the applicable stylesheet applied and it will be visible, but will not yet have been painted on the screen since the stylesheet was applied. When the navigation transition duration is met, a ‘navigation-transition-end’ NavigationTransitionEvent will be fired on the destination page. These events can be used, amongst other things, to tidy up and to initialise state. They can also be used to modify the DOM before the transition begins, allowing the transition to be customised based on request data.
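A page might hook these events as sketched below. This is a hypothetical usage example, not from the proposal's text: a plain EventTarget stands in for document so the sketch is self-contained, and the listener bodies are placeholders.

```javascript
// Stand-in for `document`; in a real page the listeners would be
// registered on document itself. Event names come from the proposal.
const doc = new EventTarget();
let started = false;
let ended = false;

doc.addEventListener('navigation-transition-start', () => {
  // The transition stylesheet is applied and the page is visible but
  // not yet painted: last chance to adjust the DOM for this transition.
  started = true;
});

doc.addEventListener('navigation-transition-end', () => {
  // The declared duration has elapsed: tidy up transition-only state.
  ended = true;
});

// Simulate the user agent firing the events around a transition.
doc.dispatchEvent(new Event('navigation-transition-start'));
doc.dispatchEvent(new Event('navigation-transition-end'));
```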

JavaScript execution could potentially cause a navigation transition to run indefinitely; it is left to the user agent’s general-purpose JavaScript hang detection to mitigate this circumstance.

Considerations and limitations

Navigation transitions will not be applied if the new page does not finish loading within 1.5 seconds of its first paint. This can be mitigated by pre-loading documents, or by the use of service workers.

Stylesheet application duration will be timed from the first render after the stylesheets are applied. This should either synchronise exactly with CSS animation/transition timing, or it should be longer, but it should never be shorter.

Authors should be aware that using transitions will temporarily increase the memory footprint of their application during transitions. This can be mitigated by clear separation of UI and data, and/or by using JavaScript to manipulate the document and state when navigating to avoid keeping unused resources alive.

Navigation transitions will only be applied if both the navigating document has an exit transition and the target document has an enter transition. Similarly, when navigating backwards, the navigating document must have an enter transition and the target document must have an exit transition. Both documents must be on the same origin, or transitions will not apply. The exception to these rules is the first document load of the navigator. In this case, the enter transition will apply if all prior considerations are met.
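The applicability rules above can be summarised as a small predicate. This is a sketch of my own with hypothetical field names (hasEnter/hasExit stand for the presence of the respective transition stylesheets; the origins are made up):

```javascript
// Sketch of the applicability rules; `from === null` models the
// navigator's first document load, which needs only an enter
// transition on the target.
function transitionApplies(from, to, backwards = false) {
  if (from === null) return to.hasEnter;         // first document load
  if (from.origin !== to.origin) return false;   // must share an origin
  return backwards
    ? from.hasEnter && to.hasExit                // backwards direction
    : from.hasExit && to.hasEnter;               // forwards direction
}

const pageA = { origin: 'https://example.com', hasEnter: true, hasExit: true };
const pageB = { origin: 'https://example.com', hasEnter: true, hasExit: false };

console.log(transitionApplies(pageA, pageB));        // true
console.log(transitionApplies(pageB, pageA, true));  // true (backwards)
console.log(transitionApplies(null, pageB));         // true (first load)
```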

Default transitions

It is possible for the user agent to specify default transitions, so that navigation within a particular origin will always include navigation transitions unless they are explicitly disabled by that origin. This can be done by specifying navigation transition stylesheets with no href attribute, or that have an empty href attribute.

Note that specifying default transitions in all situations may not be desirable due to the differing loading characteristics of pages on the web at large.

It is suggested that default transition stylesheets may be specified by extending the iframe element with custom ‘default-transition-enter’ and ‘default-transition-exit’ attributes.


Simple slide between two pages:


page-1.html:

  <link rel="transition-exit" duration="0.25s" href="page-1-exit.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: red;
    }
  </style>
  <div id="bg" onclick="window.location='page-2.html'"></div>

page-1-exit.css:

#bg {
  animation-name: slide-left;
  animation-duration: 0.25s;
}

@keyframes slide-left {
  from {}
  to { transform: translateX(-100%); }
}

page-2.html:

  <link rel="transition-enter" duration="0.25s" href="page-2-enter.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: green;
    }
  </style>
  <div id="bg" onclick="history.back()"></div>

page-2-enter.css:

#bg {
  animation-name: slide-from-left;
  animation-duration: 0.25s;
}

@keyframes slide-from-left {
  from { transform: translateX(100%) }
  to {}
}
I believe that this proposal is easier to understand and use for simpler transitions than Google’s; however, it becomes harder to express animations where one element is transitioning to a new position/size on a new page, and it’s also impossible to interleave content between the two pages (as the pages will always draw separately, in the predefined order). I don’t believe this last limitation is a big issue, however, and I don’t think the cognitive load required to craft such a transition is considerably higher. In fact, you can see it demonstrated by visiting this link in a Gecko-based browser (recommended viewing in responsive design mode, Ctrl+Shift+M).

I would love to hear peoples’ thoughts on this. Am I actually just totally wrong, and Google’s proposal is superior? Are there huge limitations in this proposal that I’ve not considered? Are there security implications I’ve not considered? It’s highly likely that parts of all of these are true and I’d love to hear why. You can view the source for the examples in your browser’s developer tools, but if you’d like a way to check it out more easily and suggest changes, you can also view the git source repository.

by Chris Lord at April 24, 2015 09:26 AM

April 19, 2015

Ross Burton

Cycling Dad Nirvana Approaches

Last week Alex wanted to go for a bike ride so we had a play on the local pump track and some of the cheeky trails hidden away nearby. This was his first ride off the pavements so I was cautious but much fun was had by Alex and he's spent most of the last week talking about the ride and in particular the pump track. There's a pretty good one at Thetford Forest now so now that Spring has (mostly) sprung we decided to have a family day out and give Isla some proper practise at riding her new bike.

Family ride at Thetford Forest

Family ride at Thetford Forest

For the start of the ride it was me and Alex riding ahead with Vicky riding alongside Isla whilst she practised the hard bit of stopping and starting. It didn't take long before we heard a loud "COMING THROUGH!" and Isla flew by. A few kilometres down the Shepherd trail we decided to head back (badly, I can't recall the Shepherd route at all) before little legs tired and find the pump track. There were a few tumbles; Alex was getting tired and there's a tight berm with a very loose straight line. Isla of course had the confidence of a bold little sister and wanted a go, which led to a great double face-plant when I was running alongside guiding her around. Nothing a bit of Savlon won't solve though, and ice cream made it all better!

All in all, a good day: Alex had a good ride and fun on the pump track, and Isla is massively more confident on her bike, especially when the path isn't new-pavement-smooth. Not panicking when the path is a bit "bumpy lumpy" is an important skill when riding alongside traffic!

by Ross Burton at April 19, 2015 08:44 PM

April 18, 2015

Emmanuele Bassi

Dream Road

Right at the moment I’m writing this blog post, the Endless Kickstarter campaign page looks like this:

With 26 days to spare

I’m incredibly humbled and proud. Thank you all so much for your support and your help in bringing Endless to the world.

The campaign goes on, though; we added various new perks, including:

  • the option to donate an Endless computer to Habitat for Humanity or Funsepa, two charities that are involved in housing and education projects in developing countries
  • the full package — computer, carabiner, mug, and t-shirt; this one ships everywhere in the world, while we’re still working out the kinks of international delivery of the merch

Again, thank you all for your support.

by ebassi at April 18, 2015 11:27 AM

April 14, 2015

Emmanuele Bassi

High Leap

I’ve been working at Endless for two years, now.

I’m incredibly lucky to be working at a great company, with great colleagues, on cool projects, using technologies I love, towards a goal I care deeply about.

We’ve been operating a bit under the radar for a while, but now it’s time to unveil what we’ve been doing — and we’re doing it via a Kickstarter campaign:

The computer for the entire world

The OS for the entire world


It’s been an honour and a privilege working on this little, huge project for the past two years, and I can’t wait to see what another two years are going to bring us.

by ebassi at April 14, 2015 01:00 PM

April 13, 2015

Hylke Bons

San Francisco Impressions

Had the opportunity to visit San Francisco for two weeks in March; it was great. Hope to be back there soon.

by Hylke Bons at April 13, 2015 12:03 AM

March 07, 2015

Damien Lespiau

shave 0.1.0

After a month without anyone shouting at shave in despair or horror, it's time to tag something to have a "stable" branch so people can rely on a stable interface (yes, it's important even for a 100-line macro!).

What's most amazing is that quite a few projects in the GNOME and other communities have adopted shave: Clutter, Niepce Digital, Giggle, GStreamer, GObject introspection, PulseAudio, ConnMan, Json-glib, libunique, gnote, seed, gnome-utils, libccss, xorg… and maybe some more I've forgotten or don't even know about.

You can grab the tarball or clone the git repository (git clone git://) and have a look at the README file.

Time to celebrate.

by Damien Lespiau at March 07, 2015 09:16 PM