Planet Closed Fist

June 01, 2016

Chris Lord

Open Source Speech Recognition

I’m currently working on the Vaani project at Mozilla, and part of my work on that allows me to do some exploration around the topic of speech recognition and speech assistants. After looking at some of the commercial offerings available, I thought that if we were going to do some kind of add-on API, we’d be best off aping the Amazon Alexa skills JS API. Amazon Echo appears to be doing quite well and people have written a number of skills with their API. There isn’t really any alternative right now, but I actually happen to think their API is quite well thought out and concise, and maps well to the sort of data structures you need to do reliable speech recognition.

So skipping forward a bit, I decided to prototype with Node.js and some existing open source projects to implement an offline version of the Alexa skills JS API. Today it’s gotten to the point where it’s actually usable (for certain values of usable) and I’ve just spent the last 5 minutes asking it to tell me Knock-Knock jokes, so rather than waste any more time on that, I thought I’d write this about it instead. If you want to try it out, check out this repository and run npm install in the usual way. You’ll need pocketsphinx installed for that to succeed (install sphinxbase and pocketsphinx from github), and you’ll need espeak installed and some skills for it to do anything interesting, so check out the Alexa sample skills and sym-link the ‘samples’ directory as a directory called ‘skills’ in your ferris checkout directory. After that, just run the included example file with node and talk to it via your default recording device (hint: say ‘launch wise guy’).

Hopefully someone else finds this useful – I’ll be using this as a base to prototype further voice experiments, and I’ll likely be extending the Alexa API further in non-standard ways. What was quite neat about all this was just how easy it all was. The Alexa API is extremely well documented, Node.js is also extremely well documented and just as easy to use, and there are tons of libraries (of varying quality…) to do what you need to do. The only real stumbling block was pocketsphinx’s lack of documentation (there’s no documentation at all for the Node bindings and the C API documentation is pretty sparse, to say the least), but thankfully other members of my team are much more familiar with this codebase than I am and I could lean on them for support.

I’m reasonably impressed with the state of lightweight open source voice recognition. This is easily good enough to be useful if you can limit the scope of what you need to recognise, and I find the Alexa API is a great way of doing that. I’d be interested to know how close the internal implementation is to how I’ve gone about it if anyone has that insider knowledge.

by Chris Lord at June 01, 2016 04:54 PM

May 03, 2016

Damien Lespiau

Testing for pending migrations in Django

DB migration support was added in Django 1.7, superseding South. More specifically, it's possible to automatically generate migration steps when one or more changes in the application models are detected. Definitely a nice feature!

I've written a small generic unit-test that one should be able to drop into the tests directory of any Django project and that checks there are no pending migrations, ie. that the models are correctly in sync with the migrations declared in the application. Handy to check that nobody has forgotten to git add the migration file or that an innocent-looking change in models.py doesn't need a migration step generated. Enjoy!
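
For reference, here is a minimal sketch of that kind of test (not the exact snippet linked below, and assuming Django 1.7+); it diffs the models against the migrations on disk using the migration autodetector:

from django.apps import apps
from django.db import connection
from django.db.migrations.autodetector import MigrationAutodetector
from django.db.migrations.executor import MigrationExecutor
from django.db.migrations.state import ProjectState
from django.test import TestCase

class PendingMigrationsTests(TestCase):
    def test_no_pending_migrations(self):
        # Load the migrations currently declared on disk.
        executor = MigrationExecutor(connection)
        # Diff them against the current state of the models.
        autodetector = MigrationAutodetector(
            executor.loader.project_state(),
            ProjectState.from_apps(apps),
        )
        changes = autodetector.changes(graph=executor.loader.graph)
        # Any detected change means a migration is missing from the tree.
        self.assertEqual({}, changes, "Model changes with no matching migration")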

See the code on djangosnippets or as a github gist!

by Damien Lespiau (noreply@blogger.com) at May 03, 2016 05:10 PM

March 08, 2016

Chris Lord

State of Embedding in Gecko

Following up from my last post, I’ve had some time to research and assess the current state of embedding Gecko. This post will serve as a (likely incomplete) assessment of where we are today, and what I think the sensible path forward would be. Please note that these are my personal opinions and not those of Mozilla. Mozilla are gracious enough to employ me, but I don’t yet get to decide on our direction 😉

The TL;DR: there are no first-class Gecko embedding solutions as of writing.

EmbedLite (aka IPCLite)

EmbedLite is an interesting solution for embedding Gecko that relies on e10s (Electrolysis, Gecko’s out-of-process feature code-name) and OMTC (Off-Main-Thread Compositing). From what I can tell, the embedding app creates a new platform-specific compositor object that attaches to a window, and with e10s, a separate process is spawned to handle the brunt of the work (rendering the site, running JS, handling events, etc.). The existing widget API is exposed via IPC, which allows you to synthesise events, handle navigation, etc. This builds using the xulrunner application target, which unfortunately no longer exists. This project was last synced with Gecko on April 2nd 2015 (the day before my birthday!).

The most interesting thing about this project is how much code it reuses in the tree, and how little modification is required to support it (almost none – most of the changes are entirely reasonable, even outside of an embedding context). That we haven’t supported this effort seems insane to me, especially as it’s been shipping for a while as the basis for the browser in the (now defunct?) Jolla smartphone.

Building this was a pain: on Fedora 22 I was not able to get the desktop Qt build to compile, even after some effort, but I was able to compile the desktop Gtk build (trivial patches required). Unfortunately, there’s no support code provided for the Gtk version and I don’t think it’s worth my time implementing that, given that this is essentially a dead project. A huge shame that we missed this opportunity; this would have been a good base for a lightweight, relatively easily maintained embedding solution. The quality of the work done on this seems quite high to me, after a brief examination.

Spidernode

Spidernode is a port of Node.js that uses Gecko’s ‘spidermonkey’ JavaScript engine instead of Chrome’s V8. Not really a Gecko embedding solution, but certainly something worth exploring as a way to enable more people to use Mozilla technology. Being a much smaller project, of much more limited scope, I had no issues building and testing this.

Node.js using spidermonkey ought to provide some interesting advantages over a V8-based Node. Namely, modern language features, asm.js (though I suppose this will soon be supplanted by WebAssembly) and speed. Spidernode has unfortunately been unmaintained since early 2012, but I thought it would be interesting to do a simple performance test. Using the (very flawed) technique detailed here, I ran a few quick tests to compare an old copy of Node I had installed (~0.12), current stable Node (4.3.2) and this very old (~0.5) Spidermonkey-based Node. Spidermonkey-based Node was consistently over 3x faster than both old and current Node (which varied very little in performance). I don’t think you can really draw any conclusions from this, other than that it’s an avenue worth exploring.

Many new projects are prototyped (and indeed, fully developed) in Node.js these days, particularly Internet-of-Things projects. If there’s the potential for these projects to run faster, unchanged, this seems like a worthy project to me, even forgetting about the advantages of better language support. It’s sad to me that we’re experimenting with IoT projects here at Mozilla and so many of these experiments don’t promote our technology at all. This may be an irrational response, however.

GeckoView

GeckoView is the only currently maintained embedding solution for Gecko, and is Android-only. GeckoView is an Android project, split out of Firefox for Android and using the same interfaces with Gecko. It provides an embeddable widget that can be used instead of the system-provided WebView. This is not a first-class project from what I can tell; there are many bugs and many missing features, as its use outside of Firefox for Android is not considered a priority. Due to this dependency, however, one would assume that at least GeckoView will see updates for the foreseeable future.

I’d experimented with this in the past, specifically with this project that uses GeckoView with Cordova. I found then that the experience wasn’t great, due to the huge size of the GeckoView library and the numerous bugs, but this was a while ago and YMMV. Some of those bugs were down to GeckoView not using the shared APZC, a bug which has since been fixed, at least for Nightly builds. The situation may be better now than it was then.

The Future

This post is built on the premise that embedding Gecko is a worthwhile pursuit. Others may disagree about this. I’ll point to my previous post to list some of the numerous opportunities we missed, partly because we don’t have an embedding story, but I’m going to conjecture as to what some of our next missed opportunities might be.

IoT is generating a lot of buzz at the moment. I’m dubious that there’s much decent consumer use of IoT, at least that people will get excited about as opposed to property developers, but if I could predict trends, I’d have likely retired rich already. Let’s assume that consumer IoT will take off, beyond internet-connected thermostats (which are actually pretty great) and metered utility boxes (which I would quite like). These devices are mostly bespoke hardware running random bits and bobs, but an emerging trend seems to be Node.js usage. It might be important for Mozilla to provide an easily deployed out-of-the-box solution here. As our market share diminishes, so does our test-bed and contribution base for our (currently rather excellent) JavaScript engine. While we don’t have an issue here at the moment, if we find that a huge influx of diverse, resource-constrained devices starts running V8 and only V8, we may eventually find it hard to compete. It could easily be argued that it isn’t important for our solution to be based on our technology, but I would argue that if we have to start employing a considerable amount of people with no knowledge of our platform, our platform will suffer. By providing a licensed out-of-the-box solution, we could also enforce that any client-side interface remain network-accessible and cross-browser compatible.

A less tenuous example, let’s talk about VR. VR is also looking like it might finally break out into the mid/high-end consumer realm this year, with heavy investment from Facebook (via Oculus), Valve/HTC (SteamVR/Vive), Sony (Playstation VR), Microsoft (HoloLens), Samsung (GearVR) and others. Mozilla are rightly investing in WebVR, but I think the real end-goal for VR is an integrated device with no tether (certainly Microsoft and Samsung seem to agree with me here). So there may well be a new class of device on the horizon, with new kinds of browsers and ways of experiencing and integrating the web. Can we afford to not let people experiment with our technology here? I love Mozilla, but I have serious doubts that the next big thing in VR is going to come from us. That there’s no supported way of embedding Gecko worries me for future classes of device like this.

In-vehicle information/entertainment systems are possibly something that will become more of the norm, now that similar devices have become such a commodity. Interestingly, the current big desktop and mobile players have very little presence here, and (mostly awful) bespoke solutions are rife. Again, can we afford to make our technology inaccessible to the people that are experimenting in this area? Is having just a good desktop browser enough? Can we really say that’s going to remain how people access the internet for the next 10 years? Probably, but I wouldn’t want to bet everything on that.

A plan

If we want an embedding solution, I think the best way to go about it is to start from Firefox for Android. Due to the way Android used to require its applications to interface with native code, Firefox for Android is already organised in such a way that it is basically an embedding API (thus GeckoView). From this point, I think we should make some of the interfaces slightly more generic and remove the JNI dependency from the Gecko-side of the code. Firefox for Android would be the main consumer of this API and would guarantee that it’s maintained. We should allow for it to be built on Linux, Mac and Windows and provide the absolute minimum harness necessary to allow for it to be tested. We would make no guarantees about API or ABI. Externally to the Gecko tree, I would suggest that we start, and that the community maintain, a CEF-compatible library, at least at the API level, that would be a Tier-3 project, much like Firefox OS now is. This, to me, seems like the minimal-effort and most useful way of allowing embeddable Gecko.

In addition, I think we should spend some effort in maintaining a fork of Node.js LTS that uses spidermonkey. If we can promise modern language features and better performance, I expect there’s a user-base that would be interested in this. If there isn’t, fair enough, but I don’t think current experiments have had enough backing to ascertain this.

I think that both of these projects are important, so that we can enable people outside of Mozilla to innovate using our technology, and by osmosis, become educated about our mission and hopefully spread our ideals. Other organisations will do their utmost to establish a monopoly in any new emerging market, and I think it’s a shame that we have such a powerful and comprehensive technology platform and we aren’t enabling other people to use it in more diverse situations.

This post is some insightful further reading on roughly the same topic.

by Chris Lord at March 08, 2016 05:22 PM

February 24, 2016

Chris Lord

The case for an embeddable Gecko

Strap yourself in, this is a long post. It should be easy to skim, but the history may be interesting to some. I would like to make the point that, for a web rendering engine, being embeddable is a huge opportunity, show how Gecko not being easily embeddable has meant we’ve missed several opportunities over the last few years, and argue that it would still be advantageous to make Gecko embeddable.

What?

Embedding Gecko means making it easy to use Gecko as a rendering engine in an arbitrary 3rd party application on any supported platform, and maintaining that support. An embeddable Gecko should impose very few constraints on the embedding application and should not include unnecessary resources.

Examples

  • A 3rd party browser with a native UI
  • A game’s embedded user manual
  • OAuth authentication UI
  • A web application
  • ???

Why?

It’s hard to predict what the next technology trend will be, but there’s a strong likelihood it’ll involve the web, and there’s a possibility it may not come from a company/group/individual with an existing web rendering engine or particular allegiance. It’s important for the health of the web and for Mozilla’s continued existence that there be multiple implementations of web standards, and that there be real competition and a balanced share of users of the various available engines.

Many technologies have emerged over the last decade or so that have incorporated web rendering or web technologies and that could have leveraged Gecko:

(2007) iPhone: Instead of using an existing engine, Apple forked KHTML in 2002 and eventually created WebKit. They did investigate Gecko as an alternative, but forking another engine with a cleaner code-base ended up being a more viable route. Several rival companies were also interested in and investing in embeddable Gecko (primarily Nokia and Intel). WebKit would go on to be one of the core pieces of the first iPhone release, which included a better mobile browser than had ever been seen previously.

(2008) Chrome: Google released a WebKit-based browser that would eventually go on to eat a large part of Firefox’s user base. Chrome was initially praised for its speed and light-weightedness, but much of that was down to its multi-process architecture, something made possible by WebKit having a well thought-out embedding capability and API.

(2008) Android: Android used WebKit for its built-in browser and later for its built-in web-view. In recent times, it has switched to Chromium, showing they aren’t averse to switching the platform to a different/better technology, and that a better embedding story can benefit a platform (Android’s built-in web view can now be updated outside of the main OS, and this may well partly be thanks to Chromium’s embedding architecture). Given the quality of Android’s initial WebKit browser and WebView (which was, frankly, awful until later revisions of Android Honeycomb, and arguably remained awful until they switched to Chromium), it’s not much of a leap to think they may have considered Gecko were it easily available.

(2009) WebOS: Nothing came of this in the end, but it perhaps signalled the direction of things to come. WebOS survived and went on to be the core of LG’s Smart TV, one of the very few real competitors in that market. Perhaps if Gecko was readily available at this point, we would have had a large head start on FirefoxOS?

(2009) Samsung Smart TV: Also available in various other guises since 2007, Samsung’s Smart TV is certainly the most popular smart TV platform currently available. It appears Samsung built this from scratch in-house, but it includes many open-source projects. It’s highly likely that they would have considered a Gecko-based browser if it were possible and available.

(2011) PhantomJS: PhantomJS is a headless, scriptable browser, useful for testing site behaviour and performance. It’s used by several large companies, including Twitter, LinkedIn and Netflix. Had Gecko been more easily embeddable, such a product may well have been based on Gecko, and the benefit would have been that the many sites using PhantomJS for testing might have better rendering and performance characteristics on Gecko-based browsers. The demand for a Gecko-based alternative is high enough that a similar project, SlimerJS, based on Gecko was developed and released in 2013. Due to Gecko’s embedding deficiencies though, SlimerJS is not truly headless.

(2011) WIMM One: The first truly capable smart-watch, which generated a large buzz when initially released. WIMM was based on a highly-customised version of Android, and ran software that was compatible with Android, iOS and BlackBerryOS. Although it never progressed past the development kit stage, WIMM was bought by Google in 2012. It is highly likely that WIMM’s work forms the base of the Android Wear platform, released in 2014. Had something like WebOS been open, available and based on Gecko, it’s not outside the realm of possibility that this could have been Gecko based.

(2013) Blink: Google decide to fork WebKit to better build for their own uses. Blink/Chromium quickly becomes the favoured rendering engine for embedding. Google were not afraid to introduce possible incompatibility with WebKit, but also realised that embedding is an important feature to maintain.

(2014) Android Wear: Android specialised to run on watch hardware. Smart watches have yet to take off, and possibly never will (though Pebble seem to be doing alright, and every major consumer tech product company has launched one), but this is yet another area where Gecko/Mozilla have no presence. FirefoxOS may have lead us to have an easy presence in this area, but has now been largely discontinued.

(2014) Atom/Electron: Github open-sources and makes available its web-based text editor, built on a home-grown platform of Node.js and Chromium that it later called Electron. Since then, several large and very successful projects have been built on top of it, including Slack and Visual Studio Code. It’s highly likely that such diverse use of Chromium feeds back into its testing and development, making it a more robust and performant engine, and importantly, more widely used.

(2016) Brave: Former Mozilla co-founder and CTO heads a company that makes a new browser with the selling point of blocking ads and tracking by default, and doing as much as possible to protect user privacy and agency without breaking the web. Said browser is based off of Chromium, and on iOS, is a fork of Mozilla’s own WebKit-based Firefox browser. Brendan says they started based off of Gecko, but switched because it wasn’t capable of doing what they needed (due to an immature embedding API).

Current state of affairs

Chromium and V8 represent the state-of-the-art embeddable web rendering engine and JavaScript engine and have wide and varied use across many platforms. This helps reinforce Chrome’s behaviour as the de facto standard and gradually eats away at the market share of competing engines.

WebKit is the only viable alternative for an embeddable web rendering engine and is still quite commonly used, but is generally viewed as a less up-to-date and less performant engine vs. Chromium/Blink.

Spidermonkey is generally considered to be a very nice JavaScript engine with great support for new EcmaScript features and generally great performance, but due to a rapidly changing API/ABI, doesn’t challenge V8 in terms of its use in embedded environments. Node.js is likely the largest user of embeddable V8, and is favoured even by Mozilla employees for JavaScript-based systems development.

Gecko has limited embedding capability that is not well-documented, not well-maintained and not heavily invested in. I say this with the utmost respect for those who are working on it; this is an observation and a criticism of Mozilla’s priorities as an organisation. We have at various points in history had embedding APIs/capabilities, but we have either dropped them (gtkmozembed) or let them bit-rot (IPCLite). We do currently have an embedding widget for Android that is very limited in capability when compared to the default system WebView.

Plea

It’s not too late. It’s incredibly hard to predict where technology is going, year-to-year. It was hard to predict, prior to the iPhone, that Nokia would so spectacularly fall from the top of the market. It was hard to predict when Android was released that it would ever overtake iOS, or even more surprisingly, rival it in quality (hard, but not impossible). It was hard to predict that WebOS would form the basis of a major competing Smart TV several years later. I think the examples of our missed opportunities are also good evidence that opening yourself up to as much opportunity as possible is a good indicator of future success.

If we want to form the basis of the next big thing, it’s not enough to be experimenting in new areas. We need to enable other people to experiment in new areas using our technology. Even the largest of companies have difficulty predicting the future, or taking charge of it. This is why it’s important that we make easily-embeddable Gecko a reality, and I plead with the powers that be that we make this higher priority than it has been in the past.

by Chris Lord at February 24, 2016 06:10 PM

February 15, 2016

Damien Lespiau

Augmenting mailing-lists with Patchwork - Another try

The mailing-list problem


Many software projects use mailing-lists, which usually means mailman, not only for discussions around that project, but also for code contributions. A lot of open source projects work that way, including the one I interact with the most, the Linux kernel. A contributor sends patches to a mailing list, these days using git send-email, and waits for feedback or for his/her patches to be picked up for inclusion if fortunate enough.

Problem is, mailing-lists are awful for code contribution.

A few of the issues at hand:
  • Dealing with patches and emails can be daunting for new contributors,
  • There's no feedback that someone will look into the patch at some point,
  • There's no tracking of which patch has been processed (eg. included into the tree). A shocking number of patches are just dropped as a direct consequence,
  • There's no way to add metadata to a submission. For instance, we can't assign a reviewer from a pool of people working on the project. As a result, review is only working thanks to the good will of people. It's not necessarily a bad thing, but it doesn't work in a corporate environment with deadlines,
  • Mailing-lists are all or nothing: one subscribes to the activity of the full project, but may only care about following the progress of a couple of patches,
  • There's no structure at all actually, it's all just emails,
  • No easy way to hook continuous integration testing,
  • The tools are really bad any time they need to interact with the mailing-list: try to send a patch as a reply to a review comment, addressing it. It starts with going to look at the headers of the review email to copy/paste its Message-ID, followed by an arcane incantation:
    $ git send-email --to=<mailing-list> --cc=<reviewer> \
    --in-reply-to=<reviewer-mail-message-id> \
    --reroll-count 2 -1 HEAD~2

Alternative to mailing-lists


Before mentioning Patchwork, it's worth saying that a project can simply decide to switch to something other than a mailing-list to handle code contributions. To name a few: Gerrit, Phabricator, Github, Gitlab, Crucible.

However, there can be some friction preventing the adoption of those tools. People have built their own workflow around mailing-lists for years and it's somewhat difficult to adopt anything else overnight. Projects can be big with no clear way to make decisions, so sticking to mailing-lists can just be the result of inertia.

The alternatives also have problems of their own and there's no clear winner, nothing like how git took over the world.


Patchwork


So, the path of least resistance is to keep mailing-lists. Jeremy Kerr had the idea to augment mailing-lists with a tool that would track the activity there and build a database of patches and their status (new, reviewed, merged, dropped, ...). Patchwork was born.

Here are some Patchwork instances in the wild:

The KMS and DRI Linux subsystems are using freedesktop.org to host their mailing-lists, which includes the i915 Intel driver, project I've been contributing to since 2012. We have an instance of Patchwork there, and, while somewhat useful, the tool fell short of what we really wanted to do with our code contribution process.

Patches are welcome!


So, it was time to do something about the situation and I started improving Patchwork to answer some of the problems outlined above. Given enough time, it's possible to help on all fronts.

The code can be found on github, along with the current list of issues and enhancements we have thought about. I maintain freedesktop.org's instance for the graphics team at Intel, and for any other freedesktop.org project that would like to give it a try.


Design, Design, Design


First things first, we improved how Patchwork looks and feels. Belén, of OpenEmbedded/Yocto fame, has very graciously spent some of her time to rethink how the interaction should behave.

Before, ...

... and after!

There is still a lot of work remaining to roll out the new design and the new interaction model on all of Patchwork. A glimpse of what that interaction looks like so far:



Series


One thing was clear from the start: I didn't want to have Patches as the main object tracked, but Series, a collection of patches. Typically, developing a  new feature requires more than one patch, especially with the kernel where it's customary to write a lot of orthogonal smaller commits rather than a big (and often all over the place) one. Single isolated commits, like a small bug fix, are treated as a series of one patch.

But that's not all. Series actually evolve over time as the developer answers review comments and the patch-set matures. Patchwork also tracks that evolution, creating several Revisions for the same series. This colour management series from Lionel shows that history tracking (beware, this is not the final design!).

I have started documenting what Patchwork can understand. Two ways can be used to trigger the creation of a new revision: sending a revised patch as a reply to the reviewer email or resending the full series with a similar cover letter subject.

There are many ambiguous cases and some other cases not really handled yet, one of them being sending a series as a reply to another series. That can be quite confusing for the patch submitter, but the documented flows should work.

REST API


Next is dusting off Patchwork's XML-RPC API and replacing it with a new REST API. I wanted to be able to use the same API from both the web pages and git-pw, a command line client.

This new API is close to complete enough to replace the XML-RPC one and already offers a few more features (eg. testing integration). I've also been carefully documenting it.

git-pw


Rob Clark had been asking for years for better integration with git from Patchwork's command line tool, especially sharing its configuration file. There are also a number of git "plugins" that have appeared to bridge git with various tools, like git-bz or git-phab.

Patchwork now has its own git-pw, using the REST API. Here again, more work is needed before it's in an acceptable shape, but it can already be quite handy to, for instance, apply a full series in one go:

$ git pw apply -s 122
Applying series: DP refactoring v2 (rev 1)
Applying: drm/i915: Don't pass *DP around to link training functions
Applying: drm/i915: Split write of pattern to DP reg from intel_dp_set_link_train
Applying: drm/i915 Call get_adjust_train() from clock recovery and channel eq
Applying: drm/i915: Move register write into intel_dp_set_signal_levels()
Applying: drm/i915: Move generic link training code to a separate file
...

Testing Integration



This is what kept me busy the last couple of months: how to integrate patches sent to a mailing-list with Continuous Integration systems. The flow I came up with is not very complicated, but a picture always helps:

Hooking tests to Patchwork


Patchwork exposes an API so mailing-lists are completely abstracted from systems using that API. Both retrieving the series/patches to test and sending back test results are done through HTTP. That makes testing systems fairly easy to write.
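
To make that concrete, here is a rough sketch of what such a testing system could look like (illustrative only: the endpoint paths and field names below are made up, not the actual Patchwork REST API):

import subprocess
import requests

PATCHWORK = "https://patchwork.example.org/api/1.0"  # hypothetical base URL

def test_series(series_id, revision):
    # Fetch the series as an mbox over HTTP and apply it to a local checkout.
    mbox = requests.get("%s/series/%d/revisions/%d/mbox/"
                        % (PATCHWORK, series_id, revision)).text
    subprocess.run(["git", "am"], input=mbox.encode(), check=True)

    # Run whatever test suite makes sense; the exit code is the go/no-go signal.
    state = "success" if subprocess.run(["./run-tests.sh"]).returncode == 0 else "failure"

    # Post the result back so Patchwork can record and forward it.
    requests.post("%s/series/%d/revisions/%d/test-results/"
                  % (PATCHWORK, series_id, revision),
                  json={"test_name": "basic-acceptance", "state": state})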

Tomi Sarvela hooked our test-suite, intel-gpu-tools, to patches sent to intel-gfx and we're now gating patch acceptance to the kernel driver with the result of that testing.

Of course, it's not that easy. In our case, we've accumulated some technical debt in both the driver and the test suite, which means it will take time to beat both into being a fully reliable go/no-go signal. People have been actively looking at improving the situation though (thanks!) and I have hope we can reach that reliability sooner rather than later.

As a few words of caution about the above, I'd like to remind everyone that the devil always is in the details:
  • We've restricted the automated testing to a subset of the tests we have (Basic Acceptance Tests aka BATs) to provide a quick answer to developers, but also because some of our tests aren't well bounded,
  • We have no idea how much code coverage that subset really exercises, playing with the kernel gcov support would be interesting for sure,
  • We definitely don't deal with the variety of display sinks (panels and monitors) that are present in the wild.
This means we won't catch all the i915 regressions. Time will definitely improve things as we connect more devices to the testing system and fix our tests and driver.

Anyway, let's leave i915 specific details for another time. A last thing about this testing integration is that Patchwork can be configured to send emails back to the submitter/mailing-list with some test results. As an example, I've written a checkpatch.pl integration that will tell people to fix their patches without the need of a reviewer to do it. I know, living in the future.

For more in-depth documentation about continuous testing with Patchwork, see the testing section of the manual.

What's next?


This blog post is long enough as is, so let's finish with the list of things I'd like to be in an acceptable state before I'll happily tag a first version:
  • Series support without known bugs
  • REST API and git pw able to replace XML-RPC and pwclient
  • Series, Patches and Bundles web pages ported to the REST API and the new filter/action interaction.
  • CI integration
  • Patch and Series life cycle redesigned with more automatic state changes (ie. when someone gives a reviewed-by tag, the patch state should change to reviewed)
There are plenty of other exciting ideas captured in the github issues for when this is done.

by Damien Lespiau (noreply@blogger.com) at February 15, 2016 06:12 PM

Continuous Testing with Patchwork

As promised in the post introducing my recent work on Patchwork, I've written some more in-depth documentation to explain how to hook testing to Patchwork. I've also realized that a blog post might not be the best place to put that documentation and opted to put it in the proper manual:


Happy reading!

by Damien Lespiau (noreply@blogger.com) at February 15, 2016 06:01 PM

February 13, 2016

Hylke Bons

Film developing setup that fits your backpack

It’s lots of fun developing your own black & white film. Here’s the setup I’ve been using. My goals were to keep costs down and to have a simple, compact setup that’s easy to use.

Developing tank and reel ~ £ 22

This is the main cost and you want to make it a good one. You can shop around for a second-hand one for much less.

Thermometer ~ £ 4

To make sure the solutions are at the right temperature. A glass spirit thermometer also provides a means of stirring.

Developer ~ £ 5

A 120 mL bottle of Rodinal develops about 20 rolls of film at a 1+25 dilution. You can double the dilution to 1+50 for 40 rolls, which works out to roughly 12 pence per roll! This stuff lasts forever if you store it in the dark and airtight. Rodinal is a "one shot" developer, so you toss out your dilution after use.

Fixer ~ £ 3

Fixer dilution can be reused many times, so store it after use. One liter of a 1+5 dilution fixes 17 rolls of film.

To check if your fixer dilution is still good: take a piece of cut-off film leader and put it in a small cup filled with fixer. If the film becomes transparent after a few minutes, the fixer is still good to use.

Measuring jug ~ £ 3

To mix chemicals in. Get one with a spout for easy pouring.

Spout bags ~ £ 2

These keep air out compared to using bottles, so your chemicals will last longer. They save space too. Label them well, you don’t want to mess up!

Funnel ~ £ 1

One with a small mouth, so it fits the spout bags easily when you need to pour chemicals back.

Syringe ~ £ 1

To measure the amount of developer. Around 10 to 20 mL volume will do. Make sure to get one with 1 mL marks for more accurate measuring, and a blunt needle to easily extract from the spout bag.

Common household items

You probably already have these: a clothes peg for hanging your developed film up to dry, and a pair of scissors to remove the film from the canister and to cut it into strips after drying.

Developed Ilford HP5+ film

Total ~ £ 41

As you can see, it’s only a small investment. After developing a few rolls the equipment has paid for itself, compared to sending your rolls off for processing. There’s something special about seeing your images appear on a film for the first time that’s well worth it. Like magic. :)

by Hylke Bons at February 13, 2016 07:34 PM

Lockee to the rescue

Using public computers can be a huge privacy and security risk. There’s no way you can tell who may be spying on you using key loggers or other evil software.

Some friends and family don’t see the problem at all, and use any computer to log in to personal accounts. I actually found myself not being able to recommend an easy solution here. So I decided to build a service that I hope will help remove the need to sign in to sensitive services in some cases at least.

Example

You want to use the printer at your local library to print an e-ticket. As you’re on a public computer, you really don’t want to log in to your personal email account to fetch the document, for security reasons. You’re not too bothered about your personal information on the ticket, but typing in your login details on a public computer is a cause for concern.

This is a use case I have every now and then, and I’m sure there are many other similar situations where you have to log in to a service to get some kind of file, but you don’t really want to.

Existing storage services

There are temporary file storage solutions on the internet, but most of them give out links that are long and hard to remember, ask for an email address to send the links to, are public, or have some combination of these problems. Also, you have no idea what will happen to your data.

USB drives can help sometimes, but you may not always have one handy, it might get infected, and it’s easy to forget once plugged in.

Lockee to the rescue

Lockee is a small service that temporarily hosts files for you. Seen those luggage lockers at the railway station? It’s like that, but for files.

A Lockee locker

It allows you to create temporary file lockers, with easy to remember URLs (you can name your locker anything you want). Lockers are protected using passphrases, so your file isn’t out in the open.

Files are encrypted and decrypted in the browser; there’s no record of their real content on the server side. There’s no tracking of anything either, and lockers are automatically emptied after 24 hours.

Give it a go

I’m hosting an instance of Lockee on lockee.me. The source is also available if you’d like to run your own instance or contribute.

by Hylke Bons at February 13, 2016 04:00 PM

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option or you prefer not to use them, especially if you’re a Free and Open Source project and/or Linux distribution.

Here are some tips to improve your project’s download page flow. It’s based on confusing things I frequently run into when trying to download a FOSS project, and that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are “-gui”, “-qt”, “gtk-”, and “py-”; they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why you should care. This sounds like stating the obvious, but this information is often buried in other, less important information, like which programming language and/or free software license is used. Make this section prominent on the website and keep the buzzwords to a minimum.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to distinguish between 32-bit and 64-bit systems, but they do a bad job at this. AMD is not the only company making 64-bit processors anymore, and “x86” doesn’t even mention “32-bit”.

Timestamps

Timestamps are a good way to find out if a project is actively maintained; you can’t (usually) tell from a version number when the software was released. Use human-friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort by time and clearly mark which is the latest version.

File sizes

Again, keep it human readable. I’ve seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers or use thousands separators when numbers are large to improve readability.
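
Purely as an illustration (a hypothetical helper, not something from this post), this is the kind of rounding meant here:

def human_size(num_bytes):
    # Prefer rounded, human-readable sizes: 209715200 -> "200 MB".
    for unit in ("bytes", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024 or unit == "TB":
            return "%.0f %s" % (num_bytes, unit)
        num_bytes /= 1024.0

print(human_size(209715200))  # 200 MB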

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t just dump strange random character strings on pages. Educate, or get out of the way.

Keep in mind search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signature checks rather pointless, and could even give a false sense of security.

Compression formats

Again something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, especially on Linux, we’re presented with a choice of compression formats that hardly matter in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here, though, is to not burden people with irrelevant choices and clutter up the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time required to pick one is probably going to outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

by Hylke Bons at February 13, 2016 04:00 PM

San Francisco impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

by Hylke Bons at February 13, 2016 04:00 PM

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

by Hylke Bons at February 13, 2016 04:00 PM

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt that have worked for me and my style of taking pictures, and that I wish I’d known earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear, I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, it’s a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and started playing around with them. The fact they’re all manual makes you realise quicker how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the screen at the back. I find them much more engaging and fun to use compared to fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor as it is about shooting. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware about.

Share

Don’t forget to actually have a place to post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

by Hylke Bons at February 13, 2016 04:00 PM

February 07, 2016

Damien Lespiau

libpneu first import

Wow, it's definitely hard to keep a decent pace at posting news on my blog. Nevertheless, a first import of libpneu has reached my public git repository. libpneu is an effort to make a tracing library that I could use in every single project I start. Basically, you put tracing points in your programs and libpneu prints them whenever you need to know what is happening. Different backends can be used to display traces and debug messages, from printing them to stdout to sending them over a UDP socket. More about libpneu in a few days/weeks!

A small screenshot to better understand what it does:


by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:12 PM

ADV: ADV is a Dependency Viewer

A few months ago I wrote a small script to draw a dependency graph between the object files of a library (the original idea is from Lionel Landwerlin). You'll need an archive of your library for the tool to be able to look for the needed pieces. Let's have a look at a sample of its output to understand what it does. I ran it against the HEAD of clutter.

A view of the clutter library


This graph was generated with the following (tred is part of graphviz to do transitive reductions on graphs):

$ adv.py clutter/.libs/libclutter-glx-0.9.a | tred | dot -Tsvg > clutter.svg

You can provide more than one library to the tool:

./adv.py ../clutter/clutter/.libs/libclutter-glx-0.9.a \
../glib-2.18.4/glib/.libs/libglib-2.0.a \
../glib-2.18.4/gobject/.libs/libgobject-2.0.a \
| tred | dot -Tsvg > clutter-glib-gobject-boxed.svg




What you can do with this:
  • trim down your library by removing the object files you don't need and that are leaves in the graph. This was actually the reason behind the script and it proved useful,
  • get an overview of a library,
  • make part of a library optional more easily.

To make the script work you'll need graphviz, python, ar and nm (you can provide a cross compiler prefix with --cross-prefix).

Interested? Clone it! (or look at the code)

$ git clone git://git.lespiau.name/misc/adv

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:11 PM

shave: making the autotools output sane

updated: Automake 1.11 has been released with "silent rules" support, a feature that supersedes the hack that shave is. If you can depend on automake 1.11, please consider using its silent rules rather than shave.
updated: add some gtk-doc info
updated: CXX support thanks to Tommi Komulainen

shave


Fed up with endless screens of libtool/automake output? Fed up with having to resort to -Werror to see warnings in your code? Then shave might be for you. shave transforms the messy output of autotools into a pretty Kbuild-like one (Kbuild is the Linux build system). It's composed of an m4 macro and 2 small shell scripts and it's available in a git repository.
git clone git://git.lespiau.name/shave
Hopefully, in a few minutes, you should be able to see your project compile like this:
$ make
Making all in foo
Making all in internal
CC internal-file0.o
LINK libinternal.la
CC lib-file0.o
CC lib-file1.o
LINK libfoo.la
Making all in tools
CC tool0-tool0.o
LINK tool0
Just like Kbuild, shave supports outputting the underlying commands using:
$ make V=1

Setup


  • Put the two shell scripts shave.in and shave-libtool.in in the directory of your choice (it can be at the root of your autotooled project).
  • add shave and shave-libtool to AC_CONFIG_FILES
  • add shave.m4 either in acinclude.m4 or your macro directory
  • add a call to SHAVE_INIT just before AC_CONFIG_FILES/AC_OUTPUT. SHAVE_INIT takes one argument, the directory where shave and shave-libtool are.

Custom rules


Sometimes you have custom Makefile rules, e.g. to generate a small header, run glib-mkenums or glib-genmarshal. It would be nice to output a pretty 'GEN' line. That's quite easy actually: just add a few (portable!) lines at the top of your Makefile.am:
V         = @
Q = $(V:1=)
QUIET_GEN = $(Q:@=@echo ' GEN '$@;)
and then it's just a matter of prepending $(QUIET_GEN) to the rule creating the file:
lib-file2.h: Makefile
	$(QUIET_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h

gtk-doc + shave


gtk-doc + shave + libtool 1.x (2.x is fine) is known to have a small issue; a patch is available. Meanwhile, I suggest adding a few lines to your autogen.sh script.
sed -e 's#) --mode=compile#) --tag=CC --mode=compile#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make
sed -e 's#) --mode=link#) --tag=CC --mode=link#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make

dolt + shave


It's possible to use dolt in conjunction with shave with a surprisingly small patch to dolt.

Real world example: Clutter


$ make
GEN   stamp-clutter-marshal.h
GEN   clutter-marshal.c
GEN   stamp-clutter-enum-types.h
Making all in cogl
Making all in common
CC    cogl-util.o
CC    cogl-bitmap.o
CC    cogl-bitmap-fallback.o
CC    cogl-primitives.o
CC    cogl-bitmap-pixbuf.o
CC    cogl-clip-stack.o
CC    cogl-fixed.o
CC    cogl-color.o
cogl-color.c: In function ‘cogl_set_source_color4ub’:
cogl-color.c:141: warning: implicit declaration of function ‘cogl_set_source_color’
CC    cogl-vertex-buffer.o
CC    cogl-matrix.o
CC    cogl-material.o
LINK  libclutter-cogl-common.la
[...]

Eh! now we can see a warning there!

TODO


This is a first release; shave has not been widely tested, aka it may not work for you!
  • test it with a wider range of automake/libtool versions
  • shave won't work without AC_CONFIG_HEADERS due to shell quoting problems
  • see what can be done for make install/dist (they are prettier thanks to make -s, but we probably miss a few actions)
  • there is a '-s' hardcoded in MAKEFLAGS; I have to find a way to make it more flexible

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:08 PM

After-shave

A few concerns have been raised about shave, namely not being able to debug build failures in an automated environment as easily as before, or users giving useless bug reports of failed builds.

One crucial thing to realize is that, even when compiling with make V=1, everything that was not echoed was not shown (MAKEFLAGS=-s).

Thus, I've made a few changes:
  • Add CXX support (yes, that's unrelated, but the question was raised, thanks to Tommi Komulainen for the initial patch),
  • add a --enable-shave option to the configure script,
  • make the Good Old Behaviour the default one,
  • as a side effect, the V and Q variables are now defined in the m4 macro; please remove them from your Makefile.am files.

The rationale for the last point can be summarized as follows:
  • the default behaviour is as portable as before (for non-GNU make, that is), which is not the case if shave is activated by default,
  • you can still add --enable-shave to your autogen.sh script; bootstrapping your project from a SCM will enable shave and that's cool!
  • don't break tools that were relying on automake's output.

Grab the latest version! (git://git.lespiau.name/shave)

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:06 PM

Still some hair left

I've been asked to give more input on make V=1 vs. --disable-shave, so here it is: once again, before shipping your package with shave enabled by default, there is something crucial to understand: make V=1 (when having configured your package with --enable-shave) is NOT equivalent to no shave at all (ie --disable-shave). This is because the shave m4 macro sets MAKEFLAGS=-s in every single Makefile. This means that make won't print the commands as it used to, and that the only way to print something on the screen is to echo it. That's precisely what the shave wrappers do: they echo the CC/CXX and LIBTOOL commands when V=1. So, in short, custom rules and a few automake commands won't be displayed with make V=1.

That said, it's possible to craft a rule that would display the command with shave enabled and make V=1. The following rule:
lib-file2.h: Makefile
	$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h
would become:
lib-file2.h: Makefile
	@cmd='echo "#define FOO_DEFINE 0xbabe" > lib-file2.h'; \
	if test x"$$V" = x1; then echo $$cmd; fi
	$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h
which is quite ugly, to say the least. (If you find a smarter way, please enlighten me!)

On the development side, shave is slowly becoming more mature:
  • Thanks to Jan Schmidt, shave works with non-GNU sed and echo implementations that do not support -n. It now works on Solaris, and hopefully on BSDs and various Unixes as well (not tested though).
  • SHAVE_INIT has a new, optional, parameter which empowers the programmer to define shave's default behaviour (when ./configure is run without any shave-related option): either enable or disable. ie. SHAVE_INIT([autotools], [enable]) will instruct shave to find its wrapper scripts in the autotools directory and that running ./configure will actually enable the beast. SHAVE_INIT without parameters at all is supposed to mean that the wrapper scripts are in $top_builddir and that ./configure will not enable shave without the --enable-shave option.
  • however, shave has been reported to fail miserably with scratchbox.

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:06 PM

Per project .vimrc

My natural C indentation style is basically kernel-like and my ~/.vimrc reflects that. Unfortunately I have to hack on GNUish-style projects and I really don't want to edit my ~/.vimrc every single time I switch between different indentation styles.

Modelines are evil.

To solve that terrible issue, vim can use per directory configuration files. To enable that neat feature only two little lines are needed in your ~/.vimrc:
set exrc   " enable per-directory .vimrc files
set secure " disable unsafe commands in local .vimrc files
Then it's just a matter of writing a per project .vimrc like this one:
set tabstop=8
set softtabstop=2
set shiftwidth=2
set expandtab
set cinoptions=>4,n-2,{2,^-2,:0,=2,g0,h2,t0,+2,(0,u0,w1,m1
You can find help on the wonderful cinoptions variable in the Vim documentation. Since sane people open files from the project's root directory, this works like a charm. Makefiles are special anyway, so you really should add an autocmd to your ~/.vimrc:
" add list lcs=tab:>-,trail:x for tab/trailing space visuals
autocmd BufEnter ?akefile* set noet ts=8 sw=8 nocindent

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:05 PM

Blending two RGBA 5551 layers

I've just stumbled across a small piece of code, written one year and a half ago, that blends two 512x512 RGBA 5551 images. It was originally written for a (good!) GIS, so the piece of code blends roads with rivers (and displays the result in a GdkPixbuf). The only interesting thing is that it uses some MMX, SSE2 and rdtsc instructions. You can have a look at the code in its git repository.
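For illustration, a naive scalar version of such a blend could look like the sketch below. These are my assumptions, not necessarily what the code in the repository does: the alpha component is a single bit stored in the least significant bit, and blending with a 1-bit alpha degenerates into a select where the source pixel wins when its alpha bit is set. The MMX/SSE2 versions get their speed by doing the same kind of operation on several pixels per instruction, and rdtsc is presumably just there to time it.

#include <stddef.h>
#include <stdint.h>

/* Blend a source RGBA 5551 layer onto a destination layer of the same
 * size with a simple alpha test: copy the source pixel when its alpha
 * bit (assumed to be bit 0 here) is set, keep the destination pixel
 * otherwise. */
static void
blend_rgba5551 (uint16_t *dst, const uint16_t *src, size_t n_pixels)
{
  size_t i;

  for (i = 0; i < n_pixels; i++)
    if (src[i] & 0x1)
      dst[i] = src[i];
}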

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:04 PM

Cogl + JS = Love

Played a bit with Gjs and Cogl this weekend and ended up rewriting Clutter's test-cogl-primitives in JavaScript. In the unlikely case someone is interested in trying it, you'll need a patch to support arrays of floats as arguments in introspected functions and another small patch to add introspection annotations for a few Cogl symbols. As usual you can grab the code in its git repository.


by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:03 PM

Using glib.py and gobject.py GDB scripts

Some time ago, Alexander Larsson blogged about using gdb python macros when debugging Glib and GObject projects. I've wanted to try those for ages, so I spent part of the week-end looking at what you could do with the new python-enabled GDB. The result: quite a lot of neat stuff!

Let's start by making the scripts that now come with glib work on stock gdb 7.0 and 7.1 (ie not the archer branch that contains more of the python work). If those two scripts don't work for you yet (because your distribution is not packaging them, or is packaging a stock gdb 7.0 or 7.1), here are a few hints you can follow:
  • glib's GDB macros rely on GDB's auto-load feature, ie, every time GDB loads a library your program uses, it'll look for a corresponding python script to execute:
open("/lib/libglib-2.0.so.0.2200.4-gdb.py", O_RDONLY)
open("/usr/lib/debug/lib/libglib-2.0.so.0.2200.4-gdb.py", O_RDONLY)
open("/usr/share/gdb/auto-load/lib/libglib-2.0.so.0.2200.4-gdb.py", O_RDONLY)
Some distributions have decided not to ship glib's and gobject's auto-load helpers; if that's your case, you'll need to load gobject.py and glib.py by hand. For that purpose I've added a small python command in my ~/.gdbinit:
import os.path
import sys
import gdb

# Update module path.
dir = os.path.join(os.path.expanduser("~"), ".gdb")
if not dir in sys.path:
    sys.path.insert(0, dir)

class RegisterCommand (gdb.Command):
    """Register GLib and GObject modules"""

    def __init__ (self):
        super (RegisterCommand, self).__init__ ("gregister",
                                                gdb.COMMAND_DATA,
                                                gdb.COMPLETE_NONE)

    def invoke (self, arg, from_tty):
        objects = gdb.objfiles ()
        for object in objects:
            if object.filename.find ("libglib-2.0.so.") != -1:
                from glib import register
                register (object)
            elif object.filename.find ("libgobject-2.0.so.") != -1:
                from gobject import register
                register (object)

RegisterCommand ()
What I do is put glib.py and gobject.py in a ~/.gdb directory. Don't forget to call gregister inside GDB (once gdb has loaded glib and gobject).
  • The scripts that are inside glib's repository were written with the archer branch of gdb (which brings all the python stuff). Unfortunately stock GDB (7.0 and 7.1) does not have everything the archer gdb has. I have a couple of patches to fix that in the queue. Meanwhile you can grab them in my survival kit repository. This will disable the back trace filters as they are still not in stock GDB.

You're all set! It's time to enjoy pretty printing and gforeach. Hopefully people will join the fun at some point and add more GDB python macro goodness, both inside glib and in other projects (for instance, a ClutterActor could print its name).
int main (int argc, char **argv)
{
  glist = g_list_append (glist, "first");
  glist = g_list_append (glist, "second");

  return breeeaaak_oooon_meeeee ();
}
gives:
(gdb) b breeeaaak_oooon_meeeee
Breakpoint 1 at 0x80484b7: file glib.c, line 9.
(gdb) r
Starting program: /home/damien/src/test-gdb/glib
Breakpoint 1, breeeaaak_oooon_meeeee () at glib.c:9
9        return 0;
(gdb) gregister
(gdb) gforeach s in glistp: print ((char *)$s)
No symbol "glistp" in current context.
(gdb) gforeach s in glist: print ((char *)$s)
$2 = 0x80485d0 "first"
$3 = 0x80485d6 "second"

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:02 PM

Learning how to draw


I can't draw. I've never been able to. Yet, for some reason, I decided to give it a serious try and bought a book to guide me in that journey (listening to advice from pippin, yeah I know, crazy). The first step was, like a pilgrim walking to a sacred place, to go and buy some art supplies, which turned out to be a really enjoyable experience.

The first thing you have to do is take a snapshot of your skills before reading more of the book, to be able to do a "before/after" comparison. I thought it was quite hard, but was surprised that the result was all right, by my low standards anyway. You have to do 3 drawings: a self-portrait looking at yourself in a mirror, a person/character drawn from memory without any visual help, and your hand.

The next exercise is there to make you realize that you'll have to forget everything you know and re-learn how to see in order to draw. It's about copying a drawing upside down, curve by curve, without associating any meaning to what you are doing. The result is quite surprising, as you can see on the left. Now it's a matter of learning how to do that without resorting to the upside down trick.

It's only the beginning of a long journey and so many things can go wrong, but it's worth giving it a try!

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 01:58 PM

The GStreamer conference from a Clutter point of view

Two weeks ago I attended the first GStreamer conference, and it was great. I won't talk about the 1.0 plan that seems to be taking shape and looks really good, but just about what struck me the most: Happy Clutter Stories and a Tale To Be Told to your manager.

Let's move on to the Clutter stories. There were a surprising number of people mixing GStreamer and Clutter, two talks especially:
  • Florent Thiery founder of Ubicast talked about one of their products: a portable recording system with quite a bit of bling (records the slides, movement detection with OpenCV, RoI, ...). The system was used to record the talks on the main track. Now, what was of particular interest for me is that the UI to control the system is entirely written with Clutter and python. They have built a whole toolkit on top of Clutter, in python, called candies/touchwizard and written their UI with it, cooool.
  • A very impressive talk from the Tandberg (now Cisco) guys about their Movi software, video conferencing at its finest. It uses GStreamer extensively and Clutter for its UI (on Windows!). They said that about 150,000 copies of Movi are deployed in the wild. Patches from Ole André Vadla Ravnås and Haakon Sporsheim have been flowing to Clutter and Clutter-gst (win32 support).

As a side note, Fluendo talked about their open source, Intel-funded, GStreamer codecs for the Intel CE3100/CE4100. This platform's specificities are supported natively by Clutter (./configure --with-flavour=cex100) using the native EGL winsys called "GDL" and evdev events coming from the kernel. More on this later :p

A very interesting point about those success stories is that the companies and engineers are working with open source software to build their applications, sometimes with parts heavily covered by patents, while contributing back to the ecosystem that allowed them to build those applications in the first place. Contributing is done at many levels: directly with patches, but also with feedback on the libraries/platform (eg. input for GStreamer 1.0). And guess what? It works! To me, that's exactly how the GNOME platform should be used to build proprietary applications: build on top and contribute back to consolidate the libraries. I'd go as far as saying that contributing upstream is the best way to share code inside the same big corporation. Such companies are always very bad at cooperating between divisions.

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 01:55 PM

A simple transition effect with Clutter

When doing something with graphics, you first need an idea (granted, as with pretty much everything else). In this case, a simple transition that I've seen somewhere a long time ago and wanted to reproduce with Clutter.


The code is available in a branch of a media explorer I'm currently working on. A few bullet points to follow the code:
  • As the effect needs a "screenshot" of a Clutter scene to play with, you first need to create a subclass of ClutterOffscreenEffect, as it does the work of redirecting the painting of a subtree of actors into an offscreen buffer that you can reuse to texture the rectangles you'll be animating in the effect. This subclass has a "progress" property to control the animation.
  • Then actually compute the coordinates of the grid cells, both in screen space and in texture space. To be able to use cogl_rectangles_with_texture_coords() (to try to limit the number of GL calls and/or let the Cogl journal batch them) and to ease the animation of the cells fading out, I decided to store the diagonals of the grid in a 1D array (see the sketch after these lists), so that the following grid:


    is stored as the following array:
  • ::paint_target() looks at the "progress" property, animates those grid cells accordingly and draws them. priv->rects is the array storing the initial rectangles, priv->animated_rects the animated ones, and priv->chunks stores the start and duration of each diagonal animation along with an (index, length) tuple that references the diagonal's rectangles in priv->rects and priv->animated_rects.
Some more details:
  • in the ::paint_target() function, you can special case when the progress is 0.0 (paint the whole FBO instead of the textured grid) and 1.0 (don't do anything),
  • Clutter does not currently allow to just rerun the effect when you animate a property of an offscreen effect, for instance. This means that when animating the "progress" property on the effect, it queues a redraw on the actor that ends up in the offscreen buffer to trigger the effect's ::paint_target() again. A branch from Neil allows queuing a "rerun" on the effect to avoid having to do that,
  • The code has some limitations right now (ie, n_columns must be equal to n_rows) but they are easily fixable. Once done, it makes sense to try to push the effect to Mx.
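To make the diagonal ordering a bit more concrete, here is a minimal sketch of the idea (plain C, no Clutter/Cogl involved; the Cell type and function name are made up for illustration): walk an n x n grid anti-diagonal by anti-diagonal and store the cells contiguously in a 1D array, so that each diagonal can later be referenced with an (index, length) pair.

typedef struct { int row, col; } Cell;

/* Fill cells[] (which must hold n * n entries) diagonal by diagonal:
 * (0,0), (0,1), (1,0), (0,2), (1,1), (2,0), ... Each anti-diagonal
 * (row + col == d) ends up in a contiguous slice of the array, which
 * is what makes it easy to fade a whole diagonal out as a unit. */
static void
fill_diagonals (Cell *cells, int n)
{
  int d, row, i = 0;

  for (d = 0; d < 2 * n - 1; d++)
    for (row = 0; row < n; row++)
      {
        int col = d - row;

        if (col >= 0 && col < n)
          cells[i++] = (Cell) { row, col };
      }
}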

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 01:53 PM

Clutter on Android: first results

With the release of Android 2.3, there's a decent way to integrate native applications with the NativeActivity class, an EGL library, and some C API to expose events, the main loop, etc. So, how about porting Clutter to it now that it actually looks feasible? After a few days of work, the first results are there, and they're quite promising!


There's still a fairly large number of items in my TODO before being happy with the state of this work, the most prominent items are:
  • Get a clean-up pass done to have something upstreamable; this includes finishing the event integration (it receives events but does not yet forward them to Clutter),
  • Come up with a plan to manage the application life cycle and handle the case when Android destroys the EGL surface that you were using (probably by having the app save its state, and properly tear down Clutter),
  • While you probably have the droid font installed in /system/fonts, this is not part of the advertised NDK interface. The safest choice is to embed the font you want to use with your application. Unfortunately fontconfig + freetype + pango + compressed assets in your Android package don't work really well together. Maybe solve it at the Pango level with a custom "direct" fontmap implementation that would let you register fonts from files easily?
  • What to do with text entries? show soft keyboard? Mx or Clutter problem? what happens to the GL surface in that case?
  • Better test the GMainLoop/ALooper main loop integration (esp. adding and removing file descriptors),
  • All the libraries that Clutter depends on are linked into one big .so (which is the Android NDK application). The result is a big binary (~5 MB, ~1.7 MB compressed in the .apk). That size can be dramatically reduced, sometimes at the expense of changes that will break the current API/ABI, but hell, you'll be statically linking anyway,
  • Provide "prebuilt libraries", ie. pre-compiled libraries that make it easy to just use Clutter to build applications.

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 01:38 PM

git commit --fixup and git rebase -i --autosquash

It's not unusual that I need to fix previous commits up when working  on a branch or in the review phase. Until now I used a regular commit with some special marker to remember which commit to squash it with and then git rebase -i to reorder the patches and squash the fixup commits with their corresponding "parent" commits.

Turns out, git can handle quite a few of those manual manipulations for you. git commit --fixup <commit> allows you to commit work, marking it as a fixup of a previous commit. git rebase -i --autosquash will then present the usual git rebase -i screen but with the fixup commits moved just after their parents and ready to be squashed without any extra manipulation.

For instance, I had a couple of changes to a commit buried 100 patches away from HEAD (yes, a big topic branch!):
$ git diff
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 29f3813..08ea851 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2695,6 +2695,11 @@ static void skylake_update_primary_plane(struct drm_crtc *crtc,

intel_fb = to_intel_framebuffer(fb);
obj = intel_fb->obj;
+
+ /*
+ * The stride is expressed either as a multiple of 64 bytes chunks for
+ * linear buffers or in number of tiles for tiled buffers.
+ */
switch (obj->tiling_mode) {
case I915_TILING_NONE:
stride = fb->pitches[0] >> 6;
@@ -2707,7 +2712,6 @@ static void skylake_update_primary_plane(struct drm_crtc *crtc,
BUG();
}

- plane_ctl &= ~PLANE_CTL_TRICKLE_FEED_DISABLE;
plane_ctl |= PLANE_CTL_PLANE_GAMMA_DISABLE;

I915_WRITE(PLANE_CTL(pipe, 0), plane_ctl);
And I wanted to squash those changes with commit 2021785:
$ git commit -a --fixup 2021785
git will then go ahead and create a new commit, with the subject taken from the referenced commit and prefixed with fixup!:
commit d2d278ffbe87d232369b028d0c9ee9e6ecd0ba20
Author: Damien Lespiau <damien.lespiau@intel.com>
Date: Sat Sep 20 11:09:15 2014 +0100

fixup! drm/i915/skl: Implement thew new update_plane() for primary planes
Then when using the interactive rebase with autosquash:
$ git rebase -i --autosquash drm-intel/drm-intel-nightly
The fixup commit will appear right after the referenced commit:
pick 2021785 drm/i915/skl: Implement thew new update_plane() for primary planes
fixup d2d278ff fixup! drm/i915/skl: Implement thew new update_plane() for primary planes
Validating the proposed change (in my case, by leaving vim) will squash the fixup commits. Definitely what I'll be using from now on!

Oh, and there's a config option to have git rebase automatically autosquash if there are some fixup commits:
$ git config --global rebase.autosquash true

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 12:53 PM

February 06, 2016

Damien Lespiau

A simple autotool template

Every now and then, you feel a big urge to start hacking on a small thingy and need to create Makefiles for it. It turns out that the autotools won't be that intrusive when we are talking about small programs, and you can do a reasonable job with a few lines. First the configure.ac file:
# autoconf
AC_PREREQ(2.59)
AC_INIT([fart], [0.0.1], [damien.lespiau@gmail.com])
AC_CONFIG_MACRO_DIR([build])
AC_CONFIG_AUX_DIR([build])
AC_CONFIG_SRCDIR([fart.c])
AC_CONFIG_HEADERS([config.h])

# automake
AM_INIT_AUTOMAKE([1.11 -Wall foreign no-define])
AM_SILENT_RULES([yes])

# Check for programs
AC_PROG_CC

# Check for header files
AC_HEADER_STDC

AS_COMPILER_FLAGS([WARNING_CFLAGS],
                  ["-Wall -Wshadow -Wcast-align -Wno-uninitialized
                    -Wno-strict-aliasing -Wempty-body -Wformat -Wformat-security
                    -Winit-self -Wdeclaration-after-statement -Wvla
                    -Wpointer-arith"])

PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.24])

AC_OUTPUT([
Makefile
])
and then Makefile.am:
ACLOCAL_AMFLAGS = -I build ${ACLOCAL_FLAGS}

bin_PROGRAMS = fart

fart_SOURCES = fart.c
fart_CFLAGS = $(WARNING_CFLAGS) $(GLIB_CFLAGS)
fart_LDADD = $(GLIB_LIBS)
After that, it's just a matter of running autoreconf
$ autoreconf -i
and you are all set!
So, what do you get for this amount of lines?
  • The usual set of automake targets, handy! ("make tags" is so under-used!) and bonus features (out of tree builds, extra rules to reconfigure/rebuild the Makefiles on changes in configure.ac/Makefile.am, ...)
  • Autoconf/automake kept discreet (auxiliary files put out of the way, silent mode, automake for non GNU projects)
  • Some decent warning flags (tweak to your liking!)
  • autoreconf cooperating with aclocal thanks to ACLOCAL_AMFLAGS and coping with non standard locations for system m4 macros
I'll maintain a git tree to help bootstrap my next small hacks, feel free to use it as well!

by Damien Lespiau (noreply@blogger.com) at February 06, 2016 11:59 PM

Extracting part of files with sed

For reference for my future self, a few handy sed commands. Let's consider this file:
$ cat test-sed
First line
Second line
--
Another line
Last line
We can extract the lines from the start of the file to the marker by deleting the rest:
$ sed '/--/,$d' test-sed 
First line
Second line
a,b is the range the command, here d(elete), applies to. a and b can be, among others, line numbers, regular expressions or $ for end of the file. We can also extract the lines from the marker to the end of the file with:
$ sed -n '/--/,$p' test-sed 
--
Another line
Last line
This one is slightly more complicated. By default sed prints all the lines it receives as input; '-n' is there to tell sed not to do that. The rest of the expression is to p(rint) the lines between -- and the end of the file.
That's all folks!

by Damien Lespiau (noreply@blogger.com) at February 06, 2016 11:55 PM

A git pre-commit hook to check the year of copyright notices

Like every year, touching a source file means you also need to update the year of the copyright notice you should have at the top of the file. I always end up forgetting about them; this is where a git pre-commit hook is ultra-useful, so I wrote one:
#!/bin/sh
#
# Check if copyright statements include the current year
#
files=`git diff --cached --name-only`
year=`date +"%Y"`

for f in $files; do
    head -10 $f | grep -i copyright 2>&1 1>/dev/null || continue

    if ! grep -i -e "copyright.*$year" $f 2>&1 1>/dev/null; then
        missing_copyright_files="$missing_copyright_files $f"
    fi
done

if [ -n "$missing_copyright_files" ]; then
    echo "$year is missing in the copyright notice of the following files:"
    for f in $missing_copyright_files; do
        echo "  $f"
    done
    exit 1
fi
Hope this helps!

by Damien Lespiau (noreply@blogger.com) at February 06, 2016 06:35 PM

Working on more than one line with sed's 'N' command

Yesterday I was asked to help solve a small sed problem. Consider this file (don't look too closely at the engineering of the defined elements):
<root>
<key>key0</key>
<string>value0</string>
<key>key1</key>
<string>value1</string>
<key>key2</key>
<string>value2</string>
</root>
The problem was: how to change value1 to VALUE!. The catch here is that you can't blindly execute an s command matching <string>.*</string>.
Sed maintains a buffer called the "pattern space" and processes commands on this buffer. From the GNU sed manual:
sed operates by performing the following cycle on each line of input: first, sed reads one line from the input stream, removes any trailing newline, and places it in the pattern space. Then commands are executed; each command can have an address associated to it: addresses are a kind of condition code, and a command is only executed if the condition is verified before the command is to be executed.

When the end of the script [(list of sed commands)] is reached, unless the -n option is in use, the contents of pattern space are printed out to the output stream, adding back the trailing newline if it was removed.3 Then the next cycle starts for the next input line.
So the idea is to first use a /pattern/ address to select the right <key> line, append the next line to the pattern space (with the N command) and finally run an s command on the buffer now containing both lines:
<key>key1</key>
<string>value1</string>
And so we end up with:
$ cat input 
<root>
<key>key0</key>
<string>value0</string>
<key>key1</key>
<string>value1</string>
<key>key2</key>
<string>value2</string>
</root>
$ sed -e '/<key>key1<\/key>/{N;s#<string>.*<\/string>#<string>VALUE!<\/string>#;}' < input
<root>
<key>key0</key>
<string>value0</string>
<key>key1</key>
<string>VALUE!</string>
<key>key2</key>
<string>value2</string>
</root>

by Damien Lespiau (noreply@blogger.com) at February 06, 2016 06:33 PM

HDMI stereo 3D & KMS

If everything goes according to plan, KMS in linux 3.13 should have stereo 3D support. Should one be interested in scanning out a stereo frame buffer to a 3D capable HDMI sink, here's a rough description of how those modes are exposed to user space and how to use them.

A reader not well acquainted with the DRM sub-system and its mode setting API (Aka Kernel Mode Setting, KMS) could start by watching the first part of Laurent Pinchart's Anatomy of an Embedded KMS Driver or read David Herrmann's heavily documented mode setting example code.

Stereo modes work by sending a left eye and right eye picture per frame to the monitor. It's then up to the monitor to use those 2 pictures to display a 3D frame and the technology there varies.

There are different ways to organise the 2 pictures inside a bigger frame buffer. For HDMI, those layouts are described in the HDMI 1.4 specification. Provided you give them your contact details, it's possible to download the stereo 3D part of the HDMI 1.4 spec from hdmi.org.

As one inevitably knows, modes supported by a monitor can be retrieved out of the KMS connector object in the form of drmModeModeInfo structures (when using libdrm, it's also possible to write your own wrappers around the KMS ioctls, should you want to):
typedef struct _drmModeModeInfo {
	uint32_t clock;
	uint16_t hdisplay, hsync_start, hsync_end, htotal, hskew;
	uint16_t vdisplay, vsync_start, vsync_end, vtotal, vscan;

	uint32_t vrefresh;

	uint32_t flags;
	uint32_t type;
	char name[...];
} drmModeModeInfo, *drmModeModeInfoPtr;
To keep existing software blissfully unaware of those modes, a DRM client interested in having stereo modes listed starts by telling the kernel to expose them:
drmSetClientCap(drm_fd, DRM_CLIENT_CAP_STEREO_3D, 1);
Stereo modes use the flags field to advertise which layout the mode requires:
uint32_t layout = mode->flags & DRM_MODE_FLAG_3D_MASK;
This will give you a non zero value when the mode is a stereo mode, value among:
DRM_MODE_FLAG_3D_FRAME_PACKING
DRM_MODE_FLAG_3D_FIELD_ALTERNATIVE
DRM_MODE_FLAG_3D_LINE_ALTERNATIVE
DRM_MODE_FLAG_3D_SIDE_BY_SIDE_FULL
DRM_MODE_FLAG_3D_L_DEPTH
DRM_MODE_FLAG_3D_L_DEPTH_GFX_GFX_DEPTH
DRM_MODE_FLAG_3D_TOP_AND_BOTTOM
DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF
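Putting these pieces together, here is a minimal sketch of listing the stereo modes exposed by a connector with libdrm (assuming you already have an open DRM file descriptor and a valid connector id; error handling is kept to the bare minimum):

#include <stdio.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

static void
list_stereo_modes (int fd, uint32_t connector_id)
{
  drmModeConnectorPtr connector;
  int i;

  /* Ask the kernel to expose stereo modes to this client. */
  drmSetClientCap (fd, DRM_CLIENT_CAP_STEREO_3D, 1);

  connector = drmModeGetConnector (fd, connector_id);
  if (!connector)
    return;

  for (i = 0; i < connector->count_modes; i++)
    {
      drmModeModeInfoPtr mode = &connector->modes[i];
      uint32_t layout = mode->flags & DRM_MODE_FLAG_3D_MASK;

      if (layout == DRM_MODE_FLAG_3D_NONE)
        continue;                           /* plain 2D mode */

      printf ("%s %uHz (3D layout flags: 0x%x)\n",
              mode->name, mode->vrefresh, layout);
    }

  drmModeFreeConnector (connector);
}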
User space is then responsible for choosing which stereo mode to use and to prepare a buffer that matches the size and left/right placement requirements of that layout. For instance, when choosing Side by Side (half), the frame buffer is the same size as its 2D equivalent (that is hdisplay x vdisplay) with the left and right images sub-sampled by 2 horizontally:

Side by Side (half)

Other modes need a bigger buffer than hdisplay x vdisplay. This is the case with frame packing, where each eye has the full 2D resolution, separated by the number of vblank lines:


Frame Packing

Of course, anything can be used to draw into the stereo frame buffer, including OpenGL. Further work should enable Mesa to directly render into such buffers, say with the EGL/gbm winsys for a wayland compositor to use.

Wipe Out using Frame Packing on the PS3

Behind the scenes, the kernel's job is to parse the EDID to discover which stereo modes the HDMI sink supports and, once user space instructs it to use a stereo mode, to send infoframes (metadata sent during the vblank interval) with the information about which 3D mode is being sent.

A good place to start for anyone wanting to use this API is testdisplay, part of the Intel GPU tools test suite. testdisplay can list the available modes with:
$ sudo ./tests/testdisplay -3 -i
[...]
name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot flags type clock
[0] 1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x48 148500
[1] 1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x40 148352
[2] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74250
[3] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[4] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74176
[5] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74176 (3D:SBSH)
[6] 1920x1080 50 1920 2448 2492 2640 1080 1084 1089 1125 0x5 0x40 148500
[7] 1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x15 0x40 74250
[8] 1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[9] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x5 0x40 74250
[10] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)
[11] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x4005 0x40 74250 (3D:FP)
[...]
To test a specific mode:
$ sudo ./tests/testdisplay -3 -o 17,10
1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)
To cycle through all the supported stereo modes:
$ sudo ./tests/testdisplay -3
testdisplay uses cairo to compose the final frame buffer from two separate left and right test images.

by Damien Lespiau (noreply@blogger.com) at February 06, 2016 06:28 PM

Working in a separate prefix

I've been surprised in the past to discover that even some seasoned engineers didn't know how to use the autotools prefix feature. A sign they've been lucky enough and didn't have to deal with Autotools too much. Here's my attempt to provide some introduction to ./configure --prefix.

Working with or in "a separate prefix" is working with libraries and binaries (well, anything produced by 'make install' in an autotooled project really) installed in a different directory than the system-wide ones (/usr or even /usr/local that can become quite messy). It is the preferred way to hack on a full stack without polluting your base distribution and has several advantages:
  • One can hack on the whole stack without the fear of not being able to run the desktop environment you're working with if something goes wrong,
  • More often than not, one needs a relatively recent library that your distribution doesn't ship with (say a recent libdrm). When working with the dependencies in a prefix, it's just a matter of recompiling it.

Let's take an example to make the discussion easier:
  •  We want to compile libdrm and intel-gpu-tools (because intel-gpu-tools needs a more recent libdrm than the one coming with your distribution),
  •  We want to use the ~/gfx directory for our work,
  • git trees will be cloned in ~/gfx/sources,
  • ~/gfx/install is chosen as the prefix.

First, let's clone the needed git repositories:
$ mkdir -p ~/gfx/sources ~/gfx/install
$ cd ~/gfx/sources
$ git clone git://anongit.freedesktop.org/mesa/drm libdrm
$ git clone git://anongit.freedesktop.org/xorg/app/intel-gpu-tools
Then you need to source a script that will set up your environment with a few variables to tell the system to use the prefix (both at run-time and compile-time). A minimal version of that script for our example is (I store my per-project setup scripts to source at the root of the project, in our case ~/gfx):
$ cat ~/gfx/setup-env
PROJECT=~/gfx
export PATH=$PROJECT/install/bin:$PATH
export LD_LIBRARY_PATH=$PROJECT/install/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$PROJECT/install/lib/pkgconfig:$PKG_CONFIG_PATH
export ACLOCAL_FLAGS="-I $PROJECT/install/share/aclocal $ACLOCAL_FLAGS"
$ source ~/gfx/setup-env
Then it's time to compile libdrm, telling the configure script that we want to install it in our prefix:
$ cd ~/gfx/sources/libdrm
$ ./autogen.sh --prefix=/home/damien/gfx/install
$ make
$ make install
Note that you don't need to run "sudo make install" since we'll be installing in our prefix directory that is writeable by the current user.

Now it's time to compile i-g-t:
$ cd ~/gfx/sources/intel-gpu-tools
$ ./autogen.sh --prefix=/home/damien/gfx/install
$ make
$ make install
The configure script may complain about dependencies (eg. cairo, SWIG,...). Different ways to solve those:
  • For dependencies not directly linked with the graphics stack (like SWIG), it's recommended to use the development package provided by the distribution
  • For old enough dependencies that don't change very often (like cairo) you can use the distribution development package or compile them in your prefix
  • For dependencies more recent than your distribution ones, you need to install them in the chosen prefix.

by Damien Lespiau (noreply@blogger.com) at February 06, 2016 12:27 PM

April 24, 2015

Chris Lord

Web Navigation Transitions

Wow, so it’s been over a year since I last blogged. Lots has happened in that time, but I suppose that’s a subject for another post. I’d like to write a bit about something I’ve been working on for the last week or so. You may have seen Google’s proposal for navigation transitions, and if not, I suggest reading the spec and watching the demonstration. This is something that I’ve thought about for a while previously, but never put into words. After reading Google’s proposal, I fear that it’s quite complex both to implement and to author, so this pushed me both to document my idea, and to implement a proof-of-concept.

I think Google’s proposal is based on Android’s Activity Transitions, and due to Android UI’s very different display model, I don’t think this maps well to the web. Just my opinion though, and I’d be interested in hearing peoples’ thoughts. What follows is my alternative proposal. If you like, you can just jump straight to a demo, or view the source. Note that the demo currently only works in Gecko-based browsers – this is mostly because I suck, but also because other browsers have slightly inscrutable behaviour when it comes to adding stylesheets to a document. This is likely fixable, patches are most welcome.


 Navigation Transitions specification proposal

Abstract

An API will be suggested that will allow transitions to be performed between page navigations, requiring only CSS. It is intended for the API to be flexible enough to allow for animations on different pages to be performed in synchronisation, and for particular transition state to be selected on without it being necessary to interject with JavaScript.

Proposed API

Navigation transitions will be specified within a specialised stylesheet. These stylesheets will be included in the document as new link rel types. Transitions can be specified for entering and exiting the document. When the document is ready to transition, these stylesheets will be applied for the specified duration, after which they will stop applying.

Example syntax:

<link rel="transition-enter" duration="0.25s" href="URI" />
<link rel="transition-exit" duration="0.25s" href="URI" />

When navigating to a new page, the current page’s ‘transition-exit‘ stylesheet will be referenced, and the new page’s ‘transition-enter‘ stylesheet will be referenced.

When navigation is operating in a backwards direction, by the user pressing the back button in browser chrome, or when initiated from JavaScript via manipulation of the location or history objects, animations will be run in reverse. That is, the current page’s ‘transition-enter‘ stylesheet will be referenced, and animations will run in reverse, and the old page’s ‘transition-exit‘ stylesheet will be referenced, and those animations also run in reverse.

[Update]

Anne van Kesteren suggests that forcing this to be a separate stylesheet and putting the duration information in the tag is not desirable, and that it would be nicer to expose this as a media query, with the duration information available in an @-rule. Something like this:

@viewport {
  navigate-away-duration: 500ms;
}

@media (navigate-away) {
  ...
}

I think this would indeed be nicer, though I think the exact naming might need some work.

Transitioning

When a navigation is initiated, the old page will stay at its current position and the new page will be overlaid over the old page, but hidden. Once the new page has finished loading it will be unhidden, the old page’s ‘transition-exit‘ stylesheet will be applied and the new page’s ‘transition-enter’ stylesheet will be applied, for the specified durations of each stylesheet.

When navigating backwards, the CSS animations timeline will be reversed. This will have the effect of modifying the meaning of animation-direction like so:

Forwards          | Backwards
--------------------------------------
normal            | reverse
reverse           | normal
alternate         | alternate-reverse
alternate-reverse | alternate

and this will also alter the start time of the animation, depending on the declared total duration of the transition. For example, if a navigation stylesheet is declared to last 0.5s and an animation has a duration of 0.25s, when navigating backwards, that animation will effectively have an animation-delay of 0.25s and run in reverse. Similarly, if it already had an animation-delay of 0.1s, the animation-delay going backwards would become 0.15s, to reflect the time when the animation would have ended.

Layer ordering will also be reversed when navigating backwards, that is, the page being navigated from will appear on top of the page being navigated backwards to.

Signals

When a transition starts, a ‘navigation-transition-start‘ NavigationTransitionEvent will be fired on the destination page. When this event is fired, the document will have had the applicable stylesheet applied and it will be visible, but will not yet have been painted on the screen since the stylesheet was applied. When the navigation transition duration is met, a ‘navigation-transition-end‘ event will be fired on the destination page. These signals can be used, amongst other things, to tidy up state and to initialise state. They can also be used to modify the DOM before the transition begins, allowing for customising the transition based on request data.

JavaScript execution could potentially cause a navigation transition to run indefinitely, it is left to the user agent’s general purpose JavaScript hang detection to mitigate this circumstance.

Considerations and limitations

Navigation transitions will not be applied if the new page does not finish loading within 1.5 seconds of its first paint. This can be mitigated by pre-loading documents, or by the use of service workers.

Stylesheet application duration will be timed from the first render after the stylesheets are applied. This should either synchronise exactly with CSS animation/transition timing, or it should be longer, but it should never be shorter.

Authors should be aware that using transitions will temporarily increase the memory footprint of their application during transitions. This can be mitigated by clear separation of UI and data, and/or by using JavaScript to manipulate the document and state when navigating to avoid keeping unused resources alive.

Navigation transitions will only be applied if both the navigating document has an exit transition and the target document has an enter transition. Similarly, when navigating backwards, the navigating document must have an enter transition and the target document must have an exit transition. Both documents must be on the same origin, or transitions will not apply. The exception to these rules is the first document load of the navigator. In this case, the enter transition will apply if all prior considerations are met.

Default transitions

It is possible for the user agent to specify default transitions, so that navigation within a particular origin will always include navigation transitions unless they are explicitly disabled by that origin. This can be done by specifying navigation transition stylesheets with no href attribute, or that have an empty href attribute.

Note that specifying default transitions in all situations may not be desirable due to the differing loading characteristics of pages on the web at large.

It is suggested that default transition stylesheets may be specified by extending the iframe element with custom ‘default-transition-enter‘ and ‘default-transition-exit‘ attributes.

Examples

Simple slide between two pages:

[page-1.html]

<head>
  <link rel="transition-exit" duration="0.25s" href="page-1-exit.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: red;
    }
  </style>
</head>
<body>
  <div id="bg" onclick="window.location='page-2.html'"></div>
</body>

[page-1-exit.css]

#bg {
  animation-name: slide-left;
  animation-duration: 0.25s;
}

@keyframes slide-left {
  from {}
  to { transform: translateX(-100%); }
}

[page-2.html]

<head>
  <link rel="transition-enter" duration="0.25s" href="page-2-enter.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: green;
    }
  </style>
</head>
<body>
  <div id="bg" onclick="history.back()"></div>
</body>

[page-2-enter.css]

#bg {
  animation-name: slide-from-left;
  animation-duration: 0.25s;
}

@keyframes slide-from-left {
  from { transform: translateX(100%) }
  to {}
}


I believe that this proposal is easier to understand and use for simpler transitions than Google’s, however it becomes harder to express animations where one element is transitioning to a new position/size in a new page, and it’s also impossible to interleave contents between the two pages (as the pages will always draw separately, in the predefined order). I don’t believe this last limitation is a big issue, however, and I don’t think the cognitive load required to craft such a transition is considerably higher. In fact, you can see it demonstrated by visiting this link in a Gecko-based browser (recommended viewing in responsive design mode Ctrl+Shift+m).

I would love to hear peoples’ thoughts on this. Am I actually just totally wrong, and Google’s proposal is superior? Are there huge limitations in this proposal that I’ve not considered? Are there security implications I’ve not considered? It’s highly likely that parts of all of these are true and I’d love to hear why. You can view the source for the examples in your browser’s developer tools, but if you’d like a way to check it out more easily and suggest changes, you can also view the git source repository.

by Chris Lord at April 24, 2015 09:26 AM

April 19, 2015

Ross Burton

Cycling Dad Nirvana Approaches

Last week Alex wanted to go for a bike ride so we had a play on the local pump track and some of the cheeky trails hidden away nearby. This was his first ride off the pavements so I was cautious but much fun was had by Alex and he’s spent most of the last week talking about the ride and in particular the pump track. There’s a pretty good one at Thetford Forest now so now that Spring has (mostly) sprung we decided to have a family day out and give Isla some proper practise at riding her new bike.

Family ride at Thetford Forest

For the start of the ride it was me and Alex riding ahead with Vicky riding alongside Isla whilst she practised the hard bit of stopping and starting. It didn’t take long before we heard a loud “COMING THROUGH!” and Isla flew by. A few kilometres down the Shepherd trail we decided to head back (badly, I can’t recall the Shepherd route at all) before little legs tired and find the pump track. There were a few tumbles, Alex was getting tired and there’s a tight berm with a very loose straight line. Isla of course had the confidence of a bold little sister and wanted a go, which led to a great double-faceplant when I was running alongside guiding her around. Nothing a bit of savlon won’t solve though, and ice cream made it all better!

All in all, a good day: Alex had a good ride and fun on the pump track, and Isla is massively more confident on her bike, especially when the path isn’t new-pavement-smooth. Not panicking when the path is a bit “bumpy lumpy” is an important skill when riding alongside traffic!

by Ross Burton at April 19, 2015 08:44 PM

April 13, 2015

Hylke Bons

San Francisco Impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

by Hylke Bons at April 13, 2015 12:03 AM

March 07, 2015

Damien Lespiau

shave 0.1.0

After a month without anyone shouting at shave in despair or horror, it's time to tag something to have a "stable" branch so people can rely on a stable interface (yes, it's important even for a 100 lines macro!).

What's most amazing is that quite a few projects have adopted shave in the GNOME and freedesktop.org communities: Clutter, Niepce Digital, Giggle, GStreamer, GObject introspection, PulseAudio, ConnMan, Json-glib, libunique, gnote, seed, gnome-utils, libccss, xorg… and maybe some more I've forgotten or I don't even know about.

You can grab the tarball or clone the git repository (git clone git://git.lespiau.name/shave) and have a look at the README file.

Time to celebrate.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

AS_AM_STFU

Writing m4 macro is fun, it really is.

If you want make to behave like "make -s", without resorting to boring stuff like aliases and while still respecting the default verbosity of automake >= 1.11, use this small m4 macro I wrote.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

g_object_notify_by_pspec()

Now that GLib 2.26.0 is out, it's time to talk about a little patch to GObject I've written (well, the original idea was born while looking at it with Neil): adding a new g_object_notify_by_pspec() symbol to GObject. As shown in the bug, it can improve the notification of GObject properties by 10-15% (the test case was run without any handler connected to the notify signal).

If you can depend on GLib 2.26, consider using it!
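For reference, the usual pattern looks like the following. This is a minimal sketch using a hypothetical FooObject type with a single "count" property; the GObject boilerplate, instance structure and set_property/get_property implementations are omitted. The GParamSpecs are kept in a static array, installed with g_object_class_install_properties(), and handed back to g_object_notify_by_pspec() in the setter, which skips the property name lookup that g_object_notify() has to do:

#include <glib-object.h>

enum {
  PROP_0,
  PROP_COUNT,
  N_PROPERTIES
};

static GParamSpec *properties[N_PROPERTIES] = { NULL, };

static void
foo_object_class_init (FooObjectClass *klass)
{
  GObjectClass *object_class = G_OBJECT_CLASS (klass);

  /* set_property/get_property omitted for brevity */

  properties[PROP_COUNT] =
    g_param_spec_int ("count", "Count", "A simple counter",
                      0, G_MAXINT, 0, G_PARAM_READWRITE);

  g_object_class_install_properties (object_class, N_PROPERTIES, properties);
}

void
foo_object_set_count (FooObject *self, int count)
{
  if (self->count == count)
    return;

  self->count = count;

  /* No string lookup of the property name needed here. */
  g_object_notify_by_pspec (G_OBJECT (self), properties[PROP_COUNT]);
}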

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Aligning C function parameters with vim

updated: now saves/restores the paste register

It has bothered me for a while: some coding styles, most notably in the GNOME world, try to enforce good looking alignment of function parameters, such as:
static UniqueResponse
on_unique_message_received (UniqueApp         *unique_app,
                            gint               command,
                            UniqueMessageData *message_data,
                            guint              time_,
                            gpointer           user_data)
{
}

Until now, I aligned the arguments by hand, but that time is over! Please welcome my first substantial vim plugin: it defines a GNOMEAlignArguments command to help you in that task. All you have to do is to add this file in your ~/.vim/plugin directory and define a macro in your ~/.vimrc to invoke it just like this:
" Align arguments
nmap ,a :GNOMEAlignArguments<CR>

HTH.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Vim macro for change log entry in .spec files

Tired of writing this kind of line by hand?

* Mon Feb 09 2009 Damien Lespiau <damien.lespiau@xxxx.com> 1.4.3

This vim macro does just this for you!

nmap ,mob-ts :r!date +'\%a \%b \%d \%Y'<CR>0i* <ESC>$a Damien Lespiau <damien.lespiau@xxxx.com> FIXME

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

February 07, 2015

Hylke Bons

Vienna GNOME/.NET hackfest report

I had a great time attending the GNOME/.NET hackfest last month in Vienna. My goal for the week was to port SparkleShare's user interface to GTK+3 and integrate with GNOME 3.

A lot of work got done. Many thanks to David and Stefan for enabling this by the smooth organisation of the space, food, and internet. Bertrand, Stephan, and Mirco helped me get set up to build a GTK+3-enabled SparkleShare pretty quickly. The porting work itself was done shortly after that, and I had time left to do a lot of visual polish and behavioural tweaks to the interface. Details matter!

Last week I released SparkleShare 1.3, a Linux-only release that includes all the work done at the hackfest. We're still waiting for the dependencies to be included in the distributions, so the only way you can use it is to build from source yourself for now. Hopefully this will change soon.

One thing that's left to do is to create a gnome-shell extension to integrate SparkleShare into GNOME 3 more seamlessly. Right now it still has to use the message tray area, which is far from optimal. So if you're interested in helping out with that, please let me know.

Tomboy Notes

The rest of the time I helped out others with design work. Helped out Mirco with the Smuxi preference dialogues using my love for the Human Interface Guidelines and started a redesign of Tomboy Notes. Today I sent out the new design to their mailing list with the work done so far.

Sadly there wasn't enough time for me to help out with all of the other applications… I guess that's something for next year.

Sponsors

I had a fun week in Vienna (which is always lovely no matter the time of year) and met many new great people. Special thanks to the many sponsors that helped making this event possible: Norkart, Collabora, Novacoast IT, University of Vienna and The GNOME Foundation.

by Hylke Bons at February 07, 2015 04:07 PM

Trip to Nuremberg and Munich

This month I visited my friend and colleague Garrett in Germany. We visited the Christmas markets there. Lots of fun. Here are some pictures.

by Hylke Bons at February 07, 2015 04:07 PM

Attending the Vienna GNOME/.NET hackfest

Today I arrived in the always wonderful city of Vienna for the GNOME/.NET Hackfest. Met up and had dinner with the other GNOME and .NET fans.

SparkleShare has been stuck on GTK+2 for a while. Now that the C# bindings for GTK+3 are starting to get ready, and Bindinator is handling any other dependencies that need updating (like WebKit), it is finally time to take the plunge.

My goal this week is to make some good progress on the following things:

  1. Port SparkleShare's user interface to GTK+3.
  2. Integrate SparkleShare seamlessly with the GNOME 3 experience

SparkleShare 1.2

Yesterday I made a new release of SparkleShare. It addresses several issues that may have been bugging you, so it's worth upgrading. Depending on how well things go this week it may be the last release based on GNOME 2 technologies. Yay for the future!

by Hylke Bons at February 07, 2015 04:07 PM

SparkleShare 1.0

I’m delighted to announce the availability of SparkleShare 1.0!

What is SparkleShare?

SparkleShare is an Open Source (self hosted) file synchronisation and collaboration tool and is available for Linux distributions, Mac, and Windows.

SparkleShare creates a special folder on your computer in which projects are kept. All projects are automatically synced to their respective hosts (you can have multiple projects connected to different hosts) and to your team’s SparkleShare folders when someone adds, removes or edits a file.

The idea for SparkleShare sprouted about three years ago at the GNOME Usability Hackfest in London (for more background on this read The one where the designers ask for a pony).

SparkleShare uses the version control system Git under the hood, so people collaborating on projects can make use of existing infrastructure, and setting up a host yourself will be easy enough. Using your own host gives you more privacy and control, as well as lots of cheap storage space and higher transfer speeds.

Like every piece of software it’s not bug free, even though it has hit 1.0. But it’s been tested for a long time now and all reproducible and known major issues have been fixed. It works reliably and the issue tracker is mostly filled with feature requests now.

The biggest sign that it was time for a 1.0 release was the fact that Lapo hasn’t reported brokenness for a while now. This can either mean that SparkleShare has been blessed by a unicorn or that the world will end soon. I think it’s the first.

Features

For those of you that are not (that) familiar with SparkleShare, I’ll sum up its most important features:

The SparkleShare folder

This is where all of your projects are kept. Everything in this folder will be automatically synced to the remote host(s), as well as to your other computers and everyone else connected to the same projects. Are you done with a project? Simply delete it from your SparkleShare folder.

The status icon

The status icon gives you quick access to all of your projects and shows you what’s going on regarding the synchronisation process. From here you can connect to existing remote projects and open the recent changes window.

The setup dialog

Here you can link to a remote project. SparkleShare ships with a couple of presets. You can have multiple projects syncing to different hosts at the same time. For example, I use this to sync some public projects with Github, some personal documents with my own private vps and work stuff with a host on the intranet.

Recent changes window

The recent changes window shows you everything that has recently changed and by whom.

History

The history view let’s you see who has edited a particular file before and allows you to restore deleted files or revert back to a previous version.

Conflict handling

When a file has been changed by two people at the same time and causes a conflict, SparkleShare will create a copy of the conflicting file and adds a timestamp. This way changes won’t get accidentally lost and you can either choose to keep one of the files or cherry pick the wanted changes.

Notifications

If someone makes a change to a file a notification will pop up saying what changed and by whom.

Client side encryption

Optionally you can protect a project with a password. When you do, all files in it will be encrypted locally using AES-256-CBC before being transferred to the host. The password is only stored locally, so if someone cracked their way into your server it will be very hard (if not impossible) to get the files’ contents. This on top of the file transfer mechanism, which is already encrypted and secure. You can set up an encrypted project easily with Dazzle.

Dazzle, the host setup script

I’ve created a script called Dazzle that helps you set up a Linux host to which you have SSH access. It installs Git, adds a user account and configures the right permissions. With it, you should be able to get up and running by executing just three simple commands.

Plans for the future

Something that comes up a lot is the fact that Git doesn’t handle large (binary) files well. Git also stores a database of all the files including history on every client, causing it to use a lot of space pretty quickly. Now this may or may not be a problem depending on your use case. Nevertheless I want SparkleShare to be better at the “large backups of bulks of data” use case.

I’ve stumbled upon a nice little project called git-bin in some obscure corner of Github. It seems like a perfect match for SparkleShare. Some work needs to be done to integrate it and to make sure it works over SSH. This will be the goal for SparkleShare 2.0, which can follow pretty soon (hopefully in months, rather than years).

I really hope contributors can help me out in this area. The Github network graph is feeling a bit lonely. Your help can make a big difference!

Some other fun things to work on may be:

  1. Saving the modification times of files
  2. Creating a binary Linux bundle
  3. SparkleShare folder location selection
  4. GNOME 3 integration
  5. …other things that you may find useful.

If you want to get started on contributing, feel free to visit the IRC channel: #sparkleshare on irc.gnome.org so I can answer any questions you may have and give support.

Finally…

I’d like to thank everyone who has helped testing and submitted patches so far. SparkleShare wouldn’t be nearly as far as it is now without you. Cheers!

by Hylke Bons at February 07, 2015 04:07 PM

November 19, 2014

Richard Purdie

Laugh or cry?

I think by now people realise there is some “fatigue” thing going on with me.

I spent a couple of evenings last week doing strenuous wallpaper stripping (woodchip), to the point I was trembling when I took breaks. On Saturday I took my new cyclocross bicycle for an 18 mile ride; I’ve been talking about buying one for years so I was determined to try it while the weather was good. Compared to my previous 40 year old bicycle with non-indexed gears and ineffective brakes, it’s a complete revelation and I love it. There is a 10 mile “time trial” section on the route I picked, Nov 2013 I was doing ~45 mins, Nov 2014 on the old bike, ~39 mins, on the new one, under 34 mins. I was pleasantly surprised!

I spent the weekend in a bit of a daze induced by the exercise which is par for the course, I’m used to that along with the muscle aches.

On Monday night, I became extremely cold and shivery with the “flu” aches and pains and basically didn’t sleep, oddly wide awake yet more unwell than I’ve ever been with proper flu. There were no respiratory or digestive symptoms. It could be a virus although the pattern of flu like symptoms 36-48 hours after exertion with varying intensity makes me lets say suspicious.

The medical profession? They basically don’t have a clue what is going on :( . Lots of tests up to and including muscle biopsy and some interesting results (like spinal nerve damage I seemingly recovered from?!) but nothing which explains it.

There seems to be a finite amount I can do, if I exceed that, there is a price to pay. I still feel horrible 24 hours on, nowhere near as bad as I did but still “not good”, sitting wearing half my wardrobe to keep warm (trail riding base layers are wonderful). I have no idea which events to commit to and need to be careful about being in a fit state for things. On the plus side, the price comes later, not during activity and I guess I have some handle on the pattern. Its also always been there I think, I’ve just tried to be more active/fit and provoked it.

So really, I don’t know whether to laugh or cry :/. If you see me having disappeared a bit from some things, this is why though.

by Richard at November 19, 2014 12:23 PM

September 28, 2014

Hylke Bons

Switching jobs

Today was my first day at Red Hat! This has been a public service announcement.

by Hylke Bons at September 28, 2014 09:30 PM

July 22, 2014

Emmanuele Bassi

moving on…

I’ve finally restored my personal web server after the WordPress installation(s) I had there got hacked last year, and I decided to migrate this blog there.

if you do not read this blog via Planet GNOME, but you use the syndication feed, you should subscribe to this feed, instead. if you’re only interested in GNOME-related posts (i.e. the posts that end up in Planet GNOME) then use this feed instead.

by ebassi at July 22, 2014 11:55 AM

May 19, 2014

Ross Burton

UEFI Validation

The Linux UEFI Validation Project was announced recently:

Bringing together multiple separate upstream test suites into a cohesive and easy-to-use product with a unified reporting framework, LUV validates UEFI firmware at critical levels of the Linux software stack and boot phases.

LUV also provides tests in areas not previously available, such as the interaction between the bootloader, Linux kernel and firmware. Integrated into one Linux distribution, firmware can now be tested under conditions that are closer to real scenarios of operation. The result: fewer firmware issues disrupting the operating system.

Of course that “one Linux distribution” is built using the Yocto Project, so it’s trivial to grab the source and patch/extend it, or rebuild it for different processors if for example you want to validate UEFI on ARM or a new CPU that mainstream distros don’t fully support.

by Ross Burton at May 19, 2014 09:07 AM

Richard Purdie

Kielder K2 – Marshalling Day 3

Sunday morning was an early start but despite the fun of the previous day, I was basically ok and the swelling of my leg had massively reduced overnight. I made my way back to Bellingham in time to put fuel onto the fuel trailer and then rode leisurely out to the far checkpoint and my section. Unfortunately I was on my own today; it is more fun when you have someone to ride with and talk to.

Eventually bikes turned up and entered the section and then I followed the pack through to make sure there weren’t any issues. It was all quiet so I cut back to the special test and refueled, finding the riders weren’t back there yet. I therefore waited around. The end of the special test has an interesting detour through a ditch and there were a number of people falling off there who I ended up helping. One person decided to head off the line everyone else was using and buried their bike in liquid mud up to the mudguards. I didn’t enjoy pulling that out and getting covered in mud. Another went off line, planted the front wheel into a hole and went over the bars head first; thankfully he was shaken but ok as it was slow speed.

There were people having fun on the enduro loops but my back tyre wasn’t up to it, nor were my energy levels to be quite honest. I had something to eat/drink and eventually the closing marshal turned up, which marked the start of my main work. I headed back to my section via a short cut along with a local rider who wanted to retire. There I had another wait for the closing marshal; when he arrived it was time to demark.

I was supposed to be with a team of another two, however they weren’t there and I decided to get started without them, leaving word with the checkpoint to send them on. I hadn’t gotten too far when they showed up and joined in. We worked as a team: the person in front takes the first arrow they come to and the team rotates. Demarking can be interesting as you never know quite what kind of terrain you’ll have to park on to reach the arrows. Obviously you try to do it without getting off the bike, although if the ditch is marsh/water with reeds growing out of it, you quickly learn not to ride into it (I’d remembered from last year).

We made it around the course to the next checkpoint and demarked to the road there, then also demarked the route back to the camping field. The far section with the second special test was being handled by another team. By the end of the few days, the bike was coated in mud/dust and rather sorry for itself with its missing headlight.

The forest fire roads are hard on tyres, particularly when you do use the power to accelerate, and my rear was practically a slick at this point.

So all in all a good weekend and some good fun. The bike is going to need a good check over after all that vibration from the fireroads.

by Richard at May 19, 2014 08:56 AM

Kielder K2 – Marshalling Day 2

Saturday dawned and I found I could roughly move. Various locals started arriving, either entering the rally or marshalling. I was given the top section of the course to look after, where I’d run out of fuel the day before. One of the local TRF members was to be my partner in crime.

We had a leisurely ride out to the special test, then took a short cut to our section. We did a sighting lap of that part, not least so my partner had some idea where we were, then watched a load of bikes through the special test since that was our way back to the start of our section and the checkpoint there. We headed back there to find nobody had entered it yet and managed to push our way through the crowds to the front. The spot has a lovely view over the reservoir.

After a good few bikes had set off, we started a check through and stopped to take some photos of some friends. After that we came across a breakdown, a 690 that wouldn’t run. We spent a while pooling tools trying to diagnose the problem; it had a spark so it had to be fuel related. We left to go and find a tow rope: we swept through our section, then went back to the special test and fuel point to refill and look for one. On the way back to our section we passed some other breakdowns but there wasn’t anything we could do to help and they were in a good place for recovery.

Whilst my two stroke wasn’t up to towing, my partner’s four stroke was, and we were able to get him out of the forest using some creative shortcuts and into a position he could get recovered from. Greg did well, particularly given he’d never tried this before.

After Greg refueled at Kielder and I’d eaten my sandwiches, we went back around our section, stopping for photos and so on, hovering near the end so we’d hear of any other breakdowns or problems. When it was clear things were winding down we headed back to the test/fuel area and found the course was closed. All that was left was for us to head back to base which we did without incident. Our section was clear as far as we knew and would get checked by the closing team anyway.

I have to say I was pretty tired at this point. I decided that rather than camping, I’d head home for a shower and so on in comfort, and I wanted a better look at my leg. I arrived home without incident, had a shower and re-bandaged the leg, and then started to feel extremely unwell. As far as I can tell I was losing my ability to regulate body temperature and was very shaky and shivering with a fast pulse. I tried various quick food/drink options and it wasn’t helping; if anything I was getting worse, extremely cold and shivery. An idea struck me and I tried a can of coke, which thankfully hit the spot and pulled me out of it within 10 minutes.

This is very similar to what happened to me a couple of years ago. In that case I ended up near enough unconscious for 18 hours, so this was an improvement on last time. I’m now pretty sure these were both cases of hypoglycaemia. Readers are probably thinking diabetes, a thyroid problem, etc. come to mind. For the record, I’ve had a ton of tests and it’s not any of the “usual” suspects. It does seem to be exercise induced with a delayed effect, and some kind of metabolic muscle issue is the prime suspect. I have my theories and investigation is still ongoing.

So I opted to stay at home for a good night’s sleep and to see how I was in the morning. Any sensible person would perhaps have opted to rest the next day, however I believe I’m actually starting to understand what was going on with my energy levels and all my instincts told me I would be fine to ride.

by Richard at May 19, 2014 08:36 AM

K2 Rally – Marshalling Day 1

The K2 rally was held this weekend in Kielder forest. It was fairly eventful so I’ll split the posts up day by day.

Marking out a 65 mile course in the forest takes quite some time so I’d offered help to the organiser with that as well as marshalling in the rally itself.

As things worked out on the Friday, the course was pretty much done but it did need sighting: a check that all the signs were present in the right places, hazards marked etc. from the perspective of someone coming around the course.

Having arrived and secured a flat piece of ground for the tent, I therefore set off with someone who I won’t name, on a DRZ400 on semi tyres, not off road ones. He assured me he would go steady, but he’d finished X rallies, considered himself a retired enduro rider etc. He did have a good medical reason for not wanting to fall off, too, and wanted someone with him.

So we set off and it was soon clear that when he said we’d go ’steady’, he didn’t half mean it. I was on the YZ with its road gearing on, not the enduro gearing, since this was a rally. Its engine is a handful at the best of times and it didn’t run well at this (lack of) speed :( .

Back in the karting days, I prided myself on an ability to limp broken two stroke engines back to the pits so somehow, I managed to keep the YZ running and not oil the plug.

When we found the special test, I ended up having to go past for one section since the YZ requires commitment and the 10mph simply wasn’t going to work. I was wishing I was on the CRM which would have been much happier at this pace.

We did put up a few arrows in places to make things clearer and there was only one confusing section where some tape wasn’t out. At this point it became clear that whilst we had a map, I’d have to read it. I also had the GPS and some idea of what the forest looks like by now, thankfully.

Somehow we made it to the top of the course and after a biscuit break, we dropped down onto the road by the reservoir and then turned back into the forest. He commented that his speedo had stopped working.

At this point I noticed his front wheel was unwell. At first it looked like he’d lost a spoke which had snagged the speedo cable and wrapped it around the wheel a couple of times. The bike was basically unsafe to ride.

So, first up, what tools did we have? He seemed very unkeen to try and get them from the sealed package on his bike so we tried mine. Unfortunately I’d picked up my favourite multitool, not remembering it was broken, so we were without decent pliers. The speedo cable was at quite some tension and therefore not easy to undo. We could undo various mounting clips to try and slacken it off and then, using the broken pliers, very slowly inched the top end of the cable undone.

When it did come undone it did so with quite some force and could have done nasty things to my fingers but I had anticipated this. We found it wasn’t a spoke that was wrapped around the wheel but 12″ of fencing wire. We pulled that off and then rather than disconnecting the speedo from the other end, managed to simply sever the cable. Great, we could continue.

The course is supposed to be a 2-2.5 hour lap. I think we were 4 hours in at this point, maybe more, and not even half way. We looped around the next section of forest and then suddenly my bike started sounding odd, the kind of odd that means the fuel is running out. I sounded my horn and waved my arms but he took no notice of me and continued off into the distance as I coasted to a stop.

Not entirely able to believe it, I checked the tank and yes, it was extremely low and now on reserve. Eventually he noticed I was missing and came back to look for me. Basically, my lack of fuel was extremely frustrating for him and his advice was that we split up and I make my own way back on the road; I could always hitch a lift and then pick the bike up. That way I was no longer a problem for him. Against my better judgement, I decided that yes, I’d have to sort myself out. I knew where there was an automated fuel station, perhaps within range of my reserve, however I had neither two stroke oil to mix new fuel nor a credit card with me (not many places take them deep in the forest).

We also had a little disagreement about the cause of my lack of fuel. He was keen to point out that, as any idiot knows, running slower doesn’t use more fuel. I suggested that rule may apply up to a point but that the YZ actually runs more efficiently at more than 10mph; however he wasn’t having it. Regardless, it wasn’t going to change anything.

I therefore left, in full fuel economy mode, and headed for Bellingham. Making it was never going to happen and I coasted to a stop, completely out, just past Falston. On the plus side I knew exactly where I was. On the downside it was a fair distance to where I needed to be (9 miles as it turned out).

I stashed the bike down a side road in some bushes; I also stashed the bike helmet, the arrows, stapler, body armour and anything else I wasn’t going to need under a handy plastic sheet over some firewood. I should mention at this point that there is no mobile phone signal there, apart from occasionally a network I wasn’t on, nor was I expecting any signal any time soon. Any houses around there are holiday lets or second homes and nobody was around.

So, still wearing the knee braces and bike boots, I started the walk to Bellingham. At least this way I knew I’d get sorted out eventually. I was passed by lots of cars and I’d imagine someone wearing MX clothing and big bike boots looked a little out of place walking along a verge. One oncoming car did stop and ask if I was ok and did offer me a lift in the wrong direction, but I politely declined. I wasn’t taking too well to walking in the heat with that gear on; “dripping” doesn’t quite cover it.

After 3 miles a car going the right way did stop and gave me a lift into Bellingham, for which I was extremely grateful. I tried to find the organiser to tell him I was ok but he wasn’t around. I figured my friend was still riding around the forest, so probably nobody knew I was stranded yet. I therefore jumped into the discovery and went to collect the bike and stashed gear. That went without incident and when I got back to the site, the organiser also arrived back, having been out looking for me. My friend had got a message to his wife from his phone. Anyhow, I+bike were back and all was well.

It was now late afternoon and I opted for a tea break. Thinking I was just popping out for three hours, I hadn’t taken food with me, although I’d had a drink. The organiser was apologetic about my trip out and said that, if I was up for it, I could go and check the other half of the course with him. So we set off, and all I can say is that this trip was the opposite extreme to the first one. I mostly kept up and we got the other half of the course done on half a tank of fuel (including riding out to it). So perhaps it does use more fuel at 10mph, who’d have thought it! I did get to see the second special test although I much preferred the first.

Unfortunately, just as we were coming to the end of the course, I relaxed a little too much, ran wide on the exit of a corner, ran onto the edge of and then into a gravel drainage ditch and spent a short while sliding along said ditch under the bike.

Once I was sure we’d actually stopped, I have a distinct memory of running the self test of various pieces of me, concluding that all appeared to still be attached, with one area of pain on my thigh which wasn’t structural. I therefore extracted myself and inspection showed a hole through the clothing on my thigh and some rather red raw looking skin, 3×2″ in size. I wasn’t leaking and inspection tallied with the previous conclusion, not structural, so I turned my attention to the bike. It appeared not to be too bad and so, while the adrenaline was still kicking in, I hauled it out of the ditch through sheer willpower. The organiser therefore found me sitting in the middle of the track looking a little worse for wear. I was thankful the bike started without too much faff for a change and we went the two corners out of the forest and the few miles of road back to camp.

Once back, someone helpfully pointed out the headlight was smashed. That was the least of my concerns; I dug out the first aid kit from my backpack which I’ve been carrying around for literally years, some water and paper towels, and went about seeing how bad the damage was to me. I still can’t make up my mind if it’s a burn or a graze; I suspect it’s a burn from the exhaust. The first aid kit had the right things in it thankfully and although the first attempt at a bandage didn’t work too well, it did after supplementation with gaffer tape. It was clear at this point there was ’some’ bruising too. I suspect that I hit my knee hard but the brace deflected the damage into my thigh muscle, which is a good thing and working as designed. I’ve a link here to a picture of said injury; don’t follow it if you don’t like gruesome graphic detail.

What followed was a pleasant evening on the campsite, cooking dinner and then talking to various people as they arrived. I eventually had to call it a night as sitting in the cold was causing my leg to seize up.

But hang on, what became of my friend, you might ask? Well, he did complete lap one but proceeded to follow the arrows and start a second, not realising where he was. He did think some of the corners looked familiar! At some point he ran out of fuel. The only remaining detail to complete the story is that he was rescued by a passing postman!

by Richard at May 19, 2014 07:50 AM

May 15, 2014

Ross Burton

Reproducible builds and GPL compliance

LWN has a good article on GPL compliance (if you’re not a subscriber you’ll have to wait) that has an interesting quote:

Developers, and embedded developers in particular, can help stop these violations. When you get code from a supplier, ensure that you can build it, he said, because someone will eventually ask. Consider using the Yocto Project, as Beth Flanagan has been adding a number of features to Yocto to help with GPL compliance. Having reproducible builds is simply good engineering practice—if you can’t reproduce your build, you have a problem.

This has always been one of the key points that we emphasise when explaining why you should use the Yocto Project for your next product. If you’re shipping a product that is built using fifty open source projects then ensuring that you can redistribute all the original sources, and the patches that you’ve applied, and the configure options that you’ve used, and any tweaks to go from a directory of binaries to a bootable image isn’t something you can knock up in an afternoon when you get a letter from the SFC. Fingers crossed you didn’t accidentally use some GPLv3 code when that is considered toxic.

Beth is awesome and has worked with others in the Yocto community to ensure all of this is covered. Yocto can produce license manifests, archives of the upstream sources plus patches, verify that GPLv3 code isn’t distributed, and more. All the work that is terribly boring at the beginning, when you have a great idea and are full of enthusiasm (and Club-Mate), but by the time you’re shipping is often nigh on impossible. Dave built the kernel on his machine but the disk with the right source tree on it died, and Sarah left without telling anyone else the right flags to make libhadjaha actually link… it’ll be fine, right?
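
For anyone wondering what this looks like in practice, here’s a minimal sketch of the sort of local.conf settings involved. Treat it as illustrative rather than a drop-in recipe: the variable names follow the current Yocto documentation and can vary between releases, and the license list is just an example.

    # archive the unmodified upstream sources and the patches applied to them
    INHERIT += "archiver"
    ARCHIVER_MODE[src] = "original"
    ARCHIVER_MODE[diff] = "1"

    # copy the per-image license manifest into the image as well as the deploy area
    COPY_LIC_MANIFEST = "1"
    COPY_LIC_DIRS = "1"

    # refuse to build anything under licenses considered toxic for this product
    INCOMPATIBLE_LICENSE = "GPLv3 LGPLv3 AGPLv3"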

by Ross Burton at May 15, 2014 10:29 PM

May 11, 2014

Ross Burton

Misc for Sale

As we’re moving house we are having a bit of a clear out, and have some techie gadgets that someone else might want:

  • Netgear DGN3500 ADSL/WiFi router. £15.
  • Bluetooth GPS dongle. £5.
  • Seagate Momentus 5400.6 160GB 2.5″ SATA hard disk (ex-MacBook). £5.

Those prices are including UK postage. Anyone interested?

by Ross Burton at May 11, 2014 10:28 PM

May 05, 2014

Richard Purdie

MGB lives again

Back in August 2010 I blew the MG’s engine up. I did get around to taking that one out and fitting the spare, but the battery was too flat to start it and winter came along before I’d got any further. With moving house, the work on said house and garage, and 101 other things, the MG just never happened and it’s been sat looking sorry for itself ever since :( .

Today, I thought I’d see what state it was in. It was charged overnight so the battery should either be good or knackered. The bonnet release cable was snapped but I found a way in.

Whilst stiff, the controls all seemed to roughly do what they should. First try at ignition on saw petrol explode over the engine bay as the fuel line between the carbs gave way. At least the fuel pump works I guess.

After a small new piece of fuel pipe was fitted, take two. Ignition on, fuel at pressure, no overflow from the carbs, which was a pleasant surprise as I was expecting a carb rebuild. Trying the starter, the engine turned over at a reasonable speed; it even made a hint of a cough of life. The lack of oil pressure stopped me at this point. I took the plugs out, then ran it on the starter and after what seemed like an eternity, the oil pressure climbed to normal. Ok.

Plugs back in, try the starter. Nothing. This was the point I was at after the rebuild, but with a charged battery. I noted the distributor was loose and I’d never set the timing. Ok, twiddle it one way, try again. Still nothing. Ok, let’s try the other way.

This time, you could hear it trying. After some further gentle nudging, it started coughing into life, first on a single cylinder then quickly onto approximately two. At this point I just gently tried to keep it running. The other cylinders kicked in intermittently at first, then it was running on all approximately four. I looked around for Dad, who’d run off to try and stop the clouds of smoke a two stroke would be proud of from getting into the garage (too late). I gestured for Dad to check nothing was on fire on the exhaust (probably just the petrol from the earlier spillage).

I decided not to run it too much more since there was no water in it. We stopped and then rectified that; cue a comedy moment where there was water pouring from a hole in the cylinder head with us wondering “what’s missing?” until we realised it was the water temperature sender unit.

Emboldened by this, we wondered “could it move?”. My Dad carefully moved vehicles out of the way in case the brakes failed in some catastrophic way and made dire predictions about whether I’d destroy the clutch trying this. It started and then we managed a controlled lurch forwards as the brakes unbound with not nearly as much issue as we’d expected. At this point I drove it off the drive and did a couple of loops around the T junction. Everything felt seized up but it was none the less moving, stopping, starting and turning.

At this point attention turned to the driveway, which needed a good clean, and with the car missing this was an ideal opportunity. Later, the car started on the first turn of the starter and reversed back on relatively happily.

Sadly the bodywork is in a bad way but at least now it’s known to vaguely function and move under its own power :) .

by Richard at May 05, 2014 03:19 PM

May 04, 2014

Richard Purdie

CRM lives again

I took the CRM to bits back in September. It was making various rattling noises that said the engine was tired and in need of a rebuild. Upon dismantling it, I found that not only was the piston chattering but the power valve was also not in the best of health being rather loose on its shaft. I managed to get a secondhand part for that but getting a new piston proved tricky since I wanted a forged one rather than cast. I ended up going for a .4mm oversize after much playing with feeler gauges and vernier callipers to map out the barrel wear. As expected it was more worn in the centre of the bore and less at the top/bottom.

As a family we’ve long since used an engineering shop in Blyth for rebores. A lot of our engines now have nikasil liners which mean sending off the cylinders but this one is a cast iron liner and hence can be done locally. When I dropped it off, there was much sucking of teeth about a 0.4mm oversize piston as it was not leaving them a lot of room for the rebore. They’d do the best they could, complaining the standard bores were never actually straight. It had been given to the “old man” by the son since he has more patience with these silly motorcycle things. We use the place due to their attention to detail. A joke from my school’s karting association days about their local competitor springs to mind: “Where did you get the rebore? Armstrongs?! How do you square up the barrel then? Yellow pages [telephone directory] under one side?”.

The above shows a mark left which they were very apologetic about but they had warned me. In reality this won’t make any difference to the running.

New shiny parts and some less shiny power valve bits.

Crazy amounts of plumbing. The question is could I remember how it all goes together from back in September?


An interesting aside, these photos show the exhaust port with the valve open and closed. It makes an amazing difference to the port size and position. This allows the engine to have low down grunt and decent top end rather than having the port timing fixed to a specific engine speed.

The new shiny piston fitted and ready for the top end. I nearly forgot to fit the new base gasket but did thankfully remember.

It went back together with nothing unexpected left over, which is always good. So did it start? Kind of. It did fire up, smoke a lot, then stop. It reluctantly fired up again, stopped and lost all compression. At this point I was somewhat panicking and also out of time to work on it further.

There are a limited number of ways a two stroke can lose compression and most of them are not good. This morning I took the exhaust off and peered into the barrel as best I could; all looked well. No signs of the power valve having hit the piston, which was one worry. I tried turning the engine over under load and the compression came back, suggesting the thing was just full of oil which was seizing up the piston rings. I spent an age kicking it over trying to get it started and did manage to get it to fire up occasionally, once managing to get it onto full throttle with clouds of smoke coming out, then it basically stopped dead again. The plug was clearly oiling up and it’s a 10 minute job to remove, clean and refit it.

I gave in and called in Dad to drive the discovery whilst I was towed behind it on the CRM. I am reluctant to do this, having destroyed engines like this before, but I was confident the thing was just clogged up with oil. It took half the block for it to turn over enough to clear, fire and then run under its own power. A couple more trips around the block, probably much to the enjoyment of the neighbours, and it’s running! So it lives again; I just need to get it out and gently run it in now.

by Richard at May 04, 2014 01:03 PM

Hanging by a thread

Never let it be said I don’t believe in preventative maintenance. Admittedly this is a little just in time!

This is the YZ’s clutch cable in case that isn’t clear. The first one that arrived was for a 250F (a four stroke) despite me clearly buying one for the YZ250 (a two stroke).

by Richard at May 04, 2014 12:19 PM

May 02, 2014

Emmanuele Bassi

Graphene

one of the challenges of writing a graphics library that is capable of doing what modern UI designers and developers expect is providing the required data types to achieve things like 3D transformations.

with the collective knowledge and attention to detail [1] that the free and open source software community brings to the table, I was actually surprised to see that all the code for doing vector and matrix math is usually tucked away in various silos that also come with canvas implementations, physics engines, and entire web browsers. it gets even worse when you want code that uses features of modern (and less modern) hardware, and instead all you get are just naive implementations of four floating point values in a structure.

you can trust me when I say that I didn’t want to spend the past seven days writing code that deals with vector and matrix operations, when I wasn’t reading PDFs of Intel architecture opcodes, or ARM NEON instructions; I also didn’t want to know that once you start implementing common operations on matrix types, like projection and unprojection, you get to open a fairly deep can of worms that forces you to implement point (2D and 3D), rectangle, quaternion, and quad types.

luckily, it’s possible to find a bunch of implementations in various stages of maintenance, and under suitable licenses, even though most are in C++ and they overlap by just about 60% each; you really need to buckle up and start translating naive matrix determinant code to SIMD four-vector data structures, and do a union of all the possible APIs, before you have something you can actually use.

the end result of these seven days is an almost decent, almost complete little utility library that tries to be fairly thin in both what it requires and what it provides. I called it graphene and it’s available in Git. at some point, when I’m actually satisfied with it, I’ll even document it like the grown-up I’m supposed to be. right now, I’ll have to write a ton of tests to check on the math, because I’m pretty sure there must be a ton of bugs in there.
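
to give a flavour of the kind of API this is aiming at, here’s a minimal sketch that combines a model matrix with a perspective projection and pushes a point through the result. it is written against a current checkout, so the names may not match the initial code drop exactly, and it assumes the pkg-config file is called graphene-1.0 (build with cc example.c $(pkg-config --cflags --libs graphene-1.0)):

    /* a minimal sketch of transforming a point through a model and
     * projection matrix; function names follow the current graphene API */
    #include <stdio.h>
    #include <graphene.h>

    int
    main (void)
    {
      graphene_matrix_t model, projection, mvp;
      graphene_point3d_t point, res;

      /* a stand-in model matrix: a uniform scale by 2 */
      graphene_matrix_init_scale (&model, 2.f, 2.f, 2.f);

      /* a perspective projection: 60 degree fov, 4:3 aspect, near and far planes */
      graphene_matrix_init_perspective (&projection, 60.f, 4.f / 3.f, 0.1f, 100.f);

      /* combine the two, then transform a point in model space through the result */
      graphene_matrix_multiply (&model, &projection, &mvp);
      graphene_point3d_init (&point, 1.f, 1.f, -5.f);
      graphene_matrix_transform_point3d (&mvp, &point, &res);

      printf ("(%f, %f, %f)\n", res.x, res.y, res.z);

      return 0;
    }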

the main question is: what do I intend to use graphene for. the more attentive amongst you, kind readers, will already guess that it’s for the forthcoming GTK+ scene graph API — which is indeed the correct answer, but you’ll have to wait for the next blog post in the series for a proper introduction and description, as well as a road map for the unicorn and ponies fuelled future.

  1. bordering on the OCD

by ebassi at May 02, 2014 08:00 PM

Berlin DX Hackfest / Day 3

the third and last day of the DX hackfest opened with a quick recap of what people had been working on over the past couple of days.

we had a nice lunch nearby, and then we went back to the Endocode office to tackle the biggest topic: a road map for GTK+.

we made good progress on all the items, and we have a fairly clear idea of who is going to work on what. sadly, my optimism on GProperty landing soon did not survive a discussion with Ryan; it turns out that there are many more layers of yak to be shaved, though we kinda agreed on the assumption that there is, in fact, a yak underneath all those layers. to be fair, the work on GProperty enabled a lot of the optimizations of GObject: property notifications, bulk installation of properties, and the private instance data reorganization of last year are just examples. both Ryan and I agreed that we should not increase the cost for callers of property setters — which right now would require asking the class of the instance that we’re modifying for the GProperty instance, which implies taking locks and other unpleasant stuff. luckily, we do have access to private class data, and with a few minor modifications we can use that private data to store the properties; thus, getting the properties of a class can be achieved with simple pointer offsets and dereferences, without locks being involved. I’ll start working on this very soon, and hopefully we’ll be able to revisit the issue at GUADEC, in time for the next development cycle of GLib.

in the meantime, I kept hacking on my little helper library that provides data types for canvases — and about which I’ll blog soon — as well as figuring out what’s missing from the initial code drop of the GTK+ scene graph that will be ready to be shown by the time GUADEC 2014 rolls around.

I’m flying back home on Saturday, so this is the last full day in Berlin for me. it was a pleasure to be here, and I’d really like to thank Endocode for generously giving us access to their office; Chris Kühl, for being a gracious and mindful host; and the GNOME Foundation, for sponsoring the attendance of all these fine people and contributors, and me.

Sponsored by the GNOME Foundation

by ebassi at May 02, 2014 07:00 PM

May 01, 2014

Emmanuele Bassi

Berlin DX Hackfest / Day 2

the second day of the hackfest had a pretty big discussion on the roadmap for GTK+.

thanks to Matthias Clasen, we had a list of things to discuss prior to the start of the hackfest, even if Matthias himself would not be present:

  • filling the gaps between the GNOME HIG and the GTK+ API needed to implement it
  • a better cross-platform story for tool kit maintainers and application developers
  • touch support
  • scene graph to replace Clutter
  • documentation
  • improving the relationship of the tool kit with Glade
  • required clean ups for GTK+ 4

during the afternoon we only managed to go through the first bullet point of the list, but we made really good progress on it, and we managed to assign each sub-issue to a prospective owner who is going to be in charge of it.

hopefully, we’re going to go through the other points during the rest of the hackfest much more quickly.

Sponsored by the GNOME Foundation

by ebassi at May 01, 2014 11:01 PM

Berlin DX hackfest / Day 1

we had a fairly productive first day here, at the Endocode offices in Berlin. everyone is pretty excited about working on the overall experience for developers on the GNOME platform.

at first, we decided what to tackle in the next three days, and drafted a rough schedule. the hackfest then broke down into two main groups: the first tackled GObject models for the benefit of GTK+ widgets acting as views; the second worked on the developer documentation available on developer.gnome.org.

I decided to stay on the sidelines for the day, and worked on a small utility library that I’m going to use in the development of GSK, the GTK+ scene graph API that will replace Clutter in the near future; I’m going to do a proper blog post on both things later this week. I’ve also worked a bit on my old nemesis, GProperty. I have really high hopes that after three years of back and forth we’re going to finally land it in GLib, and let people have a better, easier, and more efficient way to define and use GObject properties.

In the evening we went to the Berlin GNOME beers along with the local GNOME community; it was a great evening, and we met both familiar faces and new ones.

I’d like to thank Endocode for kindly giving us access to their office in order to host the hackfest, as well as the GNOME Foundation for sponsoring the travel and attendance of many talented members of the GNOME community.

Sponsored by the GNOME Foundation

by ebassi at May 01, 2014 04:00 PM