Planet Closed Fist

February 05, 2014

Ross Burton

Better bash completion?

Bash completion is great and everything, but I spend more time than is advisable dealing with numerous timestamped files.

$ mv core-image-sato-qemux86-64-20140204[tab]
core-image-sato-qemux86-64-20140204194448.rootfs.ext3
core-image-sato-qemux86-64-20140204202414.rootfs.ext3
core-image-sato-qemux86-64-20140204203642.rootfs.ext3

This isn’t an easy choice to make, as I now need to remember long sequences of numbers. Does anyone know if bash can be told to highlight the bit I’m being asked to pick from (here, the digits that differ), something like this, with the bracketed parts highlighted:

$ mv core-image-sato-qemux86-64-20140204[tab]
core-image-sato-qemux86-64-20140204[194448].rootfs.ext3
core-image-sato-qemux86-64-20140204[202414].rootfs.ext3
core-image-sato-qemux86-64-20140204[203642].rootfs.ext3
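
For what it’s worth, readline 7.0 (bash 4.4, which post-dates this post) grew a setting that colours the common prefix of the completions, which at least visually isolates the part you have to pick. It’s a readline option rather than a bash one, so it goes in ~/.inputrc:

# ~/.inputrc: colour the common prefix of each completion, leaving the
# differing suffixes plain (needs readline >= 7.0 / bash >= 4.4)
set colored-completion-prefix on
# optionally also colour completions by file type, like ls --color
set colored-stats on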

by Ross Burton at February 05, 2014 11:55 AM

January 21, 2014

Ross Burton

Remote X11 on OS X

I thought I’d blog this just in case someone else is having problems using XQuartz on OS X as a server to remote X11 applications (i.e. using ssh -X somehost).

At first this works, but after some time (20 minutes, to be exact) you’ll get “can’t open display: localhost:10.0” errors when applications attempt to connect to the X server. This is because the X forwarding is “untrusted”, and that has a 20 minute timeout. There are two solutions here: increase the X11 timeout (the maximum is 596 hours) or enable trusted forwarding.

It’s probably best to only enable trusted forwarding if you’re connecting to machines you, well, trust. The option is ForwardX11Trusted yes and this can be set globally in /etc/ssh_config or per host in ~/.ssh/config.
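
For example, a per-host stanza in ~/.ssh/config might look like this (the host name is illustrative; ForwardX11Timeout is the option for the other approach, raising the untrusted timeout instead):

Host buildhost
    ForwardX11 yes
    ForwardX11Trusted yes
    # or keep untrusted forwarding and raise its timeout instead:
    # ForwardX11Timeout 596h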

by Ross Burton at January 21, 2014 11:17 AM

January 08, 2014

Ross Burton

Network Oddity

This is… strange. Two machines, connected through cat5 and gigabit adaptors/hub.

$ iperf -c melchett.local -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to melchett.local, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.7 port 35197 connected with 192.168.1.10 port 5001
[  5] local 192.168.1.7 port 5001 connected with 192.168.1.10 port 33692
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.08 GBytes   926 Mbits/sec
[  5]  0.0-10.0 sec  1.05 GBytes   897 Mbits/sec

Simultaneous transfers get ~900 Mbits/sec.

$ iperf -c melchett.local -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to melchett.local, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.7 port 35202 connected with 192.168.1.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   210 MBytes   176 Mbits/sec
[  4] local 192.168.1.7 port 5001 connected with 192.168.1.10 port 33693
[  4]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

Testing each direction independently results in only 176 Mbits/sec on the transfer to the iperf server (melchett). This is 100% reproducible, and the same results appear if I swap the iperf client and server.

I’ve swapped one of the cables involved; the other is harder to get to, but I don’t see how physical damage could cause this sort of performance issue. Oh Internet, any ideas?
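
In case it helps anyone suggest something: the usual suspects for asymmetric throughput like this are a duplex mismatch, failed flow control negotiation, or a NIC accumulating errors in one direction, all of which ethtool can rule in or out (interface name is illustrative):

# confirm both ends negotiated 1000baseT/full
$ ethtool eth0
# look for error/drop counters incrementing during a test
$ ethtool -S eth0 | grep -iE 'err|drop'
# check pause frame (flow control) settings
$ ethtool -a eth0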

by Ross Burton at January 08, 2014 12:38 PM

December 16, 2013

Chris Lord

Linking CSS properties with scroll position: A proposal

As I, and many others, have written before, on mobile, rendering/processing of JS is done asynchronously to responding to the user scrolling, so that we can maintain touch response and screen update. We basically have no chance of consistently hitting 60fps if we don’t do this (and you can witness what happens if you don’t by running desktop Firefox (for now)). This does mean, however, that you end up with bugs like this, where people respond in JavaScript to the scroll position changing and end up with jerky animation because there are no guarantees about the frequency or timeliness of scroll position updates. It also means that neat parallax sites like this can’t be done in quite the same way on mobile. Although this is currently only a problem on mobile, it will eventually affect desktop too. I believe that Internet Explorer already uses asynchronous composition on the desktop, and I think that’s the way we’re going in Firefox too. It’d be great to have a solution for this problem first.

It’s obvious that we could do with a way of declaring a link between a CSS property and the scroll position. My immediate thought is to do this via CSS. I had this idea for a syntax:

scroll-transition-(x|y): <transition-declaration> [, <transition-declaration>]*

    where transition-declaration = <property>( <transition-stop> [, <transition-stop>]+ )
      and transition-stop        = <relative-scroll-position> <property-value>

This would work quite similarly to standard transitions, where a limited number of properties would be supported, and perhaps their interpolation could be defined in the same way too. Relative scroll position is 0px when the scroll position of the particular axis matches the element’s offset position. This would lead to declarations like this:

scroll-transition-y: opacity( 0px 0%, 100px 100%, 200px 0% ), transform( 0px scale(1%), 100px scale(100%), 200px scale(1%) );

This would define a transition that would grow and fade in an element as the user scrolled it towards 100px down the page, then shrink and fade out as you scrolled beyond that point.

But then Paul Rouget made me aware that Anthony Ricaud had had the same idea, but proposed tying it to CSS animation keyframes instead of this slightly arcane syntax. I think this is more easily implemented (at least in Firefox’s case), more flexible, and more easily expressed by designers too. Much like transitions and animations, these need not be mutually exclusive, I suppose (though given the interactions between them, as a platform developer it’d be in my best interests to suggest that they should be :)).

I’m not aware of any proposal of this suggestion, so I’ll describe the syntax that I would expect. I think it should inherit from the CSS animation spec, but prefix the animation-* properties with scroll-. Instead of animation-duration, you would have scroll-animation-bounds. scroll-animation-bounds would describe a vector, the distance along which would determine the position of the animation. Imagine that this vector was actually a plane, that extended infinitely, perpendicular to its direction of travel; your distance along the vector is unaffected by your distance to the vector. In other words, if you had a scroll-animation-bounds that described a line going straight down, your horizontal scroll position wouldn’t affect the animation. Animation keyframes would be defined in the exact same way.
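
To make the expected syntax concrete, here’s a sketch (the value syntax for scroll-animation-bounds is invented purely for illustration; the keyframes themselves are standard CSS animation syntax):

/* hypothetical: run the animation over 200px of vertical scrolling */
.hero {
  scroll-animation-name: grow-fade;
  scroll-animation-bounds: vertical 200px;
}

@keyframes grow-fade {
  0%   { opacity: 0; transform: scale(0.01); }
  50%  { opacity: 1; transform: scale(1); }
  100% { opacity: 0; transform: scale(0.01); }
}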

[Edit] Paul Rouget suggests that rather than having a prefixed copy of animation, a new property be introduced, animation-controller, whose default would be time but which could be set to scroll. We would still need an equivalent to duration, so I would re-purpose my above-suggested property as animation-scroll-bounds.

What do people think about either of these suggestions? I’d love to hear some conversation/suggestions/criticisms in the comments, after which perhaps I can submit a revised proposal and begin an implementation.

by Chris Lord at December 16, 2013 11:20 AM

November 29, 2013

Chris Lord

Efficient animation for games on the (mobile) web

Drawing on some of my limited HTML5 games experience, and marginally less limited general games and app writing experience, I’d like to write a bit about efficient animation for games on the web. I usually prefer to write about my experiences, rather than just straight advice-giving, so I apologise profusely for how condescending this will likely sound. I’ll try to improve in the future :)

There are a few things worth knowing that will really help your game (or indeed app) run better and use less battery life, especially on low-end devices. I think it’s worth getting some of these things down, as there’s evidence to suggest (in popular and widely-used UI libraries, for example) that it isn’t necessarily common knowledge. I’d also love to know if I’m just being delightfully/frustratingly naive in my assumptions.

First off, let’s get the basic stuff out of the way.

Help the browser help you

If you’re using DOM for your UI, which I’d certainly recommend, you really ought to use CSS transitions and/or animations, rather than JavaScript-powered animations. Though JS animations can be easier to express at times, unless you have a great need to synchronise UI animation state with game animation state, you’re unlikely to be able to do a better job than the browser. The reason for this is that CSS transitions/animations are much higher level than JavaScript, and express a very specific intent. Because of this, the browser can make some assumptions that it can’t easily make when you’re manually tweaking values in JavaScript. To take a concrete example, if you start a CSS transition to move something from off-screen so that it’s fully visible on-screen, the browser knows that the related content will end up completely visible to the user and can pre-render that content. When you animate position with JavaScript, the browser can’t easily make that same assumption, and so you might end up causing it to draw only the newly-exposed region of content, which may introduce slow-down. There are signals at the beginning and end of animations that allow you to attach JS callbacks and provide a rudimentary form of synchronisation (though there are no guarantees on how promptly these callbacks will happen).
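
As a sketch of that rudimentary synchronisation (the element id, class name and game hook are all illustrative):

// Kick off a CSS transition by toggling a class, and synchronise
// game state with its completion via the transitionend event.
var menu = document.getElementById('menu');
menu.addEventListener('transitionend', function () {
  resumeGameplay(); // hypothetical hook back into the game loop
});
menu.classList.add('slide-in'); // .slide-in declares the CSS transition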

Speaking of assumptions the browser can make, you want to avoid causing it to have to relayout during animations. In this vein, it’s worth trying to stick to animating only transform and opacity properties. Though some browsers make some effort for other properties to be fast, these are pretty much the only ones semi-guaranteed to be fast across all browsers. Something to be careful of is that overflow may end up causing relayout, or other expensive calculations. If you’re setting a transform on something that would overlap its container’s bounds, you may want to set overflow: hidden on that container for the duration of the animation.

Use requestAnimationFrame

When you’re animating canvas content, or when your DOM animations absolutely must synchronise with canvas content animations, do make sure to use requestAnimationFrame. Assuming you’re running in an arbitrary browsing session, you can never really know how long the browser will take to draw a particular frame. requestAnimationFrame causes the browser to redraw and call your function before that frame gets to the screen. The downside of using this vs. setTimeout is that your animations must be time-based instead of frame-based, i.e. you must keep track of time and set your animation properties based on elapsed time. requestAnimationFrame includes a time-stamp in its callback function prototype, which you most definitely should use (as opposed to using the Date object), as this will be the time the frame began rendering, and ought to make your animations look more fluid. You may have a callback that ends up looking something like this:

var startTime = -1;
var animationLength = 2000; // Animation length in milliseconds

function doAnimation(timestamp) {
 // Calculate animation progress
 var progress = 0;
 if (startTime < 0) {
   startTime = timestamp;
 } else {
   progress = Math.min(1.0, (timestamp - startTime) /
                              animationLength);
 }

 // Do animation ...

 if (progress < 1.0) {
   requestAnimationFrame(doAnimation);
 }
}

// Start animation
requestAnimationFrame(doAnimation);

You’ll note that I set startTime to -1 at the beginning, when I could just as easily set the time using the Date object and avoid the extra code in the animation callback. I do this so that any setup or processes that happen between the start of the animation and the callback being processed don’t affect the start of the animation, and so that all the animations I start before the frame is processed are synchronised.

To save battery life, it’s best to only draw when there are things going on, so that would mean calling requestAnimationFrame (or your refresh function, which in turn calls that) in response to events happening in your game. Unfortunately, this makes it very easy to end up drawing things multiple times per frame. I would recommend keeping track of when requestAnimationFrame has been called and only having a single handler for it. As far as I know, there aren’t solid guarantees of what order things will be called in with requestAnimationFrame (though in my experience, it’s in the order in which they were requested), so this also helps cut out any ambiguity. An easy way to do this is to declare your own refresh function that sets a flag when it calls requestAnimationFrame. When the callback is executed, you can unset that flag so that calls to that function will request a new frame again, like this:

function redraw() {
  drawPending = false;

  // Do drawing ...
}

var drawPending = false;
function requestRedraw() {
  if (!drawPending) {
    drawPending = true;
    requestAnimationFrame(redraw);
  }
}

Following this pattern, or something similar, means that no matter how many times you call requestRedraw, your drawing function will only be called once per frame.

Remember that when you do drawing in requestAnimationFrame (and in general), you may be blocking the browser from updating other things. Try to keep unnecessary work outside of your animation functions. For example, it may make sense for animation setup to happen in a timeout callback rather than a requestAnimationFrame callback, and likewise if you have a computationally heavy thing that will happen at the end of an animation. Though I think it’s certainly overkill for simple games, you may want to consider using Worker threads. It’s worth trying to batch similar operations, and to schedule them at a time when screen updates are unlikely to occur, or when such updates are of a more subtle nature. Modern console games, for example, tend to prioritise framerate during player movement and combat, but may prioritise image quality or physics detail when compromise to framerate and input response would be less noticeable.

Measure performance

One of the reasons I bring this topic up is that there exist some popular animation-related libraries, or popular UI toolkits with animation functions, that still do things like using setTimeout to drive their animations, driving all their animations completely individually, or other similar things that aren’t conducive to maintaining a high frame-rate. One of the goals for my game Puzzowl is for it to be a solid 60fps on reasonable hardware (for the record, it’s almost there on Galaxy Nexus-class hardware) and playable on low-end devices (almost there on a Geeksphone Keon). I’d have liked to use as much third party software as possible, but most of what I tried was either too complicated for simple use-cases, or had performance issues on mobile.

How I came to this conclusion is more important than the conclusion itself, however. To begin with, my priority was to write the code quickly to iterate on gameplay (and I’d certainly recommend doing this). I assumed that my own, naive code was making the game slower than I’d like. To an extent this was true, and I found plenty to optimise in my own code, but it got to the point where I knew what I was doing ought to perform quite well, and I still wasn’t quite there. At this point, I turned to the Firefox JavaScript profiler, and this told me almost exactly what low-hanging fruit was left to address to improve performance. As it turned out, I suffered from some of the things I’ve mentioned in this post; my animation code had some corner cases where it could cause redraws to happen several times per frame, some of my animations caused Firefox to need to redraw everything (they were fine in other browsers, as it happens – that particular issue is now fixed), and some of the third party code I was using was poorly optimised.

A take-away

To help combat poor animation performance, I wrote Animator.js. It’s a simple animation library, and I’d like to think it’s efficient and easy to use. It’s heavily influenced by various parts of Clutter, but I’ve tried to avoid scope-creep. It does one thing, and it does it well (or adequately, at least). Animator.js is a fire-and-forget style animation library, designed to be used with games, or other situations where you need many, synchronised, custom animations. It includes a handful of built-in tweening functions, the facility to add your own, and helper functions for animating object properties. I use it to drive all the drawing updates and transitions in Puzzowl, by overriding its requestAnimationFrame function with a custom version that makes the request, but appends the game’s drawing function onto the end of the callback, like so:

animator.requestAnimationFrame =
  function(callback) {
    requestAnimationFrame(function(t) {
      callback(t);
      redraw();
    });
  };

My game’s redraw function does all drawing, and my animation callbacks just update state. When I request a redraw outside of animations, I just check the animator’s activeAnimations property first to stop from mistakenly drawing multiple times in a single animation frame. This gives me nice, synchronised animations at very low cost. Puzzowl isn’t out yet, but there’s a little screencast of it running on a Nexus 5:

Alternative, low-framerate YouTube link.

by Chris Lord at November 29, 2013 02:31 PM

November 28, 2013

Ross Burton

Solving buildhistory slowness

The buildhistory class in oe-core is incredibly useful for analysing the changes in packages and images over time, but when doing frequent builds all of this metadata builds up and the resulting git repository can be quite unwieldy. I recently noticed that updating my buildhistory repository was often taking several minutes, with git frantically doing huge amounts of I/O. This wasn’t surprising after realising that my buildhistory repository was now 2.9GB, covering every build I’ve done since April. Historical metrics are useful but I only ever go back a few days, so this is slightly over the top. Deleting the entire repository is one idea, but a better solution would be to drop everything but the last week or so.

Luckily Paul Eggleton had already been looking into this, so pointed me at a StackOverflow page which used “git graft points” to erase history. The basic theory is that it’s possible to tell git that a certain commit has specific parents, or in this case no parent, so it becomes the end of history. One quick git filter-branch and a re-clone to clean out the stale history later, and the repository is far smaller.

$ git rev-parse "HEAD@{1 month ago}" > .git/info/grafts

This tells git that the commit a month before HEAD has no parents. The documentation for graft points explains the syntax, but for this purpose that’s all you need to know.
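
For reference, the grafts file is simply one commit per line, optionally followed by the parents to fake; a hash on its own (this one is made up) means “no parents”:

$ cat .git/info/grafts
f2e3a9c1d0b45e6789abcdef0123456789abcdef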

$ git filter-branch

This rewrites the repository from the new start of history. This isn’t a quick operation: the manpage for filter-branch suggests using a tmpfs as a working directory and I have to agree it would have been a good idea.
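
filter-branch has a -d option to relocate its working directory, so something like this (assuming /dev/shm is a tmpfs mount, as it usually is on Linux) should speed the rewrite up considerably:

$ git filter-branch -d /dev/shm/buildhistory-rewrite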

$ git clone file:///your/path/here/buildhistory buildhistory.new
$ rm -rf buildhistory
$ mv buildhistory.new buildhistory

After filter-branch all of the previous objects still exist in reflogs and so on, so this is the easiest way of reducing the repository to just the objects needed for the revised history. My newly shrunk repository is a fraction of the original size, and more importantly doesn’t take several minutes to run git status in.

by Ross Burton at November 28, 2013 05:07 PM

November 24, 2013

Richard Purdie

YZ Engine Rebuild

Back in August I found the YZ’s engine needed “a bit of attention”. It’s taken a bit longer to get back to it than I’d hoped, partly due to building work, but I can now complete the story. I stripped the bottom end down and concluded the easiest way forward was to buy a complete new crankshaft. This was slightly more expensive than just a conrod kit, however it meant I didn’t need to press in the new rod and rebalance the crank, both things I could probably do but would need to buy/make tooling for. Luckily the wear on the top end was to the piston; the barrel looked fine. Bottom and top end kits were therefore duly ordered and turned up.

I started to reassemble the engine only to find the replacement crank was right apart from the taper and thread on the ignition side: it had an M10 thread, and I needed an M12 for my ignition. The bike is a 2002 model and the engine is a 2002 engine, however it appears to have a 2003 crankshaft, probably due to the aftermarket alternator and flywheel weight. I ended up deciding to get another 2003 model crankshaft.

Since I was doing a complete overhaul I put new main bearings and seals in:

The photo shows some scary looking “cracks” in the casings, although every two stroke I’ve ever rebuilt looks like this to some degree so I’m doing my best to ignore them.

One nice feature of modern Japanese engines is that the gearbox stays as one lump. Trying to put gearboxes back together and getting all the thrust washers in the right places is “fun”.

The crankshaft installed and casings mated back together. Of course life isn’t simple, and whilst taking the engine apart I found the likely cause of the scary sounding rattle: a worn power valve linkage. The part looks like this:

and the wear is in the first joint next to the coin in the photo. It’s very hard to photograph “play”, however this gives a better idea, after I’ve ground off the weld and separated the joint:

You shouldn’t be able to see light through there! Yamaha wanted a sum of money I considered excessive for this part, so I decided I’d have a go at a home “oversize rebore” repair. This means drilling the hole in the outer piece larger (to make it square) and then machining a new, oversize, internal collar/bearing. The only lathe available to me was a little bit overkill for the job, weighing in at about 3.5 tons:

however I did manage to get it to machine something that small, just about anyway:

It’s hard to tell any difference from the final part, however it has much less play:

After putting the crankshaft in and mating the cases, the clutch basket, plates and primary drive gear on the RHS of the engine can be installed:

A view of the ignition side of the engine showing the ignition and aftermarket flywheel weight in situ:

The clutch casing/cover can now be installed and the lovely new shiny piston can be connected to the conrod. You can see the power valve linkage on the bottom left of the green base gasket. It sits in the clutch cover where there are spinning weights which control the power valves depending on engine speed. The main bearings, both ends of the conrod, and the piston and rings were all liberally coated with two stroke oil as the engine was assembled.

Sadly it won’t look this shiny for long. You get a good idea of what the ports in a two stroke engine look like from this view:

A view of the power valve chamber on the front of the cylinder. The repaired power valve linkage rod connects to the end of the shaft on the left of the photo, turning it to different positions depending on engine rpm. The YZ has three power valves: a main one on the centre exhaust port actuated by the springs in the centre, and two secondary ports on the sides which are actuated by the cams and levers at the sides of the chamber. This was the only point throughout the rebuild where I consulted the manual, since I’ve never actually tried to set up power valves before. The manual was a bit vague so I did what seemed right…

After all the access covers are installed, the engine is complete. You can see the power valve chamber on the front, with the chamber on the side covering the repaired linkage. The cylinder head has also been installed.

All that is left is to fit it back into the bike. It took less time than I thought to do so and I’m pleased to report that whilst it didn’t start first kick, it did fire up pretty readily and whilst I didn’t run it for long, it sounds much happier!

by Richard at November 24, 2013 05:16 PM

November 03, 2013

Hylke Bons

Hackfest Report

I had a great time attending the GNOME/.NET hackfest last month in Vienna. Here's what happened.

My goal for the week was to port SparkleShare's user interface to GTK+3 and integrate with GNOME 3.

A lot of work got done. Many thanks to David and Stefan for enabling this by the smooth organisation of the space, food, and internet. Bertrand, Stephan, and Mirco helped me get set up to build a GTK+3-enabled SparkleShare pretty quickly. The porting work itself was done shortly after that, and I had time left to do a lot of visual polish and behavioural tweaks to the interface. Details matter!

SparkleShare

Last week I released SparkleShare 1.3, a Linux-only release that includes all the work done at the hackfest. We're still waiting for the dependencies to be included in the distributions, so the only way you can use it is to build from source yourself for now. Hopefully this will change soon.

One thing that's left to do is to create a gnome-shell extension to integrate SparkleShare into GNOME 3 more seamlessly. Right now it still has to use the message tray area, which is far from optimal. So if you're interested in helping out with that, please let me know.

Tomboy Notes

The rest of the time I helped out others with design work. I helped Mirco with the Smuxi preference dialogues, putting my HIG obsession to use, and started a redesign of Tomboy Notes. Today I sent the new design out to their mailing list with the work done so far.

Sadly there wasn't enough time for me to help out with all of the other applications… I guess that's something for next year.

Sponsors

I had a fun week in Vienna (which is always lovely no matter the time of year) and met many great new people. Special thanks to the many sponsors that helped make this event possible: Norkart, Collabora, Novacoast IT, University of Vienna, and The GNOME Foundation.

November 03, 2013 12:00 AM

October 28, 2013

Chris Lord

Sabbatical Over

Aww, my 8-week sabbatical is now over. I wish I had more time, but I feel I used it well and there are certainly lots of Firefox bugs I want to work on too, so perhaps it’s about that time now (also, it’s not that long till Christmas anyway!)

So, what did I do on my sabbatical?

As I mentioned in the previous post, I took the time off primarily to work on a game, and that’s pretty much what I did. Except, I ended up working on two games. After realising the scope for our first game was much larger than we’d reckoned for, we decided to work on a smaller puzzle game too. I had a prototype working in a day, then that same prototype rewritten in another day because DOM is slow, then rewritten again in a third day because, as it turns out, canvas isn’t particularly fast either. After that, it’s been polish and refinement; it still isn’t done, but it’s fun to play and there’s promise. We’re not sure what the long-term plan is for this, but I’d like to package it with a runtime and distribute it on the major mobile app-stores (it runs in every modern browser, IE included).

The first project ended up being a first-person, rogue-like, dungeon crawler. None of those genres are known for being particularly brief or trivial games, so I’m not sure what we expected, but yes, it’s a lot of work. In this time, we’ve made our idea of the game a bit more solid, designed some interaction, worked on various bits of art (texture-sets, rough monsters) and have an engine that lets you walk around an area, pick things up, and features deferred, per-pixel lighting. It doesn’t run very well on your average phone at the moment, and it has layout bugs in WebKit/Blink-based browsers. IE11’s WebGL also isn’t complete enough to render it as it is, though I expect I could get a basic version of it working there. I’ve put this on the back-burner slightly to focus on smaller projects that can be demoed and completed in a reasonable time-frame, but I hope to have the time to return to it intermittently and gradually bring it up to the point where it’s recognisable as a game.

You can read a short paragraph and see a screenshot of both of these games at our team website, or see a few more on our Twitter feed.

What did I learn on my sabbatical?

Well, despite what many people are pretty eager to say, the web really isn’t ready as a games platform. Or an app platform, in my humble opinion. You can get around the issues if you have a decent knowledge of how rendering engines are implemented and a reasonable grasp of debugging and profiling tools, but there are too many performance and layout bugs for it to be comfortable right now, considering the alternatives. While it isn’t ready, I can say that it’s going to be amazing when it is. You really can write an app that, with relatively little effort, will run everywhere. Between CSS media queries, viewport units and flexbox, you can finally, easily write a responsive layout that can be markedly different for desktop, tablet and phone, and CSS transitions and a little JavaScript give you great expressive power for UI animations. WebGL is good enough for writing most mobile games you see, if you can avoid jank caused by garbage collection and reflow. Technologies like CocoonJS make this really easy to deploy too.

Given how positive that all sounds, why isn’t it ready? These are the top bugs I encountered while working on some games (from a mobile specific viewpoint):

WebGL cannot be relied upon

WebGL has finally hit Chrome for Android release version, and has been enabled in Firefox and Opera for Android for ages now. The aforementioned CocoonJS lets you use it on iOS too, even. Availability isn’t the problem. The problem is that it frequently crashes the browser, or you frequently lose context, for no good reason. Changing the orientation of your phone, or resizing the browser on desktop has often caused the browser to crash in my testing. I’ve had lost contexts when my app is the only page running, no DOM manipulation is happening, no textures are being created or destroyed and the phone isn’t visibly busy with anything else. You can handle it, but having to recreate everything when this happens is not a great user experience. This happens frequently enough to be noticeable, and annoying. This seems to vary a lot per phone, but is not something I’ve experienced with native development at this scale.

An aside, Chrome also has an odd bug that causes a security exception if you load an image (on the same domain), render it scaled into a canvas, then try to upload that canvas. This, unfortunately, means we can’t use WebGL on Chrome in our puzzle game.

Canvas performance isn’t great

Canvas ought to be enough for simple 2d games, and there are certainly lots of compelling demos about, but I find it’s near impossible to get 60fps, full-screen, full-resolution performance out of even quite simple cases, across browsers. Chrome has great canvas acceleration and Firefox has an accelerated canvas too (possibly Aurora+ only at the moment), and it does work, but not well enough that you can rely on it. My puzzle game uses canvas as a fallback renderer on mobile, when WebGL isn’t an option, but it has markedly worse performance.

Porting to Chrome is a pain

A bit controversial, and perhaps a pot/kettle situation coming from a Firefox developer, but it seems that if Chrome isn’t your primary target, you’re going to have fun porting to it later. I don’t want to get into specifics, but I’ve found that Chrome often lays out differently (and incorrectly, according to the specification) when compared to Firefox and IE10+, especially when flexbox becomes involved. Its transform implementation is quite buggy too, and often ignores set perspective. There’s also the small annoyance that some features that are unprefixed in other browsers are still prefixed in Chrome (animations, 3d transforms). I actually found Chrome to be more of a pain than IE. In modern IE (10+), things tend to either work, or not work. I had fewer situations where something purported to work, but was buggy or incorrectly implemented.

Another aside, touch input in Chrome for Android has unacceptable latency and there doesn’t seem to be any way of working around it. No such issue in Firefox.

Appcache is awful

Uh, seriously. Who thought it was a good idea that appcache should work entirely independently of the browser cache? Because it isn’t a good idea. Took me a while to figure out that I have to change my server settings so that the browser won’t cache images/documents independently of appcache, breaking appcache updates. I tend to think that the most obvious and useful way for something to work should be how it works by default, and this is really not the case here.
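
A common version of that server-side fix is to mark the manifest itself as uncacheable, so appcache update checks always see the latest version; in Apache terms the sketch below (it assumes mod_headers is enabled, and the pattern may need adjusting for your app):

# Stop the HTTP cache holding on to stale appcache manifests
<Files "*.appcache">
  Header set Cache-Control "no-cache, must-revalidate"
</Files>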

Aside, Firefox has a bug that means that any two pages that have the same appcache manifest will cause a browser crash when accessing the second page. This includes an installed version of an online page using the same manifest.

CSS transitions/animations leak implementation details

This is the most annoying one, and I’ll make sure to file bugs about this in Firefox at least. Because setting of style properties gets coalesced, animations often don’t run. Removing display:none from an element and setting a style class to run a transition on it won’t work unless you force a reflow in-between. Similarly, switching to one style class, then back again won’t cause the animation on the first style-class to re-run. This is the case at least in Firefox and Chrome; I’ve not tested in IE. I can’t believe that this behaviour is explicitly specified, and it’s certainly extremely unintuitive. There are plenty of articles that talk about working around this; I’m kind of amazed that we haven’t fixed this yet. I’m equally concerned about the bad habits that this encourages too.
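
For anyone hitting this, the workaround pattern looks something like the following (element id and class name are illustrative; reading offsetHeight is one common way of forcing the reflow):

var dialog = document.getElementById('dialog');
dialog.style.display = 'block';   // element was display: none
void dialog.offsetHeight;         // force a synchronous reflow
dialog.classList.add('fade-in');  // the transition now actually runs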

DOM rendering is slow

One of the big strengths of HTML5 as an app platform is how expressive HTML/CSS are and how easily you can create user interfaces in them, visually tweak and debug them. You would naturally want to use this in any app or game that you were developing for the web primarily. Except, at least for games, if you use the DOM for your UI, you are going to spend an awful lot of time profiling, tweaking and making seemingly irrelevant changes to your CSS to try and improve rendering speed. This is no good at all, in my opinion, as this is the big advantage that the web has over native development. If you’re using WebGL only, you may as well just develop a native app and port it to wherever you want it, because using WebGL doesn’t make cross-device testing any easier and it certainly introduces a performance penalty. On the other hand, if you have a simple game, or a UI-heavy game, the web makes that much easier to work on. The one exception to this seems to be IE, which has absolutely stellar rendering performance. Well done IE.

This has been my experience with making web apps. Although those problems exist, when things come together, the result is quite beautiful. My puzzle game, though there are still browser-specific bugs to work around and performance issues to fix, works across varying sizes and specifications of phone, in every major, modern browser. It even allows you to install it in Firefox as a dedicated app, or add it to your homescreen in iOS and Chrome beta. Being able to point someone to a URL to play a game, with no further requirement, and no limitation of distribution or questionable agreements to adhere to, is a real game-changer. I love that the web fosters creativity and empowers the individual, despite the best efforts of various powers that be. We have work to do, but the future’s bright.

by Chris Lord at October 28, 2013 09:22 AM

October 25, 2013

Michael Wood

Insight recorder updates


Insight recorder is a user testing session recording tool that allows you to record various types of UX testing sessions on the Linux desktop. It supports webcam/webcam, screencast/webcam, webcam, and screencast setups. You can download it from github as a zip.

It looks like this: [screenshots]. Here is an example of a testing session recorded with Insight recorder, where the task was “Clear history” in Web (Epiphany). Normally in a user testing session you’d have a facilitator who would be encouraging the user to speak about their experience as they’re doing it.

A few of the updates:

  • Project autotooled – better supported installation mechanism and support for translation generation
  • Two new languages (es, fr)
  • Updated VU meter to show microphone level betterer
  • Opacity control for secondary video
  • “Online” recording – seems more reliable than trying to mash videos together afterwards
  • Fixes for project file updating and loading
  • Various crash fixes
  • Cleaned up video preview ui
  • New ‘New project’ dialog

by Michael Wood at October 25, 2013 04:33 PM

October 07, 2013

Michael Wood

Updating HTC One X via usb on Linux

The wifi was broken on the HTC One X I had (it wouldn’t activate) and I thought it might be fixed by an update (it was). Having no SIM for it yet, going over the mobile network wasn’t an option (and it’s a 300MiB+ update), but you can do reverse tethering, or “internet pass-through”. Without the HTC Sync program, you can do this on Linux:

You will need: nmap, iptables, dnsmasq and python

  • Plug in phone, select “Internet pass-through”
  • Check dnsmasq is running to provide DNS (check with `host google.com localhost`)
  • Set up forwarding/gateway to your interwebs
  • echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
    iptables -A FORWARD -i usb0 -j ACCEPT
    
  • With any luck both your PC and the phone will have IPs; find the phone’s IP with `nmap -sn -v xxx.xxx.xxx.0/24 | grep "Host is up" -B 1`, replacing xxx with the current subnet, e.g. 192.168.99.0

by Michael Wood at October 07, 2013 04:22 PM

Hylke Bons

GNOME/.NET Hackfest

Today I arrived in the always wonderful city of Vienna for the GNOME/.NET Hackfest. Met up and had dinner with the other GNOME + C# fans.

SparkleShare has been stuck on GTK+2 for a while. Now that the C# bindings for GTK+3 are starting to get ready, and Bindinator is handling any other dependencies that need updating (like WebKit), it is finally time to take the plunge.

My goal this week is to make some good progress on the following things:

  • Port SparkleShare's user interface to GTK+3.
  • Integrate SparkleShare seamlessly with the GNOME 3 experience

SparkleShare 1.2

Yesterday I made a new release of SparkleShare. It addresses several issues that may have been bugging you, so it's worth upgrading. Depending on how well things go this week it may be the last release based on GNOME 2 technologies. Yay for the future!

October 07, 2013 12:00 AM

October 02, 2013

Damien Lespiau

HDMI stereo 3D & KMS

If everything goes according to plan, KMS in linux 3.13 should have stereo 3D support. Should one be interested in scanning out a stereo frame buffer to a 3D capable HDMI sink, here’s a rough description of how those modes are exposed to user space and how to use them.

A reader not well acquainted with the DRM sub-system and its mode setting API (aka Kernel Mode Setting, KMS) could start by watching the first part of Laurent Pinchart’s Anatomy of an Embedded KMS Driver or read David Herrmann’s heavily documented mode setting example code.

Stereo modes work by sending a left eye and right eye picture per frame to the monitor. It’s then up to the monitor to use those 2 pictures to display a 3D frame and the technology there varies.

There are different ways to organise the 2 pictures inside a bigger frame buffer. For HDMI, those layouts are described in the HDMI 1.4 specification. Provided you give them your contact details, it’s possible to download the stereo 3D part of the HDMI 1.4 spec from hdmi.org.

As one inevitably knows, modes supported by a monitor can be retrieved out of the KMS connector object in the form of drmModeModeInfo structures when using libdrm (it’s also possible to write your own wrappers around the KMS ioctls, should you want to):

typedef struct _drmModeModeInfo {
    uint32_t clock;
    uint16_t hdisplay, hsync_start, hsync_end, htotal, hskew;
    uint16_t vdisplay, vsync_start, vsync_end, vtotal, vscan;

    uint32_t vrefresh;

    uint32_t flags;
    uint32_t type;
    char name[...];
} drmModeModeInfo, *drmModeModeInfoPtr;

To keep existing software blissfully unaware of those modes, a DRM client interested in having stereo modes listed starts by telling the kernel to expose them:

drmSetClientCap(drm_fd, DRM_CLIENT_CAP_STEREO_3D, 1);

Stereo modes use the flags field to advertise which layout the mode requires:

uint32_t layout = mode->flags & DRM_MODE_FLAG_3D_MASK;

This will give you a non-zero value when the mode is a stereo mode, one of:

DRM_MODE_FLAG_3D_FRAME_PACKING
DRM_MODE_FLAG_3D_FIELD_ALTERNATIVE
DRM_MODE_FLAG_3D_LINE_ALTERNATIVE
DRM_MODE_FLAG_3D_SIDE_BY_SIDE_FULL
DRM_MODE_FLAG_3D_L_DEPTH
DRM_MODE_FLAG_3D_L_DEPTH_GFX_GFX_DEPTH
DRM_MODE_FLAG_3D_TOP_AND_BOTTOM
DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF

User space is then responsible for choosing which stereo mode to use and to prepare a buffer that matches the size and left/right placement requirements of that layout. For instance, when choosing Side by Side (half), the frame buffer is the same size as its 2D equivalent (that is hdisplay x vdisplay) with the left and right images sub-sampled by 2 horizontally:


Side by Side (half)

Other modes need a bigger buffer than hdisplay x vdisplay. This is the case with frame packing, where each eye has the full 2D resolution, separated by the number of vblank lines:


Frame Packing
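
As a sketch of the buffer-size arithmetic (not from any particular project; it assumes a drmModeConnector obtained with drmModeGetConnector() after enabling the stereo client cap as above):

#include <stdint.h>
#include <xf86drmMode.h>

/* Find the first frame packing mode on a connector and compute the
 * frame buffer height it needs: the left eye, plus the vblank gap,
 * plus the right eye. */
static drmModeModeInfo *find_frame_packing(drmModeConnector *connector,
                                           uint32_t *fb_height)
{
    int i;

    for (i = 0; i < connector->count_modes; i++) {
        drmModeModeInfo *mode = &connector->modes[i];

        if ((mode->flags & DRM_MODE_FLAG_3D_MASK) !=
            DRM_MODE_FLAG_3D_FRAME_PACKING)
            continue;

        /* the vblank gap is vtotal - vdisplay lines */
        *fb_height = 2 * mode->vdisplay + (mode->vtotal - mode->vdisplay);
        return mode;
    }

    return NULL;
}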

Of course, anything can be used to draw into the stereo frame buffer, including OpenGL. Further work should enable Mesa to directly render into such buffers, say with the EGL/gbm winsys for a wayland compositor to use. Of course, fun and profit would be the last step:


A 720p frame packing buffer from the game WipeOut

Behind the scenes, the kernel’s job is to parse the EDID to discover which stereo modes the HDMI sink supports and, once user-space instructs it to use a stereo mode, to send infoframes (metadata sent during the vblank interval) with the information about which 3D mode is being sent.

A good place to start for anyone wanting to use this API is testdisplay, part of the Intel GPU tools test suite. testdisplay can list the available modes with:

$ sudo ./tests/testdisplay -3 -i
[...]
  name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot flags type clock
[0]  1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x48 148500
[1]  1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x40 148352
[2]  1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74250
[3]  1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[4]  1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74176
[5]  1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74176 (3D:SBSH)
[6]  1920x1080 50 1920 2448 2492 2640 1080 1084 1089 1125 0x5 0x40 148500
[7]  1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x15 0x40 74250
[8]  1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[9]  1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x5 0x40 74250
[10]  1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)
[11]  1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x4005 0x40 74250 (3D:FP)
[...]

To test a specific mode:

$ sudo ./tests/testdisplay -3 -o 17,10
  1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)

To cycle through all the supported stereo modes:

$ sudo ./tests/testdisplay -3

testdisplay uses cairo to compose the final frame buffer from two separate left and right test images.

by damien at October 02, 2013 06:38 PM

September 28, 2013

Michael Wood

vmware makefiles not compatible with make 3.82

make[1]: Entering directory `/usr/src/linux-headers-3.10-3-amd64′
Makefile:10: *** mixed implicit and normal rules.  Stop.
make[1]: Leaving directory `/usr/src/linux-headers-3.10-3-amd64′
make: *** [vmnet.ko] Error 2

If you’re using make version 3.82-1, downgrade to 3.81-8.2 (or go and fix their makefiles).

http://lists.gnu.org/archive/html/bug-make/2011-02/msg00037.html

"In previous versions of make it was acceptable to list one or more explicit
  targets followed by one or more pattern targets in the same rule and it
  worked "as expected".  However, this was not documented as acceptable and if
  you listed any explicit targets AFTER the pattern targets, the entire rule
  would be mis-parsed.  This release removes this ability completely: make
  will generate an error message if you mix explicit and pattern targets in
  the same rule."
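
For reference, a minimal reproduction of the rule shape that 3.82 rejects (an illustrative Makefile, not VMware’s actual one):

# make 3.81 tolerated mixing an explicit target with a pattern target
# in a single rule; make 3.82 refuses to parse it
vmnet.ko %.o: common-prereq
	@echo building $@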

by Michael Wood at September 28, 2013 09:07 PM

September 16, 2013

Emmanuele Bassi

Do not link against PulseAudio and JSON-GLib < 0.16

this is a public announcement.

if you link against PulseAudio, whether you want it or not, you’ll get an automatic dependency on json-c, a small C library that parses JSON and doesn’t have any dependency [1]. sadly, json-c leaks symbols all over the place, and one of them is called json_object_get_type [2].

JSON-GLib, the library that yours truly wrote about 6 years ago to parse JSON using a decent C library as a base, also has a type called json_object_get_type [3].

if you link against PulseAudio and JSON-GLib [4] then you’ll get a segmentation fault with a weird stack trace, like this one and its duplicates [5].

the solution is to use a version of JSON-GLib greater than 0.16.1, which builds JSON-GLib with the -Bsymbolic linker flag [6].
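
you can see the clash for yourself by dumping the dynamic symbols of both libraries with nm (paths and sonames are illustrative, adjust for your distribution):

$ nm -D /usr/lib/libjson-c.so.2 | grep json_object_get_type
$ nm -D /usr/lib/libjson-glib-1.0.so.0 | grep json_object_get_type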

that would be all.

  1. which is arguably a plus for a system daemon
  2. which returns an integer for I don’t know which reason
  3. which returns a GType for the JsonObject boxed structure, so that you can use them in properties and signal marshallers; as it happens, GType is a long lookalike
  4. or any other library that depends on JSON-GLib, like Clutter
  5. since both return values and arguments of the functions above are compatible, the linker won’t moan about it, so you won’t see any warning or error when building your code
  6. another solution is to statically link json-c inside PulseAudio instead of dynamically linking it; another solution is to link json-c with -Bsymbolic; yet another solution would be for PA to not use a dependency to parse JSON – or drop JSON entirely because I can’t for the life of me understand why an audio server is dealing with JSON at all

by ebassi at September 16, 2013 01:36 PM

September 10, 2013

Ross Burton

Using netconsole in the Yocto Project

Debugging problems which mean init goes crazy is tricky, especially so on modern Intel hardware that doesn’t have anything resembling a serial port you can connect to.

Luckily this isn’t a new problem, as Linux supports a network console which will send the console messages over UDP packets to a specific machine. This is mostly easy to use but there are some caveats that are not obvious.

The prerequisites are that netconsole support is enabled, and your ethernet driver is built in to the kernel and not a module. Luckily, the stock Yocto kernels have netconsole enabled and the atom-pc machine integrates the driver for my hardware.

Then, on the target machine, you pass netconsole=... to the kernel. The kernel documentation explains this quite well:

netconsole=[src-port]@[src-ip]/[<dev>],[tgt-port]@<tgt-ip>/[tgt-macaddr]

   where
        src-port      source for UDP packets (defaults to 6665)
        src-ip        source IP to use (interface address)
        dev           network interface (eth0)
        tgt-port      port for logging agent (6666)
        tgt-ip        IP address for logging agent
        tgt-macaddr   ethernet MAC address for logging agent (broadcast)

The biggest gotcha is that you (obviously) need a source IP address, but netconsole starts before the networking normally comes up. Apart from that you can generally get away with minimal settings:

netconsole=@192.168.1.21/,@192.168.1.17/

Note that apparently some routers may not forward the broadcast packets correctly, so you may need to specify the target MAC address.

On the machine that will receive the logs, run something like netcat to capture the packets:

$ netcat -l -u -p 6666 | tee console.log

If you get the options wrong the kernel will tell you why, so if you don’t get any logging, iterate on the argument using an image that boots, and use dmesg to see what the problem is.

Finally instead of typing in this argument every time you boot, you can add it to the boot loader in your local.conf:

APPEND += "netconsole=@192.168.1.21/,@192.168.1.17/"

(APPEND being the name of the variable that is passed to the kernel by the boot loader.)

Update: when the journal starts up systemd will stop logging to the console, so if you want to get all systemd messages also pass systemd.log_target=kmsg.

by Ross Burton at September 10, 2013 11:21 AM

September 03, 2013

Thomas Wood

Displays Settings in GNOME 3.10


The display settings for GNOME haven’t seen a major change since before 3.0, so with the design help of Allan Day, I had the chance to completely update the interface for 3.10. The interface now follows the new visual style of the other settings panels and simplifies tasks such as setting the primary display. It also includes simpler and clearer display indicators. Full details of the redesign, including the mockups, are available on the wiki page. Not everything is complete yet; “presentation mode” needs some additional support from the windowing system before it can be implemented. However, Wayland support is now available thanks to Giovanni Campagna’s work on Mutter and gnome-desktop, which also includes a new confirmation dialog provided by gnome-shell.


by thos at September 03, 2013 01:45 PM

September 01, 2013

Chris Lord

Sabbatical

As of Friday night, I am now on a two month unpaid leave. There are a few reasons I want to do this. It’s getting towards the 3-year point at Mozilla, and that’s usually the sort of time I get itchy feet to try something new. I also think I may have been getting a bit close to burn-out, which is obviously no good. I love my job at Mozilla and I think they’ve spoiled me too much for me to easily work elsewhere even if that wasn’t the case, so that’s one reason to take an extended break.

I still think Mozilla is a great place to work, where there are always opportunities to learn, to expand your horizons and to meet new people. An unfortunate consequence of that, though, is that I think it’s also quite high-stress. Not the kind of obvious stress you get from tight deadlines and other external pressures, but a more subtle, internal stress that you get from constantly striving to keep up and be the best you can be. Mozilla’s big enough now that it’s not uncommon to see people leave, but it does seem that a disproportionate amount of them cite stress or needing time to deal with life issues as part of the reason for moving on. Maybe we need to get better at recognising that, or at encouraging people to take more personal time?

Another reason though, and the primary reason, is that I want to spend some serious time working on creating a game. Those who know me know that I’m quite an avid gamer, and I’ve always had an interest in games development (I even spoke about it at Guadec some years back). Pre-employment, a lot of my spare time was spent developing games. Mostly embarrassingly poor efforts when I view them now, but it’s something I used to be quite passionate about. At some point, I think I decided that I preferred app development to games development, and went down that route. Given that I haven’t really been doing app development since joining Mozilla, it feels like a good time to revisit games development. If you’re interested in hearing about that, you may want to follow this Twitter account. We’ve started already, and I like to think that what we have planned, though very highly influenced by existing games, provides some fun, original twists. Let’s see how this goes :)

by Chris Lord at September 01, 2013 11:50 AM

August 26, 2013

Richard Purdie

YZ Engine Rebuild in Progress

As anyone reading would note, the YZ hasn’t been happy recently. It’s been making noises I didn’t like the sound of, so I decided to pull the barrel off and take a look, the first time I’ve done so. You can do this with the engine still in the bike. The first thing of note was the piston; the exhaust side was unremarkable but the inlet side had fairly deep scoring, above what I’d consider normal:

It’s a widely accepted fact of life that two stroke engines do need maintenance from time to time, and with two years of (ab)use I can hardly complain; I’d expected it would need a piston. The barrel didn’t photograph well but I think it’s probably serviceable, which is a relief as getting a new coating put in is a pain. Having removed the piston, I turned my attention to the bottom end of the engine and specifically the big end bearing:

The big end was fine, however in checking it I thought the little end felt like I’d filled it with sand. A closer look revealed:

Which is something I’d not expected; the little end is, to put it in technical engineering terms, knackered. I’ve damaged two strokes in a multitude of different ways but I’d never broken a little end bearing until now. First time for everything, I guess. Sadly, with damage like that a new con-rod is needed and that means splitting the casings to get the crank out. That in turn means taking the whole engine out of the bike. To do that there are a number of serious nuts that need undoing. It was one of these that I previously managed to snap ligaments in my hand whilst attempting to undo. This time around, having the barrel off makes it possible to rather robustly brace things to stop them turning:

This may look abusive but the little end is already broken so it can’t really damage much else. The bike looks rather sorry for itself without the engine. You can see from the photo how spindly the frame is, which makes more sense when you realise the engine is actually part of the frame (a stressed member):

So I’ve found it needs a new piston and a new con-rod, but why was the bike rattling? My best theory so far is that the power valves are also in need of attention. The push rod from the bottom to the top end is loose and the free play in it could account for the rattle. The fact it doesn’t respond directly as engine rpm changes also matches well with the symptoms. Looking at the valves themselves on the workbench, it was clear the left hand valve was not completely closing. Wear in the power valve cams and the cam following pins would appear to account for this. In an ideal world I would just replace all these parts, however to do so looks like adding hundreds of pounds to the repair bill. I think there is adjustment which would allow the free play in the cams to be taken up; the push rod may need to be replaced, however.

A photo of the engine partially dismantled on the workbench. I can’t split the casings until I find a puller to take the ignition off. The cams are on the left and right of the square oily opening on the barrel; the followers are on the ends of the forks on the black thing on the workbench, bottom left.

It could be a lot worse: something could have seized whilst it was running, causing considerably more damage. Finding reasonably priced spare parts seems to be a bit of a challenge and I haven’t figured out what to do with the power valves yet. The YZ has some life left in it yet and will live again!

by Richard at August 26, 2013 06:51 PM

August 17, 2013

Richard Purdie

My Centennial Rally Experience

A while ago I volunteered to help out with marshalling at the Centennial Rally, which is up in and around the north this weekend. On Saturday they were going into Kielder/Wark forest, so this morning it was an early start. I met up with some fellow TRF members and together we made our way out to Gilsland and up past Spadeadam. I volunteered to take the fuel for the group into the forest in the discovery. So far so good.

On the way in, I hit a deep pothole and stotted the bike off the back window of the discovery. On the one hand it has marked the glass, on the other it didn’t smash it, how I don’t know. I made a mental note to take it easy, even as crazy forest-worker-come-marshals appeared at this point, towing trailers with quads on, flying around and asking if I’d left the hand brake on.

So we found where we needed to be, I unloaded, kitted up and found the others. We were to be on course in the special test: two roving posts at the start/end and a static one in the middle. I took the first stint on the static and was there for a couple of hours; I had been promised some relief and it did come. I then roved around the end section a bit. There was a corner the bikes seemed to be having trouble with, where many were going straight on into the grass and a nice deep ditch, which I helped pull several out of.

The day had started off dry, turned into a drizzle and by now was a near constant light rain. The rain eventually ran into my boots since I’d put the trousers into them to try and avoid ripping them into any more shreds on the YZ kickstart. I gave up on the goggles and just let my glasses get wet.

At one point I was offered a trip down a “hard” short cut; I suspect I made a bit of a fool of myself since I was going slowly, coming off and then was too worn out to continue. At this point someone else got the bike out of the rut I was stuck in, making it look easy. They did check if it was a 125 or a 250 to know how much throttle to give it :) I’m sure I could have done that myself, had I been able to get enough oxygen into my blood stream to think clearly. Anyone who is honest about their first off road experiences on bikes will know the vicious circle of coming off, tiring yourself out getting things back on track, coming off again because you’re tired, and so on. It’s been a while since I suffered quite like that. I couldn’t suppress a chuckle when the two I was following both came off themselves.

By this point, the YZ was making worrying engine noises; there is something wrong, and the next suspect is the power valve mechanism since the engine doesn’t seem to be coming into the raw power it should (not that it mattered on a day like this!). I also found I’d run out of back brake pads.

Being rather wet by this point, it was good when we figured out the course was closed and we helped demark it. It was then a case of loading the bike onto the discovery, finding some dry clothes and heading home. I went back out via Spadeadam so I could drop the fuel back to the others.

Driving on the forest fire roads on the way out, I wondered if something wasn’t quite right. I’ve had this feeling on previous occasions driving the discovery off road and this time told myself to ignore it; every little noise makes me jumpy. I could see the bike was fine in the mirrors and the car seemed level and all that. The stability control light did briefly light up on one occasion while accelerating, which seemed odd.

As soon as I reached tarmac, I knew something was very wrong. Having stopped and walked around I noticed a not very happy looking tyre which was clearly written off. No problem, reverse into the layby to get off the road and I’ll have to change it for the spare.

Reversing into the layby totally destroyed anything that remained of the tyre. Ok, no use crying over spilt milk. I calmly flicked through the owner’s handbook to the section on changing a tyre. Firstly, I have to say that whoever wrote it should get a different job, preferably after being made to change a tyre in the pouring rain. It’s full of things like explaining how to jack the car, then mentioning that before jacking you should loosen the wheel nuts. The main reason I was reading it was to figure out where the tools were, how to release the spare wheel and what to do with the self-levelling air suspension.

Obviously the jack and tools were in a compartment in the rear of the car, with piles of wet bike gear filling the boot and the boot door inoperative due to the bike on the rack on the back. There was also 85L of petrol in the way amongst tools and all kinds of other junk. I also noted with dismay that to lower the spare wheel you need to access a winch under the rear two pop-up seats, which had all this stuff on top of them. I would also note it was raining; it would be, right!

To cut a long story short, I did manage to move things around, jack the car up, put the spare on, stow the shredded wheel and continue home. The self-levelling suspension did quite an amazing job of hiding the flat. I bought the discovery to use and have a bit of adventure with, so I guess I’m getting that and these things happen. Nobody can accuse me of not using it as it was designed, or call it a Chelsea Tractor! :)

I am now totally worn out; I think I’ll need to take it easy for a few weekends and I’ve some work in the garage to do now. I haven’t dared closely inspect the rim yet. If I’m lucky it might be ok, we’ll see tomorrow. Marshalling tomorrow? I think not.

by Richard at August 17, 2013 10:12 PM

August 16, 2013

Michael Wood

Q Light controller 64bit deb

I’ve rolled a 64-bit deb for Q Light controller. I “fixed” some missing includes/distcheck-type issues to make it build; the modified source is here.

by Michael Wood at August 16, 2013 02:12 AM

August 07, 2013

Richard Purdie

The 675 for a change

On Sunday, with Cadwell coming up, it made sense to check the 675 still works and scrub in its new tyres a little. Scotland seemed the logical choice of direction. I’d started heading for Morpeth/Rothbury and then found myself in the middle of a cycle event which seemed to be heading to Bellingham. I’d not have liked to be cycling up some of those hills. The roads seemed otherwise quiet.

Going past Kielder, the fuel warning light came on; I’d evidently incorrectly ‘remembered’ filling the bike last time I used it :( . This is about the worst place to run out of fuel as there isn’t any for many miles, particularly on a Sunday. I was closer to Scotland at this point so continued to head for the border, it being touch and go whether I’d reach anywhere selling fuel. I remembered someone telling me the old garage at Kielder had reopened so there was a small chance that would be open, and I’d be passing it anyway.

The garage at Kielder looked deserted and shut, but the signs said open and the fuel pump looked odd. I’m pleased I stopped and checked, as it’s now an automated self-service one and I could get fuel there, which solved my worries about running out.

It was then into Scotland and onto roads which were mostly single track with passing places, ending up in Hawick which was the planned refuelling stop. Since I wasn’t in a hurry, I then looked around some roads over to Langholm and Newcastleton, using some further minor single track roads going from the Scottish Borders into Dumfries and Galloway and back, before heading back to Kielder in my native Northumberland and then home via Chollerford. The weather was mostly good with odd rain showers; the only one I really got wet in was near Langholm (hence the dark photo), other than that there were just wet roads as evidence of them.

It was nice to use the 675 again, I do love that bike although I’m not used to the riding position any more, which was comparatively painful, particularly after the couple of hundred miles I covered on this trip. The 675 behaved well and the new tyres seem good, although it has a few squeaks, for example in the air intake mechanism, and will benefit from a good service.

by Richard at August 07, 2013 07:31 AM

August 05, 2013

Richard Purdie

Fatigue

I’ve had some extended vacation time recently and shortly I’m probably going to get questions about how I enjoyed it. It would be tempting to nod and smile and quickly change the subject, since the answer I’d truthfully give is hard to understand and not what people would want to hear or expect: I’ve actually found it rather like a form of torture.

It’s an open secret that for whatever reason I seem to suffer from some kind of fatigue. There is no known medical explanation for it, the catch-all “we don’t know” of chronic fatigue syndrome (CFS) being the “diagnosis” once you rule out everything else. So what does that mean in practice? Imagine having a finite store of energy which replenishes at a fixed rate. As long as I use it carefully, at a rate approximating the replenishment rate, I’m perfectly fine. If I do something strenuous, I need to go easier for a while to allow the store to replenish; for example with the motorcycle trail riding, there will be a mild price to pay (say, approximating flu-like symptoms the following day). The real problems start if you use up a large amount of the store. I’ve experimented to varying degrees and near-involuntary collapse followed by a week of feeling like you were badly beaten in a boxing ring is a possible outcome. I’ve carefully improved my general health and fitness over the past few years in the hope it would help. Sadly the size of the store or rate of replenishment doesn’t seem to change, even if I’ve noticed significant other improvements in my general fitness.

A colleague recently posted about being unable to do nothing and I had to smile since I share this “problem”. Combine this with the fatigue and you can see where this is going. There are a ton of things I want to do yet I know that if I try and do them, there will be a price to pay. The availability of extra time puts temptation in place and, to be honest, I’ve totally overdone the activities and physically feel like a wreck now, yet I haven’t used the time as fully as I’d have liked either.

So if you ask me if I enjoyed my vacation and I laugh you might better understand why. That isn’t to say I haven’t done some things I’ve wanted to do for a while or enjoyed. I also appreciate things could be much worse too!

by Richard at August 05, 2013 11:30 AM

August 02, 2013

Richard Purdie

Historic Karting at Rowrah

For five years my kart (CR250 powered) has been buried behind mountains of “stuff” at the back of two different garages, quietly dreaming of once again driving on open tarmac. Partly this has been a time issue, partly it’s due to not being able to drive it on any of the local circuits after they deemed gearbox karts too dangerous on them.

The kart is “only” 19 years old and is water cooled so is frowned upon in historic kart circles, however I was invited to attend their annual meeting over at Rowrah last weekend. The circuit is my favourite kart circuit since it’s somewhere unexpected (a national park) and is picturesque, nestled in the bottom of an old quarry. Since I was last there (it must have been years) they have built a new clubhouse, replacing the corrugated iron shack of a canteen I remember, and generally improved the facilities.

Prior to the event I’d cleaned out the carb and fuel lines, found slick tyres for it, filled it with coolant and was pleased to find it started up on the rope without any real issue. My Dad was also there with two karts: one a Bellotti with an air-cooled ‘red rocket’ CR250 on it, the predecessor to my kart’s engine, and the other, the Cougar, a kart from 1979 which had its main components manufactured then but was only recently brazed up on the jig, which was dug out from under a compost heap. This was the Cougar’s first outing after several late nights last week finishing putting it together.

After arriving on the Friday night and meeting some people I’ve not seen since the School’s Karting Association (SKANE) days and a good night’s sleep on the top of a hillside, the Saturday had beautiful weather. The sessions were alternating between class ones and class fours, 20 minutes each. That is ancient terminology for 100cc direct drives (ick) and then anything with a gearbox (the proper karts).

Basically my kart performed wonderfully given its condition; the only issue was that it was geared for long circuit (26:30) and I was only using the first 2.5 gears. There were some modern 125cc karts there which were thrashing me on acceleration. Initially I decided to ignore this but I appear to have some kind of competitive spirit, as I ended up taking the sprockets off and changing to 26:36, chosen from the available sprockets and chain lengths. This gave me 4 gears and made a massive difference: I could now act as a road block and keep up with the modern 125s. I’m not sure I could have overtaken one, I’d have needed better gearing again for that, but I was happy enough.

I played in various sessions, torn between pushing the kart and not wanting to break it, particularly the engine. I soon realised that the back bumper had cracked around the radiator mounting on one side, an age-old problem it’s suffered with since forever, however I decided to ignore that. I was less able to ignore the squealing the kart was now making under braking. A quick check showed the rear pads were not looking healthy; it turned out one had partially disintegrated:

Thankfully, I’d taken a set with me having realised the pads were low, and even had the cordless drill to make the oval hole on the new pads round so they’d fit.

I tried Dad’s red rocket and it was fun; the engine makes much more sense on that chassis than some of the others we’d had on it 10 years ago in the SKANE days (2.5 YZ125s reduced to shrapnel in the end). We then turned attention to the Cougar. The engine fired up no problem, although pushing it up the hill to start it was hard work as the brakes were new and binding a bit. Dad took it out onto the track and it threw the chain. Hmm. We replaced that and tried again. As I was push starting it, I had a hand on the engine and as it fired and set off, I noticed the engine move 4″ sideways. My brain registered that it shouldn’t do that, ever. I signalled for Dad to stop and we found the engine mounting posts had sheared from the chassis, the engine literally now able to fall off. Game over for the weekend, but it can and will be fixed and the weekend was always meant as a shakedown for it.

The memories of how to drive a kart came back; I only spun once over the weekend and kept the engine running. When chasing 125s, I did run wide off the chicanes onto grass on a few occasions and, despite effectively rallying the kart, kept it on full throttle and didn’t lose ground. I suspect I’m channelling some of its former rally driver owner there (whose name is still on the front bubble).

At this point I was bruised and battered from bouncing around in the seat, a seat bolt finding a particular connection with my ribs, which were now visibly bruised along with my left shoulder blade. I had quickly resorted to driving with a towel wrapped around my waist but even that didn’t stop things. Dad took plenty of photos and I also swapped roles and took some photos of him in the kart for a change, venturing out onto the circuit in the high-vis with a camera, much to the bemusement of people (it’s usually Dad doing this).

It rained heavily overnight and part of the Sunday morning; I wasn’t optimistic the track would dry but it did. I hadn’t slept too well due to the bruised ribs and shoulder. I did get out a bit more on the track and was pleased with the kart holding its own, apart from against a particularly quick twin 250. One of the things which I regret from the SKANE karting days is that there weren’t many photos despite hundreds of hours in the various karts. I now at least have some photos of me in the 250 thanks to Dad, it’s only taken 20 years to get them!

by Richard at August 02, 2013 11:23 AM

Robert Bradford

GUADEC: Wayland talks today

The order of the Wayland talks is going to be flipped compared to the printed schedule. This means my introductory talk on Wayland will be before the panel discussion, which should give valuable background for the subsequent discussions. Hopefully see you there!

by Rob Bradford at August 02, 2013 09:01 AM

Richard Purdie

More wheels than normal

I’ve been wanting to try this since I bought the Discovery. Last Friday, on my way elsewhere with a full load of karting stuff in the back, I took it over Long Cross since I was passing. This is a steep rocky climb with no tarmac surface, just rocks. I decided not to stop for photos on the steepest/most technical bits and the sun was in the wrong direction, but you get the idea from these. I was impressed with the way it handled things as it never missed a beat. Stopping and starting again wasn’t a problem anywhere and it slowly but surely crawled its way up and over everything.

One thing which didn’t impress me is that when I took it out of low range and the rock crawl mode at the top of the hill, it also decided to drop to normal suspension height itself without prompting. This led to the vehicle grounding out, which was annoying; the towbar/bike rack mount took the brunt of it.

I think the weirdest feeling of all was after this, feeling bumps and twists through the steering, driving back onto tarmac and resuming 60mph cruising of twisty roads up and over Hartside and into the lake district. It feels so at home on both.

The size does make things interesting on some of the narrow roads over in the lake district but it also has its moments where it shines. For the trip back there had been heavy rain which had washed large gravel onto the A686, but this wasn’t a problem. Hitting a few inches of water flowing over the road at speed was also interesting; it’s the first time I’ve felt it thinking about aquaplaning, but the main issue was its tendency to seemingly remove all the water from the road and put it on the windscreen, making seeing where you’re going trickier. All in all I’m quite enjoying it!

by Richard at August 02, 2013 08:54 AM

July 30, 2013

Robert Bradford

The Waylanders are coming

This GUADEC there will be a couple of sessions on Friday afternoon from 2pm about Wayland. I’ll be giving a presentation with a brief introduction to what Wayland is, what new features we’ve worked on in the last cycle, as well as what’s planned for the next one. As this is GUADEC I’ll of course be covering how we’re doing with getting Wayland integrated into GNOME. There will also be a Wayland panel discussion where you can ask your tricky questions of myself, Owen Taylor, Robert Bragg and Kristian Høgsberg – to get things started I’ve got some already prepared!

And if that’s not enough Wayland for you, on Monday we’ll be BoF’ing between 10am and 2pm in room A218. It would be great to see you there.

by Rob Bradford at July 30, 2013 03:16 PM

July 22, 2013

Richard Purdie

Trail Riding in the Yorkshire Dales

Whilst I know some of the roads and places from road motorcycling and camping, I’d never been trail riding in the Yorkshire Dales, so when the offer came up I was happy to accept. We met up on the A1 and made our way to Leyburn which was to be our starting point.

I was with a group of people I’ve not really ridden with before, often just passing them out on trails, so it was a nice change and they seemed like a friendly enough bunch! :)

I had no idea what kind of terrain to expect and it turned out to be quite rocky, kind of like the lake district but quite different too. This meant learning a new kind of surface which is always interesting.

I thought I might have gotten away with my rather worn front tyre; in hindsight, I should have changed it as it caused problems with a lack of grip and confidence. The YZ’s gearing was also suboptimal for that kind of terrain, as I couldn’t go slowly enough yet have the engine behaving comfortably.

Whilst there had been rain overnight and there was plenty of cloud cover, it was dry and the trails were hence very dusty. At least the cloud cover kept the temperature reasonable.

The trails themselves were rather nice for a change, with an interesting variety of different types. Unfortunately one of the group started having problems with a rear tyre puncture. Attempts were made to pump it up but it became clear this couldn’t be sustained. Shortly after that, after being stopped, my bike started rattling. It was hard to figure out where it was coming from but it became clear it was from the engine, which is never good.

It seemed to be coming from the kickstart area but inside the casing so we did take the clutch cover off to see if anything untoward could be spotted but there wasn’t anything visible. I therefore made the decision just to ride it and see what happened. Through several more trails it became clear that the rattle only happened when the clutch lever was engaged. Clearly this meant I should just not use the clutch so I switched to clutchless/crash gear changes. Unfortunately the gearing made use of the clutch unavoidable on some uphill sections but on the plus side, the noise didn’t seem to get any worse.

At the lunch and fuel stop in Hawes they patched the problematic rear tyre and I noticed my brake light lens had been smashed and gone missing at some point. We set off for the return journey with the warning that an “interesting” uphill section was coming up and a comment about stopping for photos. For the newbie who has never ridden the trail, this is always a good sign.

Initially it didn’t seem so bad but it did indeed have an interesting section. I decided to try and save the clutch but ended up stalling; I turned around to see the person following then falling off in sympathy (sorry)! Once I’d decided to (ab)use the clutch and got going again, I made it up the rest of the steep section without incident, other than stalling it again in relief having made it up the worst bit!

Some of the trails are quite a decent length, some of them being old Roman roads, and the Yorkshire Dales scenery is spectacular as ever. Sadly I don’t have many photos as I seemed to be going slower than others for whatever reason and it wouldn’t have been fair to stop the group to get the camera out.

Sadly the patched tube didn’t hold up and we ended up stopping again, putting a spare front tube in the rear wheel to replace it. There appeared to be a trials event going on over that section of trail. Changing the tube over seemed easier this time, perhaps as the tyre had loosened up already.

With the various stops we were a little behind schedule but we did decide to put the final loop of trails in, and I’m pleased we did since I think these were the most enjoyable of the day for me. The surface was a different type of stone and for some reason I and the bike were a lot happier on it. There were a couple of small fords and some massive eroded tracks. I’ve never seen a road eroded like that before the “unsuitable for motor vehicles” sign; it did get even worse after the sign too, although not enough to trouble an appropriate motorcycle, it just made things more interesting. It was then back to Leyburn, load the bike onto the disco and time to head home.

I’m grateful to the group who took me out, I hope I didn’t hold you up too much and thoroughly enjoyed going somewhere new, and being with some different people too!

by Richard at July 22, 2013 02:24 PM

Lake District Trails

The Lake District is a place I haven’t been as often as I’d like recently. I’ve spent school holidays camping there, sailed on its lakes, raced karts in one of its quarries and ridden motorcycles over its passes. I’ve done comparatively little trail riding there though, so when the opportunity came up I decided to go for it.

It meant an early start so I loaded the bike onto the bike rack the night before. We met up near Tebay after an uneventful trip on the A69 and M6 which the Discovery coped with nicely. Parking wasn’t a problem since I’d have been complaining to trading standards if it couldn’t get itself back off a grass verge!

There were four of us; we met up and the first shock of the day was one of the bikes’ tank ranges, at 45 miles, which is worryingly low. We set off and the first trail of the day was Breast High road. There is an interesting slimy ford near the start which caused some fun and is good for catching riders who haven’t quite woken up yet; we all did make it through without an early bath.

There is a photo of my only other attempt at Breast High road where you can just see the underside of my bike and I’m lying on the other side of it; I’d hoped to do better today. The road is covered in comparatively large rocks which are hard to ride over since they’re big enough to deflect the bike. I passed one of the group who’d fallen off but was ok; shortly after that I managed to hit a rock which bounced the front wheel off the edge of the track. I did the sensible thing and stood off the bike whilst it plunged over the edge and onto its side, ending up lying there teetering on the edge.

Given some time to think through the recovery all would have been ok, however the bike was leaking fuel rapidly and the fuel tap was under the bike so I couldn’t get to it to shut it off. I therefore made the decision to lift the bike and it started off over the edge again; the front brake wouldn’t hold it. I decided to let it find its own way down the 10ft drop, at which point it was still on its side leaking fuel :/. This time I could lift it, roll it into some handy rocks to stop it and start to think about how to get back onto the trail. I gave the fallen rider I’d passed a thumbs up as he went past at this point, it being clear I’d had an issue given the bike was pointing the wrong way and I was off the track. Once I’d caught my breath, I was able to get back onto the track, continue up past where I fell off last time (another rider was off there today) and on up to the top.

The trip down the other side was less eventful and there wasn’t that much water in the ford on the other side. We covered a few more trails and then stopped to discuss where the turning for the next trail might be. We were navigating with a paper-based route but I was tracking it on the GPS and it was interesting comparing the two to figure out where we were, since my GPS had no useful basemap of the area. We were in one of two possible places and, as it turns out, we picked the wrong one so ended up with the mandatory U-turn. I also noticed one of the bikes was chucking out a lot of oily smoke, to the point it looked like a two-stroke, yet I knew it was a four. At the next stop, I mentioned this to the rider.

We set off again and found the lane we were looking for, however half the group was missing. We turned back and found the smoking bike now wouldn’t start, seemingly with a lack of electrical charge. We tried bump starting it but we couldn’t find a problem, or get it going, so we towed it to a junction with decent signposts and they called their recovery service. Then we were three.

We made it to the first garage at 47 miles, so we never did get to test the 45 mile tank range since that bike had broken down first. After a quick food/drink stop, we continued on looping around the southern lakes and then northwards towards Coniston Water. We stopped and chatted to some other trail riders but didn’t see much in the way of walkers or other road users, presumably the baking hot weather had put them off.

One interesting moment of the day was on a nice single track tarmac road which weaved both horizontally and vertically. I had to admit I’d underestimated how much it did so and ended up feeling like the bike was rather light on traction with a trajectory projection heading towards the grass verge. I remember thinking that as vehicles go, this was not a bad one to have that issue on and that I’d aim for the tarmac/grass join which I targeted correctly and continued without incident. I’m reliably told by the person following that “rather light” was in fact airborne, oops.

We then headed up through Grizedale and over to the Langdale area where we did see a few more people and a chain of 4×4s. The route ended up cut a little short since it was now after 6pm and we needed to get back, so we looped around and then back over Breast High Road. I made a better job of getting back over it than I had that morning (or it’s easier going the other way). This is the first time I’d done it the other way, since we’d “shortcut” our way back on the previous Lake District trip using the M6 instead of going back over it.

For the return trip, I took the Discovery, complete with bike on the rack, along the A686 over Hartside. I have to admit I also thoroughly enjoyed doing it. For a vehicle as heavy as it is, you can’t often feel the weight, with the engine power, brakes and power steering hiding it, and it has beautiful handling for something of its size. You do however notice it going down hills since it picks up speed like crazy. Despite some spirited driving, enjoying the road, the bike rack held the bike solidly; I was quite impressed.

The A686 has some hairpin bends and it was amusing to note that with the weight of the bike on the back, putting the power down on hard lock uphill on the hairpins did get the front end slipping. You could feel the electronics waking up and taking note :) . I’ve been noticing that sliding in a 4 wheel drive is quite unlike anything else I’ve ever driven, since it can do all-wheel powered drifting and the stability control system seems either unable to detect it, unable to do anything about it, or probably both.

All in all I thoroughly enjoyed the day although it was very hard going (in all senses) and I ached for days afterwards. Thanks to Phil for leading!

by Richard at July 22, 2013 02:24 PM

July 16, 2013

Robert Bragg

Rig 1 - A UI designer & engine


For anyone interested in graphics and in the design of user interfaces (UIs) I hope I can invite you to take a look at our release of Rig 1 today. Rig 1 is the start of a new design tool & UI engine that I've been working on recently with Neil Roberts and Damien Lespiau as a means to break from the status quo for developing interfaces, enabling more creative visuals and taking better advantage of our modern system hardware.

In particular we are looking at designing UIs for consumer devices with an initial focus on native interfaces (device shells and applications), but also with an eye towards WebGL support in the future too.

With Rig we want to bring interactivity into the UI design process to improve creativity and explore the possibilities of GPUs beyond the traditional PDF drawing model we have become so accustomed to. We want to see a design workflow that isn't constrained by the static nature of tools such as Photoshop or misled by offline post-processing tools such as After Effects. We think designers and engineers should be able to collaborate with a common technology like Rig.

Let's start with a screenshot!


This gives you a snapshot of how the interface looks today. For Rig 1 our focus has been on bootstrapping an integrated UI design tool & rendering engine which can showcase a few graphics techniques not traditionally used in user interfaces and allows basic scene and animation authoring.

If you're wondering what mud, water, some trees and a sun have to do with creating fancy UIs: firstly I should own up to drawing them, so sorry about that, but it's a sneak peek at the assets for a simple "Fox, Goose and Corn" puzzle, in the style of a pop-up book, that we are looking to create for Rig 2.

I'd like to highlight a few things about the interface:

It all starts with assets:

On the left of the UI you see a list of assets with a text entry to search based on tags and names. Assets might be regular images, they could be 3D meshes, or they could be special ancillary images such as alpha masks or normal maps (used for a technique called bump-mapping to give a surface a perturbed look under dynamic lighting).

Assets don't have to be things created by artists though, they might also be things like scripts or data sources in later versions of Rig.

The basic idea is that assets are the building blocks for any interface and so that's why we start here. Click an asset and it's added to the scene; assets can sometimes even be combined together to make more complex things, which I'll talk more about later.

Direct manipulation:

In the center is the main editing area where you see the bounds of the device currently being targeted and can directly manipulate scenes for the UI being designed. We think this ability to directly manipulate a design from within the UI engine itself is going to be a cornerstone for enabling greater creativity since there is no ambiguity about what's possible when you're working directly with the technology that's going to run the UI when deployed to a device.

These are the current controls:
  • Middle mouse button to rotate the whole scene
  • Shift + middle mouse to pan the scene
  • '+' and '-' to zoom in and out
  • Left click and drag to move an object (object should not already be selected)
  • Left click and drag a selected object to rotate
  • Ctrl-Z and Ctrl-Y for Undo and Redo respectively
  • Ctrl-S to Save the current document



In-line previews...

Without leaving the designer it's possible to toggle on and off the clutter of the editing tools, such as the grid overlay, and also toggle fullscreen effects such as the depth-of-field effect shown here.

Currently this is done by pressing the 'P' key.

Editing properties:

Properties are another cornerstone for Rig but first I should introduce what Entities and Components are.

A scene in Rig is comprised of primitives called entities which basically just represent a 3D transformation relative to a parent. What really makes entities interesting are components which attach features to entities. Components can represent anything really but examples currently include geometry, material, camera and lighting state.

When you click an asset and that's added to the scene, what actually happens is that we create a new entity and also a new component that refers to the clicked asset which is attached to the entity. The kind of component created depends on the kind of asset you click. If you click an asset with an existing entity selected, that lets you build up a collection of components attached to a single entity.

Each component attached to an entity comes with a set of properties. The properties of the currently selected entity and those of all attached components are displayed on the right hand side of the interface. The effect of manipulating these properties can immediately be seen in the main editing area.
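
To make the model concrete, here is a minimal sketch in C of how an entity/component arrangement of the kind described might look; this is purely illustrative, not Rig's actual API or data layout:

// illustrative only: an entity is a transform relative to its parent
// plus a list of attached components; each component carries the
// properties shown in the inspector
#include <glib.h>

typedef enum {
  COMPONENT_GEOMETRY,
  COMPONENT_MATERIAL,
  COMPONENT_CAMERA,
  COMPONENT_LIGHT
} ComponentType;

typedef struct {
  ComponentType type;
  GHashTable *properties;   // property name -> value
} Component;

typedef struct {
  float position[3];        // 3D transformation relative to the parent
  float rotation[4];        // quaternion
  GList *components;        // Component* attached to this entity
} Entity;

// clicking an asset: create an entity and attach a component whose
// kind depends on the kind of asset clicked
static Entity *
entity_new_from_asset (ComponentType asset_kind)
{
  Entity *entity = g_new0 (Entity, 1);
  Component *component = g_new0 (Component, 1);

  component->type = asset_kind;
  component->properties = g_hash_table_new (g_str_hash, g_str_equal);
  entity->components = g_list_append (entity->components, component);

  return entity;
}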

The little red record button that you can see next to some of the properties is used to add that property into the current timeline for animating...

Timelines:

Once you've built up a scene of Entities then authoring animations is pretty simple. Clicking the red record button of a property says that you want to change the property's value over time and a marker will pop up in the timeline view at the bottom. Left clicking and dragging on the timeline lets you scrub forwards and backwards in time. If you scrub forwards in time and then change a recorded property then a new marker pops up for that property at the current time. Left clicking markers lets you select and move markers. Property values are then automatically interpolated (tweened) for timeline offsets that lie in between specific markers.
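
The tweening itself reduces to very little; here is an illustrative sketch (not Rig's code) of linearly interpolating a recorded property between the two markers that bracket the current offset:

// illustrative sketch of keyframe tweening, not Rig's implementation
typedef struct {
  float t;       // timeline offset of the marker
  float value;   // property value recorded at that offset
} Marker;

// interpolate between the two markers that bracket offset 't'
static float
tween (const Marker *a, const Marker *b, float t)
{
  if (t <= a->t)
    return a->value;
  if (t >= b->t)
    return b->value;

  float f = (t - a->t) / (b->t - a->t);   // 0..1 between the markers
  return a->value + f * (b->value - a->value);
}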

These are the current timeline controls:
  • Left click lets you scrub the current timeline position
  • Left clicking markers lets you select a marker; Shift + Left click lets you select multiple markers
  • Left clicking and dragging lets you move selected markers left and right
  • Delete lets you remove markers
  • Ctrl-Z and Ctrl-Y also let you undo and redo timeline changes

Effects

We started with a fairly conservative set of effects for Rig 1, opting for effects that are well understood and widely used by game developers. This reduced some of the initial development risk for us but there is a chance that our choices will give the impression we're simply trying to make UIs that look like console games, which isn't the intention.

Modern GPUs are extremely flexible pieces of hardware that enable an unimaginably broad range of visual effects, but they are also pretty complex. If you really want to get anything done with a GPU at a high level you quickly have to lay down some constraints, and make some trade-offs.

The effects we started with have a long history in game development and so we know they work well together. These effects emphasize building photorealistic scenes but there are lots of non-photorealistic effects and generative art techniques we are also interested in supporting within Rig. Since these are more specialized we didn't feel they should be our first focus while bootstrapping the tool itself.

Lighting

I'm sure you can guess what this effect enables, but here's a video anyway that shows a 3D model loaded in Rig and how its colour changes as I rotate it or if I change the lighting direction:


Although I don't envisage using glaringly 3D models for core user interface elements I think there could be some scope for more subtle use of 3D models in live backgrounds for instance.

Shadow mapping

Shadow mapping is a widely used technique for calculating real-time shadows that basically involves rendering the scene from the point of view of the light to find out what objects are directly exposed to the light. That image is then referenced when rendering the objects normally to determine what parts of the object are in shadow so the computed colours can be darkened.

Shadows can be used to clarify how objects are arranged spatially relative to one another and we think there's potential for user interfaces to use depth cues perhaps to help define focus, relevance or the age of visible items but also for purely aesthetic purposes.
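
Reduced to its core, the test is a single depth comparison per point. The following CPU-side sketch is illustrative only (the real thing runs per fragment on the GPU) and includes the usual bias term to avoid self-shadowing artifacts:

// illustrative shadow-map lookup: 'depth' holds the scene rendered
// from the light's point of view; a point is shadowed if something
// was closer to the light along the same ray
typedef struct {
  const float *depth;   // width * height depth values from the light
  int width, height;
} ShadowMap;

// (x, y) is the point projected into the light's image plane;
// light_depth is the point's distance from the light
static int
in_shadow (const ShadowMap *map, int x, int y, float light_depth)
{
  const float bias = 0.005f;   // avoids self-shadowing ("shadow acne")

  if (x < 0 || y < 0 || x >= map->width || y >= map->height)
    return 0;                  // outside the shadow map: treat as lit

  float nearest = map->depth[y * map->width + x];
  return (light_depth - bias) > nearest;
}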

Bump mapping

This is a widely used technique in game engines that lets you efficiently emboss a surface with bumps and troughs to make it more visually interesting under dynamic lighting. For an example use case in UIs, if we consider where we use simple silhouette emblems in UIs today, such as status icons, you might be able to imagine introducing a subtle form of lighting over the UI (maybe influenced by touch input or just the time of day) and imagine the look of light moving over those shapes.

Rig provides a convenient, standalone tool that can take an image and output a bump map, or take a bump map to output a normal map. These examples show an original icon, followed by its bump map and then its normal map:


Note: There are sometimes better ways to calculate normal maps for specific use cases but this tool at least gives us one general purpose technique as a starting point. An artist might, for example, at least want to hand tune the bump map before generating a normal map.
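
For the curious, the standard way to derive a normal map from a bump (height) map is central differences over the height field. A hedged sketch follows; the actual tool may differ in details:

// illustrative bump-map -> normal-map conversion using central
// differences; 'height' is a w*h greyscale bump map in [0,1] and the
// resulting normals are packed into RGB bytes
#include <math.h>

static void
bump_to_normal (const float *height, unsigned char *rgb,
                int w, int h, float strength)
{
  for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
      {
        // clamp sampling to the image edges
        int xl = x > 0 ? x - 1 : x, xr = x < w - 1 ? x + 1 : x;
        int yu = y > 0 ? y - 1 : y, yd = y < h - 1 ? y + 1 : y;

        // slope of the height field in x and y
        float dx = (height[y * w + xr] - height[y * w + xl]) * strength;
        float dy = (height[yd * w + x] - height[yu * w + x]) * strength;

        // normal = normalize (-dx, -dy, 1), remapped from [-1,1] to [0,255]
        float len = sqrtf (dx * dx + dy * dy + 1.0f);
        unsigned char *p = rgb + (y * w + x) * 3;
        p[0] = (unsigned char) ((-dx / len * 0.5f + 0.5f) * 255.0f);
        p[1] = (unsigned char) ((-dy / len * 0.5f + 0.5f) * 255.0f);
        p[2] = (unsigned char) ((1.0f / len * 0.5f + 0.5f) * 255.0f);
      }
}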

This video shows the gist of what this effect enables, though for a more realistic use case I think it would deserve a more hand-tuned bump map, and more suitable texture mapping.


Alpha masks

Rig provides a way to associate an alpha mask with an entity that can be used to cut shapes out of an image, and since the threshold value (used to decide what level of alpha should act as a mask) can be animated, that means you can also design the masks with animations in mind. For example if you have a gradient mask like this:

If we animate the threshold between 0 and 1 we will see a diagonal swipe in/out effect applied to the primary texture.
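
Per pixel the test is tiny; something like this sketch (illustrative, in practice it runs in a fragment shader):

// illustrative alpha-mask threshold test: hide the pixel wherever the
// mask's alpha falls below the animated threshold
static float
masked_alpha (float texture_alpha, float mask_alpha, float threshold)
{
  return (mask_alpha < threshold) ? 0.0f : texture_alpha;
}

With a gradient mask, sweeping the threshold from 0 to 1 reveals or hides pixels in the order of their mask values, which is exactly the swipe described above.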

This video shows a simple example of animating the threshold with two different mask assets:


Depth of field

Rig implements a basic depth of field effect that emulates how camera lenses can be made to bring a subject into sharp focus while leaving the foreground and background looking soft and out of focus.

This example video alludes to using the effect for moving focus through a list of items that extends into the depths of the screen.
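
The driving quantity is a per-pixel "circle of confusion" that grows with distance from the focal plane; here is a hedged sketch of that calculation (not Rig's implementation):

// illustrative circle-of-confusion calculation for depth of field:
// pixels far from the focal plane get a larger blur radius
static float
circle_of_confusion (float depth, float focal_depth,
                     float focus_range, float max_blur)
{
  float d = (depth - focal_depth) / focus_range;   // signed distance
  if (d < 0.0f)
    d = -d;
  return d > max_blur ? max_blur : d;              // clamped blur radius
}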

Status

At this point Rig can be used for some kinds of visual animation prototyping but it isn't yet enough to create a standalone application since we can't yet incorporate application logic or data.

Our priority now is to get Rig to the point where it can be used end-to-end to design and run UIs as quickly as possible. As such we're defining our next milestones in terms of enabling specific kinds of UIs so we have concrete benchmarks to focus on.

Our next technical aim is to support application logic, basic input and the ability to interactively bind properties with expressions. As a milestone benchmark we plan to create a Fox, Goose and Corn puzzle, chosen due to its simple logic requirements and no need to fetch and process external data.

The technical aim after that is to support data sources, such as contact lists or message notifications as assets where we can interactively describe how to visualize that data. The benchmark for this work will be a Twitter feed application.

I'm particularly looking forward to getting our ideas for data integration working since the approach we've come up with should allow much greater creativity for things like showing a list of contacts or notifications while simultaneously also being naturally efficient by only focusing on what's visible to the user.

Summary

So hopefully if you read this far you are interested in what we're creating with Rig. We're hoping to make Rig appeal to both Designers and Engineers who are looking to do something a bit more interesting with their interfaces. I'd like to invite the brave to go and check out the code here, and I hope the rest of you will follow our progress and feel free to leave comments and questions below.

by Robert Bragg (noreply@blogger.com) at July 16, 2013 12:05 PM

July 15, 2013

Robert Bradford

Wayland & Weston 1.2.0 is out

The latest release of the Wayland protocol and support library along with the Weston compositor is now out. For the GNOME community this release is particularly interesting:

  • It is the first one to advertise a stable API for the implementation of compositors (libwayland-server) – which will prove useful with the porting of gnome-shell & mutter to Wayland (a minimal server skeleton is sketched after this list)
  • Two new protocol enhancements have been staged for inclusion: subsurfaces from Pekka Paalanen which will be the basis for implementing Clutter-GTK on Wayland and support for input methods from Jan Arne Petersen. These are not yet in the core of the Wayland protocol but will be moved there when the API has been proven.
  • HiDPI support – Alexander Larsson implemented this for Wayland and GTK+ too
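
To give a feel for the now-stable server side, here is a hedged skeleton of the libwayland-server entry points: just a display and its event loop. A real compositor would also register globals (wl_compositor, wl_shm, a shell, input devices and so on) before accepting clients:

// minimal sketch of the stable libwayland-server entry points; not a
// functional compositor, just the lifecycle of a wl_display
#include <stdio.h>
#include <wayland-server.h>

int
main (void)
{
  struct wl_display *display = wl_display_create ();
  if (display == NULL)
    return 1;

  // NULL binds the default socket name (e.g. wayland-0)
  if (wl_display_add_socket (display, NULL) != 0)
    {
      wl_display_destroy (display);
      return 1;
    }

  printf ("listening for Wayland clients\n");
  wl_display_run (display);      // dispatch events until asked to stop
  wl_display_destroy (display);

  return 0;
}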

At GUADEC I’ll be speaking about the current state of the Wayland project and plans going forward. If you have a particular topic or question you’d like me to cover please let me know in the comments.

by Rob Bradford at July 15, 2013 01:06 PM

Robert Bragg

Rig Status Update

I've been continuing my work on the Rig project I introduced back in September, as well as helping add Wayland support to GnomeShell, and was feeling bad that I hadn't made time to post about the progress of either project, so I wanted to give a quick status update for Rig...

I think the easiest way to get a quick idea of how Rig has been shaping up is this overview video I made back in May that goes over the initial UI layout and features:

The main thing that's been changing UI-wise since I made that video is that the bottom area is evolving beyond just timeline management into an area for "controllers" that must handle more than simple key-frame based animations.

Controllers will support several methods of controlling properties, where key-frame animations would be one way, but other methods would be physics simulation, expressions that relate properties together and all manner of other high level behaviours. As an example of a behaviour, Chris Cummins, one of the interns I've been working with, is experimenting with a Boids-based flocking simulation which might offer us a fun way to introduce emergent, nature-inspired aesthetics to the backdrop of a device.
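
For a flavour of what such a behaviour boils down to, here is a compact, illustrative Boids step in C. This is not Chris's actual code, and a real version would limit each rule to a neighbourhood radius:

// illustrative 2D Boids update: separation, alignment and cohesion
// blended with small hand-tuned gains; assumes n >= 2
#include <stddef.h>

typedef struct { float x, y; } Vec2;
typedef struct { Vec2 pos, vel; } Boid;

static void
boids_step (Boid *b, size_t n, float dt)
{
  for (size_t i = 0; i < n; i++)
    {
      Vec2 sep = {0, 0}, avg_vel = {0, 0}, centre = {0, 0};

      for (size_t j = 0; j < n; j++)
        {
          if (j == i)
            continue;
          sep.x += b[i].pos.x - b[j].pos.x;  // separation: push apart
          sep.y += b[i].pos.y - b[j].pos.y;
          avg_vel.x += b[j].vel.x;           // alignment: match velocity
          avg_vel.y += b[j].vel.y;
          centre.x += b[j].pos.x;            // cohesion: seek the centre
          centre.y += b[j].pos.y;
        }

      float inv = 1.0f / (float) (n - 1);
      b[i].vel.x += 0.001f * sep.x
                  + 0.05f * (avg_vel.x * inv - b[i].vel.x)
                  + 0.01f * (centre.x * inv - b[i].pos.x);
      b[i].vel.y += 0.001f * sep.y
                  + 0.05f * (avg_vel.y * inv - b[i].vel.y)
                  + 0.01f * (centre.y * inv - b[i].pos.y);

      b[i].pos.x += b[i].vel.x * dt;
      b[i].pos.y += b[i].vel.y * dt;
    }
}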

Probably the biggest architectural change in Rig is that it's now designed to be connected directly to the device that you are designing for, to enable immediate feedback about performance, responsiveness and quality on real target hardware. We've added a networking layer using Avahi and protocol buffers to discover devices and to synchronize UI changes made in the designer with the device.
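
As a rough illustration of the discovery half, a device could announce itself over mDNS with Avahi along these lines. The service type "_rig._tcp", the service name and the port are assumptions made for the sketch, not Rig's actual protocol details:

// hedged sketch of announcing a device for discovery with Avahi;
// the service type, name and port below are illustrative assumptions
#include <avahi-client/client.h>
#include <avahi-client/publish.h>
#include <avahi-common/simple-watch.h>

static void
client_cb (AvahiClient *client, AvahiClientState state, void *userdata)
{
  // a real client would watch for AVAHI_CLIENT_FAILURE here
}

static void
group_cb (AvahiEntryGroup *group, AvahiEntryGroupState state, void *userdata)
{
  // ...and handle service name collisions here
}

int
main (void)
{
  AvahiSimplePoll *poll = avahi_simple_poll_new ();
  int error;
  AvahiClient *client = avahi_client_new (avahi_simple_poll_get (poll),
                                          0, client_cb, NULL, &error);
  if (client == NULL)
    return 1;

  AvahiEntryGroup *group = avahi_entry_group_new (client, group_cb, NULL);
  avahi_entry_group_add_service (group,
                                 AVAHI_IF_UNSPEC, AVAHI_PROTO_UNSPEC, 0,
                                 "Example device", // assumed service name
                                 "_rig._tcp",      // assumed service type
                                 NULL, NULL,
                                 12345,            // assumed port
                                 NULL);
  avahi_entry_group_commit (group);

  avahi_simple_poll_loop (poll);   // keep announcing until killed
  return 0;
}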

Rig is aiming to help optimize the workflow of UI development and it seems that one of the biggest inefficiencies today is that interaction and visual designers often use tools that are completely unconstrained by the technologies and devices that will eventually be used to realize their ideas.

Going further, the plan is to directly incorporate remote profiling visualization capabilities into Rig so that we can also allow device metrics to influence the design stages as early as possible instead of only using them as a diagnostic tool. UIs need to work within the limits of the hardware they run on otherwise the experience suffers. If we don't move to a situation where real metrics can influence the design early we either have to continue being ultra conservative with our UIs or we risk big problems being discovered really late in the development process that can either force us back to the drawing board or leave us scrambling to fix the technology under pressure.

To keep this update relatively short here's a quick run-through of the work that's been going on:

  • UI design work
    • Thanks to Mikael Metthey for his help creating mock ups for Rig, clearly a marked improvement over the very first visuals we used:


  • Device connectivity - as mentioned above.
  • Neil Roberts has worked on basic OSX support.
  • Neil also added binary update support since we'd like to aim for a browser-like development model of continuously rolling out small features, so once Rig is installed it will automatically evolve, getting better over time.
  • Bump mapping support for 3D models, for detailed lighting effects over complex models.
  • A pointillism effect by Plamena Manolova as a fun example of a generative art algorithm that can be handled efficiently on the GPU.
  • Default cylindrical texture mapping of models that don't have their own texture coordinates.
  • Plamena is currently implementing an algorithm for tiling images across an arbitrary mesh.
  • Plamena has added initial support for real-time fur rendering, another visual style that clearly diverges from what's achievable with the traditional PostScript rendering model.
  • Chris has been working on a particle engine.
  • Chris has also been working on a Boids simulation engine to emulate flocking behaviours. The inspiration for this basically came from an exhibition made by Universal Everything: (Full disclosure - I work for Intel, although the point here isn't the advertising, but just the idea of bringing more natural forms into user interfaces.)
  • We've made some early progress experimenting with WebGL support.
    • The low level graphics layer of Rig now supports WebGL but we have some further integration work to do still in the higher level engine code before we can really test the viability of this idea.
  • Drag & Drop, Copy & Paste - work in progress. Not having this is really getting in the way of lots of other important interaction and feature work that we need to move onto next.

Next week I'm off to Siggraph where we'll be publishing a whitepaper explaining why we feel there is a big opportunity to really raise the bar for how we manage UI development and we'll also have a short video introducing the Rig project.

I'll also be at Guadec in August and I'd love to chat with anyone interested in Rig.

I'll try not to wait so long before posting my next update here.

by Robert Bragg (noreply@blogger.com) at July 15, 2013 11:23 AM

July 13, 2013

Chris Lord

Getting healthy

I’ve never really considered myself an unhealthy person. I exercise quite regularly and keep up with a reasonable amount of active hobbies (climbing, squash, tennis). That’s not really lapsed much, except for the time the London Mozilla office wasn’t ready and I worked at home – I think I climbed less during that period. Apparently though, that isn’t enough… After EdgeConf, I noticed in the recording of the session I participated in that I was looking a bit more plump than the mental image I had of myself. I weighed myself, and came to the shocking realisation that I was almost 14 stone (89kg). This put me well into the ‘overweight’ category, and was at least a stone heavier than I thought I was.

I’d long been considering changing my diet. I found Paul Rouget’s post particularly inspiring, and discussing diet with colleagues at various work-weeks had put ideas in my head. You could say that I was somewhat of a diet sceptic; I’d always thought that exercise was the key to maintaining a particular weight, especially cardiovascular exercise, and that with an active lifestyle you could get away with eating what you like. I’ve discovered that, for the most part, this was just plain wrong.

Before I go into the details of what I’ve done over the past 5 months, let me present some data:

by Chris Lord at July 13, 2013 08:58 AM

July 07, 2013

Richard Purdie

Alnwick and back

It’s been a while since I’ve ridden the northern lanes from here. Even though there were just two of us interested/available to do it, I decided to go ahead since the weather was lovely for some river crossings.

We met up and set off, with the run up to Alnwick being slightly more tarmac biased, stopping for fuel near Morpeth. It’s always interesting to see how rivers and river crossings change over time: what was a 1m-deep raging torrent I drowned the bike in last time I went through was a silted-up few inches this time!

We kept a fairly leisurely pace but that didn’t stop me sliding out in a rut and falling off, the first time I’ve done that for a while :/. I kept the bike running and felt ok at the time, but I appear to have hit the top of my pelvis with the elbow armour and have a lovely bruise there today.

One of the lanes had notices about police action against motor vehicle use so we didn’t use it and will need to look into what the rights are there. Near Alnwick we came across the first deep crossing which did look rather washed out. We ended up going slightly downstream to cross where it was a much more reasonable depth.

After a fuel and lunch stop in Alnwick we headed back southwards covering a lot more lanes including a few more interesting fords. I think this part of the trip really made the day for me.

The only downside was that time was getting on and many fuel places were now shut. I thought I should make it back to the start point on the fuel in the tank. There was a minor panic when the bike spluttered as if out of fuel, but that turned out to be a kinked breather hose. A while later it needed to go onto reserve for real though.

Waiting at the bridge to cross over to the start point I glanced in the tank and fuel was conspicuous by its absence. I knew the garage at the start point was now shut too, so I headed straight for the other nearby garage that should still be open. As I waited in traffic to make the right turn into the garage, the bike spluttered completely out of fuel. I knocked it into neutral and pushed it the 5m to the fuel pump. Never have I had such a close run on fuel!

All that remained was to ride back home but there were still a couple of interesting twists left. I decided to ride through Newcastle, something I’ve never done on the trail bikes, taking a route past the football stadium. I’m still a little unclear about what happened but I think someone who had likely had a bit too much to drink decided my hand signal for a turn meant I wanted a hug and therefore ran at the moving bike with his arms outstretched. It was that or a rugby tackle. Thankfully I managed to avoid him, how I’m not entirely sure.

The final part of the route home was the coast road where I kept the bike at comparatively high speed/revs for several miles. Pulling up and stopping in the queue for the roundabout, I was looking at my shadow and noticed a strong swirling haze at the back of the bike. This caused me to turn around to see copious amounts of smoke coming from the exhaust as if something was on fire, which in all likelihood it was. I think the oil in the exhaust was probably burning, however the engine sounded fine so I continued without unduly worrying about it.

It was an enjoyable day out, thoroughly worn out now mind!

by Richard at July 07, 2013 03:15 PM

June 21, 2013

Emmanuele Bassi

The King is Dead

I guess a lot of you, kind readers, are pretty well-acquainted with the current idiomatic way to write a GObject type. it’s the usual boilerplate, plus or minus a bunch of macros:

// header
typedef struct _MyObject MyObject;
typedef struct _MyObjectPrivate MyObjectPrivate;
typedef struct _MyObjectClass MyObjectClass;

struct _MyObject {
  GObject parent_instance;
  MyObjectPrivate *priv;
};

struct _MyObjectClass {
  GObjectClass parent_class;
};

GType my_object_get_type (void);

// source
struct _MyObjectPrivate
{
  int foo;
};

G_DEFINE_TYPE (MyObject, my_object, G_TYPE_OBJECT)

static void
my_object_class_init (MyObjectClass *klass)
{
  g_type_class_add_private (klass, sizeof (MyObjectPrivate));
}

static void
my_object_init (MyObject *self)
{
  self->priv = G_TYPE_INSTANCE_GET_PRIVATE (self,
                                            my_object_get_type (),
                                            MyObjectPrivate);
  self->priv->foo = 42;
}

boring stuff that everyone had to remember [1]. the last big change in the way people write GObject happened 10 years ago, and it was the addition of per-instance private data. it seems to me like a good way to celebrate that occasion to change this stuff all over again. ;-)

at the latest GTK+ hackfest, Alex and Ryan had a very evil, and very clever idea to solve a problem in how the per-instance private data is laid out in memory. before that, the layout was:

[[[[GTypeInstance] GObject] TypeA] TypeB] [TypeAPrivate] [TypeBPrivate]

as you can see, the offset of the private data for each type changes depending at which point in the class hierarchy initialization we are, and can only be determined once the whole class hierarchy has been initialized. this makes retrieving the pointer of the private data a pretty hard problem; one way to solve it is storing the private pointer when we initialize the instance, and we spare ourselves from type checks and traversals. the main problem is that, in order to get to the private data faster, we need to rely on a specific layout of the instance structure, something that is not really nice if we want to have generic accessors to private data [2]. for that, it would be really cool if we could only have offsets to pass through to G_STRUCT_MEMBER().

well, it turns out that if you’re doing memory allocations for the instance you can overallocate a bit, and return a pointer in the middle of the memory you allocated. you can actually allocate the whole private data in a decent layout, and only deal with offsets safely — after all, the type information will store all the offsets for safe access. so, here’s the new layout in memory of a GObject [3]:

[TypeBPrivate] [TypeAPrivate] [[[[GTypeInstance] GObject] TypeA] TypeB]

that’s neat, isn’t it? now all private data can be accessed simply through offsets, and accessing it should be just as fast as a private pointer.
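
to make that concrete, here’s roughly the shape of the accessor this scheme enables (a sketch of the idea, not the literal code GLib generates):

// sketch: each type stores a (negative) offset computed at class init
// time, and private data access becomes a single pointer adjustment --
// no lookups, no hierarchy traversal
static gint my_object_private_offset;

static inline MyObjectPrivate *
my_object_get_instance_private (MyObject *self)
{
  return G_STRUCT_MEMBER_P (self, my_object_private_offset);
}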

I can already see people using Valgrind preparing torches and pitchforks — but fear not, my fellow developers: GLib now detects if you’re running under Valgrind, and it will communicate with it [4] about this new memory layout, as well as keeping a pointer to the beginning of the allocated region, so that you won’t get false positives in your report.

this was the state at the end of the hackfest. on top of that, I decided to contribute a bunch of “syntactic sugar” [5] to cut down the amount of lines and things to remember [6], as well as providing a good base towards making GProperty work better, and with fewer headaches.

so, here’s how you create a new GObject type in the Brave New World:

// header
typedef struct _MyObject MyObject;
typedef struct _MyObjectClass MyObjectClass;

struct _MyObject {
  GObject parent_instance;
};

struct _MyObjectClass {
  GObjectClass parent_class;
};

GType my_object_get_type (void);

// source
typedef struct {
  int foo;
} MyObjectPrivate;

G_DEFINE_TYPE_WITH_PRIVATE (MyObject, my_object, G_TYPE_OBJECT)

static void
my_object_class_init (MyObjectClass *klass)
{
}

static void
my_object_init (MyObject *self)
{
  MyObjectPrivate *priv = my_object_get_instance_private (self);

  priv->foo = 42;
}

the my_object_get_instance_private() function is generated by G_DEFINE_TYPE, so you can forget about G_TYPE_INSTANCE_GET_PRIVATE and all that jazz. also, no more g_type_class_add_private() — one less thing to remember is one less thing to screw up. you can still store the private pointer in your instance structure — and if you care about ABI compatibility, you really should — but for new code it’s not necessary. you can finally hide the private data structure inside your source code, instead of having the typedef in your header, sitting there, taunting you. finally, everything is just as fast as it was, as well as backward compatible.

stay tuned for the next blog post, because it’ll finally be about GProperty…

  1. or commit to a script to autogenerate it
  2. say, for instance, if we’re re-implementing the way properties are handled in GObject
  3. well, of any GTypeInstance, really
  4. yes, you can do that, and it’s an impressive amount of crack, luckily for us hidden behind the valgrind.h header provided by the Valgrind folks
  5. i.e. pre-processor macros
  6. the port of GIO to the new macros lost around 900 lines

by ebassi at June 21, 2013 01:40 AM

June 19, 2013

Ross Burton

Guacamayo 0.2

Last week we (well, mostly Tomas if I’m honest) made our first release of Guacamayo, a reference Linux distribution for media playback devices in a networked world. The core technologies are the usual suspects: Yocto for the distribution, GStreamer and PulseAudio, Rygel and Media Explorer.

The first release caters for the “connected speakers” use-case. On boot it connects automatically to the network over ethernet (wi-fi coming soon) and uses Rygel to expose a UPnP/DLNA MediaRenderer, switching automatically to hot-plugged USB speakers if they are plugged in. I’ve been happily using it on a laptop with my Raumfeld system, and Tomas has tested it on a BeagleBoard with a UPnP control point app on his Android phone.

So, what’s next?  I’m working on a web-based configuration tool for the headless systems, and Tomas is integrating Media Explorer so we’ll have something you can plug into a TV.  Tomas is testing this on a Zotac ZBox at the moment, and any week now I’ll have a RaspberryPi to experiment with.

If you’re interested and want to know more, we have a small wiki at GitHub and you can often find us in #guacamayo on FreeNode.

by Ross Burton at June 19, 2013 02:17 PM

June 10, 2013

Richard Purdie

Marshalling at the K2 Rally

They were looking for volunteers to marshal at the K2 rally around Kielder forest at the weekend. Having enjoyed the enduro I helped with last year, I decided I’d give it a go.

Even the trip up to Bellingham that morning turned out to have its interesting moments. The A68 is a very straight Roman road with lots of steep dips with blind summits. I had the bike on a bike rack on the back of the vehicle and naively set the cruise control. I soon found that it would plummet down the dips like a stone, then come to the other side, realise the incline needed throttle and that the speed was rapidly decreasing so it would open the engine right up. It would do this up to the summit of the hill with an effect like applying rocket boosters. The only way to describe the result is “big air”. Thankfully the air suspension does appear to be able to cope with this, much more gently and gracefully than I expected.

I’d not actually been able to get much information other than what time to turn up, even where was a little bit vague. The rally was two 80 mile loops of mostly forest fire road. The day was looking to get rather warm and the fire roads would be extremely dusty.

Firstly, I found out there were two refuelling points and the fuel trailer was leaving ASAP so I put 5L on each and hoped for the best since I had no idea where I’d be. At this point I didn’t even know what the organiser looked like. I figured out who he was and was assigned to the group manning the first section. We set off and followed the course around getting a feel for where we were supposed to be although the section with two way traffic where it loops back on itself confused us.

We shortcut some of the section and reached the first refuelling point. We decided to split up with some going to do traffic management on the two way section and some of us figuring out our shortcut to loop back to the start and the enduro loop we were responsible for.

I found the GPS invaluable at this point. We had a paper map of the route and comparing the trace on screen with that gave some idea of where we were, even if the scale seemed incorrect and the route had changed a bit from the one marked.

The final part of the course has some horrible bends covered in large rocks which I really hated. Our enduro loop, though, was good fun, starting off with brash, a drop and a hill climb, then a nice green lane forest track.

I did have one interesting moment where both wheels lost traction and I was travelling along the track semi sideways doing a lowside in slow motion. I remember thinking that it felt like I could just tweak it back into line however there was a significant risk of turning it into a highside. I gently tried anyway and much to my surprise the front wheel started working properly again, even if the rear was fishtailing like crazy which was a much less serious issue. I think that stands as the single best recovery I’ve ever made on a motorcycle, I just wish I had it on camera.

So we made it back to the start and waited around a bit for the bulk of the riders to head through, then set off ourselves. We soon came to a fallen rider who was in a bad way. The fractured wrist was clear, the back pain worrying, but they were conscious and alert, which was a big plus. We had a few marshals around and some riders had summoned emergency services, having been unable to get the satellite phone link to the organisers to work properly. I went back to the start and escorted a paramedic to the scene, who decided to call for air support. At this point there were too many people around the scene so I moved further down the course. The marshals there were going to flush the first section so I skipped to the first refuel and refuelled, watching the air ambulance come in from a distance. We realised no marshal had been over the section to the first test so agreed I’d change plans and head over that way.

I rode for what seemed like hours without seeing another person, eventually reaching the special test. Great, except there was no safe route back to the start that I could see, so I did the only thing I could and continued onwards. I stopped and checked a couple of people dealing with punctures were ok. I also found someone who’d had an off and damaged their wrist; I advised just following the course since there was no shortcut out of that section of forest that I knew of. Eventually I came to the second refuel point and refuelled. I waited there for a bit and was able to direct the injured rider on a bit of a shortcut back.

The course was due to close at 2pm and I was supposed to be helping close it so I didn’t look at the enduro loops and headed back to the start, meeting up with another start marshal who’d had to find fuel point two since his fuel was there. We made it there slightly late but were the first there. Two others left to close the course, I was left manning the tape. I spent the next four hours there since I’d been told that I was not to move on account of anything, or let any rider onto the course. I was pleased when the closing marshals finally appeared.

Sunday started an hour earlier and for us, had the added fun of demarking. The riders would only do the first section on the first lap, then it was getting closed and demarked. The course would close at 1pm and then we’d sweep through and close/demark the remaining parts to the start of the special test.

The day started with ensuring the first section was clear and all the correct gates were open. It had rained overnight which meant there was more grip and less dust. There was a nasty bog in the first section which was interesting. I picked a line and took some reasonable speed into it. The bike threatened to stop half way but I really wasn’t keen on that, so I opened the throttle and put the weight on the back. Much to my surprise the front lifted to head height and the YZ did what it does best and powered onwards. I was in a deep rut at this point and using the front wheel for steering is overrated anyway :) . I was mildly concerned that there might be something in front but there wasn’t and I was able to land safely on the other side. Another of the marshals hadn’t fared quite as well but we all got through in the end.

Some of us went to figure out who the last man was so we could start the demarking; I went on with another to check the rest of the course to the first test. With riders setting off behind us, we needed to get a move on, but I was finally starting to remember how to ride fire roads (taking the inside line on corners instead of sticking to the left as on the road, for example). We were able to backtrack from the special test a little and shortcut back to the start, where we refuelled.

When we got back, demarking had started but the gates needed locking. No problem, I can do that so I set off. This meant I was off by myself and reached that bog. This time around I tried a different line and managed to ground the bike out. I therefore spent quite a while extracting it and ended up rather muddy. Moments like that where the bike is on its side and your foot is trapped under the front wheel are always good fun. With the gates shut we went to the start for course closing. On the way there I found someone struggling with a stuck throttle after an off. I pointed out the cable was jammed by the barkbuster and helped them free it. A short while later I also loaned out an allen key to stop the guard catching their brake lever.

With the course closed, we set off, demarking as we went. I got the awkward signs to do since I had the sidestand. Obviously getting off the bike for each one is a pain so you attempt to ride the bike through some interesting obstacles. Occasionally you’d ride into what looked like a nice safe area, only to find, for example, it was full of 1ft sized lumps of rock covered in moss. All good fun though. We demarked all the way to the special test, then passed on the baton to the team there. We’d passed a broken down rider so we went back with a tow rope for them and took some short cuts to pull them out of the forest. I then manned the tape for half an hour before calling things done and headed back to camp and then home.

The bike is going to need some TLC now: the rear tyre was worn at the start, and now it’s totally knackered. My number plate is also disintegrating and the sidestand spring gave out when I was unloading the bike. My hands now have blisters on the blisters and muscles I’ve never felt pains from before are aching. All things considered it was a good weekend :) .

by Richard at June 10, 2013 10:41 AM

June 03, 2013

Damien Lespiau

Working on more than one line with sed’s ‘N’ command

Yesterday I was asked to help solve a small sed problem. Consider this file (don’t look too closely at the engineering of the defined elements):

<root>
  <key>key0</key>
  <string>value0</string>
  <key>key1</key>
  <string>value1</string>
  <key>key2</key>
  <string>value2</string>
</root>

The problem was: how to change value1 to VALUE!. You can’t blindly execute a s command matching <string>.*</string>, because it would change every value line.

Sed maintains a buffer called the “pattern space” and processes commands on this buffer. From the GNU sed manual:

sed operates by performing the following cycle on each line of input: first, sed reads one line from the input stream, removes any trailing newline, and places it in the pattern space. Then commands are executed; each command can have an address associated to it: addresses are a kind of condition code, and a command is only executed if the condition is verified before the command is to be executed.

When the end of the script [(list of sed commands)] is reached, unless the -n option is in use, the contents of pattern space are printed out to the output stream, adding back the trailing newline if it was removed. Then the next cycle starts for the next input line.

So the idea is to first use a /pattern/ address to select the right <key> line, append the next line to the pattern space (with the N command) and finally run a s command on the buffer now containing both lines:

  <key>key1</key>
  <string>value1</string>

And so we end up with:

$ cat input 
<root>
  <key>key0</key>
  <string>value0</string>
  <key>key1</key>
  <string>value1</string>
  <key>key2</key>
  <string>value2</string>
</root>
$ sed -e '/<key>key1<\/key>/{N;s#<string>.*<\/string>#<string>VALUE!<\/string>#;}' < input 
<root>
  <key>key0</key>
  <string>value0</string>
  <key>key1</key>
  <string>VALUE!</string>
  <key>key2</key>
  <string>value2</string>
</root>
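
For what it’s worth, the same one-liner can be parameterised with shell variables; this is an untested sketch of mine, where key and value are just illustrative names:

$ key=key1 value='VALUE!'
$ sed -e "/<key>$key<\/key>/{N;s#<string>.*</string>#<string>$value</string>#;}" < input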

by damien at June 03, 2013 01:24 PM

May 30, 2013

Emmanuele Bassi

California One Youth and Beauty Brigade

now, that was a title of a Decemberists song that I’d have never expected to use as a blog post title

I did announce it on foundation-list, given that it impacts my position on the Board of Directors of the GNOME Foundation, and I did a sneaky tweet as well, but I guess the blog (and Planet GNOME) is still the Old Fashioned Way™ to do these things — and seeing that Cosimo beat me to a punch, it’s worth saying that I have joined Endless Mobile as well.

my last blog post about my work life was a bit depressing, I guess; I received a ton of support and encouragement from many, many people — too many to thank effectively in the narrow margins of this blog. I did take the announced month off, and I was already on my way to recovery; then I met Matt, who told me about Endless, and what they were trying to do with GNOME, and I felt the absolute need to help them as much as I could. after all, aren’t we trying to make GNOME a viable proposition for OEMs and OSVs to take and put on their own devices? I’m sure we’ll be able to start telling the community at large more details about what we want to achieve, and how.

in the meantime, I expect to see people in San Francisco a bit more often (though I’m still going to be based in London for the foreseeable future), and I’ll obviously be at GUADEC in Brno.

by ebassi at May 30, 2013 10:25 PM

May 04, 2013

Chris Lord

Writing and deploying a small Firefox OS application

For the last week I’ve been using a Geeksphone Keon as my only phone. There’s been no cheating here: I don’t have a backup Android phone and I’ve not taken to carrying around a tablet everywhere I go (though its use has increased at home slightly…) On the whole, the experience has been positive. Considering how entrenched I was in Android applications and Google services, it’s been surprisingly easy to make the switch. I would recommend anyone getting the Geeksphones to build their own OS images though; the shipped images are pretty poor.

Among the many things I missed (Spotify is number 1 in that list btw), I could have done with a countdown timer. Contrary to what the interfaces of most Android timer apps would have you believe, it’s not rocket-science to write a usable timer, so I figured this would be a decent entry-point into writing mobile web applications. For the most part, this would just be your average web-page, but I did want it to feel ‘native’, so I started looking at the new building blocks site that documents the FirefoxOS shared resources. I had elaborate plans for tabs and headers and such, but it turns out all I really needed was the button style. The site doesn’t make it hugely clear that you’ll actually need to check out the shared resources yourself; they can be found on GitHub.

Writing the app was easy, except perhaps for getting things to align vertically (for which I used the nested div/”display: table-cell; vertical-align: middle;” trick), but it was a bit harder when I wanted to use some of the new APIs. In particular, I wanted the timer to continue to work when the app is closed, and I wanted it to alert you only when you aren’t looking at it. This required use of the Alarm API, the Notifications API and the Page Visibility API.

The page visibility API was pretty self-explanatory, and I had no issues using it. I use this to know when the app is put into the background (which, handily, always happens before closing it. I think). When the page gets hidden, I use the Alarm API to set an alarm for when the current timer is due to elapse to wake up the application. I found this particularly hard to use as the documentation is very poor (though it turns out the code you need is quite short). Finally, I use the Notifications API to spawn a notification if the app isn’t visible when the timer elapses. Notifications were reasonably easy to use, but I’ve yet to figure out how to map clicking on a notification to raising my application – I don’t really know what I’m doing wrong here, any help is appreciated! Update: Thanks to Thanos Lefteris in the comments below, this now works – activating the notification will bring you back to the app.
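
To give a flavour of how these fit together, here’s a rough sketch of the wake-up logic; this is my reconstruction rather than the app’s actual code, and endTime stands in for a Date marking when the timer elapses:

document.addEventListener("visibilitychange", function () {
  if (document.hidden && endTime) {
    // Alarm API (needs the "alarms" permission in the manifest):
    // ask the system to wake the app at the given time, even if
    // it has been closed in the meantime
    var request = navigator.mozAlarms.add(endTime, "ignoreTimezone", {});
    request.onerror = function () {
      console.log("Failed to set alarm: " + this.error.name);
    };
  }
});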

The last hurdle was deploying to an actual device, as opposed to the simulator. Apparently the simulator has a deploy-to-device feature, but this wasn’t appearing for me and it would mean having to fire up my Linux VM (I have my reasons) anyway, as there are currently no Windows drivers for the Geeksphone devices available. I obviously don’t want to submit this to the Firefox marketplace yet, as I’ve barely tested it. I have my own VPS, so ideally I could just upload the app to a directory, add a meta tag in the header and try it out on the device, but unfortunately it isn’t as easy as that.

Getting it to work well as a web-page is a good first step, and to do that you’ll want to add a meta viewport tag. Getting the app to install itself from that page was easy to do, but difficult to find out about. I think the process for this is harder than it needs to be and quite poorly documented, but basically, you want this in your app:

if (navigator.mozApps) {
  // Ask the Apps API whether this app is already installed
  var request = navigator.mozApps.getSelf();
  request.onsuccess = function() {
    if (!this.result) {
      // Not installed yet: trigger installation from the manifest
      // hosted alongside the page
      request = navigator.mozApps.install(location.protocol + "//" + location.host + location.pathname + "manifest.webapp");
      request.onerror = function() {
        console.log("Install failed: " + this.error.name);
      };
    }
  };
}

And you want all paths in your manifest and appcache manifest to be absolute (you can assume the host, but you can’t have paths relative to the directory the files are in). This last part makes deployment very awkward, assuming you don’t want to have all of your app assets in the root directory of your server and you don’t want to set up vhosts for every app. You also need to make sure your server has the webapp mimetype set up. Mozilla has a great online app validation tool that can help you debug problems in this process.
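
For example, if your server happens to be Apache, the mimetype is a one-line addition to the config or an .htaccess file:

# serve Open Web App manifests with the type Firefox expects
AddType application/x-web-app-manifest+json .webapp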

Timer app screenshot

And we’re done! (Ctrl+Shift+M to toggle responsive design mode in Firefox)

Visiting the page will offer to install the app for you on a device that supports app installation (i.e. a Firefox OS device). Not bad for a night’s work! Feel free to laugh at my n00b source and tell me how terrible it is in the comments :)

by Chris Lord at May 04, 2013 08:37 AM

April 20, 2013

Emmanuele Bassi

GTK+ Hackfest 2013/Day 1 & 2

Day 1

it turns out that this week wasn’t the best possible to hold a hackfest in Boston and Cambridge; actually, it was the worst. what was supposed to be the first day passed with us hacking in various locations, mostly from home, or hotel lobbies. nevertheless, there were interesting discussions on experimental work, like a rework of the drawing and content scrolling model that Alex is working on.

Day 2

or Day 1 redux

with the city-wide lockdown revoked, we finally managed to meet up at the OLPC offices and start the discussion on Wayland, input, and compatibility; we took advantage of Kristian attending so we could ask questions about client-side decorations, client-side shadows, and Wayland compatibility. we also discussed clipboard, and drag and drop, and the improvements in the API that will be necessary when we switch to Wayland — right now, both clipboard and DnD are pretty tied to the X11 implementation and API.

after lunch, the topic moved to EggListBox and EggFlowBox: scalability, selection, row containers, CSS style propagation, and accessibility.

we also went over a whole set of issues, like positioning popups; high resolution displays; input methods; integrating the Gd widgets into GTK+, and various experimental proposals that I’m sure will be reported by their authors on Planet GNOME soon. :-) it was mostly high level discussion, to frame the problems and bring people up to speed with each problem space and potential/proposed solutions.

we’d all want to thank OLPC, and especially Walter Bender, for being gracious hosts at their office in Cambridge, even on a weekend, as well as the GNOME Foundation.

by ebassi at April 20, 2013 08:58 PM

April 19, 2013

Emmanuele Bassi

GUADEC is coming

this is a PSA: if you’re thinking about submitting a talk for GUADEC 2013 in Brno, you have a week to do so. :-)

by ebassi at April 19, 2013 03:00 PM

April 09, 2013

Thomas Wood

Sharing Settings in GNOME 3.8

One of the new features I was able to contribute to GNOME 3.8 was the sharing settings panel.

Sharing Settings

The goal of this panel is to provide the user with a way to control what is shared over the network. The sharing services are provided by various existing projects, including Vino, Rygel and gnome-user-share. If any of the services are not installed, the relevant settings are not displayed. The panel also allows the user to configure various options for the services.

Bluetooth Sharing

More details about the design of the panel are on the wiki page.

by thos at April 09, 2013 02:21 PM

March 25, 2013

Emmanuele Bassi

I’ve been asked to review the GNOME 3 Application Development Guide for Beginners; I went through the book in about half a day and wrote this somewhat short review afterwards, and published on G+; sadly, I used a limited distribution, and G+ does not allow changing that without resharing your own post. given that I wanted to push it on the blog, I took the chance to review some of the stuff I wrote, and expand it.

my initial impression of the GNOME 3 Application Development Guide for Beginners book is fairly positive: the topics covered are interesting, and the book never loses itself in them, so that beginners will not feel utterly stranded after the first three chapters, as it too often happens with “for beginners” books. I appreciated the “pop quiz” sections, as well as the small sections that recommended improvements to the example code.

obviously, writing a book enshrines a certain set of requirements and APIs, and that is problematic when there is high churn – like what happens in GNOME 3, especially in terms of development tools (libraries and applications) and overall experience. for instance, the section on Clutter (which is the one I can immediately give feedback on, given my position) still uses the deprecated “default stage”, and yet it uses the new ClutterActor easing state for animations; the default stage was deprecated at long last in Clutter 1.10, but its use was not recommended since the days of 1.0; the actor easing state API was introduced in the same release that deprecated the default stage. also, the example code published in the Clutter section does not use any of the layout managers provided by Clutter, preferring the fixed positioning of the actors, which is perfectly fine on its own; the book, though, then proceeds to mention the amount of code necessary to get something on the screen, compared to the equivalent code in GTK, that uses boxes and grids. in general, that’s an utterly fair thing to say: Clutter sits at a lower-level than GTK, and it doesn’t have complex constructs like GTK does; I’m pretty sure, though, there are better examples than a row of boxes that could have used a BoxLayout, or a FlowLayout, or a GridLayout, or a TableLayout; or better examples than using an explicit PangoFontDescription instance with a ClutterText to set a specific font name and size, instead of using the ClutterText:font-name property which wraps the whole thing for the developer’s convenience. in short: Clutter is more “raw” than GTK, but there are convenience APIs for developers.

it’s been a long time1 since I started off as a beginner in developing with (and for) GNOME, so all I can say about books for beginners is whether they are using what I write in the way I think it’s supposed to be used; as far as I’m concerned, apart from a couple of issues, this book is indeed providing a solid base for people that want to begin developing for GNOME and with GNOME.

the price is pretty accessible, compared to the cost of similar books: I’ve paid much, much more for an introductory book on Core Animation, or for Perl books; the ebook version, if you dislike the dead tree version, comes in various formats, including PDF and Kindle.

I’m not going to give votes or anything: it’d be a pointless number on an equally pointless scale; but if you’re a beginner, I think this book may be fairly interesting to you, if you want to start coding with GNOME technologies.

  1. this year is actually my 10th bug-versary

by ebassi at March 25, 2013 05:31 PM

March 18, 2013

Michael Wood

Video processing all you need is…

It’s been raining all this weekend, so what I wanted to do was see if I could accompany myself playing a piece of music by making multiple videos and slowly building it up.

I used my Canon S100 to record a bunch of videos and then imported them into kdenlive (the only viable multi-track video editor on the Linux desktop), which allowed me to cut and paste and do all the editing to make sure each track correctly accompanied the previous ones.

Because I wanted to show all 4 video tracks at once I figured I could use the videomixer GStreamer element to do this with a custom pipeline. So by toggling which video track was “muted” I exported each video in a raw format at 960×540, so that a 2×2 grid comes out at 960×2 = 1920 by 540×2 = 1080 (i.e. 1080p).

The next thing I wanted to do was clean up the audio as the little microphone in the camera picked up a fair bit of hissing (not from the audience of course), multiply that 4 times and it’s pretty bad. The UI for doing this in Audacity is a bit odd…

  1. Use the selection tool to select a part of the audio which contains only noise
  2. Go to the Effect menu and select “Remove noise”
  3. In the dialog click “Get noise profile”
  4. Deselect (so nothing is selected)
  5. Go to the Effect menu, select “Remove noise” again and click OK

Or watch this tutorial. Either way it works pretty damn well. Export to WAV.

So now we have 4 video files and an audio track let’s combine them into a webm video:


gst-launch -e  \
 	filesrc location=piano.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb !  mix. \
 	filesrc location=sing.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb ! mix. \
 	filesrc location=guitar2.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb ! mix. \
 	filesrc location=guitar1.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb ! mix. \
 	videotestsrc pattern=2 ! video/x-raw-rgb,width=1920,height=1080 ! \
        videomixer name=mix sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=02  sink_1::xpos=960 sink_2::ypos=540 \
        sink_3::ypos=540 sink_3::xpos=960 sink_4::zorder=0 ! video/x-raw-rgb,height=1080,width=1920 ! \
        ffmpegcolorspace !  vp8enc threads=4 !  webmmux name=mux ! filesink location=full-render.webm  \
 	filesrc location=audiotrack.wav ! wavparse ! audioconvert ! vorbisenc ! mux.

Those of you who know a bit of GStreamer will spot the “videotestsrc” here as the 5th video input. This is because I couldn’t see a way to set the surface size of the videomixer; it seems it was mostly designed for doing picture-in-picture, so the output is set to the largest video source. Without the videotestsrc the video would be stuck at 960×540.

I did try a videobox element but couldn’t persuade the pipeline to roll if the left/right/top/bottom properties were > 100px. I also had problems with the order of the height/width caps causing “Internal Data flow” errors in avidemux, and problems using decodebin2, which I abandoned pretty early on.

The videotestsrc introduces a second problem in that it has no EOS: it will continue streaming forever. I knew how long my video should be so I just HUP’d the process when it got to the right progress; `gst-launch -e` makes sure it sends an EOS on the pipeline on HUP, which closes the filesink properly. Obviously this is pretty lame, so maybe there is a better solution? You could use a python script to listen for the EOS signal on one of the filesrc elements and then propagate the EOS to the videotestsrc.

Anywhoo it kind of worked.

Linux is all about making it work for you?

Update: oops forgot to add the audio level right.. time to re-add the audio..


gst-launch -e filesrc location='full-render.webm' ! matroskademux ! \
 video/x-vp8 ! webmmux name=mux ! filesink location=final.webm \
 filesrc location='audiotrack.wav' ! wavparse ! audioconvert ! vorbisenc ! mux.

by Michael Wood at March 18, 2013 01:52 PM

February 28, 2013

Ross Burton

Mutually Exclusive PulseAudio streams

The question of mutually exclusive streams in PulseAudio came to mind earlier, and thanks to Arun and Jens in #gupnp I discovered that PulseAudio supports them already. The use-case here is a system where there are many ways of playing music, but instead of mixing them PA should pause the playing stream when a new one starts.

Configuring this with PulseAudio is trivial, using the module-role-cork module:

$ pactl load-module module-role-cork trigger_roles=music cork_roles=music

This means that when a new stream with the “music” role starts, PulseAudio will “cork” (PulseAudio-speak for pause) all streams that have the “music” role. Handily this module won’t pause the trigger stream, so this implements exclusive playback.
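
If you want the module loaded every time the daemon starts rather than just for this session, the usual approach is a load-module line in your PulseAudio configuration (e.g. /etc/pulse/default.pa, or a per-user ~/.pulse/default.pa):

load-module module-role-cork trigger_roles=music cork_roles=music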

Testing is simple with gst-launch:

$ PULSE_PROP='media.role=music' gst-launch-0.10 audiotestsrc ! pulsesink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstPulseSinkClock

At this point, starting another gst-launch results in this stream being paused:

Setting state to PAUSED as requested by /GstPipeline:pipeline0/GstPulseSink:pulsesink0...

Note that it won’t automatically un-cork when the newer stream disappears, but for what I want this is the desired behaviour anyway.

by Ross Burton at February 28, 2013 04:12 PM

February 21, 2013

Neil Roberts

Mewer

If you sometimes find yourself confused about whether you should use the word ‘fewer’ or ‘less’, here is a handy table to help you:

[image: a table of “fewer/less” and their invented counterparts]

It is extremely important to pick the right word. For example if you think there are too many pedantic people in your office you should say ‘my office has the manyest pedantic people’. If you were to incorrectly say ‘my office has the most pedantic people’, your listener would immediately assume you meant that the people in your office are more pedantic than any other people. This could cause untold confusion and misunderstanding. It is therefore essential that if you hear a friend incorrectly say ‘more’ when they mean ‘manyer’ you must immediately interrupt and correct them.

February 21, 2013 01:43 AM

February 13, 2013

Emmanuele Bassi

Everything hits at once

conferences
sadly, this year I missed FOSDEM — and I saw from the blog posts on PGO that the DX hackfest was also amazing — but I had a good excuse, as I had to give a talk about Unicorns and Rainbows at linux.conf.au, in Canberra. I had the chance to explain what the plans are for the toolkit(s) used in GNOME in the coming future, as well as (obviously) enjoying one of the best free and open source software conferences, so all in all, I think that it was a good trade off. plus, I didn’t have to deal with the FOSDEM flu, apparently, so: good stuff all around. the video of the talk is already available on the LCA website1 so if you weren’t there, you can still watch it. I’d put the slides somewhere, but I’ll have to make sure that the notes are up to date first2.
life
after LCA, I took a week off in Sydney with Marta; the city was really enjoyable, and we had a great time exploring it. when I say “a week off” I don’t mean a week off of work, given that I’m currently not working: January 22nd was my last day at Mozilla. I detailed why I left, and what my current plans are, on G+, so I won’t repeat them here — the executive summary is that I burned out pretty badly before leaving Intel, and I made the mistake of not taking a clean break before starting to work for Mozilla. right now, I’m trying to get back in a productive groove, so I’m looking around for cool stuff to do, as well as fulfilling my roles in the GNOME community as best as I can.
happenings
while in Sydney, I got a notification from my VPS hosting, the kind of notification that you really don’t want to receive when you’re on holiday and on a hotel wifi connection: apparently, my WordPress installation got hacked, and it started serving malware. cue various profanities that I clearly cannot repeat on a family-friendly website. I learned some valuable lessons from the first security breach I’ve experienced in years, but the main one is definitely the old adage from the Hitchhikers Guide to the Galaxy: don’t panic3.
beers
apropos of events, there’s a new GNOME London Beers meet up scheduled to celebrate the 3.8 release — or, at least, that’s the main excuse for meeting up, drinking beer, and having pizza with the GNOMErs in (and around) London. if you are in the area on March 1st, sign up on the wiki!
  1. and, seriously: how come the GUADEC videos are always, always late or not available even when we do record them? it makes us look really bad; GUADEC teams: fix this crap. it should be a hard requirement
  2. I had my notes on the tablet, to avoid the focus back and forth when I had to advance them
  3. the other, and equally important one is: never trust a pile of PHP brain damage

by ebassi at February 13, 2013 11:10 PM

February 12, 2013

Hylke Bons

The CLI (1): Getting help

This is part one of the series “The State of the CLI”. There’s a lot to write about, but let’s start small: how helpful are command line tools and how do we acquire help?

“Help!”, I mean, “Hilfe?”

You’d think that getting help on certain commands is the most straightforward thing ever, but I’ve found that this isn’t always the case. Most commands carry help screens and documentation, but the way you access these may vary.

If you’re a bit more familiar with the CLI, you’d probably start by running the command accompanied by arguments or options that may show you the help screen. These will be different depending on who provided the command (eg. GNU, or BSD): help, --help, -help, -h, ?, and -? are all possibilities (I will discuss the different option styles in a different post). All I’m trying to do is figure out how this new command works, and this is a bit like nagging someone who’s asking for help because they didn’t say "please" correctly in the right language. This is mostly due to Linux distributions shipping tools from many different projects. Overall though, --help seems to be the safest bet.

A less proactive approach would be to pretend to be clueless and pass a bogus argument or option to a command. If you’re lucky, you’ll be shown some information on how to get to the help screen.

hbons@slowpoke:~$ ip --foo
Option "-foo" is unknown, try "ip -help".
hbons@slowpoke:~$ mkdir --foo
mkdir: unrecognized option '--foo'
Try `mkdir --help' for more information.
hbons@slowpoke:~$

We didn’t get what we wanted in either case here, but at least we’re being pointed in the right direction.

Lastly, and probably most obviously, you can launch the command without any arguments at all. Some commands will give you the help this way. There is a downside to this kind of investigation, however. Running a command blindly can have unintended consequences, and may actually be dangerous due to inconsistency between different commands (“will this trigger an action, or show me the help?”).

Handholding (or not)

The access to useful help differs greatly between commands. This is what the BSD version of the ls command does:

hbons@slowpoke:~$ ls --help
ls: illegal option -- -
usage: ls [-ABCFGHLOPRSTUWabcdefghiklmnopqrstuwx1] [file ...]
hbons@slowpoke:~$

It doesn’t look particularly useful at all. It lists all the options that you can possibly enter, but there’s no indication of what any of them do, nor are there any hints about how you can find out. Even if I were more experienced with this command, it’s hard to see how a line like this could be helpful.

The git command line tools are short but helpful when you do something wrong. You’re being pointed to the help:

hbons@slowpoke:~$ git unknown
git: 'unknown' is not a git command. See 'git --help'.
hbons@slowpoke:~$

Additionally, these tools have a nice mechanism built in that tries to guess your intentions when you’ve made a typo, or when you’ve entered a command that doesn’t exist:

hbons@slowpoke:~$ git pu
git: 'pu' is not a git command. See 'git --help'.

Did you mean one of these?
    pull
    push
hbons@slowpoke:~$

Documentation

In addition to help screens, most commands have a good (sometimes overwhelming) amount of well written documentation available. It’s just a matter of how you get to it. The vast majority of commands ship with what are called “man” (manual) pages. Documentation on a command can be retrieved by running the man command with a subject (command name) as an argument. There’s also the info command by the GNU project, but it doesn’t seem as commonly used. Some commands have pointers at the end of their help screens on how to get more detailed help and documentation. Good examples of this are git and GNU’s version of ls:

hbons@slowpoke:~$ ls --help
[...]
Report ls bugs to bug-coreutils@gnu.org
GNU coreutils home page: <http://www.gnu.org/software/coreutils/>
General help using GNU software: <http://www.gnu.org/gethelp/>
For complete documentation, run: info coreutils 'ls invocation'
hbons@slowpoke:~$ git --help
[...]
See 'git help <command>' for more information on a specific command.
hbons@slowpoke:~$

Do you have problems getting help with commands, or do you have opinions or tips on how to make things better? Please let me know; I look forward to your feedback.

February 12, 2013 12:00 AM

February 11, 2013

Chris Lord

Tips for smooth scrolling web pages (EdgeConf follow-up)

I’m starting to type this up as EdgeConf draws to a close. I spoke on the performance panel, with Shane O’Sullivan, Rowan Beentje and Pavel Feldman, moderated by Matt Delaney, and tried to bring a platform perspective to the affair. I found the panel very interesting, and it reminded me how little I know about the high-level of web development. Similarly, I think it also highlighted how little consideration there usually is for the platform when developing for the web. On the whole, I think that’s a good thing (platform details shouldn’t be important, and they have a habit of changing), but a little platform knowledge can help in structuring things in a way that will be fast today, and as long as it isn’t too much of a departure from your design, it doesn’t hurt to think about it. At one point in the panel, I listed a few things that are particularly slow from a platform perspective today. While none of these were intractable problems, they may not be fixed in the near future and feedback indicated that they aren’t all common knowledge. So what follows are a few things to avoid, and a few things to do that will help make your pages scroll smoothly on both desktop and mobile. I’m going to try not to write lies, but I hope if I get anything slightly or totally wrong, that people will correct me in the comments and I can update the post accordingly :)

Avoid reflow

When I mentioned this at the conference, I prefaced it with a quick explanation of how rendering a web page works. It’s probably worth reiterating this. After network and such have happened and the DOM tree has been created, this tree gets translated into what we call the frame tree. This tree is similar to the DOM tree, but it’s structured in a way that better represents how the page will be drawn. This tree is then iterated over and the size and position of these frames are calculated. The act of calculating these positions and sizes is referred to as reflow. Once reflow is done, we translate the frame tree into a display list (other engines may skip this step, but it’s unimportant), then we draw the display list into layers. Where possible, we keep layers around and only redraw parts that have changed/newly become visible.

Really, reflow is actually quite fast, or at least it can be, but it often forces things to be redrawn (and drawing is often slow). Reflow happens when the size or position of things changes in such a way that dependent positions and sizes of elements need to be recalculated. Reflow usually isn’t something that will happen to the entire page at once, but incautious structuring of the page can result in this. There are quite a few things you can do to help avoid large reflows; set widths and heights to absolute values where possible, don’t reposition or resize things, don’t unnecessarily change the style of things. Obviously these things can’t always be avoided, but it’s worth thinking if there are other ways to achieve the result you want that don’t force reflow. If positions of things must be changed, consider using a CSS translate transform, for example – transforms don’t usually cause reflow.
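
For example (my illustration, not from the original talk), nudging an element with a transform instead of offset properties:

/* changing left/top can trigger reflow: */
/*   .badge { position: relative; left: 100px; } */

/* a translate transform usually doesn't: */
.badge {
  transform: translate(100px, 0);
}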

If you absolutely have to do something that will trigger reflow, it’s important to be careful how you access properties in JavaScript. Reflow will be delayed as long as possible, so that if multiple things happen in quick succession that would cause reflow, only a single reflow actually needs to happen. If you access a property that relies on the frame tree being up to date though, this will force reflow. Practically, it’s worth trying to batch DOM changes and style changes, and to make sure that any property reads happen outside of these blocks. Interleaving reads and writes can end up forcing multiple reflows per page-draw, and the cost of reflow can add up quickly.
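
Here's a contrived sketch of that batching advice (mine, with a made-up ".panel" selector): do all the reads, then all the writes, so at most one reflow is forced.

var panels = document.querySelectorAll(".panel");
var heights = [];
var i;

// reads first: the initial offsetHeight access may force one reflow
for (i = 0; i < panels.length; i++) {
  heights.push(panels[i].offsetHeight);
}

// then writes only: no interleaved reads, so no further forced reflows
for (i = 0; i < panels.length; i++) {
  panels[i].style.height = (heights[i] + 10) + "px";
}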

Avoid drawing

This sounds silly, but you should really only make the browser do as little drawing as is absolutely necessary. Most of the time, drawing will happen on reflow, when new content appears on the screen and when style changes. Some practical advice to avoid this would be to avoid making DOM changes near the root of the tree, avoid changing the size of things and avoid changing text (text drawing is especially slow). While repositioning doesn’t always force redrawing, you can ensure this by repositioning using CSS translate transforms instead of top/left/bottom/right style properties. Especially avoid causing redraws to happen while the user is scrolling. Browsers try their hardest to keep up the refresh rate while scrolling, but there are limits on memory bandwidth (especially on mobile), so every little helps.

Thinking of things that are slow to draw, radial gradients are very slow. This is really just a bug that we should fix, but if you must use CSS radial gradients, try not to change them, or put them in the background of elements that frequently change.

Avoid unnecessary layers

One of the reasons scrolling can be fast at all on mobile is that we reduce the page to a series of layers, and we keep redrawing on these layers down to a minimum. When we need to redraw the page, we just paste these layers that have already been drawn. While the GPU is pretty great at this, there are limits. Specifically, there is a limit to the amount of pixels that can be drawn on the screen in a certain time (fill-rate) – when you draw to the same pixel multiple times, this is called overdraw, and counts towards the fill-rate. Having lots of overlapping layers often causes lots of overdraw, and can cause a frame’s maximum fill-rate to be exceeded.

This is all well and good, but how does one avoid layers at a high level? It’s worth being vaguely aware of what causes stacking contexts to be created. While layers usually don’t exactly correspond to stacking contexts, trying to reduce stacking contexts will often end up reducing the number of resulting layers, and is a reasonable exercise. Even simpler, anything with position: fixed, background-attachment: fixed or any kind of CSS transformed element will likely end up with its own layer, and anything with its own layer will likely force a layer for anything below it and anything above it. So if it’s not necessary, avoid those if possible.

What can we do at the platform level to mitigate this? Firefox already culls areas of a layer that are made inaccessible by occluding layers (at least to some extent), but this won’t work if any of the layers end up with transforms, or aren’t opaque. We could be smarter with culling for opaque, transformed layers, and we could likely do a better job of determining when a layer is opaque. I’m pretty sure we could be smarter about the culling we already do too.

Avoid blending

Another thing that slows down drawing is blending. This is when the visual result of an operation relies on what’s already there. This requires the GPU (or CPU) to read what’s already there and perform a calculation on the result, which is of course slower than just writing directly to the buffer. Blending also doesn’t interact well with deferred rendering GPUs, which are popular on mobile.

This alone isn’t so bad, but combining it with text rendering is not so great. If you have text that isn’t on a static, opaque background, that text will be rendered twice (on desktop at least). First we render it on white, then on black, and we use those two buffers to maintain sub-pixel anti-aliasing as the background varies. This is much slower than normal, and also uses up more memory. On mobile, we store opaque layers in 16-bit colour, but translucent layers are stored in 32-bit colour, doubling the memory requirement of a non-opaque layer.

We could be smarter about this. At the very least, we could use multi-texturing and store non-opaque layers as a 16-bit colour + 8-bit alpha, cutting the memory requirement by a quarter and likely making it faster to draw. Even then, this will still be more expensive than just drawing an opaque layer, so when possible, make sure any text is on top of a static, opaque background.

Avoid overflow scrolling

The way we make scrolling fast on mobile, and I believe the way it’s fast in other browsers too, is that we render a much larger area than is visible on the screen and we do that asynchronously to the user scrolling. This works as the relationship between time and size of drawing is not linear (on the whole, the more you draw, the cheaper it is per pixel). We only do this for the content document, however (not strictly true, I think there are situations where whole-page scrollable elements that aren’t the body can take advantage of this, but it’s best not to rely on that). This means that any element that isn’t the body that is scrollable can’t take advantage of this, and will redraw synchronously with scrolling. For small, simple elements, this doesn’t tend to be a problem, but if your entire page is in an iframe that covers most or all of the viewport, scrolling performance will likely suffer.

On desktop, currently, drawing is synchronous and we don’t buffer area around the page like on mobile, so this advice doesn’t apply there. But on mobile, do your best to avoid using iframes or having elements that have overflow that aren’t the body. If you’re using overflow to achieve a two-panel layout, or something like this, consider using position:fixed and margins instead. If both panels must scroll, consider making the largest panel the body and using overflow scrolling in the smaller one.
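
To illustrate that last suggestion, here's a rough sketch with made-up class names: the large panel is the body itself, and only the small panel uses overflow scrolling.

.sidebar {
  position: fixed;
  top: 0;
  left: 0;
  bottom: 0;
  width: 200px;
  overflow-y: auto;    /* small panel: synchronous scrolling is tolerable */
}

body {
  margin-left: 200px;  /* main content scrolls as the body, asynchronously */
}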

I hope we’ll do something clever to fix this sometime, it’s been at the back of my mind for quite a while, but I don’t think scrolling on sub-elements of the page can ever really be as good as the body without considerable memory cost.

Take advantage of the platform

This post sounds all doom and gloom, but I’m purposefully highlighting what we aren’t yet good at. There are a lot of things we are good at (or reasonable, at least), and having a fast page need not necessarily be viewed as lots of things to avoid, so much as lots of things to do.

Although computing power continues to increase, the trend now is to bolt on more cores and more hardware threads, and the speed increase of individual cores tends to be more modest. This affects how we improve performance at the application level. Performance increases, more often than not, are about being smarter about when we do work, and to do things concurrently, more than just finding faster algorithms and micro-optimisation.

This relates to the asynchronous scrolling mentioned above, where we do the same amount of work, but at a more opportune time, and in a way that better takes advantage of the resources available. There are other optimisations that are similar with regards to video decoding/drawing, CSS animations/transitions and WebGL buffer swapping. A frequently occurring question at EdgeConf was whether it would be sensible to add ‘hints’, or expose more internals to web developers so that they can instrument pages to provide the best performance. On the whole, hints are a bad idea, as they expose platform details that are liable to change or be obsoleted, but I think a lot of control is already given by current standards.

On a practical level, take advantage of CSS animations and transitions instead of doing JavaScript property animation, take advantage of requestAnimationFrame instead of setTimeout, and if you find you need even more control, why not drop down to raw GL via WebGL, or use Canvas?
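
As a tiny illustration of the last point (updateAnimation is a placeholder for your own per-frame work):

function step(timestamp) {
  updateAnimation(timestamp);          // placeholder per-frame work
  window.requestAnimationFrame(step);  // schedule the next frame
}
window.requestAnimationFrame(step);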

I hope some of this is useful to someone. I’ll try to write similar posts if I find out more, or there are significant platform changes in the future. I deliberately haven’t mentioned profiling tools, as there are people far more qualified to write about them than I am. That said, there’s a wiki page about the built-in Firefox profiler, some nice documentation on Opera’s debugging tools and Chrome’s tools look really great too.

by Chris Lord at February 11, 2013 10:07 AM

February 07, 2013

Chris Lord

Firefox for Android in 2013

Lucas Rocha and I gave a talk at FOSDEM over the weekend on Firefox for Android. It went ok, I think we could have rehearsed it a bit better, but it was generally well-received and surprisingly well-attended! I’m sure Lucas will have the slides up soon too. If you were unfortunate enough not to have attended FOSDEM, and doubly unfortunate that you missed our talk (guffaw), we’ll be reiterating it with a bit more detail in the London Mozilla space on February 22nd. We’ll do our best to answer any questions you have about Firefox for Android, but also anything Mozilla-related. If you’re interested in FirefoxOS, there may be a couple of phones knocking about too. Do come along, we’re looking forward to seeing you :)

p.s. I’ll be talking on a performance panel at EdgeConf this Saturday. Though it’s fully booked, I think tickets occasionally become available again, so it might be worth keeping an eye on. There’ll be much cleverer people than me knocking about, but I’ll be doing my best to answer your platform performance related questions.

by Chris Lord at February 07, 2013 03:31 PM

January 29, 2013

Hylke Bons

State of the CLI

Unlike most computer interfaces, the Command Line Interface (or CLI) hasn’t changed much over the last 30 years. Does this mean we’re in a good place?

Pros and cons

The CLI can be helpful in a lot of cases, but it has a bit of a learning curve. It allows you to do simple canonical things, and these things can be linked together in various ways that often can’t be done with Graphical User Interfaces (or GUIs), sparing you tedious, repetitive labour.

Some applications also allow for command line interaction alongside their graphical user interface, which means they can be integrated into automated processes.

In my opinion, the CLI sits about half way between raw programming and using a graphical application to get your tasks done. There are graphical applications that can give you a lot of control as well. A good example of this is OS X’s Automator.

It may be that your product provides (or requires) some interaction on the CLI level. Now how do you go about doing this right?

GNU and BSD

Linux distributions are a mixed bag of different programs, and so these programs don’t always behave in the same predictable ways. Most common commands originate either from the GNU or BSD projects.

Superficially, BSD sticks more to the older Unix days, whilst GNU tries to come up with new ideas that make their programs a little more convenient to use.

Eric Raymond’s The Art of UNIX programming goes into CLI interface design a little bit. The GNU Coding Standards has something to say about the topic as well.

Overall, documentation and guidelines are sparse. For a developer writing software for Linux distributions, it’s not clear where to go.

Best practises

The CLI is used most often by people who are more familiar with computers: software developers, system administrators, and those who like to tinker. This doesn’t mean that the CLI has to be, or is allowed to be, a suboptimal experience on that level.

What’s noticeable is that there doesn’t seem to be much documented reasoning or motivation behind any of the CLI design decisions. Things mostly seem the way they are because of “historic” reasons or tradition.

Over the next couple of months I’m going to assess the CLI of various common software packages and write blog posts about them, to come up with a list of best practises that doesn’t conflict with the current state of utilities out there.

I’d like to explicitly mention that these will be best practises that are compatible with current conventions, not some new official “standard”: some rules that you can keep in mind, along with the reasons behind them. I don’t want to fall into the standards trap.

The usual suspects

If you administrate a server, you’ve probably used commands from the following packages:

  • GNU coreutils, a collection of basic tools, such as cat, ls, and rm
  • GnuPG, encryption tools
  • openSSH, connectivity tools for remote access
  • Apt, package management
  • Git, source code management
  • cURL, data transfer/download tool

These are some of the tools I’ll be looking at.

To wrap up: this is not about redefining the command line experience, or creating some revolutionary alternative; rather, it’s about identifying issues, streamlining the process, and coming up with some design patterns that may help you write a usable CLI application for Linux.

If you have any CLI pet peeves, know of commands that are really good/bad, or have any other tips or links you think I should read, please let me know.

January 29, 2013 12:00 AM

January 13, 2013

Damien Lespiau

A git pre-commit hook to check the year of copyright notices

Like every year, touching a source file means you also need to update the year of the copyright notice you should have at the top of the file. I always end up forgetting about them; this is where a git pre-commit hook would be ultra-useful, so I wrote one:

#!/bin/sh
#
# Check if copyright statements include the current year
#
files=`git diff --cached --name-only`
year=`date +"%Y"`

for f in $files; do
    # Skip files that are staged for deletion
    [ -f "$f" ] || continue

    # Only check files that carry a copyright notice near the top
    head -10 "$f" | grep -i copyright >/dev/null 2>&1 || continue

    if ! grep -i -e "copyright.*$year" "$f" >/dev/null 2>&1; then
        missing_copyright_files="$missing_copyright_files $f"
    fi
done

if [ -n "$missing_copyright_files" ]; then
    echo "$year is missing in the copyright notice of the following files:"
    for f in $missing_copyright_files; do
        echo "    $f"
    done
    exit 1
fi
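
To use it, save the script as .git/hooks/pre-commit in your repository and make it executable; that’s standard git hook behaviour, nothing specific to this script (the source file name below is just an example):

$ cp check-copyright.sh .git/hooks/pre-commit
$ chmod +x .git/hooks/pre-commit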

Hope this helps!

by damien at January 13, 2013 09:39 PM

December 13, 2012

Hylke Bons

SparkleShare 1.0

I’m delighted to announce the availability of SparkleShare 1.0!

What is SparkleShare?

SparkleShare is an Open Source (self hosted) file synchronisation and collaboration tool and is available for Linux distributions, Mac, and Windows.

SparkleShare creates a special folder on your computer in which projects are kept. All projects are automatically synced to their respective hosts (you can have multiple projects connected to different hosts) and to your team’s SparkleShare folders when someone adds, removes or edits a file.

The idea for SparkleShare sprouted about three years ago at the GNOME Usability Hackfest in London (for more background on this read The one where the designers ask for a pony).

SparkleShare uses the version control system Git under the hood, so people collaborating on projects can make use of existing infrastructure, and setting up a host yourself will be easy enough. Using your own host gives you more privacy and control, as well as lots of cheap storage space and higher transfer speeds.

Like every piece of software it’s not bug-free, even though it has hit 1.0. But it’s been tested for a long time now, and all known and reproducible major issues have been fixed. It works reliably, and the issue tracker is mostly filled with feature requests now.

The biggest sign that it was time for a 1.0 release was the fact that Lapo hasn’t reported brokenness for a while now. This can either mean that SparkleShare has been blessed by a unicorn or that the world will end soon. I think it’s the first.

Features

For those of you that are not (that) familiar with SparkleShare, I’ll sum up its most important features:

The SparkleShare folder

This is where all of your projects are kept. Everything in this folder will be automatically synced to the remote host(s), as well as to your other computers and everyone else connected to the same projects. Are you done with a project? Simply delete it from your SparkleShare folder.

The status icon

The status icon gives you quick access to all of your projects and shows you what’s going on regarding the synchronisation process. From here you can connect to existing remote projects and open the recent changes window.

The setup dialog

Here you can link to a remote project. SparkleShare ships with a couple of presets. You can have multiple projects syncing to different hosts at the same time. For example, I use this to sync some public projects with GitHub, some personal documents with my own private VPS, and work stuff with a host on the intranet.

Recent changes window

The recent changes window shows you everything that has recently changed and by whom.

History

The history view lets you see who has edited a particular file before, and allows you to restore deleted files or revert to a previous version.

Conflict handling

When a file has been changed by two people at the same time and causes a conflict, SparkleShare will create a copy of the conflicting file and add a timestamp. This way changes won’t get accidentally lost, and you can either choose to keep one of the files or cherry-pick the wanted changes.

Notifications

If someone makes a change to a file, a notification will pop up saying what changed and by whom.

Client side encryption

Optionally you can protect a project with a password. When you do, all files in it will be encrypted locally using AES-256-CBC before being transferred to the host. The password is only stored locally, so if someone cracks their way into your server it will be very hard (if not impossible) to get at the files’ contents. This is on top of the file transfer mechanism, which is already encrypted and secure. You can set up an encrypted project easily with Dazzle.
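
For a rough feel of what encrypting a file locally with that cipher looks like, here’s an openssl sketch (just an illustration, not SparkleShare’s actual code path; both commands prompt for the password):

$ openssl enc -aes-256-cbc -salt -in notes.txt -out notes.txt.enc
$ openssl enc -d -aes-256-cbc -in notes.txt.enc -out notes.txt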

Dazzle, the host setup script

I’ve created a script called Dazzle that helps you set up a Linux host to which you have SSH access. It installs Git, adds a user account and configures the right permissions. With it, you should be able to get up and running by executing just three simple commands.
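
From memory, those commands look roughly like this (treat this as a sketch: the exact command names and download URL may differ, so check the Dazzle README):

$ curl https://raw.github.com/hbons/Dazzle/master/dazzle.sh --output /usr/local/bin/dazzle && chmod +x /usr/local/bin/dazzle
$ dazzle setup
$ dazzle create my_project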

Plans for the future

Something that comes up a lot is the fact that Git doesn’t handle large (binary) files well. Git also stores a database of all the files, including history, on every client, causing it to use a lot of space pretty quickly. Now, this may or may not be a problem depending on your use case. Nevertheless, I want SparkleShare to be better at the “large backups of bulks of data” use case.

I’ve stumbled upon a nice little project called git-bin in some obscure corner of GitHub. It seems like a perfect match for SparkleShare. Some work needs to be done to integrate it and to make sure it works over SSH. This will be the goal for SparkleShare 2.0, which can follow pretty soon (hopefully in months rather than years).

I really hope contributors can help me out in this area. The GitHub network graph is feeling a bit lonely. Your help can make a big difference!

Some other fun things to work on may be:

  • Saving the modification times of files
  • Creating a binary Linux bundle
  • SparkleShare folder location selection
  • GNOME 3 integration
  • ...other things that you may find useful.

If you want to get started on contributing, feel free to visit the IRC channel (#sparkleshare on irc.gnome.org), where I can answer any questions you may have and give support.

Finally...

I’d like to thank everyone who has helped testing and submitted patches so far. SparkleShare wouldn’t be nearly as far as it is now without you. Cheers!

December 13, 2012 12:00 AM

November 23, 2012

Michael Wood

python pygi GUdev example

Python GI / pygi GUdev example

from gi.repository import GUdev
from gi.repository import GLib

mainloop = GLib.MainLoop()

def on_uevent(client, action, device):
    # Called for every uevent: action is e.g. "add", "remove" or "change"
    print("action " + action + " on device " + device.get_sysfs_path())

# An empty list means listen to all subsystems; otherwise pass a list of
# subsystem names, e.g. ["usb", "video4linux"]
client = GUdev.Client(subsystems=[])
# or: client = GUdev.Client.new([])

client.connect("uevent", on_uevent)

mainloop.run()

1. You can’t change the subsystems you’re listening to after construction.
2. Make sure you keep a reference to the client: even though you’ve connected to its signal, it will still be unreferenced (and stop delivering events) when it goes out of scope.
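
If you want some events to test the script with while it’s running, udevadm can synthesise them on any udev-based system (the script filename below is just for illustration):

$ python gudev-example.py &
$ sudo udevadm trigger --action=change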

by Michael Wood at November 23, 2012 04:18 PM

November 18, 2012

Ross Burton

Yocto Project Build Times

Last month our friends at Codethink were guests on FLOSS Weekly, talking about Baserock. Baserock is a new embedded build system with some interesting features/quirks (depending on your point of view) that I won’t go into. What caught my attention was the discussion about build times for various embedded build systems.

Yocto, again, if you want to do a clean build it will take days to build your system, even if you do an incremental build, even if you just do a single change and test it, that will take hours.

(source: FLOSS Weekly #230, timestamp 13:21, slightly edited for clarity)

Now “days” for a clean build and “hours” for re-building an image with a single change is quite excessive for the Yocto Project, but also quite specific. I asked Rob Taylor where he was getting these durations from, and he corrected himself on Twitter:

I’m not sure if he meant “hours” for both a full build and an incremental build, or whether by “hours” for incremental he actually meant “minutes”, but I’ll leave this for now and talk about real build times.

Now, my build machine is new but nothing special. It’s built around an Intel Core i7-3770 CPU (quad-core, 3.4GHz) with 16GB of RAM (which is overkill, but more RAM means more disk cache, which is always good), and two disks: a 250GB Western Digital Blue for /, and a 1TB Western Digital Green for /data (which is where the builds happen). This was built by PC Specialist for around £600 (the budget was $1000 without taxes) and happily sits in my home study running a nightly build without waking the kids up. People with more money stripe /data across multiple disks, use SSDs, or use 10GB tmpfs filesystems, but I had a budget to stick to.

So, let’s wipe my build directory and do another build from scratch (with sources already downloaded). As a reference image I’ll use core-image-sato, which includes an X server, GTK+, the Matchbox window manager suite and some demo applications. For completeness, this is using the 1.3 release – I expect the current master branch to be slightly faster, as there are some optimisations to the housekeeping that have landed.

$ rm -rf /data/poky-master/tmp/
$ time bitbake core-image-sato
Pseudo is not present but is required, building this first before the main build
Parsing of 817 .bb files complete (0 cached, 817 parsed). 1117 targets, 18 skipped, 0 masked, 0 errors.
...
NOTE: Tasks Summary: Attempted 5393 tasks of which 4495 didn't need to be rerun and all succeeded.

real 9m47.289s

Okay, that was a bit too fast. What happened is that I wiped my local build directory, but it’s pulling build components from the “shared state cache”, so it spent six minutes reconstructing a working tree from shared state, and then three minutes building the image itself. The shared state cache is fantastic, especially as you can share it between multiple machines. Anyway, by renaming the sstate directory it won’t be found, and then we can do a proper build from scratch.

$ rm -rf /data/poky-master/tmp/
$ mv /data/poky-master/sstate /data/poky-master/sstate-old
$ time bitbake core-image-sato
Pseudo is not present but is required, building this first before the main build
...
NOTE: Tasks Summary: Attempted 5117 tasks of which 352 didn't need to be rerun and all succeeded.

real 70m37.298s
user 326m45.417s
sys 37m13.304s

That’s a full build from scratch (with downloaded sources, we’re not benchmarking my ADSL) in just over an hour on affordable commodity hardware. As I said this isn’t some “minimal” image that boots straight to busybox, this is building a complete cross-compiling toolchain, the kernel, X.org, GTK+, GStreamer, the Matchbox window manager/panel/desktop, and finally several applications. In total, 431 source packages were built and packaged, numerous QA tests executed and flashable images generated.

My configuration is to build for Intel Atom, but a build for an ARM, MIPS, or PowerPC target would take a similar amount of time, as even what could be considered a “native” target (targeting Atom, building on an i7) doesn’t always turn out to be native: for example, carrier-grade Xeons have instructions that my i7 doesn’t have, and if you were building carrier-grade embedded software you’d want to ensure they were used.
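
For reference, the target machine and build parallelism are set in conf/local.conf. A minimal sketch (the atom-pc machine name is from the 1.3 era; machine names vary between releases, so check your BSP):

# conf/local.conf fragment
MACHINE = "atom-pc"
# Rule of thumb: set these to the number of hardware threads available
BB_NUMBER_THREADS = "8"
PARALLEL_MAKE = "-j 8"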

So, next time someone claims Yocto Project/OpenEmbedded takes “days” or even “hours” to do a build, you can denounce that as FUD and point them here!

by Ross Burton at November 18, 2012 03:57 AM

November 06, 2012

Michael Wood

dawati-user-testing tool updates

Some more updates to the dawati-user-testing Insight recorder tool

New features:

  • multiple webcam input support
  • system installable
  • required codec checker
  • encoding finished notification
  • various bug fixes

More info on latest release: http://belenpena.posterous.com/and-now-with-support-for-mobile-tests-hell-ye

project page: https://github.com/dawati/insight-recorder

P.S. We’re looking for some help making this work on Ubuntu/Unity.

by Michael Wood at November 06, 2012 02:41 PM

October 31, 2012

Neil Roberts

Rig 1

Today is the first release of Rig. This is a project that Robert Bragg and I have been working on for the past few months. I won’t try to describe it here, but instead I’ll try to entice you with a screenshot and ask you to take a look at Robert’s detailed blog post.

[screenshot of Rig]

October 31, 2012 06:39 PM

October 17, 2012

Chris Lord

Progressive Tile Rendering

So back from layout into graphics again! For the last few weeks, I’ve been working with Benoit Girard on getting progressive tile rendering finished and turned on by default in Firefox for Android. The results so far are very promising! First, a bit of background (feel free to skip to the end if you just want the results).

You may be aware that we use a multi-threaded application model for Firefox for Android. The UI runs in one thread and Gecko, which does the downloading and rendering of the page, runs in another. This is a bit of a simplification, but for all intents and purposes, that’s how it works. We do this so that we can maintain interactive performance, something of paramount importance with a touch-screen. We render a larger area than you see on the screen so that when you scroll, we can respond immediately without having to wait for Gecko to render more. We try to tell Gecko to render the most relevant area next, and we hope that it returns in time so that the appearance is seamless.

There are two problems with this as it stands, though. The first is that if the work takes too long, you’ll be staring at a blank area (well, this isn’t quite true either: we do a low-resolution render of the entire page and use that as a backing in this worst-case scenario, but that often doesn’t work quite right and is a performance issue in and of itself…). The second is that if a page is made up of many layers, or updates large parts of itself as you scroll, uploading that work to the graphics unit can take a significant amount of time. During this time, the page will appear to ‘hang’, as unfortunately you can’t upload data to the GPU and continue to use it to draw things (this isn’t true in every single case but, again, for our purposes it is).

Progressive rendering tries to spread this load by breaking the work up into several smaller tiles and processing them one by one, where appropriate. This helps us mitigate those pauses that may happen for particularly complex/animated pages. Alongside this work, we also added the ability for a render to be cancelled. This is good for the situation where a page takes so long to render that, by the time it’s finished, what it rendered is no longer useful. Currently, because a render is done all at once, if it takes too long we can waste precious cycles on irrelevant data. As well as splitting up this work and allowing it to be cancelled, we also try to do it in the most intelligent order: render the areas that the user can see that were previously blank first, and if such an area intersects with more than one tile, make sure to do it in the order that best maintains visual coherence.

A cherry on the top (which is still very much work in progress, but I hope to complete it soon) is that splitting this work up into tiles makes it easy to apply nice transitions so that the pathological cases don’t look so bad. With that said, how’s about some video evidence? Here’s an almost-Nightly (an extra patch or two that haven’t quite hit central), with the screenshot layer disabled so you can see what can happen in a pathological case:

And here’s the same code, with progressive tile rendering turned on and a work-in-progress fading patch applied.

This page is particularly slow to render due to the large radial gradient in the background (another issue which will eventually be fixed), so it highlights how this work can help. For a fast-to-render page that we have no problems with, this work doesn’t have such an obvious effect (though scrolling will still be smoother). I hope the results speak for themselves :)

by Chris Lord at October 17, 2012 03:39 PM

October 01, 2012

Ross Burton

Devil’s Pie 0.23

This may come as a shock to some, but I’ve just tagged Devil’s Pie 0.23 (tarball here).

tl;dr: don’t use this, but if you insist it now works with libwnck3

The abridged changelog:

  • Port to libwnck3 (Christian Persch)
  • Add unfullscreen action (Mathias Dahl)
  • Remove exec action (deprecated by spawn)

I probably wouldn’t ever have released this, as I’m generally not maintaining it and tend to push people towards Devil’s Pie 2 (which, funnily enough, had a 0.23 release two days ago), but Christian asked nicely and I was waiting on a Yocto build to finish.

by Ross Burton at October 01, 2012 09:57 AM