Planet Closed Fist

April 24, 2015

Chris Lord

Web Navigation Transitions

Wow, so it’s been over a year since I last blogged. Lots has happened in that time, but I suppose that’s a subject for another post. I’d like to write a bit about something I’ve been working on for the last week or so. You may have seen Google’s proposal for navigation transitions, and if not, I suggest reading the spec and watching the demonstration. This is something I’d thought about before, but never put into words. After reading Google’s proposal, I fear that it’s quite complex both to implement and to author, so this pushed me to document my idea and to implement a proof-of-concept.

I think Google’s proposal is based on Android’s Activity Transitions, and due to Android UI’s very different display model, I don’t think this maps well to the web. Just my opinion though, and I’d be interested in hearing people’s thoughts. What follows is my alternative proposal. If you like, you can just jump straight to a demo, or view the source. Note that the demo currently only works in Gecko-based browsers – this is mostly because I suck, but also because other browsers have slightly inscrutable behaviour when it comes to adding stylesheets to a document. This is likely fixable; patches are most welcome.


 Navigation Transitions specification proposal

Abstract

An API is suggested that allows transitions to be performed between page navigations, requiring only CSS. The API is intended to be flexible enough to allow animations on different pages to be performed in synchronisation, and to allow styling based on a particular transition state to be selected on without it being necessary to interject with JavaScript.

Proposed API

Navigation transitions will be specified within a specialised stylesheet. These stylesheets will be included in the document as new link rel types. Transitions can be specified for entering and exiting the document. When the document is ready to transition, these stylesheets will be applied for the specified duration, after which they will stop applying.

Example syntax:

<link rel="transition-enter" duration="0.25s" href="URI" />
<link rel="transition-exit" duration="0.25s" href="URI" />

When navigating to a new page, the current page’s ‘transition-exit‘ stylesheet will be referenced, and the new page’s ‘transition-enter‘ stylesheet will be referenced.

When navigation operates in a backwards direction, whether because the user pressed the back button in the browser chrome or because it was initiated from JavaScript via manipulation of the location or history objects, animations will be run in reverse. That is, the current page’s ‘transition-enter‘ stylesheet will be referenced and its animations run in reverse, and the old page’s ‘transition-exit‘ stylesheet will be referenced and its animations also run in reverse.

[Update]

Anne van Kesteren suggests that forcing this to be a separate stylesheet and putting the duration information in the tag is not desirable, and that it would be nicer to expose this as a media query, with the duration information available in an @-rule. Something like this:

@viewport {
  navigate-away-duration: 500ms;
}

@media (navigate-away) {
  ...
}

I think this would indeed be nicer, though I think the exact naming might need some work.

Transitioning

When a navigation is initiated, the old page will stay at its current position and the new page will be overlaid over the old page, but hidden. Once the new page has finished loading, it will be unhidden, the old page’s ‘transition-exit‘ stylesheet will be applied and the new page’s ‘transition-enter’ stylesheet will be applied, each for its specified duration.

When navigating backwards, the CSS animations timeline will be reversed. This will have the effect of modifying the meaning of animation-direction like so:

Forwards          | Backwards
--------------------------------------
normal            | reverse
reverse           | normal
alternate         | alternate-reverse
alternate-reverse | alternate

and this will also alter the start time of the animation, depending on the declared total duration of the transition. For example, if a navigation stylesheet is declared to last 0.5s and an animation has a duration of 0.25s, when navigating backwards, that animation will effectively have an animation-delay of 0.25s and run in reverse. Similarly, if it already had an animation-delay of 0.1s, the animation-delay going backwards would become 0.15s, to reflect the time when the animation would have ended.
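
To make the arithmetic concrete, here is a small JavaScript sketch of how a user agent could derive the effective delay of a reversed animation (purely illustrative; the function name and parameters are made up, and all values are in seconds):

// Effective delay of an animation when the CSS animations timeline is
// reversed, given the declared total duration of the navigation transition.
// Illustrative only; all names are hypothetical and values are in seconds.
function reversedAnimationDelay(transitionDuration, animationDelay, animationDuration) {
  // Going backwards, the animation must finish where it originally started,
  // so it is delayed by whatever time was left after it ended going forwards.
  return transitionDuration - (animationDelay + animationDuration);
}

reversedAnimationDelay(0.5, 0, 0.25);   // 0.25, as in the example above
reversedAnimationDelay(0.5, 0.1, 0.25); // ~0.15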

Layer ordering will also be reversed when navigating backwards, that is, the page being navigated from will appear on top of the page being navigated backwards to.

Signals

When a transition starts, a ‘navigation-transition-start‘ NavigationTransitionEvent will be fired on the destination page. When this event is fired, the document will have had the applicable stylesheet applied and it will be visible, but will not yet have been painted on the screen since the stylesheet was applied. When the navigation transition duration is met, a ‘navigation-transition-end‘ event will be fired on the destination page. These signals can be used, amongst other things, to tidy up state and to initialise state. They can also be used to modify the DOM before the transition begins, allowing for customising the transition based on request data.
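
As a rough illustration of how a destination page might consume these signals (the event names follow this proposal, but the exact shape of NavigationTransitionEvent, and whether it exposes request data, is left open here):

// Hypothetical usage of the proposed events; not a shipping API.
document.addEventListener('navigation-transition-start', function (event) {
  // The transition stylesheet is applied but nothing has been painted yet,
  // so this is the last chance to tweak the DOM before the transition shows.
  document.body.classList.add('transitioning');
}, false);

document.addEventListener('navigation-transition-end', function (event) {
  // Tidy up any state that only made sense during the transition.
  document.body.classList.remove('transitioning');
}, false);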

JavaScript execution could potentially cause a navigation transition to run indefinitely; it is left to the user agent’s general-purpose JavaScript hang detection to mitigate this circumstance.

Considerations and limitations

Navigation transitions will not be applied if the new page does not finish loading within 1.5 seconds of its first paint. This can be mitigated by pre-loading documents, or by the use of service workers.
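
As an illustration, a minimal service worker along these lines could pre-cache a navigation target so that it loads well within that window (the cache name is arbitrary and the file names are borrowed from the examples below):

// sw.js: pre-cache likely navigation targets so the destination page
// finishes loading in time for the transition to apply. Sketch only.
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('transition-pages-v1').then(function (cache) {
      return cache.addAll(['page-2.html', 'page-2-enter.css']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});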

Stylesheet application duration will be timed from the first render after the stylesheets are applied. This should either synchronise exactly with CSS animation/transition timing, or it should be longer, but it should never be shorter.

Authors should be aware that using transitions will temporarily increase the memory footprint of their application during transitions. This can be mitigated by clear separation of UI and data, and/or by using JavaScript to manipulate the document and state when navigating to avoid keeping unused resources alive.

Navigation transitions will only be applied if both the navigating document has an exit transition and the target document has an enter transition. Similarly, when navigating backwards, the navigating document must have an enter transition and the target document must have an exit transition. Both documents must be on the same origin, or transitions will not apply. The exception to these rules is the first document load of the navigator. In this case, the enter transition will apply if all prior considerations are met.

Default transitions

It is possible for the user agent to specify default transitions, so that navigation within a particular origin will always include navigation transitions unless they are explicitly disabled by that origin. This can be done by specifying navigation transition stylesheets with no href attribute, or that have an empty href attribute.

Note that specifying default transitions in all situations may not be desirable due to the differing loading characteristics of pages on the web at large.

It is suggested that default transition stylesheets may be specified by extending the iframe element with custom ‘default-transition-enter‘ and ‘default-transition-exit‘ attributes.

Examples

Simple slide between two pages:

[page-1.html]

<head>
  <link rel="transition-exit" duration="0.25s" href="page-1-exit.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: red;
    }
  </style>
</head>
<body>
  <div id="bg" onclick="window.location='page-2.html'"></div>
</body>

[page-1-exit.css]

#bg {
  animation-name: slide-left;
  animation-duration: 0.25s;
}

@keyframes slide-left {
  from {}
  to { transform: translateX(-100%); }
}

[page-2.html]

<head>
  <link rel="transition-enter" duration="0.25s" href="page-2-enter.css" />
  <style>
    body {
      border: 0;
      height: 100%;
    }

    #bg {
      width: 100%;
      height: 100%;
      background-color: green;
    }
  </style>
</head>
<body>
  <div id="bg" onclick="history.back()"></div>
</body>

[page-2-enter.css]

#bg {
  animation-name: slide-from-left;
  animation-duration: 0.25s;
}

@keyframes slide-from-left {
  from { transform: translateX(100%) }
  to {}
}


I believe that this proposal is easier to understand and use for simpler transitions than Google’s; however, it becomes harder to express animations where one element transitions to a new position/size on a new page, and it’s also impossible to interleave contents between the two pages (as the pages will always draw separately, in the predefined order). I don’t believe this last limitation is a big issue, however, and I don’t think the cognitive load required to craft such a transition is considerably higher. In fact, you can see it demonstrated by visiting this link in a Gecko-based browser (recommended viewing in responsive design mode, Ctrl+Shift+M).

I would love to hear people’s thoughts on this. Am I actually just totally wrong, and Google’s proposal is superior? Are there huge limitations in this proposal that I’ve not considered? Are there security implications I’ve not considered? It’s highly likely that parts of all of these are true and I’d love to hear why. You can view the source for the examples in your browser’s developer tools, but if you’d like a way to check it out more easily and suggest changes, you can also view the git source repository.

by Chris Lord at April 24, 2015 09:26 AM

April 19, 2015

Ross Burton

Cycling Dad Nirvana Approaches

Last week Alex wanted to go for a bike ride so we had a play on the local pump track and some of the cheeky trails hidden away nearby. This was his first ride off the pavements so I was cautious but much fun was had by Alex and he’s spent most of the last week talking about the ride and in particular the pump track. There’s a pretty good one at Thetford Forest now so now that Spring has (mostly) sprung we decided to have a family day out and give Isla some proper practise at riding her new bike.

Family ride at Thetford Forest

For the start of the ride it was me and Alex riding ahead with Vicky riding alongside Isla whilst she practised the hard bit of stopping and starting. It didn’t take long before we heard a loud “COMING THROUGH!” and Isla flew by. A few kilometres down the Shepherd trail we decided to head back (badly, I can’t recall the Shepherd route at all) before little legs tired and find the pump track. There were a few tumbles: Alex was getting tired and there’s a tight berm with a very loose straight line. Isla of course had the confidence of a bold little sister and wanted a go, which led to a great double-faceplant when I was running alongside guiding her around. Nothing a bit of Savlon won’t solve though, and ice cream made it all better!

All in all, a good day: Alex had a good ride and fun on the pump track, and Isla is massively more confident on her bike, especially when the path isn’t new-pavement-smooth. Not panicking when the path is a bit “bumpy lumpy” is an important skill when riding alongside traffic!

by Ross Burton at April 19, 2015 08:44 PM

April 14, 2015

Hylke Bons

Lockee to the rescue

Using public computers can be a huge privacy and security risk. There’s no way you can tell who may be spying on you using key loggers or other evil software.

Some friends and family don't see the problem at all, and use any computer to log in to personal accounts. I actually found myself unable to recommend an easy solution here. So I decided to build a service that I hope will help remove the need to sign in to sensitive services, in some cases at least.

Example

You want to use the printer at your local library to print an e-ticket. As you’re on a public computer, you really don’t want to log in to your personal email account to fetch the document for security reasons. You’re not too bothered about your personal information on the ticket, but typing in your login details on a public computer is a cause for concern.

This is a use case I have every now and then, and I'm sure there are many other similar situations where you have to log in to a service to get some kind of file, but you don't really want to.

Existing storage services

There are temporary file storage solutions on the internet, but most of them give out links that are long and hard to remember, ask for an email address to send the links to, are public, or have any combination of these problems. Also, you have no idea what will happen to your data.

USB drives can help sometimes, but you may not always have one handy, it might get infected, and it’s easy to forget once plugged in.

Lockee to the rescue

Lockee is a small service that temporarily hosts files for you. Seen those luggage lockers at the railway station? It’s like that, but for files.

A Lockee locker

It allows you to create temporary file lockers, with easy to remember URLs (you can name your locker anything you want). Lockers are protected using passphrases, so your file isn’t out in the open.

Files are encrypted and decrypted in the browser, there’s no record of their real content on the server side. There’s no tracking of anything either, and lockers are automatically emptied after 24 hours.

Give it a go

I’m hosting an instance of Lockee on lockee.me. The source is also available if you’d like to run your own instance or contribute.

by Hylke Bons at April 14, 2015 10:25 PM

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren't an option or you prefer not to use them, especially if you're a Free and Open Source project and/or Linux distribution.

Here are some tips to improve your project's download page flow. They're based on confusing things I frequently run into when trying to download a FOSS project and that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are "-gui", "-qt", "gtk-" and "py-"; they just clutter up names with details that don't matter.

Describe what your software does, what problem it solves, and why you should care. This sounds like stating the obvious, but this information is often buried in other, less important information, like which programming language and/or free software license is used. Make this section prominent on the website and keep the buzzwords to a minimum.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: "amd64" and "x86" are labels often used to distinguish between 32-bit and 64-bit systems; however, they do a bad job at this. AMD is not the only company making 64-bit processors anymore, and "x86" doesn't even mention "32-bit".
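
As a sketch of the kind of autodetection meant here (a user-agent heuristic only, and every name below is illustrative rather than a recommendation):

// Guess a sensible default download from the user agent string; heuristic only.
function guessPlatform() {
  var ua = navigator.userAgent;
  if (/Windows/.test(ua)) return 'Windows';
  if (/Mac OS X/.test(ua)) return 'Mac';
  if (/Linux/.test(ua)) return 'Linux';
  return 'unknown';
}

function guessBits() {
  // "64-bit"/"32-bit" reads a lot better than "amd64"/"x86".
  return /x86_64|Win64|WOW64|amd64/i.test(navigator.userAgent) ? '64-bit' : '32-bit';
}

console.log('Suggested download: ' + guessPlatform() + ', ' + guessBits());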

Timestamps

Timestamps are a good way to find out if a project is actively maintained; you can't (usually) tell from a version number when the software was released. Use human-friendly date formatting that is unambiguous. For example, use "February 1, 2003" as opposed to "01-02-03". If you keep a list of older versions, sort by time and clearly mark which is the latest version.
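
For instance, the standard toLocaleDateString API can produce that kind of unambiguous date (a small sketch; the locale choice is up to you):

// Render a release date unambiguously: "February 1, 2003" rather than "01-02-03".
var released = new Date(2003, 1, 1); // months are 0-based, so 1 is February
console.log(released.toLocaleDateString('en-US',
    { year: 'numeric', month: 'long', day: 'numeric' }));
// -> "February 1, 2003"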

File sizes

Again, keep it human readable. I've seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers or use thousands separators when numbers are large, to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.
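
A small sketch of the kind of rounding meant above (it divides by 1024 but keeps the plain "MB" label, matching the example; per the advice here, the MB/MiB debate isn't worth the reader's time):

// Turn a byte count into something human readable, e.g. 209715200 -> "200 MB".
function humanSize(bytes) {
  var units = ['bytes', 'kB', 'MB', 'GB'];
  var i = 0;
  while (bytes >= 1024 && i < units.length - 1) {
    bytes /= 1024;
    i++;
  }
  return Math.round(bytes) + ' ' + units[i];
}

console.log(humanSize(209715200)); // "200 MB"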

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don't just dump strange random character strings on pages. Educate, or get out of the way.

Keep in mind search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signature checks rather pointless, and could even give a false sense of security.

Compression formats

Again something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, especially on Linux, we're presented with a choice of compression formats that hardly matter in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here though is to not put people up with irrelevant choices and clutter the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don't bother for small downloads, as the time required to pick one is probably going to outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

by Hylke Bons at April 14, 2015 09:30 PM

San Francisco impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

by Hylke Bons at April 14, 2015 09:30 PM

April 13, 2015

Hylke Bons

San Francisco Impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

by Hylke Bons at April 13, 2015 12:03 AM

March 08, 2015

Damien Lespiau

Working in a separate prefix

I've been surprised in the past to discover that even some seasoned engineers didn't know how to use the autotools prefix feature. A sign they've been lucky enough not to have had to deal with Autotools too much. Here's my attempt to provide some introduction to ./configure --prefix.

Working with or in "a separate prefix" is working with libraries and binaries (well, anything produced by 'make install' in an autotooled project, really) installed in a different directory from the system-wide ones (/usr, or even /usr/local, which can become quite messy). It is the preferred way to hack on a full stack without polluting your base distribution and has several advantages:
  • One can hack on the whole stack without the fear of no longer being able to run the desktop environment you're working with if something goes wrong,
  • More often than not, one needs a relatively recent library that your distribution doesn't ship (say, a recent libdrm). When working with the dependencies in a prefix, it's just a matter of recompiling it.
Let's take an example to make the discussion easier:
  • We want to compile libdrm and intel-gpu-tools (because intel-gpu-tools needs a more recent libdrm than the one coming with your distribution),
  • We want to use the ~/gfx directory for our work,
  • git trees will be cloned in ~/gfx/sources,
  • ~/gfx/install is chosen as the prefix.
First, let's clone the needed git repositories:

$ mkdir -p ~/gfx/sources ~/gfx/install
$ cd ~/gfx/sources
$ git clone git://anongit.freedesktop.org/mesa/drm libdrm
$ git clone git://anongit.freedesktop.org/xorg/app/intel-gpu-tools

Then you need to source a script that will set-up your environment with a few variables to tell the system to use the prefix (both at run-time and compile-time). A minimal version of that script for our example is (I store my per-project setup scripts to source at the root of the project, in our case ~/gfx):

$ cat ~/gfx/setup-env
PROJECT=~/gfx
export PATH=$PROJECT/install/bin:$PATH
export LD_LIBRARY_PATH=$PROJECT/install/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$PROJECT/install/lib/pkgconfig:$PKG_CONFIG_PATH
export ACLOCAL_FLAGS="-I $PROJECT/install/share/aclocal $ACLOCAL_FLAGS"
$ source ~/gfx/setup-env

Then it's time to compile libdrm, telling the configure script that we want to install it in our prefix:
$ cd ~/gfx/sources/libdrm
$ ./autogen.sh --prefix=/home/damien/gfx/install
$ make
$ make install

Note that you don't need to run "sudo make install" since we'll be installing in our prefix directory that is writeable by the current user.

Now it's time to compile i-g-t:

$ cd ~/gfx/sources/intel-gpu-tools
$ ./autogen.sh --prefix=/home/damien/gfx/install
$ make
$ make install

The configure script may complain about dependencies (e.g. cairo, SWIG, ...). There are different ways to solve those:
  • For dependencies not directly linked with the graphics stack (like SWIG), it's recommended to use the development package provided by the distribution
  • For old enough dependencies that don't change very often (like cairo) you can use the distribution development package or compile them in your prefix
  • For dependencies more recent than your distribution ones, you need to install them in the chosen prefix.

by Damien Lespiau (noreply@blogger.com) at March 08, 2015 12:31 PM

March 07, 2015

Damien Lespiau

shave: making the autotools output sane

updated: Automake 1.11 has been released with "silent rules" support, a feature that supersedes the hack that shave is. If you can depend on automake 1.11, please consider using its silent rules rather than shave.

updated: add some gtk-doc info

updated: CXX support thanks to Tommi Komulainen

shave


Fed up with endless screens of libtool/automake output? Fed up with having to resort to -Werror to see warnings in your code? Then shave might be for you. shave transforms the messy output of autotools into a pretty Kbuild-like one (Kbuild is the Linux build system). It's composed of an m4 macro and 2 small shell scripts, and it's available in a git repository.
git clone git://git.lespiau.name/shave

Hopefully, in a few minutes, you should be able to see your project compile like this:
$ make
Making all in foo
Making all in internal
CC internal-file0.o
LINK libinternal.la
CC lib-file0.o
CC lib-file1.o
LINK libfoo.la
Making all in tools
CC tool0-tool0.o
LINK tool0

Just like Kbuild, shave supports outputting the underlying commands using:
$ make V=1

Setup



  • Put the two shell scripts shave.in and shave-libtool.in in the directory of your choice (it can be at the root of your autotooled project).

  • add shave and shave-libtool to AC_CONFIG_FILES

  • add shave.m4 either in acinclude.m4 or your macro directory

  • add a call to SHAVE_INIT just before AC_CONFIG_FILES/AC_OUTPUT. SHAVE_INIT takes one argument, the directory where shave and shave-libtool are.


custom rules


Sometimes you have custom Makefile rules, e.g. to generate a small header, run glib-mkenums or glib-genmarshal. It would be nice to output a pretty 'GEN' line. That's quite easy actually: just add a few (portable!) lines at the top of your Makefile.am:
V         = @
Q = $(V:1=)
QUIET_GEN = $(Q:@=@echo ' GEN '$@;)

and then it's just a matter of prepending $(QUIET_GEN) to the rule creating the file:
lib-file2.h: Makefile
$(QUIET_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h

gtk-doc + shave


gtk-doc + shave + libtool 1.x (2.x is fine) is known to have a small issue, a patch is available. Meanwhile I suggest adding a few lines to your autogen.sh script.
sed -e 's#) --mode=compile#) --tag=CC --mode=compile#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make
sed -e 's#) --mode=link#) --tag=CC --mode=link#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make

dolt + shave


It's possible to use dolt in conjunction with shave with a surprisingly small patch to dolt.

Real world example: Clutter


$ make
GEN   stamp-clutter-marshal.h
GEN   clutter-marshal.c
GEN   stamp-clutter-enum-types.h
Making all in cogl
Making all in common
CC    cogl-util.o
CC    cogl-bitmap.o
CC    cogl-bitmap-fallback.o
CC    cogl-primitives.o
CC    cogl-bitmap-pixbuf.o
CC    cogl-clip-stack.o
CC    cogl-fixed.o
CC    cogl-color.o
cogl-color.c: In function ‘cogl_set_source_color4ub’:
cogl-color.c:141: warning: implicit declaration of function ‘cogl_set_source_color’
CC    cogl-vertex-buffer.o
CC    cogl-matrix.o
CC    cogl-material.o
LINK  libclutter-cogl-common.la
[...]

Eh! now we can see a warning there!

TODO


This is a first release; shave has not been widely tested, aka it may not work for you!

  • test it with a wider range of automake/libtool versions

  • shave won't work without AC_CONFIG_HEADERS due to shell quoting problems

  • see what can be done for make install/dist (they are prettier thanks to make -s, but we probably miss a few actions)

  • there is a '-s' hardcoded in MAKEFLAGS; I have to find a way to make it more flexible

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

shave 0.1.0

After a month without anyone shouting at shave in despair or horror, it's time to tag something to have a "stable" branch so people can rely on a stable interface (yes, it's important even for a 100 lines macro!).

What's most amazing is that quite a few projects have adopted shave in the GNOME and freedesktop.org communities: Clutter, Niepce Digital, Giggle, GStreamer, GObject introspection, PulseAudio, ConnMan, Json-glib, libunique, gnote, seed, gnome-utils, libccss, xorg... and maybe some more I've forgotten or don't even know about.

You can grab the tarball or clone the git repository (git clone git://git.lespiau.name/shave) and have a look at the README file.

Time to celebrate.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

After-shave

A few concerns have been raised about shave, namely not being able to debug build failures in an automated environment as easily as before, or users giving useless bug reports of failed builds.

One crucial thing to realize is that, even when compiling with make V=1, everything that was not echoed was not shown (MAKEFLAGS=-s).

Thus, I've made a few changes:

  • Add CXX support (yes, that's unrelated, but the question was raised, thanks to Tommi Komulainen for the initial patch),

  • add a --enable-shave option to the configure script,

  • make the Good Old Behaviour the default one,

  • as a side effect, the V and Q variables are now defined in the m4 macro; please remove them from your Makefile.am files.


The rationale for the last point can be summarized as follows:

  • the default behaviour is as portable as before (for non-GNU make, that is), which is not the case if shave is activated by default,

  • you can still add --enable-shave to your autogen.sh script; bootstrapping your project from an SCM will enable shave and that's cool!

  • don't break tools that were relying on automake's output.


Grab the latest version! (git://git.lespiau.name/shave)

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

HDMI stereo 3D & KMS

If everything goes according to plan, KMS in Linux 3.13 should have stereo 3D support. Should one be interested in scanning out a stereo frame buffer to a 3D-capable HDMI sink, here's a rough description of how those modes are exposed to user space and how to use them.

A reader not well acquainted with the DRM sub-system and its mode setting API (Aka Kernel Mode Setting, KMS) could start by watching the first part of Laurent Pinchart's Anatomy of an Embedded KMS Driver or read David Herrmann's heavily documented mode setting example code.

Stereo modes work by sending a left eye and right eye picture per frame to the monitor. It's then up to the monitor to use those 2 pictures to display a 3D frame and the technology there varies.

There are different ways to organise the 2 pictures inside a bigger frame buffer. For HDMI, those layouts are described in the HDMI 1.4 specification. Provided you give them your contact details, it's possible to download the stereo 3D part of the HDMI 1.4 spec from hdmi.org.

As one inevitably knows, modes supported by a monitor can be retrieved out of the KMS connector object in the form of drmModeModeInfo structures (when using libdrm, it's also possible to write your own wrappers around the KMS ioctls, should you want to):
typedef struct _drmModeModeInfo {
    uint32_t clock;
    uint16_t hdisplay, hsync_start, hsync_end, htotal, hskew;
    uint16_t vdisplay, vsync_start, vsync_end, vtotal, vscan;

    uint32_t vrefresh;

    uint32_t flags;
    uint32_t type;
    char name[...];
} drmModeModeInfo, *drmModeModeInfoPtr;

To keep existing software blissfully unaware of those modes, a DRM client interested in having stereo modes listed starts by telling the kernel to expose them:
drmSetClientCap(drm_fd, DRM_CLIENT_CAP_STEREO_3D, 1);

Stereo modes use the flags field to advertise which layout the mode requires:
uint32_t layout = mode->flags & DRM_MODE_FLAG_3D_MASK;

This will give you a non-zero value when the mode is a stereo mode, the value being one of:
DRM_MODE_FLAG_3D_FRAME_PACKING
DRM_MODE_FLAG_3D_FIELD_ALTERNATIVE
DRM_MODE_FLAG_3D_LINE_ALTERNATIVE
DRM_MODE_FLAG_3D_SIDE_BY_SIDE_FULL
DRM_MODE_FLAG_3D_L_DEPTH
DRM_MODE_FLAG_3D_L_DEPTH_GFX_GFX_DEPTH
DRM_MODE_FLAG_3D_TOP_AND_BOTTOM
DRM_MODE_FLAG_3D_SIDE_BY_SIDE_HALF

User space is then responsible for choosing which stereo mode to use and for preparing a buffer that matches the size and left/right placement requirements of that layout. For instance, when choosing Side by Side (half), the frame buffer is the same size as its 2D equivalent (that is, hdisplay x vdisplay), with the left and right images sub-sampled by 2 horizontally:

[Figure: Side by Side (half)]

Other modes need a bigger buffer than hdisplay x vdisplay. This is the case with frame packing, where each eye has the full 2D resolution, separated by the number of vblank lines:

[Figure: Frame Packing]

Of course, anything can be used to draw into the stereo frame buffer, including OpenGL. Further work should enable Mesa to directly render into such buffers, say with the EGL/gbm winsys for a wayland compositor to use. Of course, fun and profit would be the last step:

[Figure: A 720p frame packing buffer from the game WipeOut]

Behind the scenes, the kernel's job is to parse the EDID to discover which stereo modes the HDMI sink supports and, once user space instructs it to use a stereo mode, to send infoframes (metadata sent during the vblank interval) with the information about which 3D mode is being sent.

A good place to start for anyone wanting to use this API is testdisplay, part of the Intel GPU tools test suite. testdisplay can list the available modes with:
$ sudo ./tests/testdisplay -3 -i
[...]
name refresh (Hz) hdisp hss hse htot vdisp vss vse vtot flags type clock
[0] 1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x48 148500
[1] 1920x1080 60 1920 2008 2052 2200 1080 1084 1089 1125 0x5 0x40 148352
[2] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74250
[3] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[4] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x15 0x40 74176
[5] 1920x1080i 60 1920 2008 2052 2200 1080 1084 1094 1125 0x20015 0x40 74176 (3D:SBSH)
[6] 1920x1080 50 1920 2448 2492 2640 1080 1084 1089 1125 0x5 0x40 148500
[7] 1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x15 0x40 74250
[8] 1920x1080i 50 1920 2448 2492 2640 1080 1084 1094 1125 0x20015 0x40 74250 (3D:SBSH)
[9] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x5 0x40 74250
[10] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)
[11] 1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x4005 0x40 74250 (3D:FP)
[...]

To test a specific mode:
$ sudo ./tests/testdisplay -3 -o 17,10
1920x1080 24 1920 2558 2602 2750 1080 1084 1089 1125 0x1c005 0x40 74250 (3D:TB)

To cycle through all the supported stereo modes:
$ sudo ./tests/testdisplay -3

testdisplay uses cairo to compose the final frame buffer from two separate left and right test images.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Extracting part of files with sed

For reference for my future self, a few handy sed commands. Let's consider this file:
$ cat test-sed
First line
Second line
--
Another line
Last line

We can extract the lines from the start of the file to the marker by deleting the rest:
$ sed '/--/,$d' test-sed 
First line
Second line

a,b is the range the command, here d(elete), applies to. a and b can be, among others, line numbers, regular expressions or $ for end of the file. We can also extract the lines from the marker to the end of the file with:
$ sed -n '/--/,$p' test-sed 
--
Another line
Last line

This one is slightly more complicated. By default sed spits out all the lines it receives as input; '-n' is there to tell sed not to do that. The rest of the expression is to p(rint) the lines between -- and the end of the file.

That's all folks!

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

AS_AM_STFU

Writing m4 macro is fun, it really is.

If you want to have make behave like "make -s" without doing boring stuff like aliases, while actually respecting the default verbosity of automake >= 1.11, use this small m4 macro I wrote.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

g_object_notify_by_pspec()

Now that GLib 2.26.0 is out, it's time to talk about a little patch to GObject I've written (well, the original idea was born while looking at it with Neil): add a new g_object_notify_by_pspec() symbol to GObject. As shown in the bug, it can improve the notification of GObject properties by 10-15% (the test case was run without any handler connected to the notify signal).

If you can depend on GLib 2.26, consider using it!

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Working on more than one line with sed's 'N' command

Yesterday I was asked to help solve a small sed problem. Consider this file (don't look too closely at the engineering of the defined elements):

<root>
<key>key0</key>
<string>value0</string>
<key>key1</key>
<string>value1</string>
<key>key2</key>
<string>value2</string>
</root>


The problem was: how to change value1 to VALUE!? The catch here is that you can't blindly execute an s command matching <string>.*</string>.

Sed maintains a buffer called the "pattern space" and processes commands on this buffer. From the GNU sed manual:

sed operates by performing the following cycle on each line of input: first, sed reads one line from the input stream, removes any trailing newline, and places it in the pattern space. Then commands are executed; each command can have an address associated to it: addresses are a kind of condition code, and a command is only executed if the condition is verified before the command is to be executed.

When the end of the script [(list of sed commands)] is reached, unless the -n option is in use, the contents of pattern space are printed out to the output stream, adding back the trailing newline if it was removed. Then the next cycle starts for the next input line.


So the idea is to first use a /pattern/ address to select the right <key> line, append the next line to the pattern space (with the N command) and finally run an s command on the buffer now containing both lines:

  <key>key1</key>
<string>value1</string>


And so we end up with:
$ cat input 
<root>
<key>key0</key>
<string>value0</string>
<key>key1</key>
<string>value1</string>
<key>key2</key>
<string>value2</string>
</root>
$ sed -e '/<key>key1<\/key>/{N;s#<string>.*<\/string>#<string>VALUE!<\/string>#;}' < input
<root>
<key>key0</key>
<string>value0</string>
<key>key1</key>
<string>VALUE!</string>
<key>key2</key>
<string>value2</string>
</root>

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

ADV: ADV is a Dependency Viewer

A few months ago I wrote a small script to draw a dependency graph between the object files of a library (the original idea is from Lionel Landwerlin). You'll need an archive of your library for the tool to be able to look for the needed pieces. Let's have a look at a sample of its output to understand what it does. I ran it against the HEAD of clutter.

[Figure: A view of the clutter library]

This graph was generated with the following (tred is part of graphviz to do transitive reductions on graphs):

$ adv.py clutter/.libs/libclutter-glx-0.9.a | tred | dot -Tsvg > clutter.svg

You can provide more than one library to the tool:

./adv.py ../clutter/clutter/.libs/libclutter-glx-0.9.a \
../glib-2.18.4/glib/.libs/libglib-2.0.a \
../glib-2.18.4/gobject/.libs/libgobject-2.0.a \
| tred | dot -Tsvg > clutter-glib-gobject-boxed.svg


[Figure: clutter, glib and gobject]



What you can do with this:

  • trim down your library by removing the object files you don't need and that are leaves in the graph. This was actually the reason behind the script and it proved useful,

  • get an overview of a library,

  • make part of a library optional more easily.


To make the script work you'll need graphviz, python, ar and nm (you can provide a cross compiler prefix with --cross-prefix).

Interested? clone it! (or look at the code)

$ git clone git://git.lespiau.name/misc/adv

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Using glib.py and gobject.py GDB scripts

Some time ago, Alexander Larsson blogged about using gdb python macros when debugging GLib and GObject projects. I've wanted to try those for ages, so I spent part of the week-end looking at what you could do with the new python-enabled GDB, result: quite a lot of neat stuff!

Let's start by making the scripts that now come with glib work on stock gdb 7.0 and 7.1 (ie not the archer branch that contains more of the python work). If those two scripts don't work for you yet (because your distribution is not packaging them, or is packaging a stock gdb 7.0 or 7.1), here are a few hints you can follow:

  • glib's GDB macros rely on GDB's auto-load feature, ie, every time GDB loads a library your program uses, it'll look for a corresponding python script to execute:


open("/lib/libglib-2.0.so.0.2200.4-gdb.py", O_RDONLY)
open("/usr/lib/debug/lib/libglib-2.0.so.0.2200.4-gdb.py", O_RDONLY)
open("/usr/share/gdb/auto-load/lib/libglib-2.0.so.0.2200.4-gdb.py", O_RDONLY)

Some distributions have decided not to ship glib's and gobject's auto-load helpers, if you are in that case, you'd need to load gobject.py and glib.py by hand. For that purpose I've added a small python command in my ~/.gdbinit:
python
import os.path
import sys
import gdb

# Update module path.
dir = os.path.join(os.path.expanduser("~"), ".gdb")
if not dir in sys.path:
    sys.path.insert(0, dir)

class RegisterCommand (gdb.Command):
    """Register GLib and GObject modules"""

    def __init__ (self):
        super (RegisterCommand, self).__init__ ("gregister",
                                                gdb.COMMAND_DATA,
                                                gdb.COMPLETE_NONE)

    def invoke (self, arg, from_tty):
        objects = gdb.objfiles ()
        for object in objects:
            if object.filename.find ("libglib-2.0.so.") != -1:
                from glib import register
                register (object)
            elif object.filename.find ("libgobject-2.0.so.") != -1:
                from gobject import register
                register (object)

RegisterCommand ()
end

What I do is put glib.py and gobject.py in a ~/.gdb directory and don't forget to call gregister inside GDB (once gdb has loaded glib and gobject)

  • The scripts that are inside glib's repository were written with the archer branch of gdb (which brings all the python stuff). Unfortunately stock GDB (7.0 and 7.1) does not have everything the archer gdb has. I have a couple of patches in the queue to fix that. Meanwhile you can grab them in my survival kit repository. This will disable the back trace filters as they are still not in stock GDB.


You're all set! It's time to enjoy pretty printing and gforeach. Hopefully people will join the fun at some point and add more GDB python macro goodness both inside glib and in other projects (for instance, a ClutterActor could print its name).
int main (int argc, char **argv)
{
  glist = g_list_append (glist, "first");
  glist = g_list_append (glist, "second");

  return breeeaaak_oooon_meeeee ();
}

gives:
(gdb) b breeeaaak_oooon_meeeee
Breakpoint 1 at 0x80484b7: file glib.c, line 9.
(gdb) r
Starting program: /home/damien/src/test-gdb/glib
Breakpoint 1, breeeaaak_oooon_meeeee () at glib.c:9
9        return 0;
(gdb) gregister
(gdb) gforeach s in glistp: print ((char *)$s)
No symbol "glistp" in current context.
(gdb) gforeach s in glist: print ((char *)$s)
$2 = 0x80485d0 "first"
$3 = 0x80485d6 "second"

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Aligning C function parameters with vim

updated: now saves/restores the paste register

It has bothered me for a while: some coding styles, most notably in the GNOME world, try to enforce good-looking alignment of function parameters, such as:
static UniqueResponse
on_unique_message_received (UniqueApp         *unique_app,
                            gint               command,
                            UniqueMessageData *message_data,
                            guint              time_,
                            gpointer           user_data)
{
}

Until now, I aligned the arguments by hand, but that time is over! Please welcome my first substantial vim plugin: it defines a GNOMEAlignArguments command to help you in that task. All you have to do is add this file to your ~/.vim/plugin directory and define a mapping in your ~/.vimrc to invoke it, just like this:
" Align arguments
nmap ,a :GNOMEAlignArguments<CR>

HTH.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Still some hair left

I've been asked to give more input on make V=1 vs. --disable-shave, so here it is: once again, before shipping your package with shave enabled by default, there is something crucial to understand: make V=1 (when having configured your package with --enable-shave) is NOT equivalent to no shave at all (ie --disable-shave). This is because the shave m4 macro is setting MAKEFLAGS=-s in every single Makefile. This means that make won't print the commands as it used to, and that the only way to print something on the screen is to echo it. It's precisely what the shave wrappers do: they echo the CC/CXX and LIBTOOL commands when V=1. So in short, custom rules and a few automake commands won't be displayed with make V=1.

That said, it's possible to craft a rule that would display the command with shave enabled and make V=1. The following rule:
 lib-file2.h: Makefile
$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h

would become:
 lib-file2.h: Makefile
@cmd='echo "#define FOO_DEFINE 0xbabe" > lib-file2.h'; \
if test x"$$V" = x1; then echo $$cmd; fi
$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h

which is quite ugly, to say the least. (if you find a smarter way, please enlighten me!).

On the development side, shave is slowly becoming more mature:

  • Thanks to Jan Schmidt, shave works with non-GNU sed and with echo implementations that do not support -n. It now works on Solaris, and hopefully on BSDs and various Unixes as well (not tested though).

  • SHAVE_INIT has a new, optional, parameter which empowers the programmer to define shave's default behaviour (when ./configure is run without any shave-related option): either enable or disable. ie. SHAVE_INIT([autotools], [enable]) will instruct shave to find its wrapper scripts in the autotools directory and that running ./configure will actually enable the beast. SHAVE_INIT without parameters at all is supposed to mean that the wrapper scripts are in $top_builddir and that ./configure will not enable shave without the --enable-shave option.

  • however, shave has been reported to fail miserably with scratchbox.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Vim macro for change log entry in .spec files

Tired of writing this kind of line by hand?

* Mon Feb 09 2009 Damien Lespiau <damien.lespiau@xxxx.com> 1.4.3

This vim macro does just this for you!

nmap ,mob-ts :r!date +'\%a \%b \%d \%Y'<CR>0i* <ESC>$a Damien Lespiau <damien.lespiau@xxxx.com> FIXME

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Cogl + JS = Love

Played a bit with Gjs and Cogl this weekend and ended up rewriting Clutter's test-cogl-primitives in JavaScript. In the unlikely case someone is interested in trying it, you'll need a patch to support arrays of floats as arguments in introspected functions and another small patch to add introspection annotations for a few Cogl symbols. As usual, you can grab the code in its git repository:

cogl-primitives-js

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

A simple autotool template

Every now and then, you feel a big urge to start hacking on a small thingy and need to create Makefiles for it. Turns out that the autotools won't be that intrusive when we are talking about small programs, and you can do a reasonable job with a few lines. First the configure.ac file:
# autoconf
AC_PREREQ(2.59)
AC_INIT([fart], [0.0.1], [damien.lespiau@gmail.com])
AC_CONFIG_MACRO_DIR([build])
AC_CONFIG_AUX_DIR([build])
AC_CONFIG_SRCDIR([fart.c])
AC_CONFIG_HEADERS([config.h])

# automake
AM_INIT_AUTOMAKE([1.11 -Wall foreign no-define])
AM_SILENT_RULES([yes])

# Check for programs
AC_PROG_CC

# Check for header files
AC_HEADER_STDC

AS_COMPILER_FLAGS([WARNING_CFLAGS],
["-Wall -Wshadow -Wcast-align -Wno-uninitialized
-Wno-strict-aliasing -Wempty-body -Wformat -Wformat-security
-Winit-self -Wdeclaration-after-statement -Wvla
-Wpointer-arith"])

PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.24])

AC_OUTPUT([
Makefile
])

and then Makefile.am:
ACLOCAL_AMFLAGS = -I build ${ACLOCAL_FLAGS}

bin_PROGRAMS = fart

fart_SOURCES = fart.c
fart_CFLAGS = $(WARNING_CFLAGS) $(GLIB_CFLAGS)
fart_LDADD = $(GLIB_LIBS)

After that, it's just a matter of running autoreconf
$ autoreconf -i

and you are all set!

So, what do you get for this amount of lines?

  • The usual set of automake targets, handy! ("make tags" is so under-used!) and bonus features (out-of-tree builds, extra rules to reconfigure/rebuild the Makefiles on changes in configure.ac/Makefile.am, ...)

  • Making autoconf/automake discreet (putting auxiliary files out of the way, silent mode, automake for non-GNU projects)

  • Some decent warning flags (tweak to your liking!)

  • autoreconf cooperating with aclocal thanks to ACLOCAL_AMFLAGS and coping with non standard locations for system m4 macros


I'll maintain a git tree to help bootstrap my next small hacks, feel free to use it as well!

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

The GStreamer conference from a Clutter point of view

Two weeks ago I attended the first GStreamer conference, and it was great. I won't talk about the 1.0 plan that seems to be taking shape and looks really good, but just about what struck me the most: Happy Clutter Stories and a Tale To Be Told to your manager.

Let's move on to the Clutter stories. There were a surprising number of people mixing GStreamer and Clutter, two talks especially:

  • Florent Thiery, founder of Ubicast, talked about one of their products: a portable recording system with quite a bit of bling (records the slides, movement detection with OpenCV, RoI, ...). The system was used to record the talks on the main track. Now, what was of particular interest to me is that the UI to control the system is entirely written with Clutter and python. They have built a whole toolkit on top of Clutter, in python, called candies/touchwizard and written their UI with it, cooool.

  • A very impressive talk from the Tandberg (now Cisco) guys about their Movi software, video conferencing at its finest. It uses GStreamer extensively and Clutter for its UI (on Windows!). They said that about 150,000 copies of Movi are deployed in the wild. Patches from Ole André Vadla Ravnås and Haakon Sporsheim have been flowing to Clutter and Clutter-gst (win32 support).


As a side note, Fluendo talked about their Open Source, Intel-funded, GStreamer codecs for Intel CE3100/CE4100. This platform's specificities are supported natively by Clutter (./configure --with-flavour=cex100) using the native EGL winsys called "GDL" and evdev events coming from the kernel. More on this later :p

A very interesting point about those success stories is that companies and engineers are working with open source software to build their applications, sometimes with parts heavily covered by patents, while contributing back to the ecosystem that allowed them to build those applications in the first place. Contributing is done at many levels: direct patches, but also feedback on the libraries/platform (eg. input for GStreamer 1.0). And guess what? It works! To me, that's exactly how the GNOME platform should be used to build proprietary applications: build on top and contribute back to consolidate the libraries. I'd go as far as saying that contributing upstream is the best way to share code inside the same big corporation. Such companies are always very bad at cooperating between divisions.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Clutter on Android: first results

With the release of Android 2.3, there's a decent way to integrate native applications with the NativeActivity class, an EGL library, and some C API to expose events, the main loop, etc. So, how about porting Clutter to it now that it looks actually feasible? After a few days of work, the first results are there, and quite promising!



There's still a fairly large number of items in my TODO list before I'm happy with the state of this work; the most prominent items are:


  • Get a clean-up pass done to have something upstreamable; this includes finishing the event integration (it receives events but does not yet forward them to Clutter),

  • Come up with a plan to manage the application life cycle and handle the case when Android destroys the EGL surface that you were using (probably by having the app save its state, and properly tear down Clutter),

  • While you probably have the droid font installed in /system/fonts, this is not part of the advertised NDK interface. The safest choice is to embed the font you want to use with your application. Unfortunately fontconfig + freetype + pango + compressed assets in your Android package don't work really well together. Maybe solve it at the Pango level with a custom "direct" fontmap implementation that would let you register fonts from files easily?

  • What to do with text entries? show soft keyboard? Mx or Clutter problem? what happens to the GL surface in that case?

  • Better test the GMainLoop/ALooper main loop integration (esp. adding and removing file descriptors),

  • All the libraries that Clutter depends on are linked into a big .so (which is the Android NDK application). It results in a big .so (~5 MB, ~1.7 MB compressed in the .apk). That size can be dramatically reduced, sometimes at the expense of changes that will break the current API/ABI, but hell, you'll be statically linking anyway,

  • Provide "prebuilt libraries", ie. pre-compiled libraries that make it easy to just use Clutter to build applications.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Blending two RGBA 5551 layers

I've just stumbled across a small piece of code, written one year and a half ago, that blends two 512x512 RGBA 5551 images. It was originally written for a (good!) GIS, so the piece of code blends roads with rivers (and displays the result in a GdkPixbuf). The only thing interesting is that it uses some MMX, SSE2 and rdtsc instructions. You can have a look at the code in its git repository.
[Figure: screenshot of the layer fusion]

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

A simple transition effect with Clutter

When doing something with graphics, you first need an idea (granted, as with pretty much everything else). In this case, it's a simple transition that I've seen somewhere a long time ago and wanted to reproduce with Clutter.





The code is available in a branch of a media explorer I'm currently working on. A few bullet points to follow the code:


  • As the effect needs a "screenshot" of a Clutter scene to play with, you first need to create a subclass of ClutterOffscreenEffect, as it does the work of redirecting the painting of a subtree of actors into an offscreen buffer that you can reuse to texture the rectangles you'll be animating in the effect. This subclass has a "progress" property to control the animation.

  • Then actually compute the coordinates of the grid cells, both in screen space and in texture space. To be able to use cogl_rectangles_with_texture_coords(), to try to limit the number of GL calls (and/or help the batching done by the Cogl journal) and to ease the animation of the cells fading out, I decided to store the diagonals of the rectangles in a 1D array, so that the following grid:


[Figure: a 5x5 grid with one color per diagonal line]


is stored as:


[Figure: a 1D array with all the diagonals of the grid]




  • ::paint_target() looks at the "progress" property, animates those grid cells accordingly and draws them. priv->rects is the array storing the initial rectangles, priv->animated_rects the animated ones, and priv->chunks stores the start and duration of each diagonal animation along with an (index, length) tuple that references the diagonal rectangles in priv->rects and priv->animated_rects.


Some more details:

  • in the ::paint_target() function, you can special case when the progress is 0.0 (paint the whole FBO instead of the textured grid) and 1.0 (don't do anything),

  • Clutter does not currently allow you to just re-run the effect when you animate a property of an offscreen effect, for instance. This means that when animating the "progress" property on the effect, it queues a redraw on the actor that ends up in the offscreen buffer to trigger the effect's ::paint_target() again. A branch from Neil allows queueing a "rerun" on the effect to avoid having to do that,

  • The code has some limitations right now (ie, n_columns must be equal to n_rows) but they are easily fixable. Once done, it makes sense to try to push the effect to Mx.

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

A git pre-commit hook to check the year of copyright notices

Like every year, touching a source file means you also need to update the year of the copyright notice you should have at the top of the file. I always end up forgetting about them; this is where a git pre-commit hook would be ultra-useful, so I wrote one:


#!/bin/sh
#
# Check if copyright statements include the current year
#
files=`git diff --cached --name-only`
year=`date +"%Y"`

for f in $files; do
    head -10 $f | grep -i copyright 2>&1 1>/dev/null || continue

    if ! grep -i -e "copyright.*$year" $f 2>&1 1>/dev/null; then
        missing_copyright_files="$missing_copyright_files $f"
    fi
done

if [ -n "$missing_copyright_files" ]; then
    echo "$year is missing in the copyright notice of the following files:"
    for f in $missing_copyright_files; do
        echo "    $f"
    done
    exit 1
fi


Hope this helps!

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

git commit --fixup and git rebase -i --autosquash

It's not unusual that I need to fix previous commits up when working on a branch or in the review phase. Until now I used a regular commit with some special marker to remember which commit to squash it with, and then git rebase -i to reorder the patches and squash the fixup commits with their corresponding "parent" commits.

Turns out, git can handle quite a few of those manual manipulations for you. git commit --fixup <commit> allows you to commit work, marking it as a fixup of a previous commit. git rebase -i --autosquash will then present the usual git rebase -i screen but with the fixup commits moved just after their parents and ready to be squashed without any extra manipulation.

For instance, I had a couple of changes to a commit buried 100 patches away from HEAD (yes, a big topic branch!):
$ git diff
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 29f3813..08ea851 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -2695,6 +2695,11 @@ static void skylake_update_primary_plane(struct drm_crtc *crtc,

intel_fb = to_intel_framebuffer(fb);
obj = intel_fb->obj;
+
+ /*
+ * The stride is expressed either as a multiple of 64 bytes chunks for
+ * linear buffers or in number of tiles for tiled buffers.
+ */
switch (obj->tiling_mode) {
case I915_TILING_NONE:
stride = fb->pitches[0] >> 6;
@@ -2707,7 +2712,6 @@ static void skylake_update_primary_plane(struct drm_crtc *crtc,
BUG();
}

- plane_ctl &= ~PLANE_CTL_TRICKLE_FEED_DISABLE;
plane_ctl |= PLANE_CTL_PLANE_GAMMA_DISABLE;

I915_WRITE(PLANE_CTL(pipe, 0), plane_ctl);

And I wanted to squash those changes into commit 2021785:
$ git commit -a --fixup 2021785

git will then go ahead and create a new commit whose subject is taken from the referenced commit, prefixed with fixup!:
commit d2d278ffbe87d232369b028d0c9ee9e6ecd0ba20
Author: Damien Lespiau <damien.lespiau@intel.com>
Date: Sat Sep 20 11:09:15 2014 +0100

fixup! drm/i915/skl: Implement thew new update_plane() for primary planes

Then when using the interactive rebase with autosquash:
$ git rebase -i --autosquash drm-intel/drm-intel-nightly

The fixup commit will be placed right after the commit it references:
pick 2021785 drm/i915/skl: Implement thew new update_plane() for primary planes
fixup d2d278ff fixup! drm/i915/skl: Implement thew new update_plane() for primary planes

Validating the proposed changes (in my case, by quitting vim) will squash the fixup commits into their parents. Definitely what I'll be using from now on!

Oh, and there's a config option to have git rebase automatically autosquash if there are some fixup commits:
$ git config --global rebase.autosquash true
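
With that option set, a plain interactive rebase is enough; as a minimal sketch of the whole round trip, reusing the commit and branch from the example above:

$ git commit -a --fixup 2021785
$ git rebase -i drm-intel/drm-intel-nightly
# the fixup commit is reordered and marked as "fixup" automatically;
# saving the todo list squashes it into its parent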

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Learning how to draw

 I can't draw. I've never been able to. Yet, for some reason, I decided to give it a serious try and bought a book to guide me on that journey (following advice from pippin, yeah I know, crazy). The first step was, like a pilgrim walking to a sacred place, to go and buy some art supplies, which turned out to be a really enjoyable experience.

The first thing you have to do is take a snapshot of your skills before reading further into the book, so you can make a "before/after" comparison later. I thought it was quite hard, but was surprised that the result was all right, by my low standards anyway. You have to do three drawings: a self-portrait done while looking at yourself in a mirror, a person/character drawn from memory without any visual aid, and your hand.

The next exercise is there to make you realise that you'll have to forget everything you know and re-learn how to see in order to draw. It's about copying a drawing upside down, curve by curve, without attaching any meaning to what you are doing. The result is quite surprising, as you can see on the left. Now it's a matter of learning how to do that without resorting to the upside-down trick.

It's only the beginning of a long journey and many things can go wrong, but it's worth giving it a try!

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

Per project .vimrc

My natural C indentation style is basically kernel-like and my ~/.vimrc reflects that. Unfortunately I have to hack on GNUish-style projects and I really don't want to edit my ~/.vimrc every single time I switch between different indentation styles.

Modelines are evil.

To solve that terrible issue, vim can use per directory configuration files. To enable that neat feature only two little lines are needed in your ~/.vimrc:
set exrc      " enable per-directory .vimrc files
set secure    " disable unsafe commands in local .vimrc files

Then it's just a matter of writing a per project .vimrc like this one:
set tabstop=8
set softtabstop=2
set shiftwidth=2
set expandtab
set cinoptions=>4,n-2,{2,^-2,:0,=2,g0,h2,t0,+2,(0,u0,w1,m1

You can find help on the wonderful cinoptions variable in the Vim documentation. Since sane people open files from the project's root directory, this works like a charm. As for Makefiles, they are special anyway; you really should add an autocmd to your ~/.vimrc:
" add list lcs=tab:>-,trail:x for tab/trailing space visuals
autocmd BufEnter ?akefile* set noet ts=8 sw=8 nocindent

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

libpneu first import

Wow, it's definitely hard to keep a decent pace at posting news on my blog. Nevertheless, a first import of libpneu has reached my public git repository. libpneu is an effort to make a tracing library that I could use in every single project I start. Basically, you put tracing points in your programs and libpneu prints them whenever you need to know what is happening. Different backends can be used to display traces and debug messages, from printing them to stdout to sending them over a UDP socket. More about libpneu in a few days/weeks!

A small screenshot to better understand what it does:

[Image: libpneu printing a few debug and error messages]

by Damien Lespiau (noreply@blogger.com) at March 07, 2015 09:16 PM

February 07, 2015

Hylke Bons

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

by Hylke Bons at February 07, 2015 04:07 PM

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt that have worked for me and my style of taking pictures, and that I wish I’d known earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear; I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer usually is probably not, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, it’s a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and started playing around with them. The fact that they’re fully manual makes you realise more quickly how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the screen at the back. I find them much more engaging and fun to use than fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware of.

Share

Don’t forget to have a place where you actually post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

by Hylke Bons at February 07, 2015 04:07 PM

Vienna GNOME/.NET hackfest report

I had a great time attending the GNOME/.NET hackfest last month in Vienna. My goal for the week was to port SparkleShare's user interface to GTK+3 and integrate with GNOME 3.

A lot of work got done. Many thanks to David and Stefan for enabling this by the smooth organisation of the space, food, and internet. Bertrand, Stephan, and Mirco helped me get set up to build a GTK+3-enabled SparkleShare pretty quickly. The porting work itself was done shortly after that, and I had time left to do a lot of visual polish and behavioural tweaks to the interface. Details matter!

Last week I released SparkleShare 1.3, a Linux-only release that includes all the work done at the hackfest. We're still waiting for the dependencies to be included in the distributions, so the only way you can use it is to build from source yourself for now. Hopefully this will change soon.

One thing that's left to do is to create a gnome-shell extension to integrate SparkleShare into GNOME 3 more seamlessly. Right now it still has to use the message tray area, which is far from optimal. So if you're interested in helping out with that, please let me know.

Tomboy Notes

The rest of the time I helped out others with design work. I helped Mirco with the Smuxi preference dialogues, using my love for the Human Interface Guidelines, and started a redesign of Tomboy Notes. Today I sent out the new design to their mailing list with the work done so far.

Sadly there wasn't enough time for me to help out with all of the other applications… I guess that's something for next year.

Sponsors

I had a fun week in Vienna (which is always lovely no matter the time of year) and met many great new people. Special thanks to the many sponsors that helped make this event possible: Norkart, Collabora, Novacoast IT, University of Vienna and The GNOME Foundation.

by Hylke Bons at February 07, 2015 04:07 PM

Trip to Nuremberg and Munich

This month I visited my friend and colleague Garrett in Germany. We visited the Christmas markets there. Lots of fun. Here are some pictures.

by Hylke Bons at February 07, 2015 04:07 PM

Attending the Vienna GNOME/.NET hackfest

Today I arrived in the always wonderful city of Vienna for the GNOME/.NET Hackfest. Met up and had dinner with the other GNOME and .NET fans.

SparkleShare has been stuck on GTK+2 for a while. Now that the C# bindings for GTK+3 are starting to get ready, and Bindinator is handling any other dependencies that need updating (like WebKit), it is finally time to take the plunge.

My goal this week is to make some good progress on the following things:

  1. Port SparkleShare's user interface to GTK+3.
  2. Integrate SparkleShare seamlessly with the GNOME 3 experience

SparkleShare 1.2

Yesterday I made a new release of SparkleShare. It addresses several issues that may have been bugging you, so it's worth upgrading. Depending on how well things go this week it may be the last release based on GNOME 2 technologies. Yay for the future!

by Hylke Bons at February 07, 2015 04:07 PM

SparkleShare 1.0

I’m delighted to announce the availability of SparkleShare 1.0!

What is SparkleShare?

SparkleShare is an Open Source (self hosted) file synchronisation and collaboration tool and is available for Linux distributions, Mac, and Windows.

SparkleShare creates a special folder on your computer in which projects are kept. All projects are automatically synced to their respective hosts (you can have multiple projects connected to different hosts) and to your team’s SparkleShare folders when someone adds, removes or edits a file.

The idea for SparkleShare sprouted about three years ago at the GNOME Usability Hackfest in London (for more background on this read The one where the designers ask for a pony).

SparkleShare uses the version control system Git under the hood, so people collaborating on projects can make use of existing infrastructure, and setting up a host yourself will be easy enough. Using your own host gives you more privacy and control, as well as lots of cheap storage space and higher transfer speeds.

Like every piece of software it’s not bug free, even though it has hit 1.0. But it’s been tested for a long time now and all reproducible and known major issues have been fixed. It works reliably and the issue tracker is mostly filled with feature requests now.

The biggest sign that it was time for a 1.0 release was the fact that Lapo hasn’t reported brokenness for a while now. This can either mean that SparkleShare has been blessed by a unicorn or that the world will end soon. I think it’s the first.

Features

For those of you that are not (that) familiar with SparkleShare, I’ll sum up its most important features:

The SparkleShare folder

This is where all of your projects are kept. Everything in this folder will be automatically synced to the remote host(s), as well as to your other computers and everyone else connected to the same projects. Are you done with a project? Simply delete it from your SparkleShare folder.

The status icon

The status icon gives you quick access to all of your projects and shows you what’s going on regarding the synchronisation process. From here you can connect to existing remote projects and open the recent changes window.

The setup dialog

Here you can link to a remote project. SparkleShare ships with a couple of presets. You can have multiple projects syncing to different hosts at the same time. For example, I use this to sync some public projects with GitHub, some personal documents with my own private VPS, and work stuff with a host on the intranet.

Recent changes window

The recent changes window shows you everything that has recently changed and by whom.

History

The history view lets you see who has edited a particular file before, and allows you to restore deleted files or revert to a previous version.

Conflict handling

When a file has been changed by two people at the same time, causing a conflict, SparkleShare will create a copy of the conflicting file and add a timestamp. This way changes won’t accidentally get lost, and you can either choose to keep one of the files or cherry-pick the wanted changes.

Notifications

If someone makes a change to a file, a notification will pop up saying what changed and by whom.

Client side encryption

Optionally you can protect a project with a password. When you do, all files in it will be encrypted locally using AES-256-CBC before being transferred to the host. The password is only stored locally, so if someone cracked their way into your server it will be very hard (if not impossible) to get at the files’ contents. This is on top of the file transfer mechanism, which is already encrypted and secure. You can set up an encrypted project easily with Dazzle.

Dazzle, the host setup script

I’ve created a script called Dazzle that helps you set up a Linux host to which you have SSH access. It installs Git, adds a user account and configures the right permissions. With it, you should be able to get up and running by executing just three simple commands.

Plans for the future

Something that comes up a lot is the fact that Git doesn’t handle large (binary) files well. Git also stores a database of all the files, including history, on every client, causing it to use a lot of space pretty quickly. This may or may not be a problem depending on your use case. Nevertheless, I want SparkleShare to be better at the “large backups of bulk data” use case.

I’ve stumbled upon a nice little project called git-bin in some obscure corner of Github. It seems like a perfect match for SparkleShare. Some work needs to be done to integrate it and to make sure it works over SSH. This will be the goal for SparkleShare 2.0, which can follow pretty soon (hopefully in months, rather than years).

I really hope contributors can help me out in this area. The Github network graph is feeling a bit lonely. Your help can make a big difference!

Some other fun things to work on may be:

  1. Saving the modification times of files
  2. Creating a binary Linux bundle
  3. SparkleShare folder location selection
  4. GNOME 3 integration
  5. …other things that you may find useful.

If you want to get started on contributing, feel free to visit the IRC channel: #sparkleshare on irc.gnome.org so I can answer any questions you may have and give support.

Finally…

I’d like to thank everyone who has helped testing and submitted patches so far. SparkleShare wouldn’t be nearly as far as it is now without you. Cheers!

by Hylke Bons at February 07, 2015 04:07 PM

November 19, 2014

Richard Purdie

Laugh or cry?

I think by now people realise there is some “fatigue” thing going on with me.

I spent a couple of evenings last week doing strenuous wallpaper stripping (woodchip), to the point I was trembling when I took breaks. On Saturday I took my new cyclocross bicycle for an 18 mile ride; I’ve been talking about buying one for years, so I was determined to try it while the weather was good. Compared to my previous 40-year-old bicycle with non-indexed gears and ineffective brakes, it’s a complete revelation and I love it. There is a 10 mile “time trial” section on the route I picked: Nov 2013 I was doing ~45 mins, Nov 2014 on the old bike ~39 mins, and on the new one under 34 mins. I was pleasantly surprised!

I spent the weekend in a bit of a daze induced by the exercise which is par for the course, I’m used to that along with the muscle aches.

On Monday night, I became extremely cold and shivery with the “flu” aches and pains and basically didn’t sleep, oddly wide awake yet more unwell than I’ve ever been with proper flu. There were no respiratory or digestive symptoms. It could be a virus, although the pattern of flu-like symptoms appearing 36-48 hours after exertion, with varying intensity, makes me, let’s say, suspicious.

The medical profession? They basically don’t have a clue what is going on :( . Lots of tests up to and including muscle biopsy and some interesting results (like spinal nerve damage I seemingly recovered from?!) but nothing which explains it.

There seems to be a finite amount I can do; if I exceed that, there is a price to pay. I still feel horrible 24 hours on, nowhere near as bad as I did but still “not good”, sitting wearing half my wardrobe to keep warm (trail riding base layers are wonderful). I have no idea which events to commit to and need to be careful about being in a fit state for things. On the plus side, the price comes later, not during activity, and I guess I have some handle on the pattern. It’s also always been there, I think; I’ve just tried to be more active/fit and provoked it.

So really, I don’t know whether to laugh or cry :/. If I seem to have disappeared a bit from some things, this is why.

by Richard at November 19, 2014 12:23 PM

September 28, 2014

Hylke Bons

Switching jobs

Today was my first day at Red Hat! This has been a public service announcement.

by Hylke Bons at September 28, 2014 09:30 PM

July 22, 2014

Emmanuele Bassi

moving on…

I’ve finally restored my personal web server after the WordPress installation(s) I had there got hacked last year, and I decided to migrate this blog there.

if you do not read this blog via Planet GNOME, but you use the syndacation feed, you should subscribe to this feed, instead. if you’re only interested in GNOME-related posts (i.e. the posts that end up in Planet GNOME) the use this feed instead.

by ebassi at July 22, 2014 11:55 AM

May 19, 2014

Ross Burton

UEFI Validation

The Linux UEFI Validation Project was announced recently:

Bringing together multiple separate upstream test suites into a cohesive and easy-to-use product with a unified reporting framework, LUV validates UEFI firmware at critical levels of the Linux software stack and boot phases.

LUV also provides tests in areas not previously available, such as the interaction between the bootloader, Linux kernel and firmware. Integrated into one Linux distribution, firmware can now be tested under conditions that are closer to real scenarios of operation. The result: fewer firmware issues disrupting the operating system.

Of course that “one Linux distribution” is built using the Yocto Project, so it’s trivial to grab the source and patch/extend it, or rebuild it for different processors if for example you want to validate UEFI on ARM or a new CPU that mainstream distros don’t fully support.
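
As a rough idea of what that looks like in practice, building a LUV image follows the usual Yocto workflow. A minimal sketch, where the repository location and the image recipe name are placeholders rather than the project's canonical ones:

$ git clone <luv-yocto repository>   # see the LUV announcement for the actual location
$ cd luv-yocto
$ . oe-init-build-env build          # standard Yocto/OpenEmbedded environment setup
$ bitbake <luv-image-recipe>         # recipe name is a placeholder; check the project docs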

by Ross Burton at May 19, 2014 09:07 AM

Richard Purdie

Kielder K2 – Marshalling Day 3

Sunday morning was an early start but despite the fun of the previous day, I was basically ok and the swelling of my leg has massively reduced overnight. I made my way back to Bellingham in time to put fuel onto the fuel trailer and then made my way leisurely to the far checkpoint and my section. Unfortunately I was on my own today, it is more fun when you have someone to ride with and talk to.

Eventually bikes turned up and entered the section and then I followed the pack through to make sure there weren’t any issues. It was all quiet so I cut back to the special test and refueled, finding the riders weren’t back there yet. I therefore waited around. The end of the special test has an interesting detour through a ditch and there were a number of people falling off there who I ended up helping. There was one person who decided to head off the line everyone else was using and buried it in liquid mud up to the mudguards. I didn’t enjoy pulling that out and getting covered in mud. Another went off line, planted the front wheel into a hole and went over the bars head first; thankfully he was shaken but ok as it was slow speed.

There were people having fun on the enduro loops but my back tyre wasn’t up to it, nor were my energy levels to be quite honest. I had something to eat/drink and eventually the closing marshal turned up, which marked the start of my main work. I headed back to my section via a short cut along with a local rider who was wanting to retire. There, I had a wait for the closing marshal again; once he arrived, it was time to demark.

I was supposed to be with a team of another two however they weren’t there and I decided to get started without them, leaving word with the checkpoint to send them on. I hadn’t gotten too far when they showed up and joined in. We worked as a team, person in front gets the first arrow they come to and the team rotates. Demarking can be interesting as you never know quite what kind of terrain you’ll have to park on to reach the arrows. Obviously you try to do it without getting off the bike, although if the ditch is marsh/water with reeds growing out of it, you quickly learn not to ride into it (I’d remembered from last year).

We made it around the course to the next checkpoint and demarked to the road there, then also demarked the route back to the camping field. The far section with the second special test was being handled by another team. By the end of the few days, the bike was rather coated in mud/dust and rather sorry for itself with its missing headlight:

The forest fire roads are hard on tyres, particularly when you do use the power to accelerate, and my rear was practically a slick at this point:

So all in all a good weekend and some good fun. The bike is going to need a good check over after all that vibration from the fireroads.

by Richard at May 19, 2014 08:56 AM

Kielder K2 – Marshalling Day 2

Saturday dawned and I found I could roughly move. Various locals started arriving, either entering in the rally or marshalling. I was given the top section of the course to look after where I’d run out of fuel the day before. One of the local TRF was to be my partner in crime.

We had a leisurely ride out to the special test, then a short cut to our section. We did a sighting lap of that part, not least so my partner had some idea where we were, then watched a load of bikes through the special test since that was our way back to the start of our section and the checkpoint there. We headed back there to find nobody had entered it yet and managed to push our way through the crowds to the front. The spot has a lovely view over the reservoir.

After a good few bikes had set off, we started a check through and stopped to take some photos of some friends. After that we came across a breakdown, a 690 that wouldn’t run. We spent a while pooling tools trying to diagnose the problem. It had a spark so it had to be fuel related. We left to go and find a tow rope. We swept through our section, then went back to the special test and fuel point to refill and find a tow rope. On the way back to our section we passed some other breakdowns but there wasn’t anything we could do to help and they were in a good place for recovery.

Whilst my two stroke wasn’t up to towing, my partner’s four stroke was, and we were able to get him out of the forest with some creative use of shortcuts and into a position he could be recovered from. Greg did well, particularly given he’d never tried this before.

After Greg refueled at Kielder and I’d eaten my sandwiches, we went back around our section, stopping for photos and so on, hovering near the end so we’d hear of any other breakdowns or problems. When it was clear things were winding down we headed back to the test/fuel area and found the course was closed. All that was left was for us to head back to base which we did without incident. Our section was clear as far as we knew and would get checked by the closing team anyway.

I have to say I was pretty tired at this point. I decided that rather than camping, I’d head home for a shower and so on in comfort and I wanted a better look at my leg. I arrived home without incident, had a shower and re-bandaged the leg and then started to feel extremely unwell. As far as I can tell I was losing my ability to regulate body temperature and was very shaky and shivering with a fast pulse. I tried various quick food/drink options and it wasn’t helping, if anything I was getting worse, extremely cold and shivery. An idea struck me and I tried a can of coke which thankfully hit the spot and pulled me out of it within 10 minutes.

This is very similar to what happened to me a couple of years ago. In that case I ended up near enough unconscious for 18 hours, so this was an improvement on last time. I’m now pretty sure these were both cases of hypoglycaemia. Readers are probably thinking diabetes or a thyroid problem etc. come to mind. For the record, I’ve had a ton of tests and it’s not any of the “usual” suspects. It does seem to be exercise induced, with a delayed effect, and some kind of metabolic muscle issue is the prime suspect. I have my theories and investigation is still ongoing.

So I opted to stay at home for a good night’s sleep and to see how I was in the morning. Any sensible person would perhaps have opted to rest the next day; however, I believe I’m actually starting to understand what is going on with my energy levels, and all my instincts told me I would be fine to ride.

by Richard at May 19, 2014 08:36 AM

K2 Rally – Marshalling Day 1

The K2 rally is this weekend in Kielder forest. It was fairly eventful so I’ll split this up day by day.

Marking out a 65 mile course in the forest takes quite some time so I’d offered help to the organiser with that as well as marshalling in the rally itself.

As things worked out on the Friday, the course was pretty much done but it did need sighting: a check that all the signs were present in the right places, hazards marked etc., from the perspective of someone coming around the course.

Having arrived and secured a flat piece of ground for the tent, I therefore set off with someone who I won’t name on a DRZ400 on semi tyres, not off road ones. He assured me he would go steady but he’d finished X rallies, considers himself a retired enduro rider etc. He did have a good medical reason for not wanting to fall off too and wanted someone with him.

So we set off and it was soon clear that when he said we’d go ’steady’, he didn’t half mean it. I was on the YZ with its road gearing on, not the enduro gearing, since this was a rally. Its engine is a handful at the best of times and it didn’t run well at this (lack of) speed :( .

Back in the karting days, I prided myself on an ability to limp broken two stroke engines back to the pits so somehow, I managed to keep the YZ running and not oil the plug.

When we found the special test, I ended up having to go past for one section since the YZ requires commitment and the 10mph simply wasn’t going to work. I was wishing I was on the CRM which would have been much happier at this pace.

We did put up a few arrows in places to make things clearer and there was only one confusing section where some tape wasn’t out. At this point it became clear that whilst we had a map, I’d have to read it. I also had the GPS and some idea of what the forest looks like by now, thankfully.

Somehow we made it to the top of the course and after a biscuit break, we dropped down onto the road by the reservoir and then turned back into the forest. He commented that his speedo had stopped working.

At this point I noticed his front wheel was unwell. At first it looked like he’d lost a spoke which had snagged the speedo cable and wrapped it around the wheel a couple of times. The bike was basically unsafe to ride.

So, first up, what tools did we have? He seemed very unkeen to try and get them from the sealed package on his bike so we tried mine. Unfortunately I’d picked up my favourite multitool, not remembering it was broken so we were without decent pliers. The speedo cable was at quite some tension and therefore not easy to undo. We could undo various mounting clips to try and slacken it off and then using the broken pliers, very slowly inched the top end of the cable undone.

When it did come undone it did so with quite some force and could have done nasty things to my fingers but I had anticipated this. We found it wasn’t a spoke that was wrapped around the wheel but 12″ of fencing wire. We pulled that off and then rather than disconnecting the speedo from the other end, managed to simply sever the cable. Great, we could continue.

The course is supposed to be a 2-2.5 hour lap. I think we were 4 hours in at this point, maybe more, and not even half way. We looped around the next section of forest and then suddenly my bike started sounding odd, the kind of odd that means the fuel is running out. I sounded my horn, waved my arms, and he took no notice of me and continued off into the distance as I coasted to a stop.

Not entirely able to believe it, I checked the tank and yes, it was extremely low and now on reserve. Eventually he noticed I was missing and came back to look for me. Basically, my lack of fuel was extremely frustrating for him and his advice was that we split up and I make my own way back on the road; I could always hitch a lift and then pick the bike up. That way I was no longer a problem for him. Against my better judgement, I decided that yes, I’d have to sort myself out. I knew where there was an automated fuel station, perhaps within range of my reserve, however I had neither two stroke oil to mix new fuel, nor a credit card with me (not many places take them deep in the forest).

We also had a little disagreement about the cause of my lack of fuel. He was keen to point out that as any idiot knows, running slower doesn’t use more fuel. I suggested that rule may apply to a point but that the YZ actually runs more efficiently at more than 10mph however he wasn’t having it. Regardless, it wasn’t going to change anything.

I therefore left, in full fuel economy mode and headed for Bellingham. Making it was never going to happen and I coasted to a stop completely out just past Falston. On the plus side I knew exactly where I was. On the downside it was a fair distance to where I needed to be (9 miles as it turned out).

I stashed the bike down a side road in some bushes, I also stashed the bike helmet, the arrows, stapler, body armour and anything else I wasn’t going to need under a handy plastic sheet over some firewood. I should mention at this point that there is no mobile phone signal apart from occasionally a network I wasn’t on, nor was I expecting any signal any time soon. Any houses around there are holiday lets or second homes and nobody was around.

So still wearing the knee braces and bike boots, I started the walk to Bellingham. At least this way I knew I’d get sorted out eventually. I was passed by lots of cars and I’d imagine someone wearing MX clothing and big bike boots looked a little out of place walking along a verge. One oncoming car did stop and ask if I was ok and did offer me a lift in the wrong direction but I politely declined. I wasn’t taking too well to walking in the heat with that gear on, “dripping”, doesn’t quite cover it.

After 3 miles a car going the right way did stop and gave me a lift into Bellingham, for which I was extremely grateful. I tried to find the organiser to tell him I was ok but he wasn’t around. I figured my friend was still riding around the forest, so probably nobody knew I was stranded yet. I therefore jumped into the discovery and went to collect the bike and stashed gear. That went without incident and when I got back to the site, the organiser also arrived back, having been looking for me. My friend had got a message from his phone to his wife. Anyhow, I+bike were back and all was well.

It was now late afternoon and I opted for a tea break. Thinking I was just popping out for three hours, I hadn’t taken food with me although I’d had a drink. The organiser was apologetic for my trip out and said if I wanted, I could go and check the other half of the course with him if I was up for it. So we set off and all I can say is that this trip was the opposite extreme to the first one. I mostly kept up and we got the other half of the course done on half a tank of fuel (including riding out to it). So perhaps it does use more fuel at 10mph, who’d have thought it! I did get to see the second special test although I much preferred the first.

Unfortunately, just as we were coming to the end of the course, I relaxed a little too much, ran wide on the exit of a corner, ran onto the edge of and then into a gravel drainage ditch and spent a short while sliding along said ditch under the bike.

Once I was sure we’d actually stopped, I have a distinct memory of running the self test of various pieces of me, concluding that all appeared to still be attached, with one area of pain on my thigh which wasn’t structural. I therefore extracted myself, and inspection showed a hole through the clothing on my thigh and some rather red raw looking skin, 3×2″ in size. I wasn’t leaking, and inspection tallied with the previous conclusion, not structural, so I turned my attention to the bike. It appeared not to be too bad and so, while the adrenaline was still kicking in, I hauled it out of the ditch through sheer willpower. The organiser therefore found me sitting in the middle of the track looking a little worse for wear. I was thankful the bike started without too much faff for a change and we went the two corners out of the forest and the few miles of road back to camp.

Once back, someone helpfully pointed out the headlight was smashed. That was the least of my concerns; I dug out the first aid kit from my backpack, which I’ve been carrying around for literally years, some water and paper towels, and went about seeing how bad the damage was to me. I still can’t make up my mind if it’s a burn or a graze; I suspect it’s a burn from the exhaust. The first aid kit had the right things in it thankfully, and although the first attempt at a bandage didn’t work too well, it did after supplementation with gaffer tape. It was clear at this point there was ’some’ bruising too. I suspect that I hit my knee hard but the brace deflected the damage into my thigh muscle, which is a good thing and working as designed. I’ve a link here to a picture of said injury, don’t follow it if you don’t like gruesome graphic detail.

What followed was a pleasant evening on the campsite, cooking dinner and then talking to various people as they arrived, I eventually had to call it a night as sitting in the cold was causing my leg to seize up.

But hang on, what became of my friend, you might ask? Well, he did complete lap one but proceeded to follow the arrows and start a second, not realising where he was. He did think some of the corners looked familiar! At some point he ran out of fuel. The only remaining detail to complete the story is that he was rescued by a passing postman!

by Richard at May 19, 2014 07:50 AM

May 15, 2014

Ross Burton

Reproducible builds and GPL compliance

LWN has a good article on GPL compliance (if you’re not a subscriber you’ll have to wait) that has an interesting quote:

Developers, and embedded developers in particular, can help stop these violations. When you get code from a supplier, ensure that you can build it, he said, because someone will eventually ask. Consider using the Yocto Project, as Beth Flanagan has been adding a number of features to Yocto to help with GPL compliance. Having reproducible builds is simply good engineering practice—if you can’t reproduce your build, you have a problem.

This has always been one of the key points that we emphasise when explaining why you should use the Yocto Project for your next product. If you’re shipping a product that is built using fifty open source projects, then ensuring that you can redistribute all the original sources, the patches that you’ve applied, the configure options that you’ve used, and any tweaks needed to go from a directory of binaries to a bootable image isn’t something you can knock up in an afternoon when you get a letter from the SFC. Fingers crossed you didn’t accidentally use some GPLv3 code when that is considered toxic.

Beth is awesome and has worked with others in the Yocto community to ensure all of this is covered. Yocto can produce license manifests, archives of upstream sources plus patches, verify that GPLv3 code isn’t distributed, and more. All the work that is terribly boring at the beginning when you have a great idea and are full of enthusiasm (and Club-Mate), but that by the time you’re shipping is often nigh on impossible. Dave built the kernel on his machine but the disk with the right source tree on it died, and Sarah left without telling anyone else the right flags to make libhadjaha actually link… it’ll be fine, right?
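
As an illustration, much of this can be switched on from a build's local.conf. A minimal sketch follows; these are the standard archiver and license-handling variables, but treat the exact values as assumptions and check the documentation for your Yocto release:

$ cat >> conf/local.conf << 'EOF'
# ship the upstream sources and the patches applied to them alongside the images
INHERIT += "archiver"
ARCHIVER_MODE[src] = "original"
ARCHIVER_MODE[diff] = "1"
# refuse to build anything under licenses you consider toxic
INCOMPATIBLE_LICENSE = "GPLv3"
EOF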

by Ross Burton at May 15, 2014 10:29 PM

May 11, 2014

Ross Burton

Misc for Sale

As we’re moving house we are having a bit of a clear out, and have some techie gadgets that someone else might want:

  • Netgear DGN3500 ADSL/WiFi router. £15.
  • Bluetooth GPS dongle. £5.
  • Seagate Momentus 5400.6 160GB 2.5″ SATA hard disk (ex-MacBook). £5.

Those prices are including UK postage. Anyone interested?

by Ross Burton at May 11, 2014 10:28 PM

May 05, 2014

Richard Purdie

MGB lives again

Back in August 2010 I blew the MG’s engine up. I did get around to taking that one out and fitting the spare. The battery was too flat to start it and winter came along before I’d got any further. With moving house, the work on said house and garage and 101 other things, the MG just never happened and it’s been sat looking sorry for itself ever since :( .

Today, I thought I’d see what state it was in. It was charged overnight, so the battery should either be good or knackered. The bonnet release cable was snapped but I found a way in.

Whilst stiff, the controls all seemed to roughly do what they should. First try at ignition on saw petrol explode over the engine bay as the fuel line between the carbs gave way. At least the fuel pump works I guess.

After a small new piece of fuel pipe was fitted, take two. Ignition on, fuel at pressure, no overflow from the carbs, which was a pleasant surprise as I was expecting a carb rebuild. Trying the starter, the engine turned over at a reasonable speed; it even made a hint of a cough of life. The lack of oil pressure stopped me at this point. I took the plugs out, then ran it on the starter and, after what seemed like an eternity, the oil pressure climbed to normal. Ok.

Plugs back in, try the starter. Nothing. This was the point I was at after the rebuild but with a charged battery. I noted the distributor was loose and I’d never set the timing. Ok, twiddle it one way, try again. Still nothing. Ok, lets try the other way.

This time, you could hear it trying. After some further gentle nudging, it started coughing into life, first on a single cylinder then quickly onto approximately two. At this point I just gently tried to keep it running. The other cylinders kicked in intermittently at first, then it was running on all approximately four. I looked around for Dad, who’d run off to try and stop the clouds of smoke a two stroke would be proud of from getting into the garage (too late). I gestured for Dad to check nothing was on fire on the exhaust (probably just the petrol from the earlier spillage).

I decided not to run it too much more since there was no water in it. We stopped and then rectified that, cue a comedy moment where there was water pouring from a hole in the cylinder head with us wondering “what’s missing?” until we realised it was the water temperature sender unit.

Emboldened by this, we wondered “could it move?”. My Dad carefully moved vehicles out of the way in case the brakes failed in some catastrophic way and made dire predictions about whether I’d destroy the clutch trying this. It started and then we managed a controlled lurch forwards as the brakes unbound with not nearly as much issue as we’d expected. At this point I drove it off the drive and did a couple of loops around the T junction. Everything felt seized up but it was none the less moving, stopping, starting and turning.

At this point attention turned to the driveway which needed a good clean and with the car missing, this was an ideal opportunity. Later, the car started first turn of the starter and reversed back on relatively happily.

Sadly the bodywork is in a bad way, but at least now it’s known to vaguely function and move under its own power :) .

by Richard at May 05, 2014 03:19 PM

May 04, 2014

Richard Purdie

CRM lives again

I took the CRM to bits back in September. It was making various rattling noises that said the engine was tired and in need of a rebuild. Upon dismantling it, I found that not only was the piston chattering but the power valve was also not in the best of health being rather loose on its shaft. I managed to get a secondhand part for that but getting a new piston proved tricky since I wanted a forged one rather than cast. I ended up going for a .4mm oversize after much playing with feeler gauges and vernier callipers to map out the barrel wear. As expected it was more worn in the centre of the bore and less at the top/bottom.

As a family we’ve long since used an engineering shop in Blyth for rebores. A lot of our engines now have nikasil liners which mean sending off the cylinders but this one is a cast iron liner and hence can be done locally. When I dropped it off, there was much sucking of teeth about a 0.4mm oversize piston as it was not leaving them a lot of room for the rebore. They’d do the best they could, complaining the standard bores were never actually straight. It had been given to the “old man” by the son since he has more patience with these silly motorcycle things. We use the place due to their attention to detail. A joke from my school’s karting association days about their local competitor springs to mind: “Where did you get the rebore? Armstrongs?! How do you square up the barrel then? Yellow pages [telephone directory] under one side?”.

The above shows a mark left which they were very apologetic about but they had warned me. In reality this won’t make any difference to the running.

New shiny parts and some less shiny power valve bits.

Crazy amounts of plumbing. The question is could I remember how it all goes together from back in September?


An interesting aside, these photos show the exhaust port with the valve open and closed. It makes an amazing difference to the port size and position. This allows the engine to have low down grunt and decent top end rather than having the port timing fixed to a specific engine speed.

The new shiny piston fitted and ready for the top end. I nearly forgot to fit the new base gasket but did thankfully remember.

It went back together with nothing unexpected left over, which is always good. So did it start? Kind of. It did fire up, smoke a lot, then stop. It reluctantly fired up again, stopped and lost all compression. At this point I was somewhat panicking and also out of time to work on it further.

There are a limited number of ways a two stroke can lose compression and most of them are not good. This morning I took the exhaust off and peered into the barrel as best I could; all looked well. No signs of the power valve having hit the piston, which was one worry. I tried turning the engine over under load and the compression came back, suggesting the thing was just full of oil which was seizing up the piston rings. I spent an age kicking it over trying to get it started and did manage to get it to fire up occasionally, once managing to get it onto full throttle with clouds of smoke coming out, but then it basically stopped dead again. The plug was clearly oiling up and it’s a 10 minute job to remove, clean and refit it.

I gave in and called in Dad to drive the discovery whilst I was towed by it on the CRM. I am reluctant to do this, having destroyed engines like this before, but I was confident the thing was just clogged up with oil. It took half the block for it to turn over enough to clear, fire and then run under its own power. A couple more trips around the block, probably much to the enjoyment of the neighbours, and it’s running! So it lives again, just need to get it out and gently run it in now.

by Richard at May 04, 2014 01:03 PM

Hanging by a thread

Never let it be said I don’t believe in preventative maintenance. Admittedly this is a little just in time!

This is the YZ’s clutch cable in case that isn’t clear. The first one that arrived was for a 250F (a four stroke) despite me clearly buying one for the YZ250 (a two stroke).

by Richard at May 04, 2014 12:19 PM

May 02, 2014

Emmanuele Bassi

Graphene

one of the challenges of writing a graphics library that is capable of doing what modern UI designers and developers expect is providing the required data types to achieve things like 3D transformations.

with the collective knowledge and attention to detail [1] that the free and open source software community brings to the table, I was actually surprised to see that all the code for doing vector and matrix math is usually tucked away into various silos that also come with canvas implementations, physics engines, and entire web browsers. it gets even worse when you want code that uses features of modern (and less modern) hardware: instead, all you get are naive implementations of four floating point values in a structure.

you can trust me when I say that I didn’t want to spend the past seven days writing code that deals with vector and matrix operations, when I wasn’t reading PDFs of Intel architecture opcodes, or ARM NEON instructions; I also didn’t want to know that once you start implementing common operations on matrix types, like projection and unprojection, you get to open a fairly deep can of worms that forces you to implement point (2D and 3D), rectangle, quaternion, and quad types.

luckily, it’s possible to find a bunch of implementations in various stages of maintenance, and under suitable licenses, even though most are in C++ and they overlap by just about 60% each; you really need to buckle up and start translating naive matrix determinant code to SIMD four-vector data structures, and do a union of all the possible API, before you have something you can actually use.

the end result of these seven days is an almost decent, almost complete little utility library that tries to be fairly thin in both what it requires and what it provides. I called it graphene and it’s available in Git. at some point, when I’m actually satisfied with it, I’ll even document it like the grown-up I’m supposed to be. right now, I’ll have to write a ton of tests to check on the math, because I’m pretty sure there must be a ton of bugs in there.

the main question is: what do I intend to use graphene for. the more attentive amongst you, kind readers, will already guess that it’s for the forthcoming GTK+ scene graph API — which is indeed the correct answer, but you’ll have to wait for the next blog post in the series for a proper introduction and description, as well as a road map for the unicorn and ponies fuelled future.

  1. bordering on the OCD

by ebassi at May 02, 2014 08:00 PM

Berlin DX Hackfest / Day 3

the third and last day of the DX hackfest opened with a quick recap of what people had been working on in the past couple of days.

we had a nice lunch nearby, and then we went back to the Endocode office to tackle the biggest topic: a road map for GTK+.

we made good progress on all the items, and we have a fairly clear idea of who is going to work on what. sadly, my optimism on GProperty landing soon did not survive a discussion with Ryan; it turns out that there are many more layers of yak to be shaved, though we kinda agreed on the assumption that there is, in fact, a yak underneath all those layers. to be fair, the work on GProperty enabled a lot of the optimizations of GObject: property notifications, bulk installation of properties, and the private instance data reorganization of last year are just examples. both Ryan and I agreed that we should not increase the cost for callers of property setters; right now that would require asking the class of the instance being modified for its GProperty instance, which implies taking locks and other unpleasant stuff. luckily, we do have access to private class data, and with a few minor modifications we can use that private data to store the properties; thus, getting the properties of a class can be achieved with simple pointer offsets and dereferences, without locks being involved. I’ll start working on this very soon, and hopefully we’ll be able to revisit the issue at GUADEC, in time for the next development cycle of GLib.

in the meantime, I kept hacking on my little helper library that provides data types for canvases — and about which I’ll blog soon — as well as figuring out what’s missing from the initial code drop of the GTK+ scene graph that will be ready to be shown by the time GUADEC 2014 rolls around.

I’m flying back home on Saturday, so this is the last full day in Berlin for me. it was a pleasure to be here, and I’d really like to thank Endocode for generously giving us access to their office; Chris Kühl, for being a gracious and mindful host; and the GNOME Foundation, for sponsoring the attendance of all these fine people and contributors, and me.

 

Sponsored by the GNOME Foundation

by ebassi at May 02, 2014 07:00 PM

May 01, 2014

Emmanuele Bassi

Berlin DX Hackfest / Day 2

the second day of the hackfest had a pretty big discussion on the roadmap for GTK+.

thanks to Matthias Clasen, we had a list of things to discuss prior to the start of the hackfest, even if Matthias himself would not be present:

  • filling the gaps between the GNOME HIG and the GTK+ API needed to implement it
  • a better cross-platform story for tool kit maintainers and application developers
  • touch support
  • scene graph to replace Clutter
  • documentation
  • improving the relationship of the tool kit with Glade
  • required clean ups for GTK+ 4

during the afternoon we only managed to go through the first bullet point of the list, but we made really good progress on it, and we managed to assign each sub-issue to a prospective owner who is going to be in charge of it.

hopefully, we’re going to go through the other points during the rest of the hackfest much more quickly.

Sponsored by the GNOME Foundation

by ebassi at May 01, 2014 11:01 PM

Berlin DX hackfest / Day 1

we had a fairly productive first day here, at the Endocode offices in Berlin. everyone is pretty excited about working on the overall experience for developers on the GNOME platform.

at first, we decided what to tackle in the next three days, and drafted a rough schedule. the hackfest then broke down into two main groups: the first tackled GObject models for the benefit of GTK+ widgets acting as views; the second worked on the developer documentation available on developer.gnome.org.

I decided to stay on the sidelines for the day, and worked on a small utility library that I’m going to use in the development of GSK, the GTK+ scene graph API that will replace Clutter in the near future; I’m going to do a proper blog post on both things later this week. I’ve also worked a bit on my old nemesis, GProperty. I have really high hopes that after three years of back and forth we’re going to finally land it in GLib, and let people have a better, easier, and more efficient way to define and use GObject properties.

In the evening we went to the Berlin GNOME beers along with the local GNOME community; it’s been a great evening, and we met both familiar faces and new ones.

I’d like to thank Endocode for kindly giving us access to their office in order to host the hackfest, as well as the GNOME Foundation for sponsoring the travel and attendance of many talented members of the GNOME community.

Sponsored by the GNOME Foundation

by ebassi at May 01, 2014 04:00 PM

February 05, 2014

Ross Burton

Better bash completion?

Bash completion is great and everything, but I spend more time than is advisable dealing with numerous timestamped files.

$ mv core-image-sato-qemux86-64-20140204[tab]
core-image-sato-qemux86-64-20140204194448.rootfs.ext3
core-image-sato-qemux86-64-20140204202414.rootfs.ext3
core-image-sato-qemux86-64-20140204203642.rootfs.ext3

This isn’t an obvious choice as I now need to remember long sequences of numbers. Does anyone know if bash can be told to highlight the bit I’m being asked to pick from? Something like this, with the differing part marked (shown here with brackets, but ideally in bold or colour):

$ mv core-image-sato-qemux86-64-20140204[tab]
core-image-sato-qemux86-64-20140204[194448].rootfs.ext3
core-image-sato-qemux86-64-20140204[202414].rootfs.ext3
core-image-sato-qemux86-64-20140204[203642].rootfs.ext3

by Ross Burton at February 05, 2014 11:55 AM

January 21, 2014

Ross Burton

Remote X11 on OS X

I thought I’d blog this just in case someone else is having problems using XQuartz on OS X as a server for remote X11 applications (i.e. using ssh -X somehost).

At first this works, but after some time (20 minutes, to be exact) you’ll get “can’t open display: localhost:10.0” errors when applications attempt to connect to the X server. This is because the X forwarding is “untrusted”, and that has a 20 minute timeout. There are two solutions here: increase the X11 timeout (the maximum is 596 hours) or enable trusted forwarding.

It’s probably best to enable trusted forwarding only if you’re connecting to machines you, well, trust. The option is ForwardX11Trusted yes, and this can be set globally in /etc/ssh_config or per host in ~/.ssh/config.
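
For reference, a per-host entry might look something like this in ~/.ssh/config (“somehost” is just a placeholder; the commented lines show the timeout approach instead):

Host somehost
    ForwardX11 yes
    ForwardX11Trusted yes
    # Alternatively, keep untrusted forwarding but stretch the timeout to its maximum:
    # ForwardX11Timeout 596h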

by Ross Burton at January 21, 2014 11:17 AM

January 08, 2014

Ross Burton

Network Oddity

This is… strange. Two machines, connected through cat5 and gigabit adaptors/hub.

$ iperf -c melchett.local -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to melchett.local, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.7 port 35197 connected with 192.168.1.10 port 5001
[  5] local 192.168.1.7 port 5001 connected with 192.168.1.10 port 33692
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.08 GBytes   926 Mbits/sec
[  5]  0.0-10.0 sec  1.05 GBytes   897 Mbits/sec

Simultaneous transfers get ~900 Mbits/sec in each direction.

$ iperf -c melchett.local -r
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to melchett.local, TCP port 5001
TCP window size: 22.9 KByte (default)
------------------------------------------------------------
[  5] local 192.168.1.7 port 35202 connected with 192.168.1.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   210 MBytes   176 Mbits/sec
[  4] local 192.168.1.7 port 5001 connected with 192.168.1.10 port 33693
[  4]  0.0-10.0 sec  1.10 GBytes   941 Mbits/sec

Testing each direction independently results in only 176 Mbits/sec on the transfer to the iperf server (melchett). This is 100% reproducible, and the same results appear if I swap the iperf client and server.

I’ve swapped one of the cables involved (the other is harder to get to), but I don’t see how physical damage could cause this sort of performance issue. Oh Internet, any ideas?

by Ross Burton at January 08, 2014 12:38 PM

December 16, 2013

Chris Lord

Linking CSS properties with scroll position: A proposal

As I, and many others have written before, on mobile, rendering/processing of JS is done asynchronously to responding to the user scrolling, so that we can maintain touch response and screen update. We basically have no chance of consistently hitting 60fps if we don’t do this (and you can witness what happens if you don’t by running desktop Firefox (for now)). This does mean, however, that you end up with bugs like this, where people respond in JavaScript to the scroll position changing and end up with jerky animation because there are no guarantees about the frequency or timeliness of scroll position updates. It also means that neat parallax sites like this can’t be done in quite the same way on mobile. Although this is currently only a problem on mobile, this will eventually affect desktop too. I believe that Internet Explorer already uses asynchronous composition on the desktop, and I think that’s the way we’re going in Firefox too. It’d be great to have a solution for this problem first.

It’s obvious that we could do with a way of declaring a link between a CSS property and the scroll position. My immediate thought is to do this via CSS. I had this idea for a syntax:

scroll-transition-(x|y): <transition-declaration> [, <transition-declaration>]*

    where transition-declaration = <property>( <transition-stop> [, <transition-stop>]+ )
      and transition-stop        = <relative-scroll-position> <property-value>

This would work quite similarly to standard transitions, where a limited number of properties would be supported, and perhaps their interpolation could be defined in the same way too. Relative scroll position is 0px when the scroll position of the particular axis matches the element’s offset position. This would lead to declarations like this:

scroll-transition-y: opacity( 0px 0%, 100px 100%, 200px 0% ), transform( 0px scale(1%), 100px scale(100%), 200px scale(1%) );

This would define a transition that would grow and fade in an element as the user scrolled it towards 100px down the page, then shrink and fade out as you scrolled beyond that point.

But then Paul Rouget made me aware that Anthony Ricaud had had the same idea, but suggested tying it to CSS animation keyframes instead of this slightly arcane syntax. I think this is more easily implemented (at least in Firefox’s case), more flexible, and more easily expressed by designers too. Much like transitions and animations, the two approaches need not be mutually exclusive, I suppose (though the interactions between them might mean that, as a platform developer, it’d be in my best interests to suggest that they should be :)).

I’m not aware of any proposal of this suggestion, so I’ll describe the syntax that I would expect. I think it should inherit from the CSS animation spec, but prefix the animation-* properties with scroll-. Instead of animation-duration, you would have scroll-animation-bounds. scroll-animation-bounds would describe a vector, the distance along which would determine the position of the animation. Imagine that this vector was actually a plane, that extended infinitely, perpendicular to its direction of travel; your distance along the vector is unaffected by your distance to the vector. In other words, if you had a scroll-animation-bounds that described a line going straight down, your horizontal scroll position wouldn’t affect the animation. Animation keyframes would be defined in the exact same way.

[Edit] Paul Rouget suggests that, rather than having a prefixed copy of animation, a new property be introduced, animation-controller, whose default would be time but which could also be set to scroll. We would still need an equivalent to duration, so I would re-purpose my above-suggested property as animation-scroll-bounds.
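
To make the comparison a little more concrete, here is a purely hypothetical sketch of both variants. None of these scroll-related properties exist anywhere yet; the names simply follow the suggestions above and the values are illustrative only.

@keyframes grow-fade {
  0%   { opacity: 0; transform: scale(0.5); }
  100% { opacity: 1; transform: scale(1); }
}

/* prefixed-property variant (hypothetical) */
.hero {
  scroll-animation-name: grow-fade;
  scroll-animation-bounds: 0px 200px; /* a vector pointing 200px straight down the page */
}

/* animation-controller variant (hypothetical) */
.hero {
  animation-name: grow-fade;
  animation-controller: scroll;
  animation-scroll-bounds: 0px 200px;
}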

What do people think about either of these suggestions? I’d love to hear some conversation/suggestions/criticisms in the comments, after which perhaps I can submit a revised proposal and begin an implementation.

by Chris Lord at December 16, 2013 11:20 AM

November 29, 2013

Chris Lord

Efficient animation for games on the (mobile) web

Drawing on some of my limited HTML5 games experience, and marginally less limited general games and app writing experience, I’d like to write a bit about efficient animation for games on the web. I usually prefer to write about my experiences, rather than just straight advice-giving, so I apologise profusely for how condescending this will likely sound. I’ll try to improve in the future :)

There are a few things worth knowing that will really help your game (or indeed app) run better and use less battery life, especially on low-end devices. I think it’s worth getting some of these things down, as there’s evidence to suggest (in popular and widely-used UI libraries, for example) that it isn’t necessarily common knowledge. I’d also love to know if I’m just being delightfully/frustratingly naive in my assumptions.

First off, let’s get the basic stuff out of the way.

Help the browser help you

If you’re using DOM for your UI, which I’d certainly recommend, you really ought to use CSS transitions and/or animations, rather than JavaScript-powered animations. Though JS animations can be easier to express at times, unless you have a great need to synchronise UI animation state with game animation state, you’re unlikely to be able to do a better job than the browser. The reason for this is that CSS transitions/animations are much higher level than JavaScript, and express a very specific intent. Because of this, the browser can make some assumptions that it can’t easily make when you’re manually tweaking values in JavaScript. To take a concrete example, if you start a CSS transition to move something from off-screen so that it’s fully visible on-screen, the browser knows that the related content will end up completely visible to the user and can pre-render that content. When you animate position with JavaScript, the browser can’t easily make that same assumption, and so you might end up causing it to draw only the newly-exposed region of content, which may introduce slow-down. There are signals at the beginning and end of animations that allow you to attach JS callbacks and form a rudimentary form of synchronisation (though there are no guarantees on how promptly these callbacks will happen).

Speaking of assumptions the browser can make, you want to avoid causing it to have to relayout during animations. In this vein, it’s worth trying to stick to animating only transform and opacity properties. Though some browsers make some effort for other properties to be fast, these are pretty much the only ones semi-guaranteed to be fast across all browsers. Something to be careful of is that overflow may end up causing relayout, or other expensive calculations. If you’re setting a transform on something that would overlap its container’s bounds, you may want to set overflow: hidden on that container for the duration of the animation.
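
As a rough illustration (the class names are made up and the timings arbitrary), a panel sliding into view while touching only transform and opacity, with overflow: hidden set on its container, might look something like this:

.panel-container {
  /* contain the moving child for the duration of the animation */
  overflow: hidden;
}

.panel {
  opacity: 0;
  transform: translateX(-100%);
  transition: transform 0.3s ease-out, opacity 0.3s ease-out;
}

.panel.visible {
  opacity: 1;
  transform: translateX(0);
}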

Use requestAnimationFrame

When you’re animating canvas content, or when your DOM animations absolutely must synchronise with canvas content animations, do make sure to use requestAnimationFrame. Assuming you’re running in an arbitrary browsing session, you can never really know how long the browser will take to draw a particular frame. requestAnimationFrame causes the browser to redraw and call your function before that frame gets to the screen. The downside of using this vs. setTimeout, is that your animations must be time-based instead of frame-based. i.e. you must keep track of time and set your animation properties based on elapsed time. requestAnimationFrame includes a time-stamp in its callback function prototype, which you most definitely should use (as opposed to using the Date object), as this will be the time the frame began rendering, and ought to make your animations look more fluid. You may have a callback that ends up looking something like this:

var startTime = -1;
var animationLength = 2000; // Animation length in milliseconds

function doAnimation(timestamp) {
 // Calculate animation progress
 var progress = 0;
 if (startTime < 0) {
   startTime = timestamp;
 } else {
   progress = Math.min(1.0, (timestamp - startTime) /
                            animationLength);
 }

 // Do animation ...

 if (progress < 1.0) {
   requestAnimationFrame(doAnimation);
 }
}

// Start animation
requestAnimationFrame(doAnimation);

You’ll note that I set startTime to -1 at the beginning, when I could just as easily set the time using the Date object and avoid the extra code in the animation callback. I do this so that any setup or processes that happen between the start of the animation and the callback being processed don’t affect the start of the animation, and so that all the animations I start before the frame is processed are synchronised.

To save battery life, it’s best to only draw when there are things going on, so that would mean calling requestAnimationFrame (or your refresh function, which in turn calls that) in response to events happening in your game. Unfortunately, this makes it very easy to end up drawing things multiple times per frame. I would recommend keeping track of when requestAnimationFrame has been called and only having a single handler for it. As far as I know, there aren’t solid guarantees of what order things will be called in with requestAnimationFrame (though in my experience, it’s in the order in which they were requested), so this also helps cut out any ambiguity. An easy way to do this is to declare your own refresh function that sets a flag when it calls requestAnimationFrame. When the callback is executed, you can unset that flag so that calls to that function will request a new frame again, like this:

function redraw() {
  drawPending = false;

  // Do drawing ...
}

var drawPending = false;
function requestRedraw() {
  if (!drawPending) {
    drawPending = true;
    requestAnimationFrame(redraw);
  }
}

Following this pattern, or something similar, means that no matter how many times you call requestRedraw, your drawing function will only be called once per frame.

Remember, that when you do drawing in requestAnimationFrame (and in general), you may be blocking the browser from updating other things. Try to keep unnecessary work outside of your animation functions. For example, it may make sense for animation setup to happen in a timeout callback rather than a requestAnimationFrame callback, and likewise if you have a computationally heavy thing that will happen at the end of an animation. Though I think it’s certainly overkill for simple games, you may want to consider using Worker threads. It’s worth trying to batch similar operations, and to schedule them at a time when screen updates are unlikely to occur, or when such updates are of a more subtle nature. Modern console games, for example, tend to prioritise framerate during player movement and combat, but may prioritise image quality or physics detail when compromise to framerate and input response would be less noticeable.

Measure performance

One of the reasons I bring this topic up, is that there exist some popular animation-related libraries, or popular UI toolkits with animation functions, that still do things like using setTimeout to drive their animations, drive all their animations completely individually, or other similar things that aren’t conducive to maintaining a high frame-rate. One of the goals for my game Puzzowl is for it to be a solid 60fps on reasonable hardware (for the record, it’s almost there on Galaxy Nexus-class hardware) and playable on low-end (almost there on a Geeksphone Keon). I’d have liked to use as much third party software as possible, but most of what I tried was either too complicated for simple use-cases, or had performance issues on mobile.

How I came to this conclusion is more important than the conclusion itself, however. To begin with, my priority was to write the code quickly to iterate on gameplay (and I’d certainly recommend doing this). I assumed that my own, naive code was making the game slower than I’d like. To an extent this was true, and I found plenty to optimise in my own code, but it got to the point where I knew what I was doing ought to perform quite well, and I still wasn’t quite there. At this point, I turned to the Firefox JavaScript profiler, and this told me almost exactly what low-hanging fruit was left to address to improve performance. As it turned out, I suffered from some of the things I’ve mentioned in this post; my animation code had some corner cases where it could cause redraws to happen several times per frame, some of my animations caused Firefox to need to redraw everything (they were fine in other browsers, as it happens; that particular issue is now fixed), and some of the third party code I was using was poorly optimised.

A take-away

To help combat poor animation performance, I wrote Animator.js. It’s a simple animation library, and I’d like to think it’s efficient and easy to use. It’s heavily influenced by various parts of Clutter, but I’ve tried to avoid scope-creep. It does one thing, and it does it well (or adequately, at least). Animator.js is a fire-and-forget style animation library, designed to be used with games, or other situations where you need many, synchronised, custom animations. It includes a handful of built-in tweening functions, the facility to add your own, and helper functions for animating object properties. I use it to drive all the drawing updates and transitions in Puzzowl, by overriding its requestAnimationFrame function with a custom version that makes the request, but appends the game’s drawing function onto the end of the callback, like so:

animator.requestAnimationFrame =
  function(callback) {
    requestAnimationFrame(function(t) {
      callback(t);
      redraw();
    });
  };

My game’s redraw function does all drawing, and my animation callbacks just update state. When I request a redraw outside of animations, I just check the animator’s activeAnimations property first to stop from mistakenly drawing multiple times in a single animation frame. This gives me nice, synchronised animations at very low cost. Puzzowl isn’t out yet, but there’s a little screencast of it running on a Nexus 5:

Alternative, low-framerate YouTube link.

by Chris Lord at November 29, 2013 02:31 PM

November 28, 2013

Ross Burton

Solving buildhistory slowness

The buildhistory class in oe-core is incredibly useful for analysing the changes in packages and images over time, but when doing frequent builds all of this metadata builds up and the resulting git repository can become quite unwieldy. I recently noticed that updating my buildhistory repository was often taking several minutes, with git frantically doing huge amounts of I/O. This wasn’t surprising after realising that my buildhistory repository was now 2.9GB, covering every build I’ve done since April. Historical metrics are useful, but I only ever go back a few days, so this is slightly over the top. Deleting the entire repository is one idea, but a better solution would be to drop everything but the last week or so.

Luckily Paul Eggleton had already been looking into this, and pointed me at a StackOverflow page which used “git graft points” to erase history. The basic theory is that it’s possible to tell git that a certain commit has specific parents, or in this case no parent, so it becomes the new start of history. A quick git filter-branch and a re-clone to clean out the stale history later, and the repository is far smaller.

$ git rev-parse "HEAD@{1 month ago}" > .git/info/grafts

This tells git that the commit a month before HEAD has no parents. The documentation for graft points explains the syntax, but for this purpose that’s all you need to know.

$ git filter-branch

This rewrites the repository from the new start of history. This isn’t a quick operation: the manpage for filter-branch suggests using a tmpfs as a working directory and I have to agree it would have been a good idea.
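
If you do want to follow that advice, filter-branch can be pointed at a different working directory with its -d option; the path below is just an example of a tmpfs-backed location:

$ git filter-branch -d /dev/shm/filter-branch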

$ git clone file:///your/path/here/buildhistory buildhistory.new
$ rm -rf buildhistory
$ mv buildhistory.new buildhistory

After filter-branch all of the previous objects still exist in reflogs and so on, so this is the easiest way of reducing the repository to just the objects needed for the revised history. My newly shrunk repository is a fraction of the original size, and more importantly doesn’t take several minutes to run git status in.
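
If you want to put a number on “a fraction of the original size”, running git count-objects before and after the rewrite (not part of the steps above, just a handy check) reports the on-disk size of the object store:

$ git count-objects -vH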

by Ross Burton at November 28, 2013 05:07 PM

November 24, 2013

Richard Purdie

YZ Engine Rebuild

Back in August I found the YZ’s engine needed “a bit of attention”. It’s taken a bit longer to get back to it than I’d hoped, partly due to building work, but I can now complete the story. I stripped the bottom end down and concluded the easiest way forward was to buy a complete new crankshaft. This was slightly more expensive than just a conrod kit, however it meant I didn’t need to press in the new rod and rebalance the crank, both things I could probably do but would need to buy or make tooling for. Luckily the wear on the top end was confined to the piston; the barrel looked fine. Bottom and top end kits were therefore duly ordered and turned up.

I started to reassemble the engine, only to find the replacement crank was right apart from the taper and thread on the ignition side: it had an M10 thread, and I needed an M12 for my ignition. The bike is a 2002 model and the engine is a 2002 engine, however it appears to have a 2003 crankshaft, probably due to the aftermarket alternator and flywheel weight. I ended up deciding to get another 2003 model crankshaft.

Since I was doing a complete overhaul I put new main bearings and seals in:

The photo shows some scary looking “cracks” in the casings, although every two stroke I’ve ever rebuilt looks like this to some degree, so I’m doing my best to ignore them.

One nice feature of modern Japanese engines is that the gearbox stays together as one lump. Trying to put those back together and getting all the thrust washers in the right places is “fun”.

The crankshaft installed and the casings mated back together. Of course life isn’t simple, and whilst taking the engine apart I found the likely cause of the scary sounding rattle: a worn power valve linkage. The part looks like this:

and the wear is in the first joint, next to the coin in the photo. It’s very hard to photograph “play”, however this gives a better idea after I’d ground off the weld and separated the joint:

You shouldn’t be able to see light through there! Yamaha wanted a sum of money I considered excessive for this part, so I decided I’d have a go at a home “oversize rebore” repair. This means drilling the hole in the outer piece larger (to make it square) and then machining a new, oversized internal collar/bearing. The only lathe available to me was a little bit overkill for the job, weighing in at about 3.5 tons:

however I did manage to get it to machine something that small, just about anyway:

It’s hard to tell any difference from the final part, however it has much less play:

After putting the crankshaft in and mating the cases, the clutch basket, plates and primary drive gear on the RHS of the engine can be installed:

A view of the ignition side of the engine showing the ignition and aftermarket flywheel weight in situ:

The clutch casing/cover can now be installed, and the lovely new shiny piston can be connected to the conrod. You can see the power valve linkage on the bottom left of the green base gasket. It sits in the clutch cover, where there are spinning weights which control the power valves depending on engine speed. The main bearings, both ends of the conrod, and the piston and rings were all liberally coated with two stroke oil as the engine was assembled.

Sadly it won’t look this shiny for long. You get a good idea of what the ports in a two stroke engine look like from this view:

A view of the power valve chamber on the front of the cylinder. The repaired power valve linkage rod connects to the end of the shaft on the left of the photo, turning it to different positions depending on engine rpm. The YZ has three power valves: a main one on the centre exhaust port, actuated by the springs in the centre, and two secondary ports on the sides, which are actuated by the cams and levers at the sides of the chamber. This was the only point throughout the rebuild at which I consulted the manual, since I’ve never actually tried to set up power valves before. The manual was a bit vague so I did what seemed right…

After all the access covers are installed, the engine is complete. You can see the power valve chamber on the front, with the chamber on the side covering the repaired linkage. The cylinder head has also been installed.

All that is left is to fit it back into the bike. That took less time than I expected, and I’m pleased to report that whilst it didn’t start first kick, it did fire up pretty readily, and whilst I didn’t run it for long, it sounded much happier!

by Richard at November 24, 2013 05:16 PM