Planet Closed Fist

April 30, 2018

Tomas Frydrych

Six Months with Cotton Analogy®

When the row over the National Trust for Scotland trademarking the name ‘Glencoe’ erupted last summer, I had never heard of a company called Hilltrek. But for a while I had been on the lookout for some clothes for pottering about the woods with binoculars and a camera during the winter months, and had not seen anything well suited to the (sodden) Scottish conditions. And I liked what I saw on the Hilltrek website.

Hilltrek are a tiny Scottish company offering a range of clothing made from Ventile®. If you, like me, have not heard of Ventile® before, it’s a cotton fabric developed in the 1930s essentially for fire hoses. When subjected to water, its very dense weave swells so much that it prevents water penetration. The swelling is not instantaneous, so a single layer of the material is not enough to keep a wearer completely dry when subjected to a lot of water, but two layers, so-called Double Ventile®, are.

Hilltrek make clothes in three fabric options: Single Ventile®, Double Ventile®, and Cotton Analogy®. The latter is a single layer of Ventile® combined with the Nikwax Analogy® lining, also used by the Paramo® range of clothes. It was this that caught my attention, for the Nikwax Analogy® lining is well proven, and while I have never owned any Paramo® clothing (I am more of a Buffalo man myself), I know many who swear by it, and I have seen it perform excellently in some ‘real’ Scottish and Welsh weather. Unlike Paramo®, Cotton Analogy® offers the natural feel of cotton and lacks the irritating rustle of nylon — I was sold.

The Conival Trousers

While I was looking for something to wear about the woods, with the Conival Trousers I got more than I bargained for — without exaggeration, these are the best outdoor trousers I have ever owned. Over the last six months I have spent somewhere in the region of thirty-five days wearing them, from sodden days in the woods to numerous big full-on days in the hills, including multi-day camping trips in the snow and temperatures dropping below -10C. In all of this they performed impeccably.

The Conivals have a no-nonsense cut, can be customised at the point of ordering, and if you have special requirements, all you need to do is to lift the phone (the great thing about dealing with small companies). There are two zipped pockets on the back, and two front hand pockets; cargo pockets can be ordered as an extra.

Unlike typical waterproof fabrics, the Analogy® lining is pleasant enough to wear next to skin, so these really are trousers rather than over-trousers, and they breathe very well. I tend to sweat fairly heavily, and so I normally avoid wearing waterproofs until it is really raining — these are the first waterproof trousers I have owned that don’t feel like being inside a banya and that I am happy to wear all the time.

The two-layer construction is quite warm. I have found them good down to a few degrees C below zero on their own, and with a pair of thin merino long johns in temperatures down to -10C. On the upper end, I find them fine to about 12C; beyond that they are too warm for me (but then I don’t usually wear waterproofs in that sort of temperature anyway, and I am so impressed I am saving up for the Single Ventile® version Hilltrek make).

I have heard it said of Paramo® trousers that if you kneel on wet ground the water gets through. I have knelt in the Conivals in mud and snow on numerous occasions, pitching a tent or resting calves on long steep front-pointing stints, and I have not found that to be the case; perhaps that is the benefit of the Ventile® itself being shower proof (or perhaps it was just an evil rumour about Paramo®).

The Ventile® fabric is quite heavy compared to ‘modern’ ‘technical’ kit, but I am growing sick and tired of the current obsession with weight, which invariably translates into equipment that lasts a season or two. Indeed, the Conivals have shown themselves to be (I admit, surprisingly) hard wearing. I have done a fair bit of sliding about in them, sometimes on quite coarse icy ground, without noticeable surface wear. Some of the stitching where the front pockets merge into the side seam is starting to come undone, but that’s easily fixed.

The main wear-related issue with the Conivals is the Ventile® dye, which does not seem to penetrate deep into the fibre, so where the fabric creases regularly it starts reverting to the natural colour of cotton. This happens so easily that, somewhat disconcertingly, the trousers started showing these whitish marks from the very first short walk in them, and it gets progressively worse, though it does appear purely cosmetic.

The biggest drawback of Ventile® is that, according to the manufacturer’s recommendation, it is supposed to be dry cleaned. For a jacket this might be OK, but for outdoor trousers it is not practical. A closer look at the Ventile® site shows that the fabric can be washed with soap. I have been washing mine at 30C using Nikwax® Tech Wash, and can report no ill effects.

(It’s worth noting that, as with all waterproof fabrics, the special care requirements have naught to do with the fabric per se, but with the DWR coating that is applied to it, which has largely worn off before the first wash. I have tried Nikwax® Cotton Proof per the manufacturer’s recommendation; it does not produce the same sort of beading the original DWR did. It does seem to slow the water absorption a bit, but I am not entirely convinced it merits the expense.)

The Assynt Jacket

The Assynt jacket is billed as ‘ideal for field sports, nature watching and photography’. It has a corresponding cut with a waist-level drawcord, two voluminous, low-down front pockets with stud closures, two chest-level handwarming pockets, and a 5” high collar with a stowaway hood.

In terms of size, based on the official size chart I am bang on for S, and indeed have found the chest size to allow for adequate layering for winter use. But the sleeves are a different story. If anything, the nominal size suggests these should be too long for me, but in fact they are well on the short side (1-2” shorter than on any other jacket of a comparable size I own), which becomes very noticeable with more layers underneath.

The snug-fitting collar is the jacket’s best feature, keeping the dreich weather at bay. The stowing of the hood works better than is usual with such an arrangement, but unavoidably results in a hood of a low volume. This is the jacket’s main limitation. I have used it on a couple of fairly full-on mountain days to see what it would be like, and the hood is not up to the task (this is not the intended use, and there are other jackets in the Hilltrek range that come with big-volume, helmet-compatible hoods).

Another minor drawback is that the handwarming pockets don’t have any closures, and, as they are not Ventile® lined, this makes them draughty in a moderately strong side-on wind. This feels like a bit of an oversight in an otherwise well thought out design.

All in all, I have found the jacket to be excellent within the parameters for which it was intended. I do wish the hood were bigger; I find I keep it out most of the time, simply because Scotland, and a bigger, non-stowable hood would make this a much more versatile garment.

None of the Hilltrek clothing is cheap, especially if you decide to do some customisation, but the prices are not incomparable to those of some big-brand, mass-produced outdoor kit. On the other hand, I expect it to last longer. I own a very nice Gore-Tex jacket from a big brand name that cost a similar amount to the Assynt jacket. It’s my ‘special occasions’ jacket, for from past experience I know that in intensive use it wouldn’t last more than a season. I have no such qualms about the Hilltrek clothing; there is a sturdy feel to it, and it is obvious that it was not only made in Scotland, but also for Scotland.

by tf at April 30, 2018 09:57 AM

April 16, 2018

Tomas Frydrych

Cooking with Alcohol

In the last couple of years I have become a great fan of alcohol stoves. For three reasons. On short trips they are very weight-efficient. Alcohol is a much more environmentally friendly fuel than gas. And alcohol stoves are cheap to run!

As I have mentioned before, through my childhood and teenage years outdoor cooking involved an open fire. My first real stove was an MSR WhisperLite™, purchased at the Mountain Equipment Co-op in Vancouver in ’96. I still have it, with the original seals and all, though I haven’t used it for some years. The truth is that petrol stoves really come into their own on long remote trips, and I don’t do those. And they take a bit of getting used to; the priming can easily get out of hand!

(On one particularly memorable occasion in Glen Brittle in the late ’90s the WhisperLite™ got me invited to cook in the kitchen of a giant luxury mobile home by a kind German couple who thought my stove was broken, after I misjudged the volume of the priming fuel, resulting in a flare worthy of Grangemouth. The trick, I learnt eventually, is to use a little cotton wool and meths, but by then I had also realised that this excellent stove was a poor match for my needs.)

And so, like everyone else, I switched to gas.

Gas stoves, without a question, win on the convenience front. There is no risk of spilling stinky fuel, no priming. But they have their drawbacks, not least that the fuel is expensive and environmentally unfriendly — the LPG brings with it the whole oil industry baggage, the cartridges are manufactured in the Far East then shipped around the globe, and, being non-refillable, they end up in landfill (or left in a bothy); these things increasingly bother me.

Gas stoves are also rather weight inefficient. I didn’t fully appreciate this until I started thinking of multi day running trips, and was forced to rationalise the weight of my kit. My first move was, of course, a lighter gas stove, the 25g BRS-3000T. It only took a couple of trips to realise this was a dangerous piece of crap (mine flares uncontrollably sideways at any attempt to reduce the flame; sometimes we really get what we pay for).

In any case, if the objective is to reduce weight, even the lightest of gas stoves doesn’t help much, for the fundamental problem lies with the canister: on the one hand, I have very little control over how much fuel I take, and on the other the canister is far too heavy. So if my requirement is, say, for 60g of gas, I have to take 110g, plus the 120g of the canister; if I need 120g, I have to take 220g of it, plus 180g of the canister, etc.

Just as petrol/kerosene stoves beat gas in the weight game for long trips, alcohol stoves do so for short trips. Alcohol has two big advantages: it is very easy to store and transport, with minimal weight overheads, and it is very easy to burn, making it possible to create simple, light stoves.

Of course, burning alcohol produces only about half the energy per unit weight of gas. But for short trips this is more than offset by the weight of the canister: if you need 60g of gas you have to pack 230g of fuel + canister; for an alcohol stove the equivalent comes to ~140g. Broadly speaking, the weight game works out in favour of alcohol, or at least level pegging, until you need enough fuel to take the big 460g gas canister. How long a trip that is will depend on your cooking style, but in my case that is 3+ solo nights when snow melting, and something like 10+ nights in the summer.
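
To make the arithmetic concrete, here is a minimal sketch of the break-even calculation in Python. The canister figures are the ones quoted above; the ~20g for a light plastic fuel bottle is my assumption for illustration:

# Pack weight needed to carry a given amount of cooking energy,
# expressed in grams of gas, for gas vs alcohol.

# (gas content g, empty canister weight g), as quoted above
CANISTERS = [(110, 120), (220, 180)]

def gas_pack_weight(gas_g):
    # You must carry the smallest full canister that holds the gas you need.
    for content, empty in CANISTERS:
        if gas_g <= content:
            return content + empty
    raise ValueError("need a bigger canister")

def alcohol_pack_weight(gas_g, bottle_g=20):
    # Alcohol holds roughly half the energy per weight, so take twice
    # the fuel weight, plus a light bottle (assumed ~20g).
    return 2 * gas_g + bottle_g

for need_g in (60, 120):
    print(need_g, gas_pack_weight(need_g), alcohol_pack_weight(need_g))

For 60g of gas this prints 230 vs 140, the figures above; for 120g it is 400 vs 260, still firmly in alcohol’s favour.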

And alcohol is cheap, and the environmental footprint is much smaller. There are, of course, downsides, most notably that cooking with alcohol takes longer; how much longer will depend on the actual stove, so let’s talk about the stoves.

Alcohol burners come in two basic types: pressurised and unpressurised. An unpressurised stove is really just a small bowl holding the fuel, burning the vapours as they rise from the surface. While this works perfectly fine, such an open-bath stove is potentially quite dangerous because of the risk of spilling the burning fuel; this is easily remedied by filling the bowl with some kind of fireproof porous material. The simplicity means unpressurised stoves are usually home- or cottage-made.

In the case of a pressurised stove, the fuel vapour is expelled under pressure from an enclosed fuel reservoir through a series of small holes, resulting in discrete jets of flame. Unlike with petrol/kerosene, this pressure is created simply by heating up the fuel in the reservoir, and is not very high. It does, however, mean that the stove has to have some way of priming. Most often this comes in the form of an open bath in the centre of the burner. The best known pressurised alcohol stove is undoubtedly the Trangia, but this type of burner can also be made fairly easily at home from a beer can, e.g., the famous Penny Stove — beer can stoves are neat and really fun to experiment with (but they are also quite large and fragile).

(Before going any further, it is worth saying that alcohol stoves always need a windshield; the flame is just too feeble to cope with even a slight breeze. The cheapest, and also the most lightweight, option is to make one from a double layer of kitchen foil. If you look after it, it will last quite a while, but it is too light for use in real wind, though perfectly fine for in-tent use. Of course, alternatives, commercial or otherwise, exist.)

Back to stoves. So, which one is better, pressurised or not? The clued-up reader, who undoubtedly now expects a detailed discussion of the fuel efficiency of the different designs, is going to be most disappointed in me. As exciting as carefully measuring the fuel burnt by different models to find the Ultimate Stove is, when it comes to alcohol such comparisons are of very limited value.

The fuel efficiency of any stove really comes down to a single thing: is the vapourised fuel mixed with enough oxygen to allow complete combustion? In the case of all alcohol stoves the mixing happens above the burner, and so is determined more by the size of the pot, its distance from the burner, and the airflow provided by the windshield than by the design of the burner itself. Consequently any comparison is only valid for the one specific testing configuration, and you will almost certainly be able to come up with a different setup producing quite different results.

OK, but which is better? They both have advantages and disadvantages. The great thing about unpressurised burners is that you can put in as little or as much fuel as you want, and if there is any left, you screw the lid on and it will keep till the next time. Also, the variety with the absorbent material is the safest alcohol stove there is (and one shouldn’t underestimate the danger of spilling the burning fuel, as the flames are nigh invisible).

The main advantage of a pressurised stove is a higher rate of burn, i.e., it cooks faster. But it is quite difficult to make a really tiny one, because below a certain size the priming/gasification doesn’t work very well. Also, the usual method of priming from an open bath on the top of the burner is super inefficient, and for there to be enough fuel in the bath, the stove generally needs to be filled near to capacity. For the smaller stoves, this will be around 30ml — if, like me, you only use 50ml per day and less than 15ml at a time, this is a nuisance, as there is always a significant, unavoidable loss to continued evaporation while the stove cools down before you can pour the excess out, and there is always spillage when draining it.

If the burner doesn’t sit directly on the ground, it is possible to prime from below, using a small vessel, e.g., a bottle cap. This is much more efficient and needs just a few drops of fuel. But it is quite tricky to get right and requires practice — if you use too much fuel, you get a flare-up, not ideal in a tent!

An unpressurised stove is great for solo summer use. Mine is of the makeup case variety; if you look carefully, you can see it among my other cooking paraphernalia in the title picture (taken on an unexpectedly cold autumn morning in the Cairngorms; the -4C meant I had to boil an extra pot that morning to pour into the running shoes to soften them up).

The stove came from redspeedster on eBay (you could easily make your own, but it’s not worth sourcing the materials for just one). It has a 30ml capacity, and with the nice pot supports he also makes it comes to 24g. In my setup, using a 1/2l pot, it will bring 400ml from 8C to a rolling boil in about 12min, using 11g of fuel — yes, it’s slow, but then I rarely need a rolling boil, so my actual ‘boil’ time is ~8min, and really, I have all the time in the world; after all, I am escaping the time-obsessed rat race.

But once you start looking at cooking for more than one person the unpressurised stove becomes impractical. I still want something small, i.e., not the family sized Trangia, but nevertheless something faster.

The Vargo Triad fits the bill. It’s a nicely made little gadget, and has about double the rate of burn of my makeup stove, bringing 0.8l of water from 8C to a rolling boil in just under 13min, using 23g of fuel. This will do nicely for our next summer trip, I reckon. It’s a pity the burner does not have a cap, but I have cut a circle from a silicone baking sheet to cover it, which reduces fuel evaporation after the stove is extinguished and is cooling down.

My current quest is to find an alcohol stove I’d be happy with for winter use. During winter I tend to heat up about twice the amount of water that I do during the summer (~3l), while at the same time snow melting roughly doubles the energy requirements (I am sure I could cut this down by manning up, but TBH, the winter brings enough misery as it is). The Triad at a near-full fill of 35g of fuel will just bring 1/2 litre of water from snow to the boil in 20min — for winter solo use I think this is borderline; I’d prefer something that would do about 0.8l at a time, and a bit faster.

The Vargo Decagon looks a possible option. The 60ml capacity should be enough to melt 0.8l from snow, and it appears to have a considerably higher burn rate than the Triad. But by all accounts the Decagon is very slow priming, and unlike the Triad the pot can’t go on until the priming is finished (the pot sits directly on the top of the burner, so it conducts heat away from it); it also doesn’t lend itself as well to bottom priming as the Triad, nor can it be so easily capped. Nevertheless, I am keen to give it a try, preferably while there is still some snow in the local hills.

by tf at April 16, 2018 12:46 PM

March 15, 2018

Emmanuele Bassi

pkg-config and paths

This is something of a frequently asked question, as it comes up every once in a while. The pkg-config documentation is fairly terse, and even pkgconf hasn’t improved on that.

The problem

Let’s assume you maintain a project that has a dependency using pkg-config.

Let’s also assume that the project you are depending on loads some files from a system path, and your project plans to install some files under that path.

The questions are:

  • how can the project you are depending on provide an appropriate way for you to discover where that path is
  • how can the project you maintain use that information

The answer to both questions is: by using variables in the pkg-config file. Sadly, there’s still some confusion as to how those variables work, so this is my attempt at clarifying the issue.

Defining variables in pkg-config files

The typical preamble stanza of a pkg-config file is something like this:

prefix=/some/prefix
libdir=${prefix}/lib
datadir=${prefix}/share
includedir=${prefix}/include

Each variable can reference other variables; for instance, in the example above, all the other directories are relative to the prefix variable.

Those variables can be extracted via pkg-config itself:

$ pkg-config --variable=includedir project-a
/some/prefix/include

As you can see, the --variable command line argument will automatically expand the ${prefix} token with the content of the prefix variable.

Of course, you can define any and all variables inside your own pkg-config file; for instance, this is the definition of the giomoduledir variable inside the gio-2.0 pkg-config file:

prefix=/usr
libdir=${prefix}/lib

…

giomoduledir=${libdir}/gio/modules

This way, the giomoduledir variable will be expanded to /usr/lib/gio/modules when asking for it.
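
For instance, querying it directly (assuming the /usr prefix shown above):

$ pkg-config --variable=giomoduledir gio-2.0
/usr/lib/gio/modules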

If you are defining a path inside your project’s pkg-config file, always make sure you’re defining it relative to other variables!

We’re going to see why this is important in the next section.

Using variables from pkg-config files

Now, this is where things get complicated.

As I said above, pkg-config will expand the variables using the definitions coming from the pkg-config file; so, in the example above, getting the giomoduledir will use the prefix provided by the gio-2.0 pkg-config file, which is the prefix into which GIO was installed. This is all well and good if you just want to know where GIO installed its own modules, in the same way you want to know where its headers are installed, or where the library is located.

What happens, though, if your own project needs to install GIO modules in a shared location? More importantly, what happens if you’re building your project in a separate prefix?

If you’re thinking: “I should install it into the same location as specified by the GIO pkg-config file”, think again. What happens if you are building against the system’s GIO library? The prefix into which it has been installed is only going to be accessible by the administrator user; or it could be on a read-only volume, managed by libostree, so sudo won’t save you.

Since you’re using a separate prefix, you really want to install the files provided by your project under the prefix used to configure your project. Doing that by hand, though, would require knowing all the possible paths used by your dependencies, hard-coding them into your own project, and ensuring that they never change.

This is clearly not great, and it places additional burdens on your role as a maintainer.

The correct solution is to tell pkg-config to expand variables using your own values:

$ pkg-config \
> --define-variable=prefix=/your/prefix \
> --variable=giomoduledir \
> gio-2.0
/your/prefix/lib/gio/modules

This lets you rely on the paths as defined by your dependencies, and does not attempt to install files in locations you don’t have access to.

Build systems

How does this work, in practice, when building your own software?

If you’re using Meson, you can use the get_pkgconfig_variable() method of the dependency object, making sure to replace variables:

gio_dep = dependency('gio-2.0')
giomoduledir = gio_dep.get_pkgconfig_variable(
  'giomoduledir',
  define_variable: [ 'libdir', get_option('libdir') ],
)

This is the equivalent of the --define-variable/--variable command line arguments.
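
The resulting path can then be used directly as the installation directory for your module; a minimal sketch, with a hypothetical module name:

gio_module = shared_module('my-gio-module',
  'my-gio-module.c',
  dependencies: gio_dep,
  install: true,
  install_dir: giomoduledir,
)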

If you are using Autotools, sadly, the PKG_CHECK_VAR m4 macro won’t be able to help you, because it does not allow you to expand variables. This means you’ll have to deal with it in the old fashioned way:

giomoduledir=`$PKG_CONFIG --define-variable=libdir=$libdir --variable=giomoduledir gio-2.0`

Which is annoying, and yet another reason why you should move off from Autotools and to Meson. 😃

Caveats

All of this, of course, works only if paths are expressed as locations relative to other variables. If that does not happen, you’re going to have a bad time. You’ll still get the variable as requested, but you won’t be able to make it relative to your prefix.

If you maintain a project with paths expressed as variables in your pkg-config file, check them now, and make them relative to existing variables, like prefix, libdir, or datadir.
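
For instance, for a hypothetical moduledir variable, instead of hard-coding the full path:

moduledir=/usr/lib/yourproject/modules

express it in terms of libdir:

moduledir=${libdir}/yourproject/modules

so that projects depending on yours can redefine prefix or libdir as shown above.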

If you’re using Meson to generate your pkg-config file, make sure that the paths are relative to other variables, and file bugs if they aren’t.

by ebassi at March 15, 2018 04:45 PM

March 06, 2018

Ross Burton

Rewriting Git Commit Messages

So this week I started submitting a seventy-odd-commit-long branch where every commit was machine generated (but hand reviewed) with the amazing commit message of "component: refresh patches". Whilst this was easy to automate, the message isn't acceptable to merge, and I was facing the prospect of copy/pasting the same commit message over and over during an interactive rebase. That did not sound like fun. I ended up writing a tiny tool to do this and thought I'd do my annual blog post about it, mainly so I can find it again when I need to do it again next year...

Wise readers will know that Git can rewrite all sorts of things in commits programmatically using git-filter-branch, and this has a --msg-filter argument which sounds like just what I need. But first a note: git-filter-branch can destroy your branches if you're not careful!

git filter-branch --msg-filter has a simple behaviour: give it a command to be executed by the shell, the old commit message is piped in via standard input, and whatever appears on standard output is the new commit message. Sounds simple but in a way it's too simple, as even the example in the documentation has a glaring problem.

Anyway, this should work. I have a commit message in a predictable format (component: refresh patches) and a text editor containing a longer message suitable for submission. I could write a bundle of shell/sed/awk to munge from one to the other but I decided to simply glue a few pieces of Python together instead:

import sys, re

# Compile the regular expression from the first file, and load the
# replacement template from the second.
input_re = re.compile(open(sys.argv[1]).read())
template = open(sys.argv[2]).read()

# The old commit message arrives on standard input.
original_message = sys.stdin.read()
match = input_re.match(original_message)
if match:
    # Expand the template using the named groups from the match.
    print(template.format(**match.groupdict()))
else:
    # Not a generated message, so pass it through unchanged.
    print(original_message)

Invoke this with two filenames: a regular expression to match on the input, and a template for the new commit message. If the regular expression matches then any named groups are extracted and passed to the template, which is output using the new-style format() operation. If it doesn't match then the input is simply passed through, to preserve commit messages that don't fit the pattern.

This is my input regular expression:

^(?P<recipe>.+): refresh patches

And this is my output template:

{recipe}: refresh patches

The patch tool will apply patches by default with "fuzz", which is where if the
hunk context isn't present but what is there is close enough, it will force the
patch in.

Whilst this is useful when there's just whitespace changes, when applied to
source it is possible for a patch applied with fuzz to produce broken code which
still compiles (see #10450).  This is obviously bad.

We'd like to eventually have do_patch() rejecting any fuzz on these grounds. For
that to be realistic the existing patches with fuzz need to be rebased and
reviewed.

Signed-off-by: Ross Burton <ross.burton@intel.com>
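
The script is easy to sanity-check before letting filter-branch rewrite anything; a hypothetical run, with the regular expression and the template saved as input and output respectively:

$ echo "quilt: refresh patches" | python3 rewriter.py input output

This should print the full commit message above, with {recipe} expanded to quilt.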

A quick run through filter-branch and I'm ready to send:

git filter-branch --msg-filter 'rewriter.py input output' origin/master...HEAD

by Ross Burton at March 06, 2018 05:00 PM

March 02, 2018

Emmanuele Bassi

Recipes hackfest

The Recipes application started as a celebration of GNOME’s community and history, and it’s grown to be a great showcase for what GNOME is about:

  • design guidelines and attention to detail
  • a software development platform for modern applications
  • new technologies, strongly integrated with the OS
  • people-centered development

Additionally, Recipes has become a place in which to iterate on design and technology for the rest of the GNOME applications.

Nevertheless, while design patterns, toolkit features, Flatpak, and portals are part of the development experience, without the content provided by the people using Recipes there would not be an application to begin with.

If we look at the work Endless has been doing on its own framework for content-driven applications, there’s a natural fit — which is why I was really happy to attend the Recipes hackfest in Yogyakarta this week.

Fried Javanese noodles make a healthy breakfast

In the Endless framework we take structured data — like a web page, or a PDF document, or a mix of video and text — and we construct “shards”, which embed the content, its metadata, and a Xapian database that can be used for querying the data. We take the shards and distribute them through Flatpak as a runtime extension for our applications, which means we can take advantage of Flatpak for shipping updates efficiently.

During the hackfest we talked about how to take advantage of the data model Endless applications use, as well as its distribution model; instead of taking tarballs with the recipe text, the images, and the metadata attached to each, we can create shards that can be mapped to a custom data model. Additionally, we can generate those shards locally when exporting the recipes created by new chefs, and easily re-integrate them with the shared recipe shards — with the possibility, in the future, to have a whole web application that lets you submit new recipes, and the maintainers review them without necessarily going through Matthias’s email. 😉

The data model discussion segued into how to display that data. The Endless framework has the concept of cards, which are context-aware data views; depending on context, they can have more or less details exposed to the user — and all those details are populated from the data model itself. Recipes has custom widgets that do a very similar job, so we talked about how to create a shared layer that can be reused both by Endless applications and by GNOME applications.

Sadly, I don’t remember the name of this soup, only that it had chicken hearts in it, and that Cosimo loved it

At the end of the hackfest we were able to have a proof of concept of Recipes loading the data from a custom shard, and using the Endless framework to display it; translating that into shareable code and libraries that can be used by other projects is the next step of the roadmap.

All of this, of course, will benefit more than just the Recipes application. For instance, we could have a Dictionary application that worked offline, used Wiktionary as a source, and allowed better queries than just substring matching; we could have applications like Photos and Documents reuse the same UI elements as Recipes for their collection views; Software and Recipes already share a similar “landing page” design (and widgets), which means that Software could also use the “card” UI elements.

There’s lots for everyone to do, but exciting times are ahead!

And after we’re done we can relax by the pool


I’d be remiss if I didn’t thank our hosts at the Amikom university.

Yogyakarta is a great city; I’ve never been in Indonesia before, and I’ve greatly enjoyed my time here. There’s lots to see, and I strongly recommend visiting. I’ve loved the food, and the people’s warmth.

I’d like to thank my employer, Endless, for letting me take some time to attend the hackfest; and the GNOME Foundation, for sponsoring my travel.

The travelling Wilber


Sponsored by the GNOME Foundation

by ebassi at March 02, 2018 12:50 AM

February 25, 2018

Tomas Frydrych

Coille Coire Chuilc

It’s been a long time since Linda and I climbed Beinn Dubhchraig. Just another couple of Munros bagged. Not a very memorable day: drizzle and nae views, leaving a lingering impression of a long trot through a bog punctuated by spindly pine trees, and no urge to return. One that persisted for a couple of decades. But today couldn’t be more different: the sky is blue, the air is crisp, the ground is frozen. And those spindly trees? They are no more.

Instead I find myself at the edge of a delightful Caledonian pine forest inviting me to step in. And so I do, walking along the east bank of Allt Gleann Auchreoch to the dilapidated bridge higher up the glen, then wandering about the woodland south of Allt Coire Dubhchraig, before following it up the hill. There are some magnificent pine specimens here, framing the views over to Ben Challuim and Beinn Dorrain. And higher up the pines are replaced by young birches that are rapidly continuing to self-seed, the purple hue of their twigs striking against the snow-covered ground.

Beinn Dubhchraig is in magnificent winter condition; there is much more snow than I expected, and all perfect firm névé. I enjoy the views: Beinn Dorrain, Ben Challuim, the Crianlarich hills, Ben Oss, Ben Lui. As I nip up the rather windy Ben Oss, Ben Lui looks particularly majestic — I imagine it will be very busy on a day like this.

On the way down I sit under a large pine for a bite to eat, enjoying the afternoon sunshine. A perfect day. It is rare for my days out to bring together the two places where I feel most at home, the hills and the woods. I usually have to choose the one over the other. It needn’t be this way, nor should it. Here in the midst of Coille Coire Chuilc I am reminded that, given will, a real change is possible in less than a lifetime. And just now I can smell it coming on the breeze.

by tf at February 25, 2018 11:08 AM

February 15, 2018

Tomas Frydrych

Pine Seeds

Over the twenty-something years since the National Trust for Scotland took over the Mar Lodge Estate, the upper Glen Lui (or Gleann Laoigh Bheag, as it is properly called) has become a real gem of a place. But today is not exactly a gem of a day. There might be fluffy fresh snow on the ground, but it's breezy, and visibility is limited indeed. Some might think it outright miserable!

Or, a natural black and white scene, you might say. In any case, the sort of a day nobody goes out for The Views. I am on my way down to the Bob Scott bothy for a lunch before heading back to civilisation. An end to three days in the hills. Carefully planned in rough outlines, then (even more) carefully improvised, to match the reality of the winter Cairngorms.

A brief promise of sunshine was blown away somewhere below the summit of Derry Cairngorm on Sunday morning, leaving just the wind and thick cloud. The map came out there and then, and pretty much stayed out since. Careful navigation over the summit and onto the bealach at the 1053m point, then down to Loch Etchachan, in the hope that the cliffs surrounding it would provide some shelter from the strong westerly for the night.

Down at the loch it is indeed much calmer, though you wouldn't know there is a loch down here under all that snow. Care is needed not to pitch inside a possible avalanche path, not just in view of what the conditions are like now, but of what they could be in the morning. And so I dig myself a nice rectangular platform, about a foot or so deep, nearly on the loch shore. Not as sheltered as it might have been, but safe.

I am done just as the light starts fading. A coffee. While the snow is melting, a couple of messages exchanged with Linda using my InReach, then dinner. One of Ian Rankin's (audio)books for company by candlelight, followed by undisturbed sleep.

Monday morning starts with porridge, then digging myself out of the tent, glad to have kept the shovel inside. I am surprised by the amount of snow drift: my neat rectangular platform all but gone, and the kit I left in its corner buried under a good two feet of snow. A scarily compacted, foot-thick cap of fresh windslab over it all.

Beinn Mheadhoin teases me with some lovely pink tones, but barely long enough to get the camera out -- time to get moving.

The (careful) plan was to camp here for two nights, but it is obvious that if I leave the tent here I will have a hard time finding it later, and, more importantly, this place is too exposed for the 70mph southerly forecast for tonight. And so I pack my stuff, all 24kg of it (minus some food, plus some snow), put the snowshoes on, and set off into the clag for Ben Macdui, selecting my route carefully, mindful of the windslab I saw down there.

The wind picks up in no time, and while this is familiar ground, I need a map and compass to keep me on track. I am comfortable being here; the conditions are challenging, but, I dare say, within my comfort zone. And yet on a day like this, the plateau is one scary place (as it should be). Navigating here is hard, errors easy to make, opportunities to spot and correct them few and far between. Escape routes are limited even in the summer, for cliffs abound in all directions, and in the present winter conditions some of them, if not most, are unsafe.

The spindrift is heading along the surface directly against me, flowing around my boots like a fast river. It is making me feel dizzy, even seasick, yet my eyes are irresistibly drawn to it. A new experience. Keep looking forward, above it, rather than at it; that does the trick.

The ruin, then after a while the summit. Too windy to hang around. I take a bearing for the 'corner' of the Sron Riach ridge, pace and follow it religiously, using Allt Clach nan Taillear as a tick off point. A couple of jets flying repeatedly overhead, or perhaps just the wind swirling around in my hood; I can't tell. I reach the rocky corner bang on, pleased with myself.

As I am taking my next bearing from the map, there is a brief break in the cloud offering a glimpse of the cornices lining the ridge -- they are some of the biggest cornices I have ever seen, metres of overhanging snow. Back into the clag. I back off a good 30m from the edge before daring to follow my bearing, and even then nervously (the lack of photos is my witness). Visibility is 5-10m; I make a point of always keeping some visible rocks peeking out of the snow to my left.

I finally emerge from the whiteness at around the 1100m contour line with a sigh of relief and the sight of the Devil's Point, the first real 'view' of the day. Even better, I can also see that my preferred option for today, descending the line of Caochan na Cothaiche, is viable, for its eastern side is fully scoured and poses no avalanche risk. In contrast, lower down the western side of the narrow gully has a huge build-up of snow on it, and cornices, with some fresh debris below.

The floor of the glen is not entirely calm, but it will do. I dig another platform, pitch the tent. It's early, but this spot is as good as it will get. From here there is a direct line of sight under the clouds down Glen Lui onto the Glen Shee Munros -- it's sunny over there, and I feast my eyes on the vista, nursing a cup of coffee. Dinner (not much gas left), a message to Linda, then time for some John le Carré.

The wind arrives at 9.30pm, as the forecast promised. The usual moment of anxiety -- will the pegs stay in? Should I go out and check? I don't. I dug right down to the frozen turf and double-pegged all the lines; they are going nowhere, or rather, I can't do any better anyway (I double-peg as a matter of course, 20cm Y paracord extensions permanently on all the guylines). I briefly toy with sticking the anemometer out of the tent, but can't be bothered looking for it; I guess somewhere around the 40+mph mark. It's over as suddenly as it started, not long past midnight (again, just as forecast), and I sleep soundly after that.

Tuesday morning. I give the tent a good shake. The porch is covered by an inch of the finest powder I have ever seen, and I curse myself for not tidying up more last night, rummaging through it looking for my spork. At least I covered the tops of my boots with bags. I drain the gas to the very last drop (thank God for upside-down canister stoves!); there is, just, enough for my porridge and a litre of warm water. Outside it's windy and snowing.

As I pack, the snow is depositing on the tent faster than I am sweeping it away, and after a couple of minutes I give up and just roll it in. Snowshoes on and into the blizzard. Goggles would have been useful, but they are too wet inside to be any use, and no amount of wiping is helping. At least there is no navigating to be done, just follow the stream down the narrow glen.

And so here I am on the nice path in Gleann Laoigh Bheag. It stopped snowing a while ago, and there is but a breeze, four or five inches of fluffy snow covering everything. The pines are looking very Christmassy, in an It's a Wonderful Life sort of B&W way. Pristine scenery, no footprints, fresh or old.

My eye catches the sight of a small brown speck on the undisturbed snow, then another. I bend a bit to take a closer look. A pine seed. They are all around me; they have come from heaven down to earth gliding on their little wings. In the midst of this bleak, inhospitable day, life is being, not born, but hewn out of the cones by the gale; life against the odds. A promise of a brighter, greener future, one hearkening back to the days before the axe and saw laid this landscape barren.

The bothy is warm. A bit of food, a bit of banter. Then I step outside ... into a different world. The cloud has broken, the sky is blue, the sunlit landscape postcard perfect -- The Views. But the views, they come and go. The pine seeds, I expect, I will see again in the years to come. From now on, every time I see a seedling in Gleann Laoigh Bheag, I'll be wondering, is it you?

But 'nough idle musings. The most pressing existential question of today is this: will the Glen Shee snow gates be open? For I am back in the 'real' world.

by tf at February 15, 2018 09:48 AM

February 05, 2018

Tomas Frydrych

A Lesson from the Wee Hills

Days like these don’t come around that often. After a couple of brief snow flurries the sun banished the cloud, and now the early morning light glitters on the pristine slopes of Beinn Challuim. It is nearly exactly twenty years since I was last up here, in very different conditions; a memorable day, though not for the best of reasons.

When I first arrived in Scotland I was by no means new to the outdoors or the hills. I am fortunate enough to have spent much of my free time in the open since early childhood, exploring the woods, hiking, wild camping, ski touring. From my mid teens treks in the Tatras, and farther afield, became a regular feature of the summer holidays — half a dozen friends of similar age, minimal equipment, high level camps mostly just under the stars.

Over those years there had been a few #microepics, including a couple of close shaves, and by the time I landed in Scotland as a postgraduate student in the mid ‘90s I had gained a healthy respect for the mountains, summer and winter alike. But compared to even the smaller continental ranges Scotland’s ‘wee hills’ — their summits barely reaching the altitudes of Alpine valleys — seemed innocuous and benign.

It didn’t take long to get disabused of that idea. Looking back, some of the incidents we now laugh at. Like when, having ignored Heather the Weather’s warning of 70mph winds, I left Linda a few hundred yards short of the summit of Meall Ghaordaidh, weighed down by a large stone, while I crawled on all fours to touch the summit cairn (all I can say is, we were young and weekends were precious). But even after all that time, the Beinn Challuim day is still not that funny.

As a research student I discovered that clearing my head with a midweek day in the hills much improved my overall productivity, and so Wednesday outings became a regular part of my studies. Even nowadays the hills tend to be fairly quiet midweek, but back then I never ever met anyone. Indeed, tales were circulating of injured midweek hill walkers surviving a couple of days on biscuits until someone turned up at the weekend.

This might seem far fetched, but in those days mobile phones were almost a novelty, cellular signal virtually nonexistent outside of the Central Belt, and consumer GPS units still a few years away — those who got lost in the hills were on their own until someone reported them missing; self reliance was, necessarily, a part of essential hillcraft.

As I expect you have guessed, this particular Wednesday in late February I was heading up Beinn Challuim. I have never been much of a fan of there-and-back outings, and so decided to leave the car at the Auchtertyre farm, and do a horseshoe starting with Beinn Chaorach.

It was not a very nice day, with an unpleasant westerly, sleeting heavily. Having experienced similar conditions a few weeks earlier in the Drumochter hills, I had invested in a pair of goggles (not a negligible expense), which on this day didn’t come off my face (sadly, the sleet was so saturated that the glue between the double lenses failed in the course of the day).

Visibility gradually deteriorated and by the time I reached Beinn Challuim, I was in a complete whiteout. I wasn’t put out by any of it. I had an excellent Berghaus GoreTex jacket that kept me dry (which I was just about able to afford thanks to James Leckie of Falkirk) and carried two big flasks of hot drink and plenty of food — really, I was in my element, relishing the adversity. But by this point I was also beginning to feel quite tired; it was turning out to be a longer day than I had planned.

Fortunately all that was left was the descent back to the car. This should have been quite straightforward, and such was my confidence in my ability to navigate that I didn’t feel the need to get the compass out. I was sure the map alone was going to be enough to follow the ‘obvious’ broad ridge. And indeed, the ridge was easy to follow, but somehow progress was slow.

Too slow. I emerged from the cloud eventually but alas, things were not as they should have been. I should have been near the Auchtertyre farm, or at worst near Kirkton, and certainly near a railway track, but I saw no houses and no track. I ended up somewhere in the Lochan a’Craoi area above Inverhaggernie — to this day I am unsure of the exact location — and I was in for a long walk back, with not much of the day left.

I was spared some of it by a couple of ghillies on a quad bike, two hinds on a small trailer behind it. They offered me a lift to Inverhaggernie, ‘if you don’t mind sitting on the deer’. I didn’t mind in the slightest.

That day was the end of the ‘wee hills’ mentality, for I knew I had got lucky. The careless navigation mistake per se was not super serious, for at least I ended up on the right side of the hill, but I understood that I could have easily made a similar one earlier in the day and ended up further north in the Forest of Mamlorn — that would have been a whole different proposition. I started taking the weather a lot more seriously from there on, and I also updated my personal Freeserve page about the Scottish hills with a dire warning to the foreign visitor about the deceptiveness of their size, and the nastiness of their inclement climate.

Today Beinn Challuim summit offers views for miles in any direction, and there is no wind, not even a breeze. There are three of us lingering up here, none feeling like leaving. Eventually I descend the W-NW spur toward Bealach Ghlas-Leathaid — that wasn’t my original plan, but twenty years on I still don’t like there-and-back days. It proves to be a good choice. The lower part of Gleann a’Chlachain is a kaleidoscope of colours, their tones striking in the low afternoon light. I stroll leisurely back to Kirkton basking in the sun. There is no hurry, and like I said, days like these don’t come around that often.

by tf at February 05, 2018 09:31 PM

February 03, 2018

Tomas Frydrych

Mountain Star

It was love at first sight. Those smooth curves, precision crafted from a solid block of stainless steel, the needle-sharp point, the smooth black, fully rubberised, shaft on which big red letters proudly declared:

Stubai — Made in Austria

She (for to my teenage mind that ice axe was definitely a she) hung proudly in the window of the small climbing equipment shop. It wasn’t that I needed an ice axe — I wasn’t a mountaineer. But there was something deeply symbolic about it that made me pine for it.

It wasn’t just the smooth curve that drew the eye, there wasn’t that much else to look at in the window of the state-owned shop. A few steel carabiners, a Czech-made rope — in Czechoslovakia of my teenage years climbing gear wasn’t something you bought, it was something you made.

And so my first clumsy attempts to learn how to self-arrest were with a slater’s hammer, belonging to a friend and re-purposed by a blacksmith, an acquaintance of an acquaintance. Another friend, a machinist, who unlike me was a proper climber, made a pair of technical axes at his work, based on some pictures from a foreign magazine he had managed to get hold of. He showed me with great pride, excited about the inverse picks.

(I learnt later that the pair failed on their first trip to the Tatras; my friend had over-tempered the steel and the tips snapped off; such is the nature of progress. But not in my wildest dreams would I have imagined that I would one day reminisce on this in Scotland, where the technical ice axe was born, and I expect Hamish MacInnes suffered some similar teething problems as my friend did trying to follow in his footsteps.)

But back to that Stubai. There was yet another reason why that axe stood out. The price tag of 700 Kčs — three weeks of moderately decent wages — put it well out of my reach. And not just mine; it hung there for years, an object of unrequited lust, while in real life tools were improvised and borrowed.

When a decade later I walked into The New Heights in Falkirk to get the first ice axe I could call my own and saw a Stubai Mountain Star hanging there, that was it: there was no other choice I could make. Not the same axe, not as refined, more mass-produced. Not the cheapest option either. But the pedigree was unmistakable; I wasn’t buying a tool, I was buying into a dream.

It’s a fine winter day today, sandwiched between two ugly fronts, and so I am making the most of it. The views from An Caisteal are stunning, with just enough cloud to create an ambience on the ridges. As I lean on the ice axe, all these memories flood in.

Over the years there have been others. Some leaner, some meaner, some definitely prettier. Some long gone, some still around. The Mountain Star is among the latter, after twenty-something years a trusted companion. It does everything I have ever wanted from a walking axe, and does so perfectly. The chromoly requires no care except for the occasional sharpening, the length is perfect for offering support on easy ground, and I like the reassuring weight, the feel of a tool made for life rather than a season.

I eat my lunch on the summit and contemplate how to return. The Stob Glas ridge is irresistible. It’s not very cold, and as I approach Bealach na Ban Leacainn my crampons are starting to ball up; time to take them off once I reach a safe place. In the meantime, a practised, near effortless flick of the wrist to tap them with my axe, and they are clear again. None of my other axes can do that; they are either too light, or too short, or both — the Mountain Star is going to stay for a while yet, I think. And tonight I shall raise a glass to whoever designed it all those years back. Prost!

by tf at February 03, 2018 11:55 AM

January 21, 2018

Tomas Frydrych

Discovering Snowshoes

I have thought about getting a pair of snowshoes a few times over the years, but never did. The copious quantities of snow at the tail end of last year finally gave me the needed nudge. Of course, as invariably happens, all that early snow summarily thawed away on the very day the snowshoes arrived, and I didn't get a chance to play with them until this week.

Having never snowshoed before, I thought an easy potter around the Ochils might provide just the right sort of introduction, and it did (in fact I was having so much fun, I pottered around for over six hours till the last light). Perhaps it is the fact that I Telemark, and so am used to things dangling underfoot, but I found walking in snowshoes to be an entirely natural, zero-learning-curve sort of a thing.

I was pleasantly surprised by the huge reduction in effort snowshoes provide. It does not come so much from not sinking so deep, as I imagined it would, but rather from the way in which the snowshoes glide. Even when sinking half a metre or so, you don't need to lift your foot out; rather, as the foot starts moving forward, the snowshoe floats up to the surface. I'd go as far as to say that in deep snow this requires a lot less effort than skinning would, particularly with today's wide skis.

But where the snowshoes really come into their own is coming downhill. In nice deep soft snow I am able to move at a pace that is considerably faster than I would walk down in the summer, indeed not far off my running pace (though, admittedly, as a runner I am a slow descender). By the same token, I now understand why snowshoers feature so prominently in avalanche victim statistics -- it's really easy to get carried away (not unlike skiing, but skiers have had avalanche awareness drilled into them for decades, and it is paying off).

When I was shopping for the snowshoes, I had a set of fairly specific requirements:

  • Mountaineering-type, so they could cope with steeper terrain (means to an end),
  • Not too heavy, so as not to be too much of a pain to carry when things get more interesting,
  • Suitable for the Scottish conditions, with our variable-depth windblown snow cover, which means making contact with the rock beneath it from time to time (i.e., steel, rather than plastic / aluminium, and not a design where the membrane is attached by wrapping it around the frame).

In the end, in spite of the eye-watering cost, I settled on the MSR Lightning Ascent, which ticks all the boxes: the flat steel frame promises all-around traction, and the membrane is attached inside it, rather than wrapped. Also, they get good reviews.

I was so encouraged by my wee Ochils potter earlier in the week that yesterday I took my snowshoes for their first proper outing, up and over Beinn Each onto Stuc a'Chroin and back. Ideal conditions, snow in places waist deep, and excellent fun. But also an opportunity to test the snowshoes in some more challenging terrain, including patches of steeper névé. All in all just over eight hours of true winter wonderland, of which I wore the snowshoes for at least seven (they only came off for the short steep descent from Beinn Each, and the final 50m of the Stuc).

That they work well in soft snow I already knew, but I was impressed with the traction provided by the frame and the crampon when going up firm névé. The main limiting factor here is that beyond a certain gradient the toe of the boot, rather than the rotating crampon, starts making contact with the ground, at which point the traction is compromised. The angle at which this happens is quite steep, steep enough to be stabbing the slope with the pick of an ice axe, rather than the spike, once the real crampons come on.

I got caught out this way on a short section of the Stuc. The main problem was not so much that I wasn't wearing real crampons, but that I was still using poles, while on a gradient that really called for an ice axe. Awkward shuffling off to a gentler slope to get the proper tools out followed (obviously, this is not a fault of the snowshoes, but a simple error of judgement).

Similarly, traction descending on firm névé is excellent, and broadly speaking I found that I can descend a gradient comparable to what I can sensibly ascend. In deep snow, however, the snowshoes become problematic on very steep ground: they have a tendency to run away more easily than just boots, and you can't really bum slide very well with them on. (And again, you will quite likely find yourself with poles rather than an axe.)

The main limitation of the MSR Lightning Ascent is poor lateral rigidity; this is a feature of the frame design (though the bindings don't help, more on that below), and it makes traversing a firm slope very awkward. I quickly realised that for short sections it is much more efficient to sidestep such ground, facing into the slope, but best of all is to pick a different line where possible, or to put the crampons on.

The bindings I am not hugely impressed with. They are designed to fit a variety of boots (I expect I could make them fit the Sorels I use to clear the drive), but really the best thing I can say about them is that they are easy to get out of fast. They are hard to tension right when putting them on, and two or three stops were needed each time to make adjustments. This does not improve the lateral stiffness either -- I am thinking that for this sort of a technical snowshoe it would make sense to have crampon-style step-in bindings.

But all in all, would I buy them again? Definitely! Should have done so a long time ago.

by tf at January 21, 2018 03:52 PM

January 16, 2018

Tomas Frydrych

Pinto Bean Soup

Pinto Bean Soup

My love of lentils and legumes of all sorts goes as far back as I can remember. In recent years, the pinto has become my firm favourite among the beans, for it's a versatile legume of gentle flavour that is easy to work with. Its burrito use aside, the pinto is an excellent foundation for a bean salad, great in chili, and once you taste it baked with tomatoes, you will never want to eat Heinz again. And then there is the soup.

While I enjoy cooking, I don't always have the time for elaborate and time-consuming recipes. Fortunately good homemade food doesn't necessarily mean hours over the stove, and the pinto soup is an example of that -- it takes me under half an hour to make. The ingredients are simple, only the pinto beans, onion and chillies (fresh or crushed) are required, plus some stock; if you have carrots around, then they make a good addition, as does a bit of garlic, but you will get an excellent soup with just onion and chillies.

Being of the 'cooking is an art, not science' school of thought, I consider quantities mere minutiae dictated by taste. But as a rough guideline, 400g of dry beans will make around three litres of the soup. For that I use two medium onions, and maybe a couple of larger carrots; chillies to personal taste.

Soak the beans overnight (you can get away with less, but it impacts on the cooking time), then cook till soft -- using a pressure cooker hugely speeds this up. You will have to work out the exact timing for your pressure cooker yourself, but in ours, at 0.4 bar, pre-soaked pinto beans take 8 min. Now, the secret to a good pinto bean soup is not to drain the cooking liquid, i.e., you should cook the beans in about as much water as you want in the final soup.

While the beans are cooking, chop the onion, not too fine, and fry it off with the chillies until nice and soft (I use rapeseed oil; I find its gentle flavour works well with the subtle flavour of the pinto). Add any garlic to the onion near the end.

Once the beans are ready, mix in the onions and any other ingredients, then add stock to taste (I quite like the Knorr stock pots, usually use one vegetable and one herb pot, but you might prefer something more wholesome and homemade instead). Bring to the boil and cook (not pressure-cook!) for about 5 min, or if using carrots, until they are soft.

That's it. As with many foods, the flavour will develop if it sits for a time rather than being served immediately. It will keep in the fridge for a couple of days, or it can be easily sterilised in a kilner-type jar if you want to keep it longer, or there is not enough space in the fridge.

by tf at January 16, 2018 10:56 AM

January 13, 2018

Tomas Frydrych

And Time to Back Off

And Time to Back Off

The forecast is not great -- high winds, increasing in the course of the day, temperature likely above zero regardless of altitude, and precipitation arriving by early afternoon. The sort of a day when it's not worth carrying a tripod, or driving too far, yet at the same time not bad enough to just stay at home all weekend and brood (as I know I would).

So here I am at Inverlochlarig; before first light, in the hope of beating the worst of the weather. In recent years this has become my preferred way into the 'Crianlarich' hills. I like tackling these seven Munros in a single continuous run -- just about the only enjoyable route I have been able to come up with near me that has a climbing to distance ratio comparable to some of the bigger rounds. But that would be in the summer, and on a cracker of a day.

Today the plan is less ambitious: head up Beinn Tulaichean, and then, depending on conditions and time, onto Cruach Ardrain, and perhaps Stob Garbh, one way or another returning via Inverlochlarig Glen. I have not been up Beinn Tulaichean from this side for some two decades, and my memories from the last time are rather vague, so this outing has a degree of (welcome) novelty.

In view of the SE wind I decide to give the usual walkers' path a miss, and instead head up the western flank of the hill, in the shelter of the SW spur. This turns out to be a good choice, with only light wind. I eventually join the main ridgeline somewhere around the 600m contour line. Here my pocket anemometer registers just over 40mph (I carry one having realised I tend to overestimate wind speeds and hence underestimate forecasts). And spindrift. Time for some extra layers and the goggles.

Visibility is dropping rapidly with height, and by the time I reach the flatter area around the 750m contour line, it's down to ten yards. The compass comes out; from here on I am moving on a bearing as visibility continues to drop further. The terrain is quite complex here, lots of large boulders, with big gaps between them, now covered -- but not necessarily filled -- with snow. I narrowly avoid falling into a large hole that appears out of nowhere right in front of me just before the gradient steepens again.

There are two sets of fairly recent footprints here -- mine was the first car in the car park, so I am guessing from yesterday. I follow them cautiously, while keeping an eye on the needle, one can't be too careful; I lose them somewhere along the way.

I have reached a point where the ground starts descending again. I know I am near the summit now, but in view of the complex terrain I need to get an accurate location fix. An altimeter would have been useful in these conditions, but I forgot to reset it earlier (a rare, and annoying, omission). I get the phone out; I prefer the map and compass, it sharpens the mind, but I am not a Luddite. I am 120m from the summit cairn, just a bit off the little col below it.

I get a bearing, reach the col. The light is so flat now that even in the goggles it is impossible to adequately judge the gradient under my feet. There is a step down, but I can't tell how big. I get on my knees; only with my face this close to the ground can I see it's not too steep, and it shouldn't be more than a couple of feet down. I descend gingerly.

On the other side the ground starts rising steeply -- the final 30m of ascent to the summit cairn. As I start climbing up I catch sight of what I think at first to be a small cornice above; in fact it's the fault line from an avalanche -- I am taken aback, the ground under my feet does not feel like avalanche debris, but for a short while I can see the fault clearly enough, including the poorly bonded layers within it. I realise that what I thought was an old line of obscured footprints a couple of metres to the left of me is a track made by some more recent debris coming from above.

I retreat back over the dip in the col to a safe place to assess the situation. The limited visibility is debilitating: though I am sure I am not more than twenty yards from it, the fault line is just a fuzzy shadow, if that, and I have no idea what the ground above it is like. The part of the slope I was on is unlikely to avalanche again, but from what I have seen so far, it is quite possible that the ground above the fault line could release if I load it; I can't take the risk.

I don't mind not reaching the summit, but I hate giving up. I study the map. It seems it might be possible to contour a hundred or so metres to the east and gain the summit from there. I take a bearing and start pacing the 100m, and voila, here are the two sets of footprints I saw earlier, heading the same way. But after only 50m or so they disappear under what this time is unmistakably avalanche debris; the whole eastern aspect of the summit is covered by it, as far down as I can see, while above me the same fault line continues beyond the limit of visibility.

I decide to pace the entire 100m. From here I can see that the avalanche is delimited by a rocky rib, but the rib seems too steep to climb. I retrace my steps back to my safe spot. It's only now I notice that, inexplicably, there is almost no wind at this altitude. I must be in the lee of the Stob Binnein ridge, which also explains the heavy snow deposits on the ground above me.

On a windy day like today, one must not waste an opportunity like this. The flask comes out, I eat my piece; I am quite content now. Then a back bearing -- while I should be able to follow my footprints back the way I came, you never know.

I descend the usual tourist route, mostly following the two pairs of footprints I saw earlier. I can see the pair were conscious of the avalanche risk, taking a sensible line; indeed, a bit lower down they dug a snow pit on their way up.

I reach the snow line, with views of Loch Doine and Loch Voil. Time to take some pictures, and shed some layers; I am overheating. The jacket goes in the bag ... and the rain starts almost immediately. But who cares? I am glad of yet another good day in the hills.

by tf at January 13, 2018 09:08 PM

December 31, 2017

Tomas Frydrych

If Running were Everything ...

If Running were Everything ...

As a lad I used to spend Hogmanay with my friends at some remote and basic cabin, far away from the noise and clutter of the city. There were two customs we invariably welcomed the New Year in with. We chucked one of our mates into the nearest pond to mark his birthday (which meant cutting a hole through the ice the evening before). And then we sat down and each wrote a letter to ourselves, reflecting on the year just gone by, hoping for the future, one of the more responsible lads charged with keeping the gradually thickening envelopes from Hogmanay to Hogmanay.

I still have that old envelope full of my teenage dreams somewhere, though it’s been many years since I’ve added a page; different times, different place. Yet I was reminded of it yesterday reflecting on 2017, recalling with unexpected clarity how, reading the previous year’s letter each Hogmanay, I was struck by how differently the year had panned out, indeed, how often those very aspirations were swept away by the flow of time.

If running were everything, and running stats something to worry about, with a mere 1,353km run and just 47,100m ascended, this would have been a decidedly poor year. But running is not everything, and I couldn’t care less about stats.

It kicked off pretty well, with a late February trip to the remote Strathfarrar hills, providing minimal support to John Fleetwood on his Strathfarrar Watershed challenge. It was the first bigger outing I had been able to do since October of the year before, and one which exceeded expectations—well worth the Achilles injury I picked up along the way, even if it kept me out of the hills for the next couple of months.

As always, our two week holiday in Assynt didn’t disappoint. The highlights included an extended variation on the Coigach Horseshoe, a run from Inchnadamph to Kylesku over the Stack of Glencoul (something I had wanted to do for years, but never got to), and what ultimately turned out to be my best, most memorable, day in the hills this year, a run taking in the south ridge of Ben More Assynt. That too came at a cost, another foot injury, one that, unfortunately, has plagued me for the rest of the year.

June brought the West Highland Way Race, on which Linda and I were crewing for our friend David, and while the whole trail running / ultra scene is not my kind of thing, this was a truly special experience, and all in all possibly the most memorable weekend of our year.

In July Linda and I spent an excellent weekend fast-packing in the Cairngorms, and I also managed to squeeze in the Eastern Mamores and Grey Corries that eluded me last year, plus a couple of fun days on the south side of Glen Etive. But by the end of July I could no longer ignore the nasty plantar fasciitis I picked up in Assynt. By mid September the foot seemed back to normal, but it only lasted a couple of trail runs while on a visit to Portland, OR, and I haven’t run since.

I admit, over the last five months I have really missed running, not least because of the inescapable loss of fitness and the sniggering bathroom scale. There really is nothing like it, the simplicity, the lack of faffing, the fact I can run seven days a week from my front door if I want to.

Yet, at the same time, that gap created new opportunities. I have been spending more time in the woods, with no objectives, just binoculars and/or a camera. In many ways this has been very liberating, bringing back memories, and reminding me how much I miss proper forests in Scotland.

Then there have been numerous wee camping trips. I much prefer these to just single day hikes. I like the peace and quiet of the night in the hills that you get even in the middle of a raging storm, the uninterrupted time to think, to listen to audio books (having reached an age where reading glasses have become a necessity, I avoid reading in the tent). The early mornings, the first light. (But also, during single days out I always find myself wishing I was running, knowing that most of the time I could travel farther, along a more interesting route.)

And then, of course, after some nine months of planning, last May we launched runslessepic.scot, offering bespoke guiding services, navigation courses, as well as a rudimentary Hillcraft for Runners course. We are planning some guided hillrunning weekends in the summer, watch this space ...

So yes, that was my year. 2018? All I know is, it starts tomorrow, and I am going for a run first thing!

by tf at December 31, 2017 10:02 AM

December 23, 2017

Tomas Frydrych

The Crew that Slept in

The Crew that Slept in

The West Highland Way Race, with its 30+ year history, can only be described as an iconic classic. So when earlier this year our friend David got a place, Linda and I enthusiastically volunteered to join Gita (his partner) and McIver (their collie) to do the crewing. Little did we know what we were letting ourselves in for ...

For those who do not know, the West Highland Way is Scotland's premier long distance walking route that goes from Milngavie near Glasgow to Fort William. It is some 96 miles long, and involves nearly 15,000 feet of vertical ascent. Each year many thousands of people walk it, typically taking around a week to finish. The competitors in the Race, run since 1988, must complete the route in no more than 35 hours, and for that they receive a coveted commemorative crystal goblet.

What sets the Race apart from most other running events is that the prize giving ceremony only takes place after all runners finish, so that all the runners, and crews, can be present; this makes for a very special occasion with a unique, hard to describe, atmosphere. But I am jumping ahead here.

Let's rewind to Friday evening, 23 June 2017. Linda and I arrive in Milngavie about an hour before the 1am start. We made no special arrangements to meet David and Gita here, which immediately shows our lack of grasp of the scale of the event -- there must be a thousand or so folk milling around the railway station! We wander about for a while, and make a couple of visits to the registration point, but there is no sign of our friends.

Having more or less reconciled ourselves to not finding David, we bump into him by sheer chance just before the briefing. He seems in good spirits. Gita has already left to get some sleep, and we wander off to High Street, leaving David to his own thoughts.

There is a visible Polis presence, for whom I expect tonight makes a change from the typical Friday night in Milngavie. I am hoping to get some pictures of David as they set off, but, of course, I fail to spot him.

Then off to Balmaha for a little sleep. It is only at this point, as we make steady progress in a column of hundreds of vehicles, that I begin to appreciate the importance of the 1am start. Our arrangement is to get together with Gita at 3am, so I get up about that time to go to the loo -- to my dismay the visitor centre and its toilets are closed, my already low opinion of the way the Loch Lomond and Trossachs National Park is run sinking even more. In contrast, the Oak Inn has opened specially at 2am, but with all the good will in the world its toilet simply can't cope.

Linda calls Gita and we are told to look for the annoying orange flashing lights. It's a recovery van, with three laddies trying to fix Gita's headlights, which both blew on her drive here. This is not great news. The laddies are nice enough, but I am sceptical of success when one of them confides in me that the Kangoo uses 'strange giant bulbs' they've never seen before (referring to an H4!), and which, obviously, they don't have with them. At this point the most important thing is to shoo them away, because David should be arriving shortly, and there is nothing to be gained by him knowing about any of this.

He arrives bang on time, on good form, has some food and is off again. Gita stays behind waiting for daylight, while Linda and I set off in hope of finding H4 bulbs somewhere at 5am on a Saturday morning; we succeed eventually at Dumbarton Euro Garages, after no luck in the, rather fortified, BP garage in Alexandria.

The next crew stop is Ben Glas farm. Here only one vehicle per crew is allowed, so we regroup first, make ourselves some cooked breakfast among the midges, change Gita's bulbs ... lots of time to kill, so a visit to the Falls of Falloch, deserted at this early hour.

At Ben Glas we don't have long to wait as David arrives at the check point slightly ahead of time, but convinced he is going too slow (we are aiming for a sub 24h finish).

By the time we arrive at Tyndrum the lack of sleep is beginning to catch up with us. We are operating on our own time, where everything is measured from a zero at Milngavie to (hopefully) just under 24 in Fort William. We have completely lost any sense of how that might relate to 'normal' time. In this private timezone it is the middle of an afternoon, and it comes as a bit of a shock that we can't get three fish suppers from the Real Food Cafe, because they only put on the fryers after breakfast! Fortunately it's not a long wait till 11am, and, our fish suppers in Gita's car, we are off to the Auchtertyre check point.

We don't have long to wait. David arrives on schedule, but the effort is beginning to show. Some food, change of clothes, and he is off. For some reason, I decide that since the stove is out I might just as well make a flask of coffee and soup for the next stop -- I don't know why, with hindsight this does not make that much sense, but by now none of us are operating at full mental capacity, so I am faffing about for a bit with the food before we head on.

Next stop Bridge of Orchy. By the time we get there we are all properly knackered. The girls decide to get some sleep, but I don't sleep well in daylight and tend to wake up with a nasty headache, so I go for a walk instead. It starts raining almost immediately -- the 'weather' we knew was to come for the second half of the race is nearly with us.

A 45min walk does my brain good, but also stirs my bowels, so a quick trip to the hotel is due. My conscience doesn't allow me to just use the facilities, so I sit at the bar for a bit nursing a pint of lemonade, before making good on why I really came here (I am fairly certain I fell asleep in the cubicle, for I do not think I took that long, but by the time I step out there is a long queue, and everyone is giving me the evil eye).

Outside the sun is back out, which is good. As I am about to turn down the road toward the bridge, I catch a glimpse of a runner that moves a lot like David. Nah, the clothes are wrong; except then I vaguely remember him changing at Auchtertyre ... sh!t, it's David right enough, a long, very long, time ahead of our schedule.

He is glad to see me, thinking I have been waiting for him here on the corner! Should I tell him??? I excuse myself and sprint down the hill where both girls are still soundly asleep. There are some muffled words from inside the cars, which I can't hear clearly, but can venture a guess at, then a lot of commotion. At the same time, there are car shenanigans taking place; parking is very tight here and with our tailgate open the other crew can't open theirs or something. A lot of our stuff falls out onto the road in the process. David does not stop long, and the only reason this pit-stop is not a disaster is that the coffee and soup are already made from Auchtertyre!

By the time we get to the Glencoe ski centre the weather has arrived in earnest: it's cold, windy and pissing down. My head feels like one giant hangover; I try to sleep for a bit, but it's not helping and neither coffee nor sugar are making any difference. Time to stop feeling sorry for myself; the way the weather is just now, I think the organisers might well insist that the runners are accompanied from now on, so I go to get changed.

But there is no sign of David, and we are all getting rather nervous. He arrives some twenty minutes later than we expected him to, visibly exhausted, soaked to the skin and very cold. He is a sorry sight, and all three of us are thinking this is it, but nobody wants to broach the subject.

Eventually, in a roundabout way, I ask 'do you want me to come along?', fully expecting him to say he is calling it, but instead he simply says 'yes'. There are lots of guts in those three letters, and this, ultimately, will become the moment that in the following weeks and months we will keep returning to.

And so we are off, walking, rather slowly, down the road. By the time we get to Kings House my headache is gone, and I am operating quite normally again (nothing like a bit of exercise!), keeping an eye on the pace, doing the math. I am aware I am talking too much, but conditions are so crap I feel I need to, so neither of us has time to think about that.

Up on the high ground above Kinlochleven it's very windy and our feet are in an inch or two of freezing water more or less constantly. We are moving slower than we need to be, and I am dreading the prospect of getting changed in this weather in a car park. But we pick up the pace a bit on the descent, even overtaking a few people who overtook us earlier on.

Just as we reach the village the sun comes out briefly, blowing some of the bleakness away. And to my great relief Linda and Gita have managed to find some space inside the sports centre where the check point is. We don't have time to hang about here; the last two legs were both slower than the 24h pace, claiming back the buffer David built up to Bridge of Orchy. So just getting changed, a bite to eat, hot tea, an official kit check (from here on a support runner is mandatory).

We manage a good pace on the climb out, but less so once the route starts descending the other side; I am reminded of the old fellrunner's wisdom, it's not the climbs that get you. The ground here is rough, and after 80 miles David's feet are hurting.

I am not much company, it takes all my effort to concentrate on setting the pace. At times I feel quite bad about pushing him, but I am determined not to let him finish in 24:02; we are either going to make it under 24h, or blow up properly, and just now it could go either way. Another runner joins us out of nowhere on the climb, and he makes up for my silence with conversation.

As we are approaching Lundavra I am glad to hear David saying that if the ground were a bit better he feels he could still do some running, so when we hit the good path beyond, I pick up the pace a bit, but there is no response from behind me. At this point I think that's it, the 24h dream is gone. But in fact David perks up not much later. I turn around at the bottom of the big descent -- it's an amazing sight, a line of bobbing head torches as far as I can see.

I am concerned about the climb out, but it turns out David is still climbing well, and as we start the final descent to Fort William he gets a proper second wind. We are running at about 6-7min/km, overtaking quite a few people, and I am having a hard time keeping up with him. We lose some of the energy on the final stretch of the road, which feels much longer than it should be, but that no longer matters, we are going to make it, and David eventually finishes in 23:42:31.

And then it's the prize giving the next day. This is hard to describe; it really needs to be experienced. 2017 was a particularly special year, with Rob Sinclair setting a new race record of 13:41:08, a truly amazing feat.

But as I sit there that morning, to my mind the new record is not as amazing as Nicole Brown, the last finisher, coming in just a few minutes earlier, in 34:40:28. Having been out the previous night in the awful weather for just five hours or so, I can honestly say I would not have stuck it out for another twenty hours of the same if you were paying me. And this, I think, is what the West Highland Way Race is ultimately about.

So yes, if you get a chance to crew on the West Highland Way, do so, it is worth it, unique, and unforgettable.

PS: The organisers recommend using two crew teams, and with hindsight this is wise. We just could not resist the temptation of seeing the start of the race, and underestimated the fatigue that would bring.

by tf at December 23, 2017 06:07 PM

December 20, 2017

Chris Lord

My Impossible Story

Keeping up my bi-yearly blogging cadence, I thought it might be fun to write about what I’ve been doing since I left Mozilla. It’s also a convenient time, as it coincides with our work being open-sourced and made public (and of course, developed in public, because otherwise what’s the point, right?) Somewhat ironically, I’ve been working on another machine-learning project, though I’m loath to call it that, as it uses no neural networks so far, and most people I’ve encountered consider those to be synonymous. I did also go on a month’s holiday to the home of bluegrass music, but that’s a story for another post. I’m getting ahead of myself here.

Some time in March I met up with some old colleagues/friends and of course we all got to chatting about what we’re working on at the moment. As it happened, Rob had just started working at a company run by a friend of our shared former boss, Matthew Allum. What he was working on sounded like it would be a lot of fun, and I had to admit that I was a little jealous of the opportunity… But it so happened that they were looking to hire, and I was starting to get itchy feet, so I got to talk to Kwame Ferreira and one thing led to another.

I started working for Impossible Labs in July, on an R&D project called ‘glimpse’. The remit for this work hasn’t always been entirely clear, but the pitch was that we’d be working on augmented reality technology to aid social interaction. There was also this video:

How could I resist?

What this has meant in real terms is that we’ve been researching and implementing a skeletal tracking system (think motion capture without any special markers/suits/equipment). We’ve studied Microsoft’s freely-available research on the skeletal tracking system for the Kinect and, filling in some of the gaps, implemented something that is probably very similar. We’ve not had much time yet, but it does work, and you can download it and try it out now if you’re an adventurous Linux user. You’ll have to wait a bit longer if you’re less adventurous or you want to see it running on a phone.
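(For the curious: the published Kinect research -- Shotton et al.'s body-part recognition paper -- labels every pixel of a depth image with a body part using randomised decision forests over very simple depth-difference features, then infers joint positions from the labelled pixels. Purely as an illustration of the idea, and emphatically my reading of the paper rather than anything from our codebase, one such feature looks roughly like this:

    import numpy as np

    def depth_feature(depth, x, u, v, background=1e6):
        """One depth-difference feature in the style of Shotton et al.:
        f(I, x) = d(x + u/d(x)) - d(x + v/d(x)).
        Scaling the offsets u, v by 1/depth at x makes the response
        roughly invariant to how far the subject is from the camera."""
        h, w = depth.shape
        d_x = depth[x[1], x[0]]

        def probe(offset):
            px = int(x[0] + offset[0] / d_x)
            py = int(x[1] + offset[1] / d_x)
            if 0 <= px < w and 0 <= py < h:
                return depth[py, px]
            return background  # probes off the image read as 'far away'

        return probe(u) - probe(v)

A forest of decision trees, each node thresholding one such feature, is what turns a raw depth map into per-pixel body-part probabilities.)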

I’ve worked mainly on implementing the tools and code to train and use the model we use to interpret body images and infer joint positions. My prior experience on the DeepSpeech team at Mozilla was invaluable to this. It gave me the prerequisite knowledge and vocabulary to be able to understand the various papers around the topic, and to realistically implement them. Funnily, I initially tried using TensorFlow for training, with the idea that it’d help us to easily train on GPUs. It turns out re-implementing it in native C was literally 1000x faster and allowed us to realistically complete training on a single (powerful) machine, in just a couple of days.

My take-away from this is that TensorFlow isn’t necessarily the tool for all machine-learning tasks, and also to make sure you analyse the graphs that it produces thoroughly and make sure you don’t have any obvious bottlenecks. A lot of TensorFlow nodes do not have GPU implementations, for example, and it’s very easy to absolutely kill performance by requiring frequent data transfers between CPU and GPU. It’s also worth noting that a large graph has a huge amount of overhead that is unrelated to the actual operations you’re trying to run. I’m no TensorFlow expert, but it’s definitely a particular tool for a particular job and it’s worth being careful. Experts can feel free to look at our repository history and tell me all the stupid mistakes I was making before we rewrote it 🙂
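To make the 'analyse the graph' advice concrete, here is a minimal sketch against the TensorFlow 1.x Python API of the day (my example, not anything from our repository): asking the session to log device placement makes it obvious which ops have silently fallen back to the CPU and are dragging tensors back and forth.

    import tensorflow as tf

    a = tf.random_normal([1024, 1024], name='a')
    b = tf.random_normal([1024, 1024], name='b')
    c = tf.matmul(a, b, name='c')

    # log_device_placement prints the device (CPU:0, GPU:0, ...) chosen
    # for every node in the graph; an op without a GPU kernel forces a
    # CPU round-trip for all of its inputs and outputs.
    config = tf.ConfigProto(log_device_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(c)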

So what’s it like working at Impossible on a day-to-day basis? I think a picture says a thousand words, so here’s a picture of our studio:

Though I’ve taken this from the Impossible website, this is seriously what it looks like. There is actually a piano there, and it’s in tune and everything. There are guitars. We have a cat. There’s a tree. A kitchen. The roof is glass. As amazing as Mozilla (and many of the larger tech companies) offices are, this is really something else. I can’t overstate how refreshing an environment this is to be in, and how that impacts both your state of mind and your work. Corporations take note, I’ll take sunlight and life over snacks and a ball-pit any day of the week.

I miss my 3-day work-week sometimes. I do have less time for music than I had, and it’s a little harder to fit everything in. But what I’ve gained in exchange is a passion for my work again. This is code I’m pretty proud of, and that I think is interesting. I’m excited to see where it goes, and to get it into people’s hands. I’m hoping that other people will see what I see in it, if not now, sometime in the near future. Wish us luck!

by Chris Lord at December 20, 2017 01:00 PM

December 18, 2017

Tomas Frydrych

Strathfarrar Watershed (A View from the Sidelines)

Strathfarrar Watershed (A View from the Sidelines)

I suspect most of those reading this have never heard of John Fleetwood. Recently someone described John as 'quietly getting on with doing extraordinary mountain journeys with zero fanfare', which about sums him up. Behind that 'extraordinary' hide a few other adjectival phrases, of which perhaps the most important is 'preferably in winter', yet his accounts of these ventures are a bit understated. So here is one mortal's peripheral story of the Strathfarrar Watershed.

I first met John some fifteen years ago in the Christian Rock and Mountain Club. Hillrunning wasn't yet on my personal radar; the shared passion was mountains and climbing. John was a determined (some might even say driven!) winter climber and an alpinist, and though to my recollection I only ever climbed on the same rope as him once (he was climbing much harder stuff than I even aspired to), there were many shared trips, drams, songs and stories (and vegetarian curries; John was about the only vegetarian I knew in those days, and so always volunteering to take care of the food).

As all of my friends, present and past, would readily confirm, I am not very good at keeping in touch, and so we lost contact for a number of years. Time passes rather fast, bringing with it some significant birthdays among the old CRMC crowd, and a reunion meet in the Yorkshire Dales a couple of years ago.

By then, hillrunning had become my main passion, and I was (still/again) training for the Assynt Traverse. John was just back from a rather epic traverse of the Alps, and there was much to talk about. I had never talked running with John before, and realised quickly that we share a very similar take on it, though we practise it on quite different levels. And he was the first (and last) person I came across who knew exactly what the Assynt Traverse was!

Consequently, when John got in touch at the start of this year about his plans to attempt a winter traverse of the Strathfarrar watershed, I readily agreed to go along. All we needed was a good dump of snow, which a storm at the end of February helpfully provided. And so on the morning of the 27th we find ourselves at the gate on the Glen Strathfarrar private road. (And if you intend to read any further, I suggest you read John's account before carrying on; what follows will make more sense.)

There was never any question of me accompanying John. Even at the peak of my physical condition this outing would be well beyond my limits, and I am not even remotely at my peak. And so as John heads up the little farm track to gain the hills north of the glen, I assemble my bike and set off along the road. The plan is to cycle to Monar Lodge, run along Loch Monar to gain the high ground over Creag na Gaoithe, eventually joining John's route at Bidean an Eoin Deirg; follow it to the Maol-bhuidhe bothy where we will meet, for some warm food, dry clothes and spare batteries. Then, perhaps, I'll accompany John for a bit up to Sgurr na Lapaich, before making my own way down to Monar Lodge to pick up the bike ...

But that's all still ahead of us. It is a crisp morning, promising a clear sunny day ahead. An inch or so of snow on the road makes the cycle quite arduous, though the stunning scenery is more than making up for that. But soon my feet are freezing, and I can't think of any explanation why I packed SIDI racing shoes rather than Specialized Defroster winter boots. By the time I reach the far side of Loch a'Mhuillidh, I can't feel my toes and have to stop to put on an extra pair of socks, which helps a bit.

The glen is full of deer, there must be thousands of them, feeding on the hay the estate provides. They are somewhat unpredictable, particularly the younger bucks, and so care is needed, especially where the road splits the herd. I slowly gain height, there is more snow, and some pushing to be done, before I reach the lodge -- the 25k or so takes me some three hours, a lot longer than I expected.

After a quick early lunch basking in sunshine, I put on my mudclaws and set off along the loch. The sky is blue, no cloud to speak of, the loch like a mirror -- the centre of the high pressure must be bang on the top of here.

The jog is very pleasant, though the temperature has crept up a bit, melting the snow, and so for the entire 10km I run in an inch or two of ice-cold water. I don't mind, days like these don't come around often, and I think the cold feet a price worth paying.

When I eventually stop for some oatcakes and cheese at the foot of Creag na Gaoithe, a wisp of cloud appears from somewhere and it suddenly gets rather cold. I don't hang around and start plodding up.

The snow on the sun-exposed hillside is saturated with water, and my cold feet are doing my head in: I start worrying about the inevitable temperature drop on the higher ground, about how far I still have to go today ... in this game, the head is everything.

Nevertheless, the sun is back out on the ridge, the surface frozen and runnable, my feet warming up quickly. I pause briefly at the foot of the arête that leads to the summit of Bidean an Eoin Deirg, wondering if I need to put on crampons, settling for an ice axe only, and quickly regretting it. Conditions are tricky, and the exposure on both sides considerable. And, of course, now I am in a place where I can't put them on ... I carefully backtrack onto a small platform lower down -- how many times over the years have I got caught out like this?

A bite to eat on the summit, then over to the Sgurr a'Chaorachain trig point. There are some footprints here, and following them I nearly descend the N ridge by mistake. But I realise quickly enough. The compass comes out to double check, for in the afternoon light the climb out of Bealach Coire Choinnich onto Sgurr Choinnich seems improbably steep and monolithic. I even briefly contemplate dropping down into Coire Choinnich to avoid it, but the slope there is obviously heavily loaded, the risk of triggering an avalanche high.

As it happens, the ascent is straightforward, the ground at an amenable gradient (as the map clearly shows), but the snow is deep, at times waist deep. I don't even pause on Sgurr Choinnich; I am well behind schedule, reaching Bealach Bhernais exactly at sunset.

There are decisions to be made. The next section of John's route is difficult navigation-wise, and the deep snow will make progress hard. I have no idea how far behind me John is, but I do know that should he catch up with me I could not keep up with him. But more than anything, I am tired, having been on my feet for over nine hours, and my lack of fitness is beginning to show.

I decide to take the bailout route -- there really is only one such option today, and it's here in Bealach Bhernais. I should say, this is not something I am desperately devising here in the dropping temperature while watching the stunning sunset. Rather, it is something we discussed over a vegetable curry the previous night in the warmth of the (most excellent, would recommend to a friend) Black Isle Berries Bunkhouse. On these big winter ventures planning is key to survival, and John's planning is nothing if not meticulous.

The bailout route means heading west to pick up the stalker's path that leads into Coire Beithe, following this past Loch an Laoigh to eventually pick up the path into Coire na Sorna, past Loch Calavie and down to the bothy over open hill. It's still 17km or so to go, but an easy 17km compared to the watershed line.

I enjoy the sunset, then get the head torch out and reset the altimeter. The initial descent is awkward, and the stalker's path hard to locate, but once I find it, it turns out to be a decent pony track, and to my surprise I am running rather well down the gentle gradient. Once I get beyond Loch an Laoigh, I find a huge track where the map indicates a path.

After a short while my head torch beam starts picking up some strange, spooky aberrations ahead. This turns out to be heavy machinery, with high-vis jackets left hanging on the operator seats. Even in the darkness, I am saddened by the intrusion; we do not value landscape anywhere near enough in this wee country of ours.

There is only one small snag with the bailout route: when I printed out my map at home, I didn't print enough of it. I am missing perhaps no more than 1/2 km, but unfortunately it includes the place where the Coire na Sorna path leaves the track I am on. To avoid dropping too low and off my map, I decide to leave the unenjoyable track early and head for the open hillside. The Sorna path is, in fact, yet another big track, which I intersect at around the 300m contour line.

A short climb, then the track levels out, Loch Calavie should be on my right. Yet my (reasonably powerful BD Icon Mk I) head torch beam is not picking it up. There seems to be just a black bottomless abyss there. This is disconcerting. I stop to check the map. It turns out I am standing no further than a foot away from the edge of the water, I can hear it splashing when I am still, but somehow if I shine my beam further out, there is no reflection whatsoever. Spooky.

I carry on, and, not being able to see the loch that well, I miss its eastern end, where another path I want to take branches off. But I am sufficiently alert to realise almost immediately when the main track starts climbing again. The foot of the loch is a proper peat bog, and it takes me a while to negotiate it, before a brief spell on the new path. Then on a bearing down to Loch Cruoshie. There are a few obvious re-entrants here, serving as useful tick offs, and my navigation is bang on.

The final unknown is whether the outflows from Loch Cruoshie will be manageable to cross or not. There is an alternative, but it means a fair detour which I would prefer to avoid. They are freezing, knee deep, but very mercifully slow flowing. Not far to go now, perhaps the reason why I become somewhat complacent about navigation even though there is a thick mist hanging around. As a result locating the bothy takes me longer than it should have, not ideal after wading through the icy water. I am much relieved when I finally spot its outline at the far reach of my head torch beam.

It's 10.45pm and I am glad this 15h day is at an end. I have a quick look around. There is a wheelbarrow in the 'utility room', with what looks like a sack of coal in it -- if wishes were horses ... I don't know who gets the bigger fright, me or the mice sheltering in it; alas no coal. I make my dinner, stick a candle into the window for John, and promptly fall asleep.

At 4.30am, roughly the time I think John might be arriving, I get up to scan the hillside for light. There is none, so I replace the nearly burnt out candle with a fresh one, and crawl back into the sleeping bag till 7.30am. Time for porridge as the day is breaking. Still no sign of John, but I am not concerned, not yet. Another glorious day is beginning, and there are pictures to be taken.

Nevertheless, as time progresses I am aware there is a cut off point beyond which I can't stay here. I have only a small amount of food left, and a bit extra I left with the bike, but not much. I need to leave here no later than noon. But if John doesn't arrive by then, we have a more serious situation anyway, I suspect. I decide not to worry prematurely, and John appears half an hour later.

He is visibly exhausted and the bottoms of his walking poles have turned into giant ice balls. Yet, he doesn't stay long, just enough to eat, change some clothes, and have a go at the ice balls with an ice axe. In spite of the very hard snow conditions he is determined to carry on. Having been up there just a few hours earlier, I know exactly what he is up against, and I find the level of mental stamina required to carry on quite astonishing.

There is no question of me joining him for Sgurr na Lapaich. I am too spent, and my right heel is rather tender, and has been since midday of the previous day. I suspected a giant blister to start with, but as there was nothing to be done about that until I got to the bothy, I didn't bother. But to my surprise, when I took my socks off the previous evening, there was no external damage, which is more disconcerting than a giant blister would have been. So I need an easy option.

The relatively easy exit route takes me into the bealach between An Cruachan and An Socach, down a stalker's path along Allt Riabhachain and then through the Drochaid nam Meall Bhuidhe bealach to pick up the path leading into Gleann Innis an Loichel. It's been overcast since midday, but the sun comes out for a bit in the afternoon just as I enter the glen. Then a bit of road running to get back to Monar Lodge. All in all just under six hours.

The pain in my right heel has got progressively worse during the day, and for some reason is particularly acute on the bike. But there is no snow left on the road, and I can see some serious weather coming in, so I push hard, taking just seventy five minutes to get back to the car. But not before the weather arrives, the last part of the cycle in freezing rain and stinging hail. I spare a thought for John up there on the high ground, better him than me.

Off to Beauly where I devour a fish supper. I hope I'll be able to stay in the Bunkhouse again -- John was hopeful of a 36h finish, so we did not book another night. I am in luck. Early start next day, back at Struy for 5:30am as agreed. I try to sleep in the car for a bit more, but it gets too cold without the engine running, so I get up and go for a walk. I get a call from John at 6.30am -- he has only six miles left, but says he is moving slowly.

The big question is what 'slowly' means in the Fleetwood parlance. I expect it's more like my fast than my slow, so I start walking up the forest track John will come down, to meet him. Another nice cold morning. Just before the track emerges from the forest, there is a giant, iced over, and hard to avoid, puddle and I lament not wearing walking boots.

Once on the open hillside I can see quite far, but there is no sign of John. I wonder if we might have somehow missed each other, and hurry back. As I do, I register another, quite appealing, forestry track going off to the right, which under different circumstances I'd explore, but the last thing I want is John having to wait for me.

No sign of John at the car, so I get the camera out and head back to the village, to take pictures of snowdrops and study the grave stones, as you do. There is a lot of history here, but not much life. I chat to a couple of drivers of forestry trucks waiting for the time when they are allowed into the forest.

Time moves on, three hours and counting since the call. I keep an eye on the track on the hill, but no sign of John. Then a rather dishevelled figure emerges down the road; I do a double take, the direction is wrong, but yes, it is John, with tales of dead-end forestry tracks and dense sitka. I am very glad to see him; for the last couple of hours I was beginning to worry about him for the first time since he set off.

by tf at December 18, 2017 08:42 PM

November 03, 2017

Tomas Frydrych

Regarding Microspikes

Regarding Microspikes

Recently there has been some chatter about using lightweight footwear in the winter hills, and in that context microspikes have been mentioned. As someone who uses microspikes a lot, I'd really like to warn quite emphatically against taking microspikes into the hills as a substitute for crampons -- in some ways wearing microspikes can be considerably more dangerous than just wearing boots without crampons.

Don't get me wrong, I really like microspikes; they are an excellent tool for winter running.

But they only work in a very limited range of conditions. Specifically, they are only suitable for moderately steep slopes -- roughly speaking, slopes on which you can consistently keep the entire sole of your foot on the ground -- and they only work well on pure, exposed ice and hard névé. They do not work if the hard surface is covered by even a fairly small amount of loose, non-compacting snow (e.g. blown-on dry powder), and they do not work on the cruddy snow that much of Scottish winter is made of -- the 9mm spikes are too short to find purchase.

But the real problem with microspikes is not that they have limits, all tools do, but rather that (a) they go from a superb secure grip to zero traction in a fraction of a second, and (b) this tends to happen on much steeper ground than it would if I were just wearing boots. With boots the loss of traction tends to be gradual, and I get plenty of warning to get the crampons out, or just to back off. In contrast, the microspikes will happily, and effortlessly, take me onto ground on which in boots alone I would have long been aggressively kicking steps. This means that slipping with microspikes is likely to be a much more serious proposition than slipping with just my boots on. What gradient are you comfortable self arresting on? 10°? 30°? 45°?

This is not just some theoretical musing; it's something I have learnt the hard way. One January some years back I was doing my regular training run which takes in Ben Vorlich and Stuc a'Chroin from Braeleny Farm. The hills were in early winter conditions, and as was my habit at the time, I brought my standard winter gear of ice axe, microspikes and crocs (the latter for the several river crossings along the way). Ben Vorlich was nicely iced up and windswept, and the micros were working a treat. From the distance the Stuc a'Chroin 'Nordwand' did not look too bad, plenty of bare rock, and so I decided (to use a technical climbing term) 'to take a look at it'.

I gained height fairly quickly, and as I did the snow condition, and my traction, progressively deteriorated, until I reached an awkward steep groove where it was obvious that if I carried on any further I would not be able to back off. As I started down-climbing the true limitations of the microspikes became painfully obvious: if my traction going up was poor, it was nothing compared to going down. The next half hour, spent kicking in short step after step, was some of the tensest time I have ever experienced in winter hills (I once had a few awkward minutes in the Man Trap, nowhere near as bad, I dare say).

I made it safely to the foot of the buttress eventually and headed over to the broad corrie that in the summer is used to avoid the Nordwand. The snow conditions there were superb. The iced-up névé put a big grin on my face as I made rapid progress up, though the upper section was way too steep for the micros, and I had to make a great effort to keep at least my toebox on the slope over the final metres. But my axe placements were bombproof, the sun was shining, and my previous escapade was promptly forgotten.

It is perhaps the sunshine, so rare in the Scottish winter, that explains why a month later I am back, again wearing the micros. By now the winter is full on, and the Nordwand is plastered with snow -- I have no intention of heading up there, I have learnt my lesson. Or so I think.

The first warning signs come on the descent to Bealach an Dubh Choirein. There is more snow, and a short steep section that needs to be down-climbed proves very awkward. It is a sign of things to come. The conditions in the NE corrie are much changed as well; the line I took out of here last time is topped by a steep wall and a cornice, and is out of the question both because of the gradient and the avalanche risk. At the same time there is no sign of the perfect névé, and as I make my way up along the north edge of the corrie, I am struggling for any sort of grip in the cruddy snow. I weave my way up through a series of awkward traverses and rocky steps, kicking and cutting, at times down to the vegetation. All that in the full knowledge that had I been wearing crampons, I wouldn't have given this sort of ground a second thought.

That day I decided to have a simple policy for my winter runs -- if the terrain is serious enough to require carrying an ice axe, I take crampons. No exceptions. At times it is tempting not to, all that extra weight. Indeed there have been times on a run when I wished I had micros instead of crampons; it is almost invariably followed by relief that I brought the crampons when, a few miles on, conditions change. And so when I am packing my gear and that temptation comes, I just think back to those days and the temptation goes away. Life is too precious, and the winter hills don't stand for hubris.

P.S. As I have mentioned elsewhere, I use the Kahtoola KTS crampon for running.

by tf at November 03, 2017 02:35 PM

October 17, 2017

Tomas Frydrych

To Eat or not to Eat (contd)

To Eat or not to Eat (contd)

The disillusionment with the M&S curry aside, the biggest factor that forced me to rethink camping food was running. While Scotland's hills provide superb playground from short jogs to long days, it is the linking of multiple days together that opens up, literally, whole new horizons. Alas, none of my previous approaches to cooking was suited to self-supported multiday runs.

The problem is twofold. On the one hand, running is far too much impacted by the load we carry. I have never obsessed about weight, not beyond eliminating the unnecessary ('light weight' is a synonym of 'short lasting', and I prefer durable), but for running the elimination approach was not enough. I found out that a load of up to about 6kg impacts my pace, but generally not the quality of my running. However, once it gets above 9kg or so, there is very little genuine running taking place. I managed to cut the base kit, including 0.5l of water carried, to about 6.5kg. That leaves about 2kg for food ... and brings me to the other issue.

On the other hand, the energy burn while running is just that little bit higher. At the same time I don't like running over multiple days on a large calorific deficit: feeling hungry takes away from the fun, impacts one's mental capacity, and makes subsequent recovery longer. Yet when running I can easily burn more than 6,000 kcal per day, while the theoretical (and unreachable) limit of what I can pack into 1kg of food is ~9,000 kcal (pure oil). In other words, I'll never carry enough food not to incur a deficit, which means I need to pay attention to the calorific density of the food I take to make the most of it.

Since we are talking calories and running, there is an additional issue to be aware of. The ultra-runner experience seems to suggest that while on the move we can only absorb ~250kcal/h. This is worth keeping in mind when planning the menu: the bulk of the calories needs to come from the evening meal, while during the day small but frequent food intake is the best strategy.
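To put some rough numbers on all of the above -- the ~5,000 kcal/kg figure for realistic mixed food is my own illustrative guess; the rest are the figures already quoted:

    BURN_PER_DAY = 6000   # kcal burnt on a hard running day
    FOOD_KG      = 2.0    # kg of the load budget left for food
    KCAL_PER_KG  = 5000   # realistic mixed food; pure oil would be ~9000
    ABSORB_RATE  = 250    # kcal/h absorbable while on the move

    carried = FOOD_KG * KCAL_PER_KG        # ~10,000 kcal in the pack
    deficit = 2 * BURN_PER_DAY - carried   # ~2,000 kcal short over 2 days
    moving  = 12 * ABSORB_RATE             # ~3,000 kcal usable in a 12h day
    print(carried, deficit, moving)

In other words, even a two-day trip runs at a deficit, and only about half of a day's ration can actually be taken on board while moving -- hence the big evening meal.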

Doing it on the Cheap

Breakfast is easy -- 2 packets of plain instant porridge; no milk required, just add boiling water and 75g or so of 60% chocolate for extra calories. Stir thoroughly, let it sit for a couple of minutes.

During the day my staple food is nut and raisin mix (I like the Tesco Finest variety, but it's too expensive; you can make a nearly identical mix from the nuts and berries Lidl sells, at about half the price), and oatcakes and hard cheese (I am particularly fond of the rough Orkney oatcakes, and Comte). The benefit of oatcakes is a lower GI, which means a steadier supply of energy, plus they are relatively high in fat (the ones mentioned are about 120 kcal per oatcake). Hard cheese has probably the highest calorific density of any normal food, it does not perish quickly, and I happen to like it. If I need a sugar hit, I take Jelly Babies -- not as good as a gel in terms of the hit, but a lot cheaper, and more fun (4 Jelly Babies correspond to ~1 gel).

The evening meal is where the main challenge, but also the opportunity for eating well, lies. It takes no genius to realise that the M&S curry and Uncle Ben's rice combo fails badly on the calorific density count, for much of the content of both the rice packet and the tin is water, and water is dead weight, i.e., negative calories. Yet it is easy to prepare a good, cheap, homemade meal that is also a lot lighter.

My firm favourite is to make a tomato-based sauce, usually with chorizo, some olives, pine nuts, or whatever else I have around / take a fancy to. I reduce this to a thick paste and simply pack it into a zip-lock bag. The trick is to use only as little fat/oil as is necessary for the cooking process, and then take some nice olive oil in a small bottle instead. This reduces the mess in case the zip-lock bag fails (I confess, I double bag, just in case). Nalgene make small leak-proof utility bottles perfect for the oil; I find the 30ml bottle is about right for a single meal, and the 60ml for two (adding ~250/500 kcal respectively).

I normally tend to have this sauce with Chinese-style noodles. Ultimately, I want something that requires as little cooking as possible, for if I can reduce the amount of cooking I do, I can significantly reduce the weight of the cooking paraphernalia (more on that below). After much searching, I have settled on the Sainsbury's brand of noodles in round nests; they only require 3 min of boiling (which can be cut shorter if I leave them to sit for a bit), and they fit neatly inside a Toaks 0.5l pot, which is just big enough to cook two of them.

(As far as reducing the cooking time goes, couscous is the best option, but while I love it, I find it does not fill me up, so I prefer some form of pasta.)

I don't bother heating up the sauce; I simply mix it with the noodles in my food bowl, and add the extra oil, depending on the sauce maybe bringing some Parmesan to sprinkle on the top (if you are anything like me, you will realise quickly that draining the noodles is an awful waste ... the cooking water makes a great soup instead).

The main shortcoming of this approach is that food prepared this way does not keep very long; how long will depend on the ingredients (one of the reasons I like using chorizo) and the ambient temperature. Personally, I am happy with this approach for a two night trip in the usual Scottish temperatures, but one needs to use common sense, and if in doubt, reheat everything thoroughly. The other issue is that I still end up carrying quite a bit of water in the food, making it hard to get more than a couple of days of food out of my 2kg allowance.

The answer to both of these problems lies in dehydration, which I shall come to in a third instalment of these posts on my camping food 'journey'.

A Side Note: The Kitchen Sink

I always take a 'bowl' to eat from; it means the pot is free for making coffee while eating. The bottom of an HDPE milk carton makes a superb camping bowl: it is lightweight, it folds flat, the HDPE withstands boiling water, and it simply gets recycled at the end of the trip (for two nests of noodles, you will need the bottom of a six-pint carton).

I don't bother with a cup. I carry a 0.5l Nalgene wide-mouth HDPE bottle: during the day this is my water bottle (I make it a 'policy' not to carry more than 0.5l at any time during the day; in Scotland it is rare that more is needed, particularly if I take the Sawyer mini filter), and in the evening it becomes my cup. It is fine with boiling water, the screw top means I don't spill it by accident in the tent, it holds heat rather well, and it can double up as a hot water bottle during the night.

Once I realised that I only need a 0.5l pot for one person (0.7l for two), it became obvious that the ubiquitous gas camping stove is a lot of dead weight, as well as bulk, to lug about. The smaller canister size weighs around 230g for 110g of gas, while a decent small stove weighs around 80g (there are smaller stoves on the market, e.g., the 25g Chinese BRS-3000T; mine flares out so dangerously when reducing the flame once it's hot that I will not use it again, and would advise against buying it -- the 55g saved compared to a proper stove from a reputable manufacturer is not worth it). There is also the high cost of gas, exacerbated by the accumulation of partially empty canisters after each trip (that these canisters are not refillable is an ugly blot on the outdoor equipment industry's green credentials).

I find that the most weight-, as well as cost-, efficient solution for short trips is cooking on alcohol. Alcohol stoves come in different shapes and forms, but my favourite is the 30ml burner made by this guy. It is spill-proof (the alcohol is soaked up into some sort of foam) and weighs 14g; together with the small stand he also sells, and a homemade aluminium foil windshield, it comes to around 30g. I need around 50g of alcohol per day, plus 50g extra to give myself a margin for spilling my coffee (or for pouring boiling water into my shoes when they freeze solid overnight). Small plastic bottles seem to invariably weigh 20g regardless of their size up to about 0.25l, so for a one-night outing this translates to about 160g less in weight (and about £4 less in cost) than gas (so I can treat myself to more chocolate!).
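
For a single overnight, the back-of-envelope sum using the weights above works out as follows:

    gas:      80g stove + 230g small canister          = ~310g
    alcohol:  30g burner/stand/windshield + 20g bottle
              + 100g alcohol (50g + 50g margin)        = ~150g
    difference:                                          ~160g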

The things to be aware of regarding alcohol cooking:

  • It stops being weight-efficient after about 3-4 days (alcohol contains about half the energy of gas per unit of weight; the savings come from being able to take only what you need, and from the low weight of the bottle).
  • It takes longer to boil water on the stove linked above than it does on a good quality gas stove, and you really need a windshield; but time to cook is something I am never short of on my trips.
  • Most importantly, alcohol stoves can produce fairly high amounts of CO if the oxygen supply is restricted by, e.g., a windshield, so always make sure there is enough oxygen getting through to the flame and that the tent is adequately ventilated (the latter applies to all stoves; some gas stoves are considerably worse than others).

To be continued ... (on dehydrating food)

by tf at October 17, 2017 05:34 PM

October 16, 2017

Tomas Frydrych

To Eat or not to Eat (Well)

To Eat or not to Eat (Well)

I have always liked my food; perhaps it's because I come from a place that obsesses over wholesome home cooking. I also like my food now more than I once used to; perhaps it's because my adoptive homeland doesn't do food particularly well (doesn't really 'get' food).

A good meal is one of those little, simple, pleasures that can put a smile on your face when there isn't much else to smile about, and this fully applies to eating in the outdoors.

My overnight ventures into the woods started at a time and a place where camping stoves did not really exist, a camping mat was something that two muscle-bound men carried to a lake for kids to float on, and a good warm (fur) kidney belt was one's most treasured possession. I think of those days with bemusement as I mentally survey my current weekend camping kit list -- we were unwitting practitioners of 'extreme ultralight' (except there was nothing particularly light about the coveted US Army issue rucksack, the cotton tarp, or the draughty sleeping bag). But back to food and eating.

My standard fare during those days was a half-kilo shop-bought tin of meat and sauce, cooked on an open fire, in the tin, with bread on the side. All in all, it made a pretty decent evening meal. (I can't remember what we ever ate for breakfast, but my lunch was invariably a tin of Soviet-made sardines in tomato sauce; it became a running joke, for they did not agree with me, but I couldn't resist them.)

In my early teens one of my friends found a WWII Wehrmacht issue petrol stove in his loft. It was bulky, heavy, and caused much excitement when he brought it along one weekend. It roared mightily, and promptly burned a neat finger-sized hole through the bottom of his tin -- it amused us greatly, as we stirred our own tins on the fire, watching him try to salvage what he could of his dinner.

But that was an exception. The only readily available stove on the market was a clone of the folding German Esbit. The flame was feeble, it was impossible to keep the hygroscopic fuel tablets dry, and the moisture in them made them explode and shoot burning bits all around. Every so often some younger lad would turn up with one, and we would happily munch on our warm food watching him fight it, before giving up and learning to cook the 'normal' way. The only time these solid fuel stoves came into play was on our summer treks through the Tatras (and farther); there open fires were banned, and/or there was no natural fuel.

The week or so long treks required a different approach to food. Tins were out of the question because of the weight, and the silly stoves forced us to keep the boiling of water to a minimum. Our rations for the week came to a loaf of bread and a foot or so of salami for lunches (the culinary highlight of each day), oats and raisins for breakfast, and pasta (usually with sugar and raisins) for tea. The oats were pre-soaked overnight to reduce the cooking time, and the pasta was only just brought to the boil and left to sit till it was soft enough, meaning it was never very warm when we ate it. (We had some savoury option on the menu as well, but I can't recall what it was; I suspect my mind blocked it out for sanity's sake. It might have been pasta with sardines.)

When I came to Scotland in the mid '90s, I had a brief fling with ready-made camping food -- all in all three dates, I recall; we broke up quietly, we were not a good match for each other. I did not like the food and could not afford the prices. It made me realise I like my food too much to suffer for no good reason. These foil packets offered nothing that the tins of my childhood did not offer, except with less flavour and at a premium price. And so I reverted to type. For a number of years my basic camping food became a tin of M&S curry, cooked in the tin, and a packet of Uncle Ben's microwavable rice (a trick I learnt from a friend -- it needs no cooking, just a little hot water to warm it up).

Then one day, after a cancelled trip, Linda away, I made the mistake of heating up the tin of curry for my tea at home. It was terrible. I decided there and then that I deserved better, and so began my quest for good, home-made food on the go.

To be continued ... (with the stuff this post was meant to be about in the first place)

by tf at October 16, 2017 11:04 AM

October 13, 2017

Emmanuele Bassi

GLib tools rewrite

You can safely skip this article if you’re not building software using enumeration types and signal handlers; or if you’re already using Meson.

For more than 15 years, GLib has been shipping with two small utilities:

  • glib-mkenums, which scans a list of header files and generates GEnum and GFlags types out of them, for use in GObject properties and signals
  • glib-genmarshal, which reads a file containing a description of marshaller functions, and generates C code for you to use when declaring signals

If you update to GLib 2.54, released in September 2017, you may notice that the glib-mkenums and glib-genmarshal tools have become slightly more verbose and slightly more strict about their input.

During the 2.54 development cycle, both utilities have been rewritten in Python: from fairly ancient Perl, in the case of glib-mkenums; and from C, in the case of glib-genmarshal. This port was done to address the proliferation of build-time dependencies on GLib; the cross-compilation hassle of having a small C utility being built and used during the build; and the move to Meson as the default (and hopefully only) build system for future versions of GLib. Plus, the port introduced colorised output, and we all know everything looks better with colors.

Sadly, none of the behaviours and expected input or output of both tools have ever been documented, specified, or tested in any way. Additionally, it turns out that lots of people either figured out how to exploit undefined behaviour, or simply cargo-culted the use of these tools into their own project. This is entirely on us, and I’m going to try and provide better documentation to both tools in the form of a decent man page, with examples of integration inside Autotools-based projects.

In the interest of keeping old projects building, both utilities will try to replicate the undefined behaviours as much as possible, but now you’ll get a warning instead of the silent treatment, and maybe you’ll get a chance at fixing your build.

If you are maintaining a project using those two utilities, these are the things to watch out for, and ideally to fix by strictly depending on GLib ≥ 2.54.

glib-genmarshal

  • if you’re using glib-genmarshal --header --body to avoid the “missing prototypes” compiler warning when compiling the generated marshallers source file, please switch to using --prototypes --body. This will ensure you’ll get only the prototypes in the source file, instead of a whole copy of the header.

  • Similarly, if you’re doing something like the stanza below in order to include the header inside the body:

    foo-marshal.h: foo-marshal.list Makefile
            $(AM_V_GEN) \
              $(GLIB_GENMARSHAL) --header foo-marshal.list \
            > foo-marshal.h
    foo-marshal.c: foo-marshal.h
            $(AM_V_GEN) ( \
              echo '#include "foo-marshal.h"' ; \
              $(GLIB_GENMARSHAL) --body foo-marshal.list \
            ) > foo-marshal.c
    

    you can use the newly added --include-header command line argument, instead.

  • The stanza above has also been used to inject #define and #undef pre-processor directives; these can be replaced with the newly added -D and -U command line arguments, which work just like the GCC ones.

  • This is not something that came from the Python port, as it’s been true since the inclusion of glib-genmarshal in GLib, 17 years ago: the NONE and BOOL tokens are deprecated, and should not be used; use VOID and BOOLEAN, respectively. The new version of glib-genmarshal will now properly warn about this, instead of just silently converting them, and never letting you know you should fix your marshal.list file.

If you want to silence all messages outside of errors, you can now use the --quiet command line option; conversely, use --verbose if you want to get more messages.
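
Putting those options together, an updated pair of rules might look like the sketch below; the file names and the foo_marshal prefix are placeholders for your own:

    foo-marshal.h: foo-marshal.list Makefile
            $(AM_V_GEN) $(GLIB_GENMARSHAL) --prefix foo_marshal \
              --header foo-marshal.list > foo-marshal.h
    foo-marshal.c: foo-marshal.list foo-marshal.h Makefile
            $(AM_V_GEN) $(GLIB_GENMARSHAL) --prefix foo_marshal \
              --prototypes --body --include-header foo-marshal.h \
              foo-marshal.list > foo-marshal.c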

glib-mkenums

The glib-mkenums port has been much more painful than the marshaller generator one; partly because there are many, many more ways to screw up code generation when you have command line options and file templates, and partly because the original code base relied heavily on Perl behaviour and side effects. Cargo culting Autotools stanzas is also much more of a thing when it comes to enumerations than marshallers, apparently. Imagine what we could achieve if the tools that we use to build our code didn’t actively work against us.

  • First of all, try and avoid having mixed encoding inside source code files that are getting parsed; mixing Unicode and ISO-8859 encoding is not a great plan, and C does not have a way to specify the encoding to begin with. Yes, you may be doing that inside comments, so who cares? Well, a tool that parses comments might.

  • If you’re mixing template files with command line arguments for some poorly thought-out reason, like this:

    foo-enums.h: foo-enums.h.in Makefile
            $(AM_V_GEN) $(GLIB_MKENUMS) \
              --fhead '#ifndef FOO_ENUMS_H' \
              --fhead '#define FOO_ENUMS_H' \
              --template foo-enums.h.in \
              --ftail '#endif /* FOO_ENUMS_H */' \
            > foo-enums.h
    

    the old version of glib-mkenums would basically build templates depending on the phase of the moon, as well as on some internal details of how Perl works. The new tool has a specified order (if you want fully deterministic output, consider moving everything into the template file instead, as sketched after this list):

    • the HEAD stanzas specified on the command line are always prepended to the template file
    • the PROD stanzas specified on the command line are always appended to the template file
    • the TAIL stanzas specified on the command line are always appended to the template file
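
    A template-only version of the stanza above avoids the ordering question entirely. A minimal header template sketch (the FOO_ENUMS_H guard is a placeholder; the @enum_name@-style substitutions are the standard glib-mkenums ones):

    /*** BEGIN file-header ***/
    #ifndef FOO_ENUMS_H
    #define FOO_ENUMS_H

    #include <glib-object.h>
    /*** END file-header ***/

    /*** BEGIN file-production ***/
    /* enumerations from "@filename@" */
    /*** END file-production ***/

    /*** BEGIN value-header ***/
    GType @enum_name@_get_type (void) G_GNUC_CONST;
    #define @ENUMPREFIX@_TYPE_@ENUMSHORT@ (@enum_name@_get_type ())
    /*** END value-header ***/

    /*** BEGIN file-tail ***/
    #endif /* FOO_ENUMS_H */
    /*** END file-tail ***/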

Like with glib-genmarshal, the glib-mkenums tool also tries to be more verbose in what it expects.


Ideally, by this point, you should have switched to Meson, and you’re now using a sane build system that generates this stuff for you.

If you’re still stuck with Autotools, though, you may also want to consider dropping glib-genmarshal, and use the FFI-based generic marshaller in your signal definitions — which comes at a small performance cost, but if you’re putting signal emission inside a performance-critical path you should just be ashamed of yourself.
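
In practice, using the generic marshaller just means passing NULL where you would otherwise pass a generated marshaller; GObject then falls back to the libffi-based g_cclosure_marshal_generic. A minimal sketch, with a made-up FooWidget type:

    static guint changed_signal;

    static void
    foo_widget_class_init (FooWidgetClass *klass)
    {
      /* Passing NULL as the C marshaller makes GObject use
       * g_cclosure_marshal_generic under the hood. */
      changed_signal =
        g_signal_new ("changed",
                      G_TYPE_FROM_CLASS (klass),
                      G_SIGNAL_RUN_LAST,
                      0,            /* no class closure */
                      NULL, NULL,   /* no accumulator */
                      NULL,         /* generic marshaller */
                      G_TYPE_NONE,  /* no return value */
                      1, G_TYPE_INT);
    }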

For enumerations, you could use something like this macro, which I tend to employ in all my projects with just a few, small enumeration types, and where involving a whole separate pass at parsing C files is kind of overkill. Ideally, GLib would ship its own version, so maybe it’ll be replaced in a new version.
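
For reference, any such macro ultimately boils down to registering the GEnum type by hand; a rough sketch of the expanded form, for a hypothetical FooBar enum:

    typedef enum {
      FOO_BAR_A,
      FOO_BAR_B,
    } FooBar;

    GType
    foo_bar_get_type (void)
    {
      static gsize type_id = 0;

      if (g_once_init_enter (&type_id))
        {
          /* The nicks ("a", "b") are the lower-case short names
           * glib-mkenums would have derived from the value names. */
          static const GEnumValue values[] = {
            { FOO_BAR_A, "FOO_BAR_A", "a" },
            { FOO_BAR_B, "FOO_BAR_B", "b" },
            { 0, NULL, NULL },
          };

          GType id = g_enum_register_static ("FooBar", values);
          g_once_init_leave (&type_id, id);
        }

      return type_id;
    }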


Many thanks to Jussi Pakkanen, Nirbheek Chauhan, Tim-Philipp Müller, and Christoph Reiter for the work on porting glib-mkenums, as well as fixing my awful Parseltongue.

by ebassi at October 13, 2017 03:21 PM

October 09, 2017

Tomas Frydrych

Of Camera Bags

Of Camera Bags

There is no end to acquiring them; the search for the perfect camera bag seems endless. Here are some of mine, and some thoughts on them.

Ortlieb Protect

The now discontinued (it looks like Ortlieb have stopped making camera bags altogether), but still available, Protect is a compact, waterproof bag in the tradition of Ortlieb robustness, with a slider closure which is easy to operate in big gloves. The inside of the bag is made of a thick closed-cell foam that gives it rigidity but, unusually for a camera bag, is not lined with fabric. It is officially IP54 rated (though I am fairly certain that when I first got mine it was sold as IP67; I believe there were issues with the slider seal in cold temperatures). Size-wise it is just big enough for my old Lumix GF-2 with a 14-70mm kit lens.

The great thing about this bag is that it can be comfortably hung with a couple of carabiners on backpack shoulder straps, providing fast and easy on-the-go access. This makes it an excellent mountain biking and skiing solution for smaller cameras.

I got the Protect on the recommendation of a friend about a decade ago, and it has served me faithfully ever since. I love its simplicity and wish it was just a little bit bigger, enough to accommodate my Lumix GX-8 camera, which brings me to the next bag ...

Ortlieb Compact-Shot

The Compact-Shot is yet another great, but discontinued, bag from Ortlieb. It is slightly bigger than the Protect, just enough for my Lumix GX-8 with a 12-40mm zoom, but unlike the Protect, the internal padding is lined with a soft cloth, as is normal for camera bags, and there is a small internal pocket. The zip closure is not as easy / fast to open as the Protect slider, and is quite awkward to close fully, but when closed the bag is IP67 rated.

The Compact-Shot has become my default bag of choice when I don't need to carry any extra lenses, and, chest mounted with a couple of carabiners, the bag I use for ski touring.

Thule Perspektive Compact Sling

The Perspektive CS is a roomy bum-bag. It is made from a water-repellent fabric, uses water-resistant zippers, and comes with a detachable stowaway rain cover. It is big enough to take my Lumix GX-8 together with 12-40mm and 40-150mm lenses (with either lens fitted), has a padded iPad Mini-sized pocket inside, as well as a phone pocket on the outside of the lid, and comes with plenty of adjustable dividers for the inner space.

The waist strap with side stabilisers makes the bag very stable, enough to jog with. The bag is compact enough to combine with a small, high-sitting backpack, up to something like the OMM Adventure Light 20, which makes a good combination for fastpacking trips. The only thing I'd change on the belt is to extend the padding fully under the D-rings, as this would make it more comfortable (I have done a couple of very long fastpacking days with this bag, and was beginning to curse the D-rings near the end).

The one issue I have run into with this bag is that the rain cover is too easy to detach, and the connecting strap will often self-detach when the cover is on -- this makes it easy to lose when taking it off in windy conditions. But overall, this is a well-thought-out and well-made bag.

LowePro Flipside Trek BP 350 AW

The Flipside is my 'pottering in the woods' backpack, but also the camera bag I am most ambivalent about. On the upside, it is very comfortable to carry, the camera compartment is spacious enough when I want to bring the big lens and more, and the through-the-back access is handy.

But there are some, to me at least, fairly significant design flaws. The non-gear storage space is very limited: enough for a sandwich, a small water bottle, a lightweight jacket and perhaps an extra thin layer. The lack of internal space is aggravated by the mesh side pockets being both small (i.e., too small for the likes of a litre Nalgene bottle) and rather shallow (the bottom half of the pockets is made from a non-stretchy material to make it more durable, but there is not enough of it, so, e.g., a normal 0.5l drinks bottle cannot be inserted all the way to the bottom). It is possible to strap things, such as a tripod, on the outside of the bag, but then you have to forgo the built-in rain cover, which is rather snug fitting.

Had there been another 3 or so litres of non-gear space in this bag, it would have been my ideal camera day-bag. As it is, I have strapped an external 5 litre pouch on the back of it, but like I said, that makes the rain cover useless, which is sub-optimal in the normal Scottish weather.

Tenba BYOB 9

Tenba get around the basic problem with camera backpacks (they never really work well enough; see above) by providing a range of minimal padded camera inserts that you put into a bag of your choice. The model number is the depth of the bag in centimetres, and the BYOB 9 is just big enough for my Lumix GX-8 with the 12-40mm lens plus another lens of a similar size, and either a pancake prime or a few extra bits and bobs, such as a remote control and a blower.

The great thing about the BYOB is how the sizes of the bags in the range were chosen -- for a given camera size you get an optimally low-profile bag that is easy to place at the top of a normal-sized backpack. The main downside is that the padding is inexplicably thin (about half of that on my other camera bags); I'd prefer more protection for my kit. Also, although the fabric is water-repellent, the zip is not, so I always feel it necessary to put this inside a dry bag.

Crumpler Light Delight 200

My default running camera is a Lumix GM-5 with a 14mm pancake prime lens, and it has proven rather difficult to find a good pouch for it that can be shoulder mounted. The closest I have come is the Light Delight 200. It's slightly wider than ideal for the GM-5, so I padded it with a strip of an old sleeping mat to stop the camera from moving about when I run. On the upside, the depth is just enough for the camera with a 20mm pancake fitted.

Overall this pouch is well made and well padded. The back has a Velcro strap for attaching it to Crumpler backpacks, but it can be attached quite well to OMM packs with a bit of string and some creative knotting.

The main downside is that the bag is not even remotely rainproof. Also, the top zip has two sliders, which rattle annoyingly when running, so I promptly removed one of them. With that modification, I have happily run hundreds of miles with it.

by tf at October 09, 2017 01:39 PM

September 07, 2017

Tomas Frydrych

Thoughts on the Dumyat Path

Thoughts on the Dumyat Path

If, like me, you thought we saw the last of the heavy machinery on Dumyat, you were wrong. In the last few days diggers have arrived again to (at the expense of SP Energy Networks) graciously bestow upon us a new path from the Sheriff Muir road car park to the very summit.

Updated 9/9/2017, 09:15; see the end. Formal complaints to be addressed to SPEN on customercare@spenergynetworks.com

In broad strokes, the situation as it emerged is this: when SPEN was granted permission for the Beauly power line, it came on the condition that they would do some 'good work' for the locals in return; in the Stirling case this happens to include work on the Dumyat path.

That the main path is in need of some attention, and has been for some time, is not something I would dispute. There is a significant amount of erosion taking place, which I have written about at length before (complete with 60+ images documenting the erosion patterns). But what is happening on Dumyat just now is not the answer. As I see it, there are two big problems here: the contractor's approach, and the lack of understanding how the hill is used.

The contractor is rather heavy handed and appears ill prepared. There is an apparent lack of proper planning (let's just bring a big digger, that will do it), a lack of understanding of the geology of the hill (didn't expect it to be this 'rocky', doh), and a lack of any sympathy for the natural features of the landscape (levelling uneven sections of the exposed bedrock, really?!). SNH has guidelines on how upland paths should be constructed, and this is not it.

The extent, and progress, of the erosion on the hill varies along its length, depending on the gradient and on what is found immediately below the surface. On the steep sections, in some cases the erosion exposes very loosely bound rock and/or gravel deposits, which then suffer from bad water run-off. These are the places that most require some stabilisation and mitigation, but in fact this is mainly limited to two locations, both on the upper part of the hill (to be precise, around NS 8278 9772 and NS 8352 9763). These places would benefit from some drainage work, and perhaps relocation of the path, but it needs to be done sensitively and with care, not with a bulldozer.

In other parts of the steep ground, the erosion relatively quickly exposes bare, but solid, bedrock. While it's not pretty when it is happening, it simply stops there once the rain cleans up the rock. Yes, if you have been going up this hill for many years, the path has changed dramatically in these places. But it is questionable whether any intervention will achieve anything meaningful here. For example, the contractors seem set on evening out the level exposed section of bedrock around NS 8157 9788 with loose soil. It is not clear to me what the objective of that is, and why such resurfacing is needed at all -- this part has remained stable for many years.

On the easier angled sections the path suffers limited water run-off. The damage here falls into two main classes. There are some boggy areas in the vicinity of natural springs (notably NS 8270 9777 and NS 8319 9774). These would benefit from boardwalks being constructed; the alternative is re-routing the path, but on that see below.

Apart from these natural bogs, the damage on the easily angled parts of the path is almost entirely due to soil being moved by feet and wheels; the effect of water run-off is minimal. As such the path tends to broaden (see the link above for more on this), but remains stable in terms of its depth. It is, again, arguable whether these parts will benefit from the work being done in any way. The only answer here would be confining the traffic to a narrow corridor, and that brings me to the second problem with the work being undertaken.

It would appear that whoever approved the current solution has absolutely no understanding of how the hill is used. Many of the regular local users are quite happy with the hill in its rugged semi-natural state, and they will more likely than not avoid the new path. This will include many of the local hill runners, for whom the current landscape offers excellent and easily accessible training ground. And it will include the mountain bikers, who really happen to like the hill the way it is, and even the way in which it evolves (mountain biking nowadays is generally not a means of travelling distances; rather, it is about the challenge 'under the wheels').

And here lies the main problem. If the objective of this exercise is to provide the good citizens of Stirling with easy, all-ability access to Dumyat, then the contractors are following the wrong line. There is much to be said for such a path, but it would need to follow the natural contours of the hill. Such a path would have the benefit of separating the descending bike traffic from those on the path. But sanitising the current path along its length will simply result in much of the current traffic being shifted to its immediate sides, and the erosion will continue spreading.

When path work is done without understanding, or while ignoring, the mountain bike use case, it will fail to relieve the erosion pressures; there are examples of this emerging elsewhere (Ben Lawers, Cairngorms). On Dumyat the bike use is well established; people have been riding bikes here for as long as mountain bikes have existed, i.e., for over 20 years. It is wholly unjustifiable not to take them into account.

What is more, like it or not, mountain bikes are part of Scotland's outdoor landscape, and they are here to stay, accounting for a significant chunk of Scotland's tourism revenue. Dumyat is a fairly insignificant knoll above Stirling, but it foreshadows issues that are emerging elsewhere in Scotland's bigger hills. Mountain biking is no longer the niche pursuit it once was, and we need to start seriously talking about how it fits into the outdoor pursuit family and into our hills.

Update 8 Sep 2017

So, SPEN has now released a PR statement about the work, which includes a picture of a short segment of a path upgrade on Ben Vorlich, and states:

SP Energy Networks is undertaking works to sensitively restore the existing Dumyat Hill and Cocksburn Reservoir Paths. This project will employ established upland path techniques to create a naturally formed route allowing areas of erosion to organically regenerate.

The works will form an entirely natural upland path developed in soil and stone ... This will help to prevent further severe erosion ... The aim is not to create a formal path but to replicate the existing path using the same materials in a form that will support ever increasing users and user groups visiting the area.

(Emphasis mine.)

I'll simply invite the reader to compare the PR speak and the imagery with what is, in fact, happening on Dumyat:

Sensitive use of heavy machinery:
Thoughts on the Dumyat Path

Established upland path techniques:
Thoughts on the Dumyat Path

Not a formal, but entirely natural, path:
Thoughts on the Dumyat Path

Controlling severe erosion (things are looking just great after one afternoon of rain):
Thoughts on the Dumyat Path

This needs to stop now. If, like me, you are concerned about what is happening on Dumyat, please send a formal letter of complaint with your concerns to SPEN on customercare@spenergynetworks.com.

Update 8 Sep 2017, 23:34

I have just returned from a brief visit to Dumyat, and, as hard as it is to believe, things have taken a further turn for the worse during today, as the following images illustrate.

The first image shows the start of the track. An attempt has been made to neaten it up by laying down bits of turf along its sides. However, it should be noted that there is no topsoil present here. The area was exposed down to bedrock, which during the Denny - Beauly construction was levelled out using grey industrial hardcore. The orange path in this picture is barely a couple of inches of soil that appears to have been scraped from the hollow on the left of the image, and the bits of turf were removed from elsewhere and simply laid on the old hardcore:

Thoughts on the Dumyat Path

The next image shows the old hardcore and how the turf has been laid onto it. Considering we are now outside of the growing period, it is very unlikely that much of this will survive the wet winter months.

Thoughts on the Dumyat Path

The hollow that seems to have been used to excavate the soil for this section of the path, covered in badly damaged turf:

Thoughts on the Dumyat Path

The next image shows the start of the first rise. The original surface here was bedrock, part of which is still visible left of the path, and a thin layer of intermittent turf, forming a pleasant green slope. The turf has been stripped, and the bedrock covered with a thin layer of topsoil brought from elsewhere:

Thoughts on the Dumyat Path

Alongside the entire length of the track being worked on there is extensive damage to the turf, which in places has been intentionally stripped for no obvious reason. The area in the first picture is of particular concern, because the loosely bonded gravel underneath has been exposed and will be subject to rapid water erosion -- this is the primary erosion pattern on the hill.

Thoughts on the Dumyat Path

Looking back down the initial rise:

Thoughts on the Dumyat Path

And the damage to the side of the track:

Thoughts on the Dumyat Path

This area originally contained a natural rock step. This has been incomprehensibly levelled out with a large amount of material excavated from the left of the track:

Thoughts on the Dumyat Path

The next image shows the excavation area. As a result of the excavation the side of the hill has been exposed to water run-off, and will deteriorate rapidly during the winter months:

Thoughts on the Dumyat Path

Another natural rocky feature being levelled out; the material for this seems to have been simply dug up beside it, leaving deep ditches on both sides. The track at this point is somewhere in the region of 7-8m wide:

Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path

The next image shows the same area as the second image I posted earlier today; in the course of the day the contractor piled up a large amount of topsoil in this area, obliterating the natural step at the end of this section:

Thoughts on the Dumyat Path

Detail of the fill, this is well over a foot in depth:

Thoughts on the Dumyat Path

Just for reference, this is the old path that we are fixing here:

Thoughts on the Dumyat Path

And the area the top soil for the fill was excavated from:

Thoughts on the Dumyat Path

The contractor is McGowan Ltd, and they seem to have a track record:

Thoughts on the Dumyat Path

In case you find these images disturbing, let me assure you that they in fact don't do justice to the ugly reality; you might want to see for yourself if you are local.

Update 9 Sep 2017, 9:15

It appears the local representative of Cycling UK was given access to the plans for the path, available here. What is clear is that the work undertaken is not in keeping with the agreed plans. Notably, the section covered by the following images was supposed to be 'hand-built only', using stone pitching; what a mess:

Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path

I am also concerned that the plan for the natural bog area around NS 8309 9761 is 'raised hardcore'.

Formal complaints to be addressed to SPEN on customercare@spenergynetworks.com.

by tf at September 07, 2017 10:18 AM

September 04, 2017

Tomas Frydrych

GPS Accuracy and the Automation Paradox

GPS Accuracy and the Automation Paradox

It's been a busy summer for the UK's MRTs. Not a week has gone by without someone getting lost in our hills, without yet another call to learn how to use a map and compass and not to rely on phone apps. This in turn elicits other comments, that the problem is not in the use of digital tools per se, but in not being able to navigate. True as this is, the calls for learning traditional navigation should not be dismissed as Luddite, for not being able to navigate competently and the use of digital technologies are intrinsically linked.

GPS Accuracy

Before getting onto the bigger problem, the question of GPS accuracy is perhaps worth digressing into. Our perception of what the GPS in our phone can do for us is skewed by our urban experience. We use mapping applications daily to locate street addresses, and we have got used to how accurate these things are in that context.

However, many of us do not appreciate that, because GPS alone does not work well at all in cities, mobile phones use so-called Assisted GPS. With A-GPS the accurate location is derived from the known positions of mobile phone masts and the presence of domestic wifi IDs, which street mapping vehicles collect and store in massive databases. And, obviously, A-GPS only works in cities and with a working Internet connection (which is why your phone will complain when you use the GPS in airplane mode).

So how accurate is GPS alone?

First, there is the accuracy of the GPS service per se. This is the simple part: the US Government undertakes to operate the service in such a manner that a user in the worst location relative to the current position of the satellites can achieve grid accuracy of ±17m and altitude accuracy of ±37m in 95% of cases. You can often get better results, but you need to allow for even bigger errors 5% of the time.

Then there is how well the device on the ground can access and process that service. The above numbers assume a clear view of the sky down to 5° above the horizon, allowing for the acquisition of 6 different satellite signals. They assume no weather interference, and a good quality receiver that makes full use of all the available information.

In the hills the real conditions are often nowhere near optimal, and the tiny GPS devices in watches and phones, with their tiny aerials, are not of the requisite standard. The real errors will be, possibly significantly, bigger (e.g., I have seen an error of some 200m on an iPhone 5 on one occasion in the upper Glen Nevis).

So how good are these numbers from the perspective of mountain navigation?

The altitude resolution error is potentially around a major contour line difference, and it deteriorates rapidly as the number of visible satellites drops. As such, GPS-estimated altitude is not much use for accurate navigation. (Altitude is very useful for mountain navigation, and a much better resolution is achievable with a barometric altimeter, when used correctly.)

But if you compare the location accuracy to what a moderately competent navigator in moderately challenging circumstances will be able to estimate without the GPS, the GPS wins hands down; this is what it's designed for. Nevertheless, a GPS-based location cannot be assumed to be pinpoint accurate. In complex terrain the errors can be navigationally significant, and not good enough to keep me safe -- there are many locations in the mountains where if I overshoot my target by 30m I will die.

This is, of course, no different than following a compass bearing. Neither the compass nor the GPS are magic bullets that will keep me safe. But with GPS we seem to be conditioned to trust the technology more than it merits. Competent navigation comes down not to the tools, but to making sound judgements based on the information provided by the tools, whether it's map, compass, or GPS. And that brings me to the Automation Paradox.

The Paradox of Automation

The Automation paradox can be formulated in different ways, but it comes down to this:

Automation leads to degradation of operator skills, while, at the same time, the skills required to handle automation failures are frequently considerably higher than average.

In an industrial field, the introduction of automation largely replaces a workforce of skilled craftsmen/women with a low-skilled one. This is unavoidable; the craft skills come from the practice of the craft. The automation of the process does away with the practice, and in doing so removes the opportunities for practising the skills.

But the bigger problem with automation is this: when automated processes fail and require manual intervention, they tend to do so in atypical, complex corner cases which require a higher level of skill to handle, skill that the workforce does not have. In industrial fields this leads to the development of a small number of exceptionally skilled (and highly paid) experts who get called in when the automatic process fails.

Navigating by GPS is subject to the Automation Paradox; it takes away the grind of reading maps, taking bearings, pacing and timing distances. This is great while it goes well, for it leaves more time to enjoy the great outdoors, and so we do. But in doing so it deprives us of the opportunities to develop the rudimentary navigation skills.

But when it fails, there is every chance it will not be on a nice sunny day with cracking visibility. It will be when the weather is awful enough to interfere with the radio waves, or in a location where no satellites are visible. The competent navigator will simply turn the unit off and carry on, for she has other tools in the bag and knows how to use them. The rest will have to call in the experts to get them off the hill (assuming their phone has a signal).

And this is the problem with the GPS. It's not that it's not a useful tool; it is, in the hands of a competent navigator. But that competency is developed through deliberate and ongoing practice of the basics. What it does not come from is following a GPS track downloaded from somewhere on the Internet, and let's not delude ourselves, that's how the GPS generally gets used.

P.S. If you live in the Central Belt and don't know where to start, I offer Basic Mountain Navigation and Night-Time Navigation courses.

by tf at September 04, 2017 11:05 AM

August 11, 2017

Emmanuele Bassi

GUADEC 2017

Another year, another GUADEC — my 13th to date. Definitely not getting younger, here. 😉

As usual, it was great to see so many faces, old and new. Lots of faces, as well; attendance has been really good, this year.

The 20th anniversary party was a blast; the venue was brilliant, and watching people going around the tables in order to fill in slots for the raffle tickets was hilarious. I loved every minute of it — even if the ‘90s music was an assault on my teenage years. See above, re: getting older.

The talks were, as usual, stellar. It’s always so hard to choose from the embarrassment of riches that is the submission pool, but every year I think the quality of what ends up on the schedule is so high that I cannot be sad.

Lots and lots of people were happy to see the Endless contingent at the conference; the talks from my colleagues were really well received, and I’m sure we’re going to see even more collaboration spring from the seeds planted this year.


My talk about continuous integration in GNOME was well-received, I think; I had to speed up a bit at the end because I lost time while connecting to the projector (not enough juice when on battery to power the HDMI-over-USB-C connector; lesson learned for the next talk). I would have liked to get some more time to explain what I’d like to achieve with Continuous.

Do not disturb the build sheriff

I ended up talking with many people at the unconference days, in any case. If you’re interested in helping out the automated build of GNOME components and to improve the reliability of the project, feel free to drop by on irc.gnome.org (or on Matrix!) in the #testable channel.


The unconference days were also very productive for me. The GTK+ session was, as usual, a great way to plan ahead for the future; last year we defined the new release cycle for GTK+ and jump-started the 4.0 development cycle. This year we drafted a roadmap with the remaining tasks.

I talked about Flatpak, FlatHub, Builder, performance in Mutter and GNOME Shell; I wanted to attend the Rust and GJS sessions, but that would have required the ability to clone myself, or be in more than one place at once.

During the unconference, I was also able to finally finish the GDK-Pixbuf port of the build system to Meson. Testing is very much welcome, before we bin the Autotools build and bring one of the oldest modules in GNOME into the future.

Additionally, I was invited to the GNOME Release Team, mostly to deal with the various continuous integration build issues. This, sadly, does not mean that I’m one step closer to my ascendance as the power-mad dictator of all of GNOME, but it means that if there are issues with your module, you have a more-or-less official point of contact.


I can’t wait for GUADEC 2018! See you all in Almería!

by ebassi at August 11, 2017 01:33 PM

August 10, 2017

Emmanuele Bassi

Dev v Ops

In his talk at the 2017 GUADEC in Manchester, Richard Brown presented a set of objections to the current trend of new packaging systems — mostly AppImage, Snap, and Flatpak — from the perspective of a Linux distribution integrator.

I’m not entirely sure he managed to convince everybody in attendance, but he definitely presented a well-reasoned argument, steeped in history. I freely admit I went in not expecting to be convinced, but fully expecting to be engaged, and I can definitely say I left the room thoroughly satisfied and full of questions on how we can make the application development and distribution story on Linux much better. Talking with other people involved with Flatpak and Flathub, we have already identified various places where things need to be improved, and how to set up automated tools to ensure we don’t regress.

In the end, though, all I could think of in order to summarise it when describing the presentation to people that did not attend it, was this:

Linux distribution developer tells application and system developers that packaging is a solved problem, as long as everyone uses the same OS, distribution, tools, and becomes a distro packager.

Which, I’m the first to admit, seems to subject the presentation to impressive levels of lossy compression. I want to reiterate that I think Richard’s argument was presented much better than this; even if the talk was really doom and gloom predictions from a person who sees new technologies encroaching in his domain, Richard had wholesome intentions, so I feel a bit bad about condensing them into a quip.

Of course, this leaves me in quite a bind. It would be easy — incredibly easy — to dismiss a lot of the objections and points raised by Richard as a case of the Italian idiom “do not ask the inn-keeper if the house wine is good”. Nevertheless, I want to understand why those objections were made in the first place, because it’s not the last time we are going to be hearing them.

I’ve been turning an answer to that question over in my head for a while now, and I think I finally came up with something that tries to rise to the level of Richard’s presentation, in the sense that I tried to capture the issue behind it, instead of just reacting to it.


Like many things in tech, it all comes down to developers and system administrators.

I don’t think I’m being controversial, or exposing some knowledge for initiates, when I say that Linux distributions are not made by the people that write the software they distribute. Of course, there are various exceptions, with upstream developers being involved (by volunteer or more likely paid work) with a particular distribution of their software, but by and large there has been a complete disconnect between who writes the code and who packages it.

Another, I hope, uncontroversial statement is that people on the Linux distribution side of things are mostly interested in making sure that the overall OS fits into a fairly specific view of how computer systems should work: a central authority that oversees, via policies and validation tools that implement those policies, how all the pieces fit together, up to a certain point. There’s a name for that kind of authority: system administrators.

Linux distributions are the continuation of system administration policies via other means: all installed Linux machines are viewed as part of the same shared domain, with clear lines of responsibility and ownership that trace from a user installation to the packager, to the people that set up the policies of integration, and which may or may not involve the people that make the software in the first place — after all, that’s what distro patches are for.

You may have noticed that in the past 35 years the landscape of computing has been changed by the introduction of the personal computer; that the release of Windows 95 introduced the concept of a mass marketable operating system; and that, by and large, there has been a complete disintermediation between software vendors and users. A typical computer user won’t have an administrator giving them a machine with the OS, validating and managing all the updates; instead of asking an admin to buy, test, and deploy an application for them, users went to a shop and bought a box with floppies or an optical storage — and now they just go to online version of that shop (typically owned by the OS vendor) and download it. The online store may just provide users with the guarantee that the app won’t steal all their money without asking in advance, but that’s pretty much where the responsibility of the owners of the store ends.

Linux does not have stores.

You’re still supposed to go ask your sysadmin for an application to be available, and you’re still supposed to give your application to the sysadmin so that they can deploy it — with or without modifications.

Yet, in the 25 years of their history, Linux distributions haven't managed to convince the developers of

  • Perl
  • Python
  • Ruby
  • JavaScript
  • Rust
  • Go
  • PHP
  • insert_your_language_here

applications to defer all their dependency handling and integration to distro packagers.

They have just about managed to convince C and C++ developers, because the practices of those languages are so old and entrenched, the tools so poor, and because they share part of the same heritage; and TeX writers, for some weird reason, as you can witness by looking at how popular distributions package all the texlive modules.

The crux is that nobody, on any existing major (≥ 5% of market penetration) platform, develops applications like Linux distributors want them to. Nobody wants to. Not even the walled gardens you keep in your pocket and use to browse the web, watch a video, play a game, and occasionally make a phone call, work like that, and those are the closest thing to a heavily managed system you can get outside of a data center.

The issue is not the “managed by somebody” part; the issue is the inevitable intermediary between an application developer and an application user.

Application developers want to be able to test and have a reproducible environment, because it makes it easier for them to find bugs and to ensure that their project works as they intended; the easiest way to do that is to have people literally use the developer’s computer — this is why web applications deployed on top of a web browser engine that consumes all your CPU cores in a fiery pit are eating everybody’s lunch, and why software as a service even exists. The closest thing application developers have found to shipping their working laptop to the users of their applications without physically shipping hardware is to give them a read-only file system image that they have built themselves, or a list of dependencies hosted on a public source code repository that the build system will automatically check out prior to deploying the application.

The Linux distribution model is to have system administrators turned packagers control all the dependencies and the way they interact on a system; check all the licensing terms and security issues, when not accidentally introducing them; and then fight among themselves on the practicalities and ideologies of how that software should be distributed, installed, and managed.

The more I think about it, the less I understand how that ever worked in the first place. It is not a mystery, though, why it’s a dying model.

When I say that “nobody develops applications like the Linux distributions encourage and prefer” I’m not kidding around: Windows, macOS, iOS, Electron, and Android application development is heavily based on the concept of a core set of OS services; parallel-installable blocks of system dependencies shipped and retired by the OS vendor; and a bundling system that allows application developers to provide their own dependencies, and control them.

Sounds familiar?

If it does, it’s because, in the past 25 years, every other platform (and I include programming languages with a fairly comprehensive standard library in that definition, not just operating systems) has implemented something like this — even in free and open source software, where this kind of invention mostly exists both as a way to replicate Linux distributions on Windows, and to route around Linux distributions on Linux.

It should not come as a surprise that there’s going to be friction; while for the past two decades architects of both operating systems and programming languages have been trying to come up with a car, Linux distributions have been investing immeasurable efforts in order to come up with a jet fueled, SRB-augmented horse. Sure: it’s so easy to run apt install foo and get foo installed. How did foo get into the repository? How can you host a repository, if you can’t, or don’t want to host it on somebody else’s infrastructure? What happens when you have to deal with a bajillion, slightly conflicting, ever changing policies? How do you keep your work up to date for everyone, and every combination? What happens if you cannot give out the keys to your application to everyone, even if the application itself may be free software?

Scalability is the problem; too many intermediaries, too many gatekeepers. Even if we had a single one, that’s still one too many. People using computers expect to access whatever application they need, at the touch of a finger or at the click of a pointer; if they cannot get to something in time for the task they have to complete, they will simply leave and never come back. Sure, they can probably appreciate the ease of installing 30 different console text editors, 25 IRC clients, and 12 email clients, all in various state of disrepair and functionality; it won’t really mean much, though, because they will be using something else by that time.

Of course, now we in the Linux world are in the situation of reimplementing the past 20 years of mistakes other platforms have made; of course there will be growing pains, and maybe, if we’re careful enough, we can actually learn from somebody else’s blunders, and avoid falling into common traps. We’re going to have new and exciting traps to fall into!

Does this mean it’s futile, and that we should just give up on everything and just go back to our comfort zone? If we did, it would not only be a disservice to our existing users, but also to the users of every other platform. Our — and I mean the larger free software ecosystem — proposition is that we wish all users to have the tools to modify the software they are using; to ensure that the software in question has not been modified against their will or knowledge; and to access their own data, instead of merely providing it to third parties and renting out services with it. We should have fewer intermediaries, not more. We should push for adoption and access. We should provide a credible alternative to other platforms.

This will not be easy.

We will need to grow up a lot, and in little time; adopt better standards than just “it builds on my laptop” or “it works if you have been in the business for 15 years and know all the missing stairways, and by the way, isn’t that a massive bear trap covered with a tarpaulin on the way to the goal”. Complex upstream projects will have to start caring about things like reproducibility; licensing; security updates; continuous integration; QA and validation. We will need to care about stable system services, and backward compatibility. We will not be shielded by a third party any more.

The good news is: we have a lot of people that know about this stuff, and we can ask them how to make it work. We can take existing tools and make them generic and part of our build pipeline, instead of having them inside silos. We can adopt shared policies upstream instead of applying them downstream, and twisting software to adapt to all of them.

Again, this won’t be easy.

If we wanted easy, though, we would not be making free and open source software for everyone.

by ebassi at August 10, 2017 05:55 PM

July 26, 2017

Tomas Frydrych

The Unfinished Business of Stob Coir an Albannaich

The Unfinished Business of Stob Coir an Albannaich

I have a confession to make: I find great, some might think perverse, pleasure at times in bypassing Munro summits. It is the source of profound liberation -- once the need to 'bag' is overcome, a whole new world opens up in the hills, endless possibilities for exploring, leading to all kinds of interesting and unexpected places. Plans laid out in advance become mere sketches, to be refined and adjusted on the go and on a whim.

But I prefer to make such impromptu changes because I can rather than because my planning was poor, or because of factors beyond my control (of course, the latter often is just a euphemism for the former!); my ego does not relish that. Which is why today I have a firmer objective in mind than usual, namely Stob Coir an Albannaich.

Let me rewind. Some time back, while poring over the maps, the satisfying line of Aonach Mor (the north spur of Stob Ghabhar) caught my eye. And so just over a week ago, I set off from Alltchaorunn in Glen Etive with the intention to take Aonach Mor onto Stob Ghabhar, and then follow the natural ridge line over Stob a' Bhruaich Leith to Meall Odhar, Meall nan Eun, Meall Tarsuinn and onto Stob Coir an Albannaich, and then back over Beinn Ceitlein. About 27km with 2,100m vertical, so I am thinking 6 hours.

But for the first half of the day there is thick low cloud hanging about, and above 500m visibility is very poor, wind quite strong. I don't mind being out in such conditions. That is often when the hills are at their most magical, the brief glimpses of the hidden world down below more memorable than endless blue sky.

The Unfinished Business of Stob Coir an Albannaich

Also, it's not just me who can't see, and I generally find I have many more close encounters with wildlife in conditions such as these; today is no exception, and I get to see quite a few small waders, and even a curlew. But I end up moving by the needle all the way to Meall Odhar, making slow progress.

The cloud clears just as I am having my lunch on the boundary wall below Meall Odhar. The second half of the day is a cracker. I quickly pick up the walkers' path and jog to the Meall nan Eun summit, where four seniors are enjoying the sunshine. They tell me not to stop, that the sight of me is too demoralising; I am thinking to myself that I hope I'll still be able to get up the hills when I reach their age, for they must have thirty years on me. The usual Scottish banter; an inherent part of the hill experience, as much as the rain, the bog and the midges.

From here onwards the running is good and flowing. As I am about to start the climb onto Albannaich, I do some mental arithmetic. I am five hours in, the planned descent from Albannaich looks precarious from here, and I have no suntan lotion. I decide to cut my losses, run down the glorious granite slabs below Meall Tarsuinn, and return via the Allt a' Chaorainn glen.

It turns out to be a good call, as it still takes me two hours to get back, and I pick up a touch of sun on my neck. Nevertheless, it leaves me with a sense of unfinished business.

And so this week I am back. Not the same route, obviously. Rather, I set off from Victoria Bridge, gain the natural ridge line via Beinn Toaig's south west spur, planning to descend Albannaich either over Cuil Ghlas, or Sron na h-Iolaire. It's a wee bit bigger outing than last week (I am expecting eight hours), but there is nothing like responding to a 'failure' with a little bit more ambition!

The weather is glorious, if anything just a bit too hot, views all around. Yet, looking over Rannoch Moor it's impossible not to reflect on how denuded of trees this landscape is. Just a small woodland around the Victoria Bridge houses, a couple of small Sitka plantations, and an endless sea of bright green grass. During the autumn and winter months there is a little bit more colour, but this time of year the monotonous green drives home to me how little variety there is in the vegetation here.

I make good progress along the ridge (no navigation required), skip (with the aforementioned degree of satisfaction) Meall nan Eun summit and arrive on Stob Coir an Albannaich in exactly five hours. The views are breathtaking; in spite of the heat there is no haze, and Ben Nevis can be seen clearly to the north.

The Unfinished Business of Stob Coir an Albannaich

After a brief chat with a couple of fellow hillgoers I decide to descend down the Cuil Ghlas spur, where slabby granite promises fun.

The heat is beginning to get to me, and I can't wait to take a dip in the river below. The high tussocky grass that abounds on these hills makes the descent from the ridge to Allt Coire Chaorach awkward, and the stream is not deep enough for a dip, but I at least get some fresh water and soak my cap. Not much farther down the river bed becomes a long section of granite slab; the water level is low, and so a lot of it is dry to run on.

As an unexpected bonus, at the top of the slabby section is a beautiful pool: waist deep, with a smooth granite bottom, and even a set of steps in. I sit in for a while cooling down, then, refreshed, jog down the slabs. When they run out, I stay in the riverbed; hopping from boulder to boulder is a lot more fun than the grassy bank, even if it's not any faster.

The floor of the glen is boggy and covered in stumps of ancient Caledonian pine; on a day like this, it is hard not to pine for their shade. There is some new birch planted on the opposite side of the glen; perhaps one day there will be more.

34km / 2,300m ascent / 8h

by tf at July 26, 2017 05:41 PM

July 10, 2017

Tomas Frydrych

Eastern Mamores and the Grey Corries

Eastern Mamores and the Grey Corries

The Mamores offer some exceptionally good running. The landscape is stunning, the natural lines are first rate, and the surface is generally runner-friendly. The famed (and now even raced) Ring of Steall provides an obvious half day outing, but I dare to say the Mamores have a lot more to offer! On the western end it is well worth venturing all the way to Meall a'Chaorain for the remarkable change in geology and the unique views of Ben Nevis, but it is the dramatic 'loch and mountain' type of scenery (of a quality rare this far south) of the eastern end that is the Mamores' true crown jewel.

I have two days to play with, and rather than spending them both in the Mamores, I decide to combine the eastern Mamores with the Grey Corries that frame the other side of Glen Nevis. The forecast is not entirely ideal: it's looking reasonable for Saturday, but the forecasters cannot agree on Sunday, most predicting heavy rain, and some no rain at all. Unfortunately, the eastern Mamores are subject to stalking, and this is probably my last chance of the summer to visit them -- I decide (as you do) to bet on the optimistic forecast.

Day 1 -- Eastern Mamores (20km, 2,200m ascent, 6h)

As I set off from Kinlochleven, it's looking promising. It is not raining, I can see all the way down Loch Leven, and (at the same time!) the tops of the hills above me. The sun even breaks through occasionally, reminding me I neither put on, nor brought with me, suntan lotion. I follow the path up Coire na Ba, enjoying the views. My solitude is interrupted by a single mountain hare. This is red deer country, and after recent trips to Assynt and the Cairngorms, it is impossible not to notice the relative paucity of life in these hills.

As I climb higher, the wind starts picking up, but it is not cold, and once I put on gloves I am comfortable enough in shorts and a shirt. The two summits of Na Gruagaichean are busy, the views braw. I carry on toward Binnein Mor, enjoying the cracking ridge line.

At Binnein Mor summit the well-defined path comes to an end; the Munroists venture no further. The short section beyond is the preserve of independent minds (and the Ramsay Round challengers). I take a faint path descending precariously directly off the summit, but this turns out to be a mistake. It is steep, the grip is poor, and slipping is not an option. I expect a better (and much faster) way would have been to take the north ridge, then cut down into the coire.

I end up traversing out of the precarious ground into the coire. There is another runner coming up the other way. We stop for a brief chat; he was planning to attempt the Ramsay Round solo this weekend, but decided to postpone in view of the overnight forecast. I have a lot of time for anyone taking on Ramsay solo, that is one very serious undertaking; I hope the guy gets his weather window soon.

I stop at the two small lochans to get some water and to have a couple of oatcakes and a chunk of Comte for lunch (my staple long run food), then carry on past the bigger lochan up Binnein Beg. The wind has picked up considerably, and near the summit it is probably exceeding 40mph. I do not linger.

The gently sloping old stalkers' path leading eventually into Coire an Lochain is delightful, my eyes feasting on the scenery on offer -- I cannot imagine anywhere else I would rather be just now. I follow the narrow track down to Allt Coire a'Bhinnein.

Eastern Mamores and the Grey Corries

The large coire formed by the two Binneins and the two Sgurrs is a classic example of glacial landscape. The retreating glacier left behind huge moraine deposits, forming spectacular steep ridges and dykes, the true nature of which is exposed at a couple of places where the vegetation has eroded, and the very ancient record of history is laid bare for all to see. The gravelly river itself, too, reminds me more of the untidy watercourses of the Alps than your typical Scottish mountain stream.

Rather than following the zigzag path, I take one of the moraine ridges into Coire an Lochain, then head up Sgurr Eilde Mor. The geology changes, the red tones of the hill due to the significant presence of red granite. I find a small hollow sheltered from the worst of the wind and have another couple of oatcakes and Comte, then brace myself for the wind on the summit.

The gently sloping north east ridge of Sgurr Eilde Mor again lies outwith the Munroist lands, and there is no path on it to speak of. It's a glorious afternoon, the sun is out, the sky is blue, and as I descend the sodden and slimy slopes of Meall Doire na h-Achlais toward the river, I spot a couple of walkers sunning themselves on the beach below near the river junction. We say hello as I wade across and follow the watercourse up north.

It's time to look for a suitable campsite. My original plan was to camp a bit further on in the bealach between Meall a'Bhurich and Stob Ban, but it's far too windy for a high level camp. The opportunities down here are limited, the ground is saturated with water and the grass is very tussocky, but I find a good spot on a large flat sandy deposit inside the crook of the river. It's about 18 inches above the river level, and from the vegetation it would appear it does not flood too often, perhaps only during the snow melt.

The rain arrives not much later, and I have an early night. After getting an updated forecast on the InReach SE (heavy rain throughout the night and tomorrow), I make a slight revision to tomorrow's plans (deciding to leave out the two Sgurr Choinnichs), then listen to Michael Palin narrating Ian Rankin's Knots and Crosses; the Irish-sounding fake Scottish accents are mildly amusing, and I wonder why they could not get a native speaker to do the reading, but the story is too good to be spoilt by that.

It rains steadily all night and the sound of the river gradually changes. I eventually poke my head out to check on the water level, but there are no signs of it spilling out yet, and so I continue to sleep soundly till half seven.

Day 2 -- Grey Corries (27km, 1,750m ascent, 8h)

I start the day by knocking over the first pot of boiling water, but eventually get my coffee and porridge. The river rose by about six inches during the night. The rain has turned into light drizzle; the cloud base is at no more than 600m. It is completely still, and as I faff about, the midges are spurring me on.

I soon head into the clouds, up Meall a'Bhuirich, where I come across a family of ptarmigan, apart from the numerous frogs, the first real wildlife since the hare I saw yesterday morning. The chicks are quite grown up by now, and they seem unfazed by my presence. At the summit of the roundish hill I have to get the compass out, the visibility is no more than 30 yards and it would be easy to descend in a completely wrong direction.

More ptarmigan on Stob Ban, but no views from the summit. I was planning to descend directly into the bealach above Coire Rath, but this side of the hill is very steep and in the poor visibility I am unable to judge if this is, in fact, possible. I decide to take the well defined path heading east instead, and then traverse the north side of the hill at around the 800m contour line.

This proves to be a good choice and I even pick up a faint, runnable, path traversing at around 780m; perhaps an old stalkers' path. The cloud has lifted a bit and I am now just around its bottom edge; the views down into the coire are absolutely magical. While there is much to be said for sunshine, I have found over the years that some of the most precious moments in the hills come when there is none.

Eastern Mamores and the Grey Corries

I stop at the wee lochan for a bite to eat, and to have a drink, as this is the last water source until after the Grey Corries.

The Grey Corries quartzite ridge line is very fine and dramatic. The low cloud makes it even more so. While it would be hard to get lost here, I realise this is a great opportunity to practice some micro navigation, and so carefully track my progress along the ridge. The running is hard and very technical, and the wet quartzite is rather slippery. I make a mental note that if I ever decide to attempt the Ramsay solo, I should do so in a clockwise direction, so as to get the most serious and committing part of it out of the way on fresh legs (sod the aesthetics of finishing with the Ben!).

My plan is to go as far as the bealach before Sgurr Choinnich More and then descend through Coire Easain into Glen Nevis. The final part of the Stob Coire Easain south west ridge proves quite tricky. Pure, white, blocky quartzite abounds, and in the present conditions it is as slippery as a Zamboni-smoothed ice rink -- I am taking my time not to inadvertently reach the coire head first. An unkindness of six or so ravens is circling noisily around me, and somehow in these eerie conditions, that group name seems more than appropriate.

Near the very end of the ridge, the quartzite seems to turn pink, almost red. On close inspection, the colour is due to a fine layer of lichen (my uninformed guess is Belonia nidarosiensis, but I could be wrong, not least since the alternative name Clathroporina calcarea suggests a somewhat different habitat). Under one of these large blocks there is a tiny spruce seedling, trying to carve a life for itself. I wonder where it came from, perhaps the seed arrived on the wind, perhaps attached to the sole of someone's boots; either way, it would have come some distance. I wish it good luck, for it will need it.

A quick stop for water and the last of my oatcakes and cheese, and then down Coire Easain. Up close and personal this turns out to be a real gem of a place. Whereas from above it has the appearance of a simple bowl, it is in fact a series of wet, bright green, terraces (sphagnum moss and frogs abound), with a myriad of small pools from which numerous quickly growing streams start their life, eventually joining up into the mighty Allt Coire Easain. There are a couple of large quartzite escarpments, the lower of which is clearly visible when looking up from Glen Nevis. With a little care, wet feet notwithstanding, this can be descended through the grassy break next to the east bank of the main stream.

Eastern Mamores and the Grey Corries

I cross the stream, and head south on a gently descending traverse, more or less aiming for Tom an Eite. The ground runs well, until it becomes more tussocky near the floor of the glen. I negotiate the edge of the peat hags by Tom an Eite, cross the beginnings of the Water of Nevis and start following Allt Coire a'Bhinnein along its western bank.

Initially, before a faint deer track appears, this is a bit of a trot through bog grass and peat hags strewn with numerous stumps of ancient Caledonian pine. I try to picture what it would have looked like before man came with an axe and a saw. It would have been quite a different, better, more vibrant, landscape. I am, inevitably, thinking of the previous weekend with Linda, the lovely running among the pines in upper Glen Quoich, and the encouraging progress of regeneration on Mar Lodge Estate. Perhaps one day we will see the pines returning here as well.

Nevertheless, this is yet another gem of a landscape. Midway up into the coire, for two, maybe three hundred yards, the river, at this point at least five yards wide already, is squeezed into a channel between two vertical plates of quartzite only a couple of feet apart. The effect is dramatic; I am peeking over the edge with a degree of unease, falling in would not bode well. I spot a ringed plover, and a short while later a ring ouzel -- it's good to see some other life than the red deer which abounds.

Just above this constriction in the river, Allt a' Gharbh Coire comes down on the right in a spectacular waterfall. I am taken aback by the sight, thinking this is, by far, the most dramatic waterfall I have seen in Scotland -- it is only a few weeks since I watched the famous Eas a' Chual Aluinn from the summit of the Stack of Glencoul, but the waterfall I am looking at just now is in a different league. (I am so gobsmacked by the sight it never occurs to me to take a photograph!)

Soon I am climbing up to Coire an Lochain for the second time this trip -- this time I take the zigzags, my legs are beginning to feel tired. But soon I am on the path that skirts Sgorr Eilde Beag, which provides a most enjoyable run back down to Kinlochleven. As I come around the 'corner' formed by the south ridge, I bump into a group of youngsters heading up to the coire on a (DoE?) expedition.

'Does it get flatter soon?!'

They are the first people I have seen, never mind met, since yesterday afternoon -- on reflection there are benefits to weather less than perfect after all!

by tf at July 10, 2017 09:13 PM

June 30, 2017

Chris Lord

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

by admin at June 30, 2017 12:02 AM

June 29, 2017

Tomas Frydrych

The Debt of Magic

The Debt of Magic

My gran married young, and was widowed young, at my current age. I have very few regrets in life, but not getting to know grandpa is one of them. He was a great lover of nature, a working man with little spare time, escaping into the woods with binoculars and a camera whenever he could. A passion borne out by countless strips of film left behind. As I am getting older I too am drawn into the woods, increasingly not for 'adventure', but for the tranquility and the sense of awe it invariably brings. I sense we were kindred spirits, but I can only imagine; he died before my third birthday and I have no memories of him at all.

That in itself I find strange, for I have some very early memories. Hiding a hairbrush in the oven, not older than two, the commotion as it was 'found' (providing extra aroma for the Sunday roast). Sitting on a potty on the balcony of our new flat, not yet three, scaffolding all around, watching a cheery brickie in a red and black striped shirt at work (gone now, a scaffolding collapse some years later). Only just turned three, standing in front of the maternity hospital with my dad, waving, my sister just born.

These are all genuine enough, but at the same time mere fragments, without context and continuity. My first real memories come from the summer after my fifth birthday: it's August and my gran and I are going on a holiday in the Krkonoše mountains. A train, a bus, then a two hour hike to Petrova Bouda mountain refuge, our home for the next two weeks, wandering the hills, armed with a penknife and walking sticks gran fashioned out of some dead wood.

For me this was the time of many firsts. I learned my first (and only) German, 'Nein, das ist meine!', as my gran ripped out our sticks from the hands of a lederhosen-wearing laddie a few years older than me; he made the mistake of laying claim to them while we were browsing the inside of a souvenir shop (I often think somewhere in Germany is a middle aged man still having night terrors). It was the only time ever I heard my gran speaking German; she was fluent, but marked by the War (just as I am marked by my own history).

Up there in the hills I had my first encounter with the police state, squatting among the blaeberries, attending to sudden and necessary business, unfortunately, in the search for a modicum of privacy, on the wrong side of the border. The man in uniform, in spite of sporting a Scorpion submachine gun (the image of the large leather holster forever seared into my memory) stood no chance, and retreated hastily as gran rushed to the rescue. She was a formidable woman, and I was her oldest grandchild. Funnily enough, I was to have a similar experience a decade later in the Tatras, bivvying on the wrong side of the border only to wake up staring up the barrel of an AK-47, but that was all in the future then, though perhaps that future was already being shaped, little by little.

It was also the first time I drank from a mountain stream. Crystal clear water springing out of a miniature cave surrounded by bright green moss, right by the side of the path. Not a well known beauty spot sought after by many, but a barely noticeable trickle of water on the way to somewhere 'more memorable'. We sat there having our lunch. Gran hollowed out the end bit of a bread stick to make a cup and told me a story about elves coming to drink there at night. We passed that spring several times during those days, and I was always hoping to catch a glimpse of that magical world. I still do, perhaps now more than ever.

Gran's ventures into nature were unpretentious and uncomplicated: she came, usually on her old bicycle, she ate her sandwiches, and she saw. And I mean, really saw. Not just the superficially obvious, but the intricate interconnections of life, the true magic. One of her favourite places was a disused sand pit in a pine wood a few miles from where she lived. We spent many a summer day there picking cranberries and mushrooms, and then, while eating our pieces, watched the bees drinking from a tiny pool in the sand. Years later, newly married, I took Linda there, and I recall how, for the first time, I was struck by the sheer ordinariness of the place. Where did the magic go?

The magic is in the eye of the beholder. It is always there, it requires no superhuman abilities, no heroic deeds, no overpriced equipment. But seeing is an art, and a choice. It takes time to develop, and determination to practice. Gran was a seer, and she set me on the path of becoming one; I am, finally, beginning to make some progress.

In the years to come there were to be many more mountain streams. Times when the magic once only imagined by my younger self became real, tangible, perhaps even character-building. Times more intense, more memorable in the moment, piling on like cards, each on top of the other, leaving just a corner here and a corner there to be glimpsed beneath. The present consuming the past with the inevitability we call growing up.

Yet, every so often it is worth pausing to browse through that card deck. There are moments when we stand at crossroads we are not seeing, embarking on a path that only comes into focus as time passes. Like a faint trace of a track on a hillside, hard to see up close but clearly visible from afar in the afternoon light, I can see now that my passion for the hills and my lifelong quest for the magic go back to those two weeks of a summer long gone, in a place I have long since stopped calling home.

Gran passed away earlier this year. Among her papers was an A6 card, a hiking log from those two summer weeks. A laconic record of the first 64km of a life long journey she took me on; an IOU that shall remain outstanding.

by tf at June 29, 2017 03:27 PM

June 28, 2017

Chris Lord

Goodbye Mozilla

Today is effectively my last day at Mozilla, before I start at Impossible on Monday. I’ve been here for 6 years and a bit and it’s been quite an experience. I think it’s worth reflecting on, so here we go. Fair warning: if you have no interest in me or Mozilla, this is going to make pretty boring reading.

I started on June 6th 2011, several months before the (then new, since moved) London office opened. Although my skills lay (lie?) in user interface implementation, I was hired mainly for my graphics and systems knowledge. Mozilla was in the region of 500 or so employees then I think, and it was an interesting time. I’d been working on the code-base for several years prior at Intel, on a headless backend that we used to build a Clutter-based browser for Moblin netbooks. I wasn’t completely unfamiliar with the code-base, but it still took a long time to get to grips with. We’re talking several million lines of code with several years of legacy, in a language I still consider myself to be pretty novice at (C++).

I started on the mobile platform team, and I would consider this to be my most enjoyable time at the company. The mobile platform team was a multi-discipline team that did general low-level platform work for the mobile (Android and MeeGo) browser. When we started, the browser was based on XUL and was multi-process. Mobile was often the breeding ground for new technologies that would later go on to desktop. It wasn’t long before we started developing a new browser based on a native Android UI, removing XUL and relegating Gecko to page rendering. At the time this felt like a disappointing move. The reason the XUL-based browser wasn’t quite satisfactory was mainly due to performance issues, and as a platform guy, I wanted to see those issues fixed, rather than worked around. In retrospect, this was absolutely the right decision and led to what I’d still consider to be one of Android’s best browsers.

Despite performance issues being one of the major driving forces for making this move, we did a lot of platform work at the time too. As well as being multi-process, the XUL browser had a compositor system for rendering the page, but this wasn’t easily portable. We ended up rewriting this, first almost entirely in Java (which was interesting), then with the rendering part of the compositor in native code. The input handling remained in Java for several years (pretty much until FirefoxOS, where we rewrote that part in native code, then later, switched Android over).

Most of my work during this period was based around improving performance (both perceived and real) and fluidity of the browser. Benoit Girard had written an excellent tiled rendering framework that I polished and got working with mobile. On top of that, I worked on progressive rendering and low precision rendering, which combined are probably the largest body of original work I’ve contributed to the Mozilla code-base. Neither of them are really active in the code-base at the moment, which shows how good a job I didn’t do maintaining them, I suppose.

Although most of my work was graphics-focused on the platform team, I also got to do some layout work. I worked on some over-invalidation issues before Matt Woodrow’s DLBI work landed (which nullified that, but I think that work existed in at least one release). I also worked a lot on fixed position elements staying fixed to the correct positions during scrolling and zooming, another piece of work I was quite proud of (and probably my second-biggest contribution). There was also the opportunity for some UI work, when it intersected with platform. I implemented Firefox for Android’s dynamic toolbar, and made sure it interacted well with fixed position elements (some of this work has unfortunately been undone with the move from the partially Java-based input manager to the native one). During this period, I was also regularly attending and presenting at FOSDEM.

I would consider my time on the mobile platform team a pretty happy and productive time. Unfortunately for me, those of us with graphics specialities on the mobile platform team were taken off that team and put on the graphics team. I think this was the start of a steady decline in my engagement with the company. At the time this move was made, Mozilla was apparently trying to consolidate teams around products, and this was the exact opposite happening. The move was never really explained to me and I know I wasn’t the only one that wasn’t happy about it. The graphics team was very different to the mobile platform team and I didn’t feel I fitted in as well. It felt more boisterous and less democratic than the mobile platform team, and as someone that generally shies away from arguments and just wants to get work done, it was hard not to feel sidelined slightly. I was also quite disappointed that people didn’t seem particularly familiar with the graphics work I had already been doing and that I was tasked, at least initially, with working on some very different (and very boring) desktop Linux work, rather than my speciality of mobile.

I think my time on the graphics team was pretty unproductive, with the exception of the work I did on b2g, improving tiled rendering and getting graphics memory-mapped tiles working. This was particularly hard as the interface was basically undocumented, and its implementation details could vary wildly depending on the graphics driver. Though I made a huge contribution to this work, you won’t see me credited in the tree unfortunately. I’m still a little bit sore about that. It wasn’t long after this that I requested to move to the FirefoxOS systems front-end team. I’d been doing some work there already and I’d long wanted to go back to doing UI. It felt like I either needed a dramatic change or I needed to leave. I’m glad I didn’t leave at this point.

Working on FirefoxOS was a blast. We had lots of new, very talented people, a clear and worthwhile mission, and a new code-base to work with. I worked mainly on the home-screen, first with performance improvements, then with added features (app-grouping being the major one), then with a hugely controversial and probably mismanaged (on my part, not my manager – who was excellent) rewrite. The rewrite was good and fixed many of the performance problems of what it was replacing, but unfortunately also removed features, at least initially. Turns out people really liked the app-grouping feature.

I really enjoyed my time working on FirefoxOS, and getting a nice clean break from platform work, but it was always bitter-sweet. Everyone working on the project was very enthusiastic to see it through and do a good job, but it never felt like upper management’s focus was in the correct place. We spent far too much time kowtowing to the desires of phone carriers and trying to copy Android and not nearly enough time on basic features and polish. Up until around v2.0 and maybe even 2.2, the experience of using FirefoxOS was very rough. Unfortunately, as soon as it started to show some promise and as soon as we had freedom from carriers to actually do what we set out to do in the first place, the project was cancelled, in favour of the whole Connected Devices IoT debacle.

If there was anything that killed morale for me more than my unfortunate time on the graphics team, and more than having FirefoxOS prematurely cancelled, it would have to be the Connected Devices experience. I appreciate it as an opportunity to work on random semi-interesting things for a year or so, and to get some entrepreneurship training, but the mismanagement of that whole situation was pretty epic. To take a group of hundreds of UI-focused engineers and tell them that, with very little help, they should organise themselves into small teams and create IoT products still strikes me as an idea so crazy that it definitely won’t work. Certainly not the way we did it anyway. The idea, I think, was that we’d be running several internal start-ups and we’d hopefully get some marketable products out of it. What business a not-for-profit company, based primarily on doing open-source, web-based engineering, has making physical, commercial products is questionable, but it failed long before that could be considered.

The process involved coming up with an idea, presenting it and getting approval to run with it. You would then repeat this approval process at various stages during development. It was, however, very hard to get approval for enough resources (both time and people) to finesse an idea long enough to make it obviously a good or bad idea. That aside, I found it very demoralising to not have the opportunity to write code that people could use. I did manage it a few times, in spite of what was happening, but none of this work I would consider myself particularly proud of. Lots of very talented people left during this period, and then at the end of it, everyone else was laid off. Not a good time.

Luckily for me and the team I was on, we were moved under the umbrella of Emerging Technologies before the lay-offs happened, and this also allowed us to refocus away from trying to make an under-featured and pointless shopping-list assistant and back onto the underlying speech-recognition technology. This brings us almost to present day now.

The DeepSpeech speech recognition project is an extremely worthwhile project, with a clear mission, great promise and interesting underlying technology. So why would I leave? Well, I’ve practically ended up on this team by a series of accidents and random happenstance. It’s been very interesting so far, I’ve learnt a lot and I think I’ve made a reasonable contribution to the code-base. I also rewrote python_speech_features in C for a pretty large performance boost, which I’m pretty pleased with. But at the end of the day, it doesn’t feel like this team will miss me. I too often spend my time finding work to do, and to be honest, I’m just not interested enough in the subject matter to make that work long-term. Most of my time on this project has been spent pushing to open it up and make it more transparent to people outside of the company. I’ve added model exporting, better default behaviour, a client library, a native client, Python bindings (+ example client) and most recently, Node.js bindings (+ example client). We’re starting to get noticed and starting to get external contributions, but I worry that we still aren’t transparent enough and still aren’t truly treating this as the open-source project it is and should be. I hope the team can push further towards this direction without me. I think it’ll be one to watch.

Next week, I start working at a new job doing a new thing. It’s odd to say goodbye to Mozilla after 6 years. It’s not easy, but many of my peers and colleagues have already made the jump, so it feels like the right time. One of the big reasons I’m moving, and moving to Impossible specifically, is that I want to get back to doing impressive work again. This is the largest regret I have about my time at Mozilla. I used to blog regularly when I worked at OpenedHand and Intel, because I was excited about the work we were doing and I thought it was impressive. This wasn’t just youthful exuberance (he says, realising how ridiculous that sounds at 32), I still consider much of the work we did to be impressive, even now. I want to be doing things like that again, and it feels like Impossible is a great opportunity to make that happen. Wish me luck!

by Chris Lord at June 28, 2017 11:16 AM

June 20, 2017

Ross Burton

Identifying concurrent tasks in Bitbake logs

One fun problem in massively parallel OpenEmbedded builds is that tasks with bad dependencies, or just plain bugs, can end up failing due to races on disk.

One example of this happened last week when an integration branch was being tested and one of the builds failed with "tar: file changed as we read it" whilst it was generating the images. This means that the root filesystem was being altered whilst tar was reading it, so we've a parallelism problem. There's only a limited number of tasks that could be having this effect here, so searching the log isn't too difficult, but as they say: why do something by hand when you can write a script to do it for you?

findfails is a script that will parse a Bitbake log and maintain the set of currently active tasks, so when it finds a task that fails it can tell you what other tasks are also running:

$ findfails log
Task core-image-sato-dev-1.0-r0:do_image_tar failed
Active tasks are:
 core-image-sato-sdk-ptest-1.0-r0:do_rootfs
 core-image-sato-dev-1.0-r0:do_image_wic
 core-image-sato-dev-1.0-r0:do_image_jffs2
 core-image-sato-dev-1.0-r0:do_image_tar
 core-image-sato-sdk-1.0-r0:do_rootfs
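
findfails is the canonical tool here; purely to illustrate the approach, a minimal sketch of the same idea in Python might look like the following (the exact "recipe X: task Y: Started/Succeeded/Failed" phrasing of the knotty log lines is an assumption from memory, so check it against your own logs):

import re
import sys

# Assumed log format (hedged), e.g.:
#   NOTE: recipe core-image-sato-dev-1.0-r0: task do_image_tar: Started
event = re.compile(r"recipe (\S+): task (\S+): (Started|Succeeded|Failed)")

# Track the set of tasks that have started but not yet finished
active = set()
for line in open(sys.argv[1]):
    m = event.search(line)
    if not m:
        continue
    task = "%s:%s" % (m.group(1), m.group(2))
    if m.group(3) == "Started":
        active.add(task)
    elif m.group(3) == "Failed":
        # Report the failure and everything else in flight at that moment
        print("Task %s failed" % task)
        print("Active tasks are:")
        for t in sorted(active):
            print(" %s" % t)
        active.discard(task)
    else:
        active.discard(task)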

We knew that there were changes to do_image_wic in that branch, so it was easy to identify and drop the patch that was incorrectly writing to the rootfs source directory. Sorted!

by Ross Burton at June 20, 2017 02:24 PM

June 13, 2017

Ross Burton

Dynamic source checksums in OpenEmbedded

Today we were cleaning up some old bugs in the Yocto Project bugzilla and came across a bug which was asking for the ability to specify a remote URL for the source tarball checksums (SRC_URI[md5sum] and/or SRC_URI[sha256sum]). We require a checksum for tarballs for two reasons:

  1. Download integrity. We want to be sure that the download wasn't corrupted in some way, such as truncation or bad encoding.
  2. Security. We want to be sure that the tarball hasn't changed over time, be it the maintainer regenerating the tarball for an old release but with different content (this happens more than you'd expect, with non-trivial changes too), or alternatively a malicious attack on the file which now contains malware (such as the Handbrake hack in May).

The rationale for reading remote URLs for checksums was that for files that are changing frequently it would be easier to upgrade the recipe if the checksums didn't need to be altered too. For some situations I can see this argument, but I don't want to encourage practices that nullify the security checksums. For this reason I rejected the bug but thanks to the power of Bitbake I did provide a working example of how to do this in your recipe.

The trick is to observe that the only time the SRC_URI[md5sum] is read is during do_fetch. By adding a new function to do_fetch[prefuncs] (the list of functions that will be executed before do_fetch is executed) we can download the checksums and write the variable just before the fetcher needs it. Here is a partial example that works for GNOME-style checksums, where each upload generates foo-1.2.tar.bz2, foo-1.2.tar.xz, foo-1.2.sha256sum, and foo-1.2.md5sum. To keep it interesting the checksum files contain the sums for both compression types, so we need to iterate through the file to find the right line:

SRC_URI = "https://download.gnome.org/sources/glib/2.52/glib-2.52.2.tar.xz"
SHASUM_URI = "https://download.gnome.org/sources/glib/2.52/glib-2.52.2.sha256sum"

do_fetch[prefuncs] += "fetch_checksums"
python fetch_checksums() {
    import urllib.request
    # Fetch the upstream .sha256sum file and look for the line naming our tarball
    for line in urllib.request.urlopen(d.getVar("SHASUM_URI")):
        (sha, filename) = line.decode("ascii").strip().split()
        if filename == "glib-2.52.2.tar.xz":
            # Write the checksum flag just before do_fetch reads it
            d.setVarFlag("SRC_URI", "sha256sum", sha)
            return
    bb.error("Could not find remote checksum")
}

Note that as fetch_checksums is a pre-function for do_fetch it is only executed just before do_fetch and not at any other time, so this doesn't impose any delays on builds that don't need to fetch.

If I were taking this beyond a proof of concept and making it into a general-purpose class there's a number of changes I would want to make:

  1. Use the proxies when calling urlopen()
  2. Extract the filename to search for from the SRC_URI
  3. Generate the checksum URL from the SRC_URI

I'll leave those as an exercise to the reader though. Patches welcome!
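
That said, for the curious, here is a rough, untested sketch of how those three changes might hang together, assuming a single http(s) entry in SRC_URI with a GNOME-style .sha256sum file sitting alongside the tarball (bb.utils.export_proxies is my recollection of the relevant helper, so verify it exists in your Bitbake before relying on it):

python fetch_checksums() {
    import os
    import urllib.request
    # Point 1: export proxy settings so urlopen() can use them (assumed helper)
    bb.utils.export_proxies(d)
    # Points 2 and 3: derive the tarball name and checksum URL from SRC_URI
    uri = d.getVar("SRC_URI").split()[0].split(";")[0]
    tarball = os.path.basename(uri)
    checksum_uri = "%s.sha256sum" % uri.rsplit(".tar", 1)[0]
    for line in urllib.request.urlopen(checksum_uri):
        (sha, filename) = line.decode("ascii").strip().split()
        if filename == tarball:
            d.setVarFlag("SRC_URI", "sha256sum", sha)
            return
    bb.error("Could not find remote checksum for %s" % tarball)
}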

by Ross Burton at June 13, 2017 01:43 PM

Tomas Frydrych

The Case for 'Make No Fire'

The Case for 'Make No Fire'

I agree with David Lintern that we (urgently) need a debate about the making of fires in our wild spaces, and I am grateful that he took the plunge and voiced that need. But while I think David's is, by far, the most sensible take on the matter among the advice dished out recently, I want to argue that we, the anonymous multitude of outdoor folk, need to go a step further and make the use of open fire in UK wild places socially unacceptable. Not making a fire is the only responsible option available to us. Not convinced? Here is my case.

There are three key issues that need to be addressed when it comes to the responsible use of fire: 1. the risk of starting a wildfire, 2. the immediate damage a controlled fire causes, and, 3. the paucity of fuel and the damage caused by foraging for it.

Wildfire risk

There are two main ways in which a controlled fire can start a wildfire: an underground burn and an overground spark. The former happens when a fire is located on top of material that is itself combustible. Such material can smoulder underground for days, and travel some distance before flaring up. The most obvious risk here comes from peat, which happens to be the second largest carbon store on the planet (after the Amazonian rain forest), i.e., it burns extremely well; most of the UK's peat is in Scotland, and the majority of our wild places are covered in it -- it might seem obvious to some not to start a fire on peat, but I suspect many don't know what peat actually looks like, particularly when dry, or don't realise how ubiquitous it is.

Peat is not the only problem: underground roots can smoulder away for ages thanks to their high resin content, and are very hard to put out; a couple of guys pissing into the fire pit is nowhere near enough. (I once spent over an hour extinguishing a fire someone had built on top of an old stump; it wouldn't stop sizzling in spite of copious amounts of water repeatedly poured onto it, and it scared the hell out of me.)

Then there is the flying spark igniting stuff outside your controlled fire pit. Open fires always generate sparks; even wood burning stoves do, when the pot is not on. In the right conditions it takes a very tiny spark to get things going. Sparks can fly considerable distances, and might well jump any perceived safe buffer zone around your fire. The amount and size of sparks generated grows with the size of the fire, plus the bigger the fire, the bigger the updraft and the less control you have over where your sparks land.

In practice, the risk of wildfire can be reduced, but is hard to eliminate. It's ultimately a numbers game. If the individual chance of unwittingly starting a wildfire from a small controlled fire is 1 in 1000, then a thousand people each making one fire will, between them, start one wildfire on average. Whatever the actual numbers, the growth in participation works against us. Let's be under no illusion: an outdoor culture that accepts fire in wild spaces as a part of the game will start wildfires. It's not a question of whether, just of how often. Is that something we are happy to accept as a price worth paying? How often is OK? Once a year, once a decade? Once a month?

Immediate Damage

Fire is the process of rapid release of energy, and that energy has to go somewhere; in our wild places it goes somewhere it should not, somewhere it is not expected, and in doing so it effects an irreversible change. Fire kills critters in the soil. The rising and radiating heat damages vegetation in the vicinity (it takes surprisingly little heat to cause lasting damage to trees; I reckon irreversible damage happens to at least three times the distance at which a human face can comfortably bear the radiation). Such damage is not necessarily immediately obvious, but is there, and adds up with repeated use. A single fire under the Inchriach pines might seem to do no harm, but tomorrow there's someone else's fire in its place. (Next time you pass Inchriach, look up directly above the ignominious fire pit, compare the two sides of the pine.)

There are other, more subtle issues. Ash is a fertiliser; it is also alkaline, affecting soil acidity. The repeated dumping of ash around a given locus will inevitably change that ecosystem, particularly if it's naturally nutrient poor and/or acidic. Dumping ashes into water suffers from the same problem. Individually, these might be minute, seemingly insignificant changes, but they are never isolated. We might feel like it, but we are not lonely travellers exploring vast swathes of wilderness previously untouched by human foot. I am but one of many, and increasingly more, passing through any given one of the UK's wild places. The numbers, again, work against us.

A lot of folk seem to think that if they dig a fire pit, then replace the turf next day, they are 'leaving no trace' -- that's not no trace, that's a scar with a little superficial make-up applied to it. It does not work even on the cosmetic level; it might look good when you are leaving, but it doesn't last long. The digging damages the turf, particularly around the edges, as does the fire. The fire bakes the ground in the pit, making it hard for the replaced turf to repair its roots, and it will suffer partial or even complete dieback as a result. Even if the turf catches eventually, it takes weeks for the damaged border to repair -- the digging of single-use fire pits, particularly at places that are used repeatedly by many, is far more damaging than leaving a single, tidy fire ring to be reused. Oh, the sight of it offends you? The issues surrounding fire go a lot deeper than the cosmetics.

Paucity of fuel

In the UK we have (tragically) few trees. It takes a surprisingly large quantity of wood to feed even a small fire just to make a cup of coffee. It is possible to argue that the use of stoves, gas or otherwise, also has a considerable environmental impact, just less obvious and less localised, and that the burning of local wood is more environmentally friendly. It's a good argument, worth reflecting upon, but it only works for small numbers; it doesn't scale. Once participation reaches a certain level, wood burns a lot quicker than it grows, and in the UK we have long crossed that line.

There are many of us heading into the same locations, and it is always possible to spot straight away the places where people make fires by how denuded of dead wood they are (been to a bothy recently?). This is not merely cosmetic: the removal of dead wood reduces biodiversity. Fewer critters on the floor mean fewer birds in the trees, and so on. Our 'wild' places suffer from a lack of biodiversity as it is; no change for the worse is insignificant. If you have not brought the fuel with you, your fire is not locally sustainable, it's as simple as that. If it's not locally sustainable, it has no place in our wild locations.

The Fair Share

It comes down to the numbers. As more of us head 'out there', the chances of us collectively starting a wildfire grow, as does the damage we cause locally by having our fires. We can't beat the odds; indeed, as this spring has shown, we are not beating the odds. There is only one course of action left to us, and that is to completely abstain from open fires in our wild places. I use the word abstain deliberately. The making of fire is not a necessity, not even a convenience. It's about a brief individual gratification that comes at a considerable collective price in the long run.

As our numbers grow we need the personal discipline not to claim more than our fair share of the limited and fragile resource our wild places are. The aspirations of 20 or 30 years ago are no longer enough, what once might have been acceptable no longer can be. We must move beyond individual definitions of impact and start thinking in combined, collective, terms -- sustainable behaviour is not one which individually leaves no obvious visual trace, but one which can be repeated over and over again by all without fundamentally changing the locus of our activity. I believe the concept of fair share is the key to sustainable future. Any definition of responsible behaviour that does not consciously and deliberately take numbers into account is delusory. And fire doesn't scale.

by tf at June 13, 2017 10:05 AM

June 12, 2017

Tomas Frydrych

Eagle Rock and Ben More Assynt

Eagle Rock and Ben More Assynt

The south ridge of Ben More Assynt has been on my mind for a while, ever since I laid eyes on it a few years back from the summit. It's a fine line. Today is perhaps not the ideal day for it, it's fairly windy and likely to rain for a bit, but at least for now the cloud base is, just, above the Conival summit. I dither over whether to take the waterproof jacket; it will definitely rain, but it's not looking very threatening just now, and it's not so cold. In the end common sense prevails and I add it to the bag, then set off from Inchnadamph up along Traligill river.

The rain starts within a couple of minutes, and by the time I reach the footbridge below the caves it is sustained enough for the jacket to come out of the bag. In a moment of fortuitous foresight I also take the camera out of its shower resistant shoulder pouch and put it in a stuff sack, before I carry on, past the caves, following Allt a'Bhealaich.

This is familiar ground I keep returning to, fascinated by the stream which on the plateau above Cnoc nan Uamh runs mostly underground, yet leaves a clearly defined riverbed on the surface; a good running surface. Not far after the Cnoc there is a rather large sinkhole, new since the last time I was here. It's perhaps five meters in diameter, and about as deep, the grass around its edges beginning to subside further. I wonder where it leads, how big the underground space might be; I would love to have seen it forming.

Another strange thing: strewn along the grassy river bed are large balls of peat, about the size of a medicine ball, and really quite round. I don't remember seeing these before either, and wonder how they formed and where they came from; presumably they were shaped, and brought down, by a torrent of water from higher up; the dry riverbed is a witness to significant amounts of water at least occasionally running through here on the surface.

As Allt a' Bhealaich turns south east into the steep-sided re-entrant below Bealach Trallgil, I start climbing up the eastern slopes of Conival to pick up the faint path that passes through the bealach, watching the cloud oozing out of it. The wind has picked up considerably as it channels through the narrow gap between Conival and Braebag; it doesn't bode well for the high ground above.

The bealach provides an entry into a large round natural cauldron, River Oykel the only break in its walls. On a good clear day its circumnavigation would provide a fine outing. Today it's filled with dense cloud, there is no chance of even catching sight of the dramatic cliffs of Breabag, though I can see briefly that the Conival summit is above the clouds, and I wonder if perhaps I might be lucky enough to climb above them later.

The rain hasn't let up, and as I descend toward Dubh Loch Mor I chastise myself (not for the first time) for not reproofing my jacket. But there is no time to dwell on such trivialities as being wet. The southwest bank of the loch is made of curious dunes, high and rounded just like sand dunes, but covered in short grass. I have seen nothing like this before in Scotland's hills. I have an inkling: this entire cauldron shows classic signs of glaciation, and I expect under the thin layer of peat and vegetation of these dunes is the moraine the retreating glacier left behind.

I reset my altimeter, there is some navigation to be done to reach the bealach north of Eagle Rock, and I am about to enter the cloud. As I climb, the visibility quickly drops to about fifteen yards. Suddenly I catch sight of a white rump, then another. Thinking these are sheep I carry on ... about ten yards upwind from me is a herd of a dozen or so deer, all facing away from me. I am spotted after a few seconds and we all stand very still looking at each other for what seems like ages. I have the distinct sense I am being studied as a strange curiosity, if not being outright mocked for being out in this weather.

I break the stalemate carrying on up the hill to the 600m line, then start contouring. The weather is truly miserable now, the wind has picked up some more, and I wish I had brought the Buffalo gloves; they are made for days like these. The compass and map come out so I can track my progress on the traverse until the slope aspect reaches the 140 degrees I am looking for. I consider giving Eagle Rock a miss today, but decide to man up.

Perhaps it's the name, but I am rather surprised, dare I say disappointed, by the tame character of this hill. It can be fairly accurately described as a rounded heap of coarse aggregate, with little soil and some vegetation filling up the cracks; I suspect it's a bigger brother of the smaller dunes below, a large moraine deposit from a long time ago. It is quite unpleasant to run on in the fell shoes; there is no give in it, and on every step multiple sharp edges make themselves felt through the soles.

I reach the trig point, take a back bearing (if anything, the visibility is even worse) and start heading down. Suddenly a female ptarmigan shoots out from a field of slightly bigger stones just to the side of me, and starts running tight circles around me, on no more than a three feet radius. The photographer in me has a brief urge to get the camera out, but it seems unfair. I admire her pluckiness, no regard for her own safety, she repeatedly tries to side step me and launch herself at me from behind, and I expect had I let her, I'd have been in for some proper pecking. But she will not take me head on, for which I am grateful as we dance together.

I have no idea in which way to retreat, for I haven't caught sight of her young; I assume they are somewhere to my right where she came from, so head away from there. She continues to circle around frantically for some twenty yards or so, and I begin to wonder whether I might in fact be heading toward her hatchlings. Then her tactic changes. She runs for about five yards ahead of me in the direction I am moving in, then crouches down watching until I get within three feet or so, then runs on another five yards, and so on. We travel this way some three hundred yards, then she flies off some ten yards to the side; for the first time she stands upright, tall and proud, wings slightly stretched, watching me carry on down the hill, her job done. A small flock of golden plovers applaud; her textbook performance deserves nothing less.

This brief encounter made me forget all about the miserable weather and my cold hands; a moment like this well outweighs hours of discomfort, and is perhaps even unachievable without them. The rain has finally eased off, but it's clear now the whole ridge will be in this thick cloud.

The line is fine indeed, narrow, for prolonged sections a knife edge. Surprisingly, above 750m or so the wind is relatively light, nothing like in the cauldron below, which is just as well. On a different day this would be an exhilarating outing. But I am taken aback by how slippery the wet gneiss is, even in the normally so grippy fell shoes. Along the ridge are a number of tricky points; they are not scrambles in the full sense of the word, just short awkward steps and traverses that in the dry would present little difficulty, but are very exposed. Slipping on any of these is not just a question of getting hurt, a fall either side of the ridge would be measured in hundreds rather than tens of meters. I have done a fair amount of 'real' climbing in the past, but I am struggling to recall the last time I have felt this much out of my comfort zone. There are no escape routes and after negotiating a couple of particularly awkward bits, I realise I am fully committed to having to reach Ben More.

I am glad when the loose quartzite eventually signals I am nearly there. Normally quite lethal in the wet, today it feels positively grippy compared to the gneiss back there. I still can't get my head around it. As I start descending from the summit, a person, in what from a distance looks like a bright orange onesie, emerges from the fog. I expect we are both surprised to meet anyone else on the hill. We exchange a few sentences; I mention how slippery the ridge is, thinking today I'd definitely not want to be on it in heavy boots.

I jog over Conival without stopping, down the usual tourist route. I was planning to head to Loch nan Cuaran, and descend from there, but am out of time, Linda will already be waiting for me at Inchnadamph. As I drop to 650m or so I finally emerge from the cloud into the sunlit glen below, shortly reaching the beautiful path in Gleann Dubh; I will it to go on longer. The rain is forgotten, my clothes rapidly drying off, this is as perfect as life gets.

I stop briefly at River Traligill; I am about to reenter that other, 'normal', world and my legs are covered in peat up to my thighs -- best not to frighten the tourists having a picnic in the carpark.

PS: I think it's high time we got rid of the term 'game birds', it befits neither us nor them.

26km / 1,700m ascent / 5h

by tf at June 12, 2017 10:47 AM

June 07, 2017

Tomas Frydrych

Assynt Ashes

Assynt Ashes

Today I walked through one of my favourite Assynt places, off the path well trodden, just me, birds, deer ... and ash from a recent wild fire. I couldn't but think of MacCaig's frogs and toads, always abundant around here, yet today conspicuous by their absence.

A flashback to earlier this year: I am just the other side of this little rise, watching a pair of soaring eagles, beyond the reach of my telephoto lens. A brief conversation with a passing local. I mention the delight of walking in the young birch woodland, the pleasure of seeing it burst into life after winter. He worries about it being destroyed by wild fire, had seen a few around here. I think him somewhat paranoid, I can't imagine it happening, not here.

Now I am weeping among the ashes. Over a tree, of which this landscape could bear many more, up in a rock face, years of carving out life away from human intrusion brought to an abrupt end, for what? Over this invasive, all destroying, parasitic species that we call human, that has long outlived its usefulness. Different tears, of sadness, of frustration.

I pity such emotional poverty that needs fire to find fulfilment in the midst of the wonders of nature. I curse those who encourage it, those who feed, and feed on, this neediness. The neo-romantic evangelists preaching Salvation through Adventure to electronic pews of awestruck followers. I loathe what 'adventure' has come to represent in recent years, the endless, selfie-powered quest for publicity, for likes, the k-tching sound likes make, the distortion of reality they inflict, the mutual ego stroking.

Out of the ashes, from a distance at least, life is slowly getting reborn. Yet, this is not the rebirth of a landscape that has adapted to being regularly swept by fire. On closer inspection, the new greenery is just couch grass and bracken, the latter rapidly colonising the space where heather once was. This is a landscape yet again reshaped by man, and yet again for the worse, not the better. For what? For the delusion of primeval 'authenticity' (carefully to be documented by a smartphone for the 'benefit' of those less authentic)?

Can we please stop looking into the pond to see how adventurous we are, and maybe, just once, look for what is there instead?

by tf at June 07, 2017 08:45 PM

June 04, 2017

Damien Lespiau

Building and using coverage-instrumented programs with Go


tl;dr We can create coverage-instrumented binaries, run them and aggregate the coverage data from running both the program and the unit tests.

In the Go world, unit testing is tightly integrated with the go tool chain. Write some unit tests, run go test and tell anyone that will listen that you really hope to never have to deal with a build system for the rest of your life.

Since Go 1.2 (Dec. 2013), go test has supported test coverage analysis: with the ‑cover option it will tell you how much of the code is being exercised by the unit tests.

So far, so good.

I've been wanting to do something slightly different for some time though. Imagine you have a command line tool. I'd like to be able to run that tool with different options and inputs, check that everything is OK (using something like bats) and gather coverage data from those runs. Even better, wouldn't it be neat to merge the coverage from the unit tests with the one from those program runs and have an aggregated view of the code paths exercised by both kinds of testing?

A word about coverage in Go

Coverage instrumentation in Go is done by rewriting the source of an application. The cover tool inserts code to increment a counter at the start of each basic block, a different counter for each basic block of course. Some metadata is kept alongside each of the counters: the location of the basic block (source file, start/end line & columns) and the size of the basic block (number of statements).

This rewriting is done automatically by go test when the user asks for coverage information (run go test -x to see what's happening under the hood). go test then generates an instrumented test binary and runs it.

A more detailed explanation of the cover story can be found on the Go blog.

Another interesting thing is that it's possible to ask go test to write out a file containing the coverage information with the ‑coverprofile option. This file starts with the coverage mode, which is how the coverage counters are incremented. This is one of set, count or atomic (see blog post for details). The rest of the file is the list of basic blocks of the program with their metadata, one block per line:

github.com/clearcontainers/runtime/oci.go:241.29,244.9 3 4

This describes one piece of code from oci.go, composed of 3 statements without branches, starting at line 241, column 29 and finishing at line 244, column 9. This block has been reached 4 times during the execution of the test binary.

Generating coverage instrumented programs

Now, what I really want to do is to compile my program with the coverage instrumentation, not just the test binary. I also want to get the coverage data written to disk when the program finishes.

And that's when we have to start being creative.

We're going to use go test to generate that instrumented program. It's possible to define a custom TestMain function, a kind of entry point for the test package. TestMain is often used to set up the test environment before running the list of unit tests. We can hack it a bit to call our main function and jump to running our normal program instead of the tests! I ended up with something like this:

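A minimal sketch of the idea (my reconstruction, not the verbatim snippet): it decides what to run based on the binary name, assuming the instrumented binary is shipped under its real name, cc-runtime, rather than the default pkg.test one.

// main_test.go
package main

import (
	"flag"
	"os"
	"path"
	"strings"
	"testing"
)

func TestMain(m *testing.M) {
	// Let the testing package parse its -test.* flags (that's where
	// -test.coverprofile lives), then hide them from the program's
	// own argument handling.
	flag.Parse()
	args := []string{os.Args[0]}
	for _, a := range os.Args[1:] {
		if !strings.HasPrefix(a, "-test.") {
			args = append(args, a)
		}
	}
	os.Args = args

	// Invoked under the real program name? Run the program instead
	// of the unit tests.
	if path.Base(os.Args[0]) == "cc-runtime" {
		main()
		return
	}

	os.Exit(m.Run())
}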

The current project I'm working on is called cc-runtime, an OCI runtime spawning virtual machines. It definitely deserves its own blog post, but for now, knowing the binary name is enough. Generating a coverage instrumented cc-runtime binary is just a matter of invoking go test:

$ go test -o cc-runtime -covermode count

I haven't used atomic as this binary is really a thin wrapper around a library and doesn't use many goroutines. I'm also assuming that the cost of using atomic operations in every branch is quite a bit higher than that of the non-atomic addition. I don't care too much if the counter is off by a bit, as long as it's strictly positive.

We can run this binary just as if it were built with go build, except it's really a test binary and we have access to the same command line arguments as we would otherwise. In particular, we can ask it to write out the coverage profile.

$ ./cc-runtime -test.coverprofile=list.cov list
[ outputs the list of containers ]

And let's have a look at list.cov. Hang on... there's a problem, nothing was generated: we didn't get the usual "coverage: xx.x% of statements" at the end of a go test run and there's no list.cov in the current directory. What's going on?

The testing package flushes the various profiles to disk after running all the tests. The problem is that we don't run any tests here, we just call main. Fortunately enough, the API to trigger a test run is semi-public: it's not covered by the go1 API guarantee and has "internal only" warnings. Not. Even. Scared. Hacking up a dummy test suite and running it is easy enough:

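Roughly like this; note that the sketch below matches the unexported deps interface testing.MainStart expected around Go 1.8, and since none of this is covered by the compatibility promise, the method set will differ on other Go versions:

// cover.go: the hacky bits, tucked away in their own package.
package cover

import (
	"io"
	"testing"
)

// dummyDeps satisfies the unexported interface testing.MainStart
// takes as its first argument (as of Go 1.8).
type dummyDeps struct{}

func (dummyDeps) MatchString(pat, str string) (bool, error)         { return false, nil }
func (dummyDeps) StartCPUProfile(w io.Writer) error                 { return nil }
func (dummyDeps) StopCPUProfile()                                   {}
func (dummyDeps) WriteHeapProfile(w io.Writer) error                { return nil }
func (dummyDeps) WriteProfileTo(s string, w io.Writer, d int) error { return nil }

// FlushProfiles runs a test suite containing no tests at all. As a
// side effect, the testing package writes the coverage profile out.
func FlushProfiles() {
	tests := []testing.InternalTest{}
	benchmarks := []testing.InternalBenchmark{}
	examples := []testing.InternalExample{}
	testing.MainStart(dummyDeps{}, tests, benchmarks, examples).Run()
}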

There is still one little detail left. We need to call this FlushProfiles function at the end of the program and that program could very well be using os.Exit anywhere. I couldn't find anything better than having a tiny exit package implementing the equivalent of the libc atexit() function and forbidding direct use of os.Exit in favour of exit.Exit(). It's even testable.
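
Such a package fits in a handful of lines; a sketch (AtExit is my name for the registration half, the post only pins down exit.Exit):

// exit.go: a bare-bones equivalent of atexit(3). A sketch; not
// concurrency-safe.
package exit

import "os"

var handlers []func()

// AtExit registers a function to run before the process terminates.
func AtExit(f func()) {
	handlers = append(handlers, f)
}

// Exit runs the registered handlers, then exits. Calling os.Exit
// directly would skip the handlers, and with them the coverage flush.
func Exit(code int) {
	for _, f := range handlers {
		f()
	}
	os.Exit(code)
}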

Putting everything together

It's now time for a full example. I have a small calc program that can compute additions and subtractions.

$ calc add 4 8
12

The code isn't exactly challenging:

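A sketch of what such a program might look like (my reconstruction; error handling elided, and exits routed through the exit package above so the coverage flush isn't skipped):

// calc.go
package main

import (
	"fmt"
	"os"
	"strconv"

	"github.com/dlespiau/covertool/pkg/exit" // import path assumed
)

func add(a, b int) int { return a + b }
func sub(a, b int) int { return a - b }

func main() {
	if len(os.Args) != 4 {
		fmt.Printf("expected 3 arguments, got %d\n", len(os.Args)-1)
		exit.Exit(1)
	}

	// Error handling elided for brevity.
	a, _ := strconv.Atoi(os.Args[2])
	b, _ := strconv.Atoi(os.Args[3])

	switch os.Args[1] {
	case "add":
		fmt.Println(add(a, b))
	case "sub":
		fmt.Println(sub(a, b))
	default:
		fmt.Printf("unknown operation: %s\n", os.Args[1])
		exit.Exit(1)
	}
}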

I've written some unit tests for the add function only. We're going to run calc itself to cover the remaining statements. But first, let's look at the unit test code, with both TestAdd and our hacked-up TestMain function. I've swept the hacky bits away in a cover package.

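Again a sketch; the helper names follow the covertool layout as I remember it, so treat them as illustrative rather than gospel:

// calc_test.go
package main

import (
	"os"
	"path"
	"testing"

	"github.com/dlespiau/covertool/pkg/cover" // import paths assumed
	"github.com/dlespiau/covertool/pkg/exit"
)

func TestAdd(t *testing.T) {
	if add(3, 4) != 7 {
		t.Error("add(3, 4) != 7")
	}
}

func TestMain(m *testing.M) {
	// Parse and strip the -test.* flags, as in the earlier sketch.
	cover.ParseAndStripTestFlags()

	// Flush the coverage profile whenever the process terminates
	// through exit.Exit().
	exit.AtExit(cover.FlushProfiles)

	// Invoked as "calc"? Run the program instead of the tests.
	if path.Base(os.Args[0]) == "calc" {
		main()
		exit.Exit(0)
	}

	os.Exit(m.Run())
}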

Let's run the unit-tests, asking to save a unit-tests.cov profile.

$ go test -covermode count -coverprofile unit-tests.cov
PASS
coverage: 7.1% of statements
ok github.com/dlespiau/covertool/examples/calc 0.003s

Huh. 7.1%. Well, we're only testing the 1 statement of the add function after all. It's time for the magic. Let's compile an instrumented calc:

$ go test -o calc -covermode count

And run calc a few times to exercise more code paths. For each run, we'll produce a coverage profile.

$ ./calc -test.coverprofile=sub.cov sub 1 2
-1
$ covertool report sub.cov
coverage: 57.1% of statements

$ ./calc -test.coverprofile=error1.cov foo
expected 3 arguments, got 1
$ covertool report error1.cov
coverage: 21.4% of statements

$ ./calc -test.coverprofile=error2.cov mul 3 4
unknown operation: mul
$ covertool report error2.cov
coverage: 50.0% of statements

We want to aggregate those profiles into one single super-profile. While there are some hints people are interested in merging profiles from several runs (that commit is in go 1.8), the cover tool doesn't seem to support this kind of thing easily, so I wrote a little utility to do it: covertool.

$ covertool merge -o all.cov unit-tests.cov sub.cov error1.cov error2.cov
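
Conceptually the merge is simple: two blocks are the same if their file, position and statement count match, and in count or atomic mode their counts just get added (a set-mode merge would OR them instead). The core of it, nothing covertool-specific:

// A profile can be seen as a map from a block's identity (for
// instance the "file.go:241.29,244.9 3" part of a profile line) to
// its execution count.
package merge

func mergeCounts(a, b map[string]int) map[string]int {
	out := make(map[string]int, len(a))
	for block, count := range a {
		out[block] = count
	}
	for block, count := range b {
		out[block] += count
	}
	return out
}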

Unfortunately again, I discovered a bug in Go's cover and so we need covertool to tell us the coverage of the aggregated profile:

$ covertool report all.cov
coverage: 92.9% of statements

Not Bad!

Still not 100% though. Let's fire up the HTML coverage viewer to see what we are missing:

$ go tool cover -html=all.cov


Oh, indeed, we're missing 1 statement. We never call add from the command line so that switch case is never covered. Good. Seems like everything is working as intended.

Here be dragons

As fun as this is, it definitely feels like very few people are building this kind of instrumented binary. Everything is a bit rough around the edges. I may have missed something obvious, of course, but I'm sure the Internet will tell me if that's the case!

It'd be awesome if we could have something nicely integrated in the future.

by Damien Lespiau (noreply@blogger.com) at June 04, 2017 04:36 PM

May 29, 2017

Damien Lespiau

Testing for pending migrations in Django

DB migration support was added in Django 1.7, superseding South. More specifically, it's possible to automatically generate migration steps when one or more changes to the application models are detected. Definitely a nice feature!

I've written a small, generic unit test that one should be able to drop into the tests directory of any Django project and that checks there are no pending migrations, i.e. that the models are correctly in sync with the migrations declared in the application. Handy to check that nobody has forgotten to git add a migration file, or that an innocent-looking change in models.py doesn't need a migration step generated. Enjoy!

See the code on djangosnippets or as a github gist!

by Damien Lespiau (noreply@blogger.com) at May 29, 2017 04:15 PM

May 28, 2017

Tomas Frydrych

Fraochaidh and Glen Creran Woods

Fraochaidh and Glen Creran Woods

The hills on the west side of Glen Creran will be particularly appreciated by those searching for some peace and quiet. None of them reach the magic 3,000ft mark, and so are of no interest to the Munroist, while the relatively small numbers of Corbettistas follow the advice of the SMC guidebook and approach their target from Ballachuilish. Yet, the lower part of Glen Creran, with its lovely deciduous woodland, deserves a visit, and the east ridge of Fraochaidh offers excellent running.

Start from the large carpark at the end of the public road (NN 0357 4886). From here, you have two options. The first is to follow the marked pine marten trail to its most westerly point (NN 0290 4867). From here a path leads off in a SW direction; take this to an old stone foot bridge over Eas an Diblidh (NN 0273 4846; marked on OS 25k map).

Alternatively, set off back along the road until it crosses Eas an Diblidh, then immediately pick up the path heading up the hill (see the 25k map) to the aforementioned bridge; this is my preferred option, the surrounding woodland is beautiful, and the Eas Diblidh stream rather dramatic -- more than adequate compensation for the brief time spent on the road.

Whichever way you get to the bridge, take the level path heading SW; after just a few meters a faint track heads directly up the hill following the stream. In the spring the floor in this upper part of the woods is covered in a mix of bluebells and wild garlic, providing an unusual sensory experience.

The path eventually peters out and the woodland comes to an end. Above the woodland is a typical Scottish overgrazed hillside, and as you emerge from the woods buzzing with life, it's impossible not to be struck by the apparent lack of it. Follow the direction of the stream up to the bealach below Beinn Mhic na Ceisich (391m point on 25k map).

From the bealach head up N to the 627m summit and from here follow the old fence line onto the summit of Fraochaidh (879m). As indicated on the 25k map, this section is damp underfoot, the fence line follows the best ground. The final push onto Fraochaidh is steep, but without difficulties.

Once you have taken in the views from Fraochaidh summit, follow the faint path along its east ridge. The running and scenery are first class, with Sgorr Deargh forming the main backdrop; the path worn out to its summit, so obvious even from this distance, perhaps a cause for reflection on our impact on the hills we love.

Fraochaidh exhibits some interesting geology. The upper part of the mountain is made of slate, which, as you approach Bealach Dearg, briefly changes (to my untrained eye at least) to gneiss, promptly followed by a band of quartzite forming the knoll on its other side. Then, as the ridge turns NE, it changes to orange coloured limestone, covered in alpine flora, with excellent views back to Fraochaidh.

Follow the ridge all the way to Mam Uchdaich bealach where it is crossed by the Ballachuilish path. This has been impacted by recent forestry operations on the Glen Creran side, and a new, broad hard surface path zigzags toward the forestry track. As of the time of writing, it is still possible to pick up the original path near the first sharp turn, descending through a grassy fire break in the woods -- this much to be preferred.

The forestry track initially has little to commend it, other than being gently downhill, but for the last couple of kilometres it re-enters the lovely deciduous woodland for a pleasant final jog to the finish.

20km / 1600m ascent / ~4h

by tf at May 28, 2017 08:08 AM

May 27, 2017

Chris Lord

Free Ideas for UI Frameworks, or How To Achieve Polished UI

Ever since the original iPhone came out, I’ve had several ideas about how they managed to achieve such fluidity with relatively mediocre hardware. I mean, it was good at the time, but Android still struggles on hardware that makes that look like a 486… It’s absolutely my fault that none of these have been implemented in any open-source framework I’m aware of, so instead of sitting on these ideas and trotting them out at the pub every few months as we reminisce over what could have been, I’m writing about them here. I’m hoping that either someone takes them and runs with them, or that they get thoroughly debunked and I’m made to look like an idiot. The third option is of course that they’re ignored, which I think would be a shame, but given I’ve not managed to get the opportunity to implement them over the last decade, that would hardly be surprising. I feel I should clarify that these aren’t all my ideas, but include a mix of observation of and conjecture about contemporary software. This somewhat follows on from the post I made 6 years ago(!). So let’s begin.

1. No main-thread UI

The UI should always be able to start drawing when necessary. As careful as you may be, it’s practically impossible to write software that will remain perfectly fluid when the UI can be blocked by arbitrary processing. This seems like an obvious one to me, but I suppose the problem is that legacy makes it very difficult to adopt this at a later date. That said, difficult but not impossible. All the major web browsers have adopted this policy, with caveats here and there. The trick is to switch from the idea of ‘painting’ to the idea of ‘assembling’ and then using a compositor to do the painting. Easier said than done, of course: most frameworks include the ability to extend painting in a way that would make it impossible to switch to a different thread without breaking things. But as long as it’s possible to block UI, it will inevitably happen.

2. Contextually-aware compositor

This follows on from the first point; what’s the use of having non-blocking UI if it can’t respond? Input needs to be handled away from the main thread also, and the compositor (or whatever you want to call the thread that is handling painting) needs to have enough context available that the first response to user input doesn’t need to travel to the main thread. Things like hover states, active states, animations, pinch-to-zoom and scrolling all need to be initiated without interaction on the main thread. Of course, main thread interaction will likely eventually be required to update the view, but that initial response needs to be able to happen without it. This is another seemingly obvious one – how can you guarantee a response rate unless you have a thread dedicated to responding within that time? Most browsers are doing this, but not going far enough in my opinion. Scrolling and zooming are often catered for, but not hover/active states, or initialising animations (note: initialising animations. Once they’ve been initialised, they are indeed run on the compositor, usually).

3. Memory bandwidth budget

This is one of the less obvious ideas and something I’ve really wanted to have a go at implementing, but never had the opportunity. A problem I saw a lot while working on the platform for both Firefox for Android and FirefoxOS is that given the work-load of a web browser (which is not entirely dissimilar to the work-load of any information-heavy UI), it was very easy to saturate memory bandwidth. And once you saturate memory bandwidth, you end up having to block somewhere, and painting gets delayed. We’re assuming UI updates are asynchronous (because of course – otherwise we’re blocking on the main thread). I suggest that it’s worth tracking frame time, and only allowing large asynchronous transfers (e.g. texture upload, scaling, format transforms) to take a certain amount of time. After that time has expired, it should wait on the next frame to be composited before resuming (assuming there is a composite scheduled). If the composited frame was delayed to the point that it skipped a frame compared to the last unladen composite, the amount of time dedicated to transfers should be reduced, or the transfer should be delayed until some arbitrary time (i.e. it should only be considered ok to skip a frame every X ms).
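
A sketch of that book-keeping in Go (my own, not code from any real compositor; every number is illustrative):

// Large asynchronous transfers draw on a per-frame time budget, and
// the budget shrinks whenever a composite misses its deadline.
package compositor

import "time"

const (
	frameInterval = 16 * time.Millisecond // ~60fps target
	initialBudget = 4 * time.Millisecond  // per-frame time for transfers
	minBudget     = 1 * time.Millisecond
)

// TransferQueue holds pending texture uploads, scales, format
// transforms, etc., each modelled here as an opaque function.
type TransferQueue struct {
	budget  time.Duration
	pending []func()
}

func NewTransferQueue() *TransferQueue {
	return &TransferQueue{budget: initialBudget}
}

// RunTransfers executes queued work until the frame budget is spent;
// whatever is left waits for the next composited frame.
func (q *TransferQueue) RunTransfers() {
	deadline := time.Now().Add(q.budget)
	for len(q.pending) > 0 && time.Now().Before(deadline) {
		q.pending[0]()
		q.pending = q.pending[1:]
	}
}

// FrameComposited feeds frame timing back into the budget: a frame
// that overran its interval means transfers get less time from now on.
func (q *TransferQueue) FrameComposited(frameTime time.Duration) {
	if frameTime > frameInterval && q.budget > minBudget {
		q.budget -= time.Millisecond
	}
}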

It’s interesting that you can see something very similar to this happening in early versions of iOS (I don’t know if it still happens or not) – when scrolling long lists with images that load in dynamically, none of the images will load while the list is animating. The user response was paramount, to the point that it was considered more important to present consistent response than it was to present complete UI. This priority, I think, is a lot of the reason the iPhone feels ‘magic’ and Android phones felt like junk up until around 4.0 (where it’s better, but still not as good as iOS).

4. Level-of-detail

This is something that I did get to partially implement while working on Firefox for Android, though I didn’t do such a great job of it so its current implementation is heavily compromised from how I wanted it to work. This is another idea stolen from game development. There will be times, during certain interactions, where processing time will be necessarily limited. Quite often though, during these times, a user’s view of the UI will be compromised in some fashion. It’s important to understand that you don’t always need to present the full-detail view of a UI. In Firefox for Android, this took the form that when scrolling fast enough that rendering couldn’t keep up, we would render at half the resolution. This let us render more, and faster, giving the impression of a consistent UI even when the hardware wasn’t quite capable of it. I notice Microsoft doing similar things since Windows 8; notice how the quality of image scaling reduces markedly while scrolling or animations are in progress. This idea is very implementation-specific. What can be dropped and what you want to drop will differ between platforms, form-factors, hardware, etc. Generally though, some things you can consider dropping: Sub-pixel anti-aliasing, high-quality image scaling, render resolution, colour-depth, animations. You may also want to consider showing partial UI if you know that it will very quickly be updated. The Android web-browser during the Honeycomb years did this, and I attempted (with limited success, because it’s hard…) to do this with Firefox for Android many years ago.

Pitfalls

I think it’s easy to read ideas like this and think it boils down to “do everything asynchronously”. Unfortunately, if you take a naïve approach to that, you just end up with something that can be inexplicably slow sometimes and the only way to fix it is via profiling and micro-optimisations. It’s very hard to guarantee a consistent experience if you don’t manage when things happen. Yes, do everything asynchronously, but make sure you do your book-keeping and you manage when it’s done. It’s not only about splitting work up, it’s about making sure it’s done when it’s smart to do so.

You also need to be careful about how you measure these improvements, and to be aware that sometimes results in synthetic tests will even correlate to the opposite of the experience you want. A great example of this, in my opinion, is page-load speed on desktop browsers. All the major desktop browsers concentrate on prioritising the I/O and computation required to get the page to 100%. For heavy desktop sites, however, this means the browser is often very clunky to use while pages are loading (yes, even with out-of-process tabs – see the point about bandwidth above). I highlight this specifically on desktop, because you’re quite likely to not only be browsing much heavier sites that trigger this behaviour, but also to have multiple tabs open. So as soon as you load a couple of heavy sites, your entire browsing experience is compromised. I wouldn’t mind the site taking a little longer to load if it didn’t make the whole browser chug while doing so.

Don’t lose sight of your goals. Don’t compromise. Things might take longer to complete, deadlines might be missed… But polish can’t be overrated. Polish is what people feel and what they remember, and the lack of it can have a devastating effect on someone’s perception. It’s not always conscious or obvious either, even when you’re the developer. Ask yourself “Am I fully satisfied with this” before marking something as complete. You might still be able to ship if the answer is “No”, but make sure you don’t lose sight of that and make sure it gets the priority it deserves.

One last point I’ll make; I think to really execute on all of this, it requires buy-in from everyone. Not just engineers, not just engineers and managers, but visual designers, user experience, leadership… Everyone. It’s too easy to do a job that’s good enough and it’s too much responsibility to put it all on one person’s shoulders. You really need to be on the ball to produce the kind of software that Apple does almost routinely, but as much as they’d say otherwise, it isn’t magic.

by Chris Lord at May 27, 2017 12:00 PM

May 19, 2017

Emmanuele Bassi

Further experiments in Meson

Meson is definitely getting more traction in GNOME (and other projects), with many components adding support for it in parallel to autotools, or outright switching to it. There are still bugs, here and there, and we definitely need to improve build environments — like Continuous — to support Meson out of the box, but all in all I’m really happy about not having to deal with autotools any more, as well as being able to build the G* stack much more quickly when doing continuous integration.

Now that GTK+ has added Meson support, though, it’s time to go through the dependency chain in order to clean up and speed up the build in the lower bits of our stack. After an aborted attempt at porting GdkPixbuf, I decided to port Pango.

All in all, Pango proved to be an easy win; it took me about one day to port from Autotools to Meson, and most of it was mechanical translation from weird autoconf/automake incantations that should have been removed years ago¹. Most of the remaining bits were:

  • ensuring that both Autotools and Meson would build the same DSOs, with the same symbols
  • generating the same introspection data and documentation
  • installing tests and data in the appropriate locations

Thanks to the ever vigilant eye of Nirbheek Chauhan, and thanks to the new Meson reference, I was also able to make the Meson build slightly more idiomatic than a straight, 1:1 port would have done.

The results are a full Meson build that takes about the same time as ./autogen.sh to run:

* autogen.sh:                         * meson
  real        0m11.149s                 real          0m2.525s
  user        0m8.153s                  user          0m1.609s
  sys         0m2.363s                  sys           0m1.206s

* make -j$(($(nproc) + 2))            * ninja
  real        0m9.186s                  real          0m3.387s
  user        0m16.295s                 user          0m6.887s
  sys         0m5.337s                  sys           0m1.318s

--------------------------------------------------------------

* autotools                           * meson + ninja
  real        0m27.669s                 real          0m5.772s
  user        0m45.622s                 user          0m8.465s
  sys         0m10.698s                 sys           0m2.357s

Not bad for a day’s worth of work.

My plan would be to merge this in the master branch pretty soon; I also have a branch that drops Autotools entirely but that can wait a cycle, as far as I’m concerned.

Now comes the hard part: porting libraries like GdkPixbuf, ATK, gobject-introspection, and GLib to Meson. There’s already a GLib port, courtesy of Centricular, but it needs further testing; GdkPixbuf is pretty terrible, since it’s a really old library; I don’t expect ATK and GObject introspection to be complicated, but the latter has a non-recursive Make layout that is full of bees.

It would be nice to get to GUADEC and have the whole G* stack build with Meson and Ninja. If you want to help out, reach out in #gtk+, on IRC or on Matrix.


  1. The Windows support still checks for GCC 2.x or 3.x flags, for instance. 

by ebassi at May 19, 2017 05:20 PM

February 23, 2017

Chris Lord

Machine Learning Speech Recognition

Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.

Project DeepSpeech

So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.

You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.

The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straight-forward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.

Getting Involved

We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need one or more powerful GPUs with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.

One of the fairly intractable problems about machine learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time to do training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area that you can very easily help with. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.

Let’s say you don’t have those resources (and very few do), what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that can be made to reduce the size or the processor requirement of training and using the model is very valuable. Even small optimisations can save lots of time when you start talking about days worth of training.

Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment. One written in Python that takes advantage of TensorFlow serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.

And Finally

Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.

by Chris Lord at February 23, 2017 04:55 PM

February 13, 2017

Emmanuele Bassi

On Vala

It seems I raised a bit of a stink on Twitter last week:

Of course, and with reason, I’ve been called out on this by various people. Luckily, it was on Twitter, so we haven’t seen articles on Slashdot and Phoronix and LWN with headlines like “GNOME developer says Vala is dead and will be removed from all servers for all eternity and you all suck”. At least, I’ve only seen a bunch of comments on Reddit about this, but nobody cares about that particular cesspool of humanity.

Sadly, 140 characters do not leave any room for nuance, so maybe I should probably clarify what I wrote on a venue with no character limit.

First of all, I’d like to apologise to people that felt I was attacking them or their technical choices: it was not my intention, but see above, re: character count. I may have only about 1000 followers on Twitter, but it seems that the network effect is still a bit greater than that, so I should be careful when wording opinions. I’d like to point out that it’s my private Twitter account, and you can only get to what it says if you follow me, or if you follow people who follow me and decide to retweet what I write.

My PSA was intended as a reflection on the state of Vala, and its impact on the GNOME ecosystem in terms of newcomers, from the perspective of a person that used Vala for his own personal projects; recommended Vala to newcomers; and has to deal with the various build issues that arise in GNOME because something broke in Vala or in projects using Vala. If you’re using Vala outside of GNOME, you have two options: either ignore all I’m saying, as it does not really apply to your case; or do a bit of soul searching, and see if what I wrote does indeed apply to you.

First of all, I’d like to qualify my assertion that Vala is a “dead language”. Of course people see activity in the Git repository, see the recent commits and think “the project is still alive”. Recent commits do not tell a complete story.

Let’s look at the project history for the past 10 cycles (roughly 2.5 years). These are the commits for every cycle, broken up in two values: one for the full repository, the other one for the whole repository except the vapi directory, which contains the VAPI files for language bindings:

Commits

Aside from the latest cycle, Vala has seen very little activity; the project itself, if we exclude binding updates, has seen less than 100 commits for every cycle — some times even far less. The latest cycle is a bit of an outlier, but we can notice a pattern of very little work for two/three cycles, followed by a spike. If we look at the currently in progress cycle, we can already see that the number of commits has decreased back to 55/42, as of this morning.

Commits

Number of commits is just a metric, though; more important is the number of contributors. After all, small, incremental changes may be a good thing in a language — though, spoiler alert: they are usually an indication of a series of larger issues, and we’ll come to that point later.

These are the number of developers over the same range of cycles, again split between committers to the full repository and to the full repository minus the vapi directory:

Developers

As you can see, the number of authors of changes is mostly stable, but still low. If we have few people that actively commit to the repository it means we have few people that can review a patch. It means patches linger longer and longer, while reviewers go through their queues; it means that contributors get discouraged; and, since nobody is paid to work full time on Vala, it means that any interruption caused by paid jobs will be a bottleneck on the project itself.

These concerns are not unique to a programming language: they exist for every volunteer-driven free and open source project. Programming languages, though, like core libraries, are problematic because any bottleneck causes ripple effects. You can take any stalled project you depend on, and vendor it into your own, but if that happens to the programming language you’re using, then you’re pretty much screwed.

For these reasons, we should also look at how well-distributed the workload in Vala is, i.e. what percentage of the work is done by the authors of those commits; the results are not encouraging. Over that range of cycles, only two developers routinely crossed the 5% mark:

  • Rico Tzschichholz
  • Jürg Billeter

And Rico has been the only one to consistently author >50% of the commits. This means there’s only one person dealing with the project on a day to day basis.

As the maintainer of a project who basically had to do all the work, I cannot even begin to tell you how soul-crushing that can become. You get burned out, and you feel responsible for everyone using your code, and then you get burned out some more. I honestly don’t want Rico to burn out, and you shouldn’t, either.

So, let’s go into unfair territory. These are the commits for Rust — the compiler and standard library:

Rust

These are the commits for Go — the compiler and base library:

Go

These are the commits for Vala — both compiler and bindings:

Vala

These are the number of commits over the past year. Both languages are younger than Vala, have more tools than Vala, and are more used than Vala. Of course, it’s completely unfair to compare them, but those numbers should give you a sense of scale, of what is the current high bar for a successful programming language these days. Vala is a niche language, after all; it’s heavily piggy-backing on the GNOME community because it transpiles to C and needs a standard library and an ecosystem like the one GNOME provides. I never expected Vala to rise to the level of mindshare that Go and Rust currently occupy.

Nevertheless, we need to draw some conclusions about the current state of Vala — starting from this thread, perhaps, as it best encapsulates the issues the project is facing.

Vala, as a project, is limping along. There aren’t enough developers to actively effect change on the project; there aren’t enough developers to work on ancillary tooling — like build system integration, debugging and profiling tools, documentation. Saying that “Vala compiles to C so you can use tools meant for C” is comically missing the point, and it’s effectively like saying that “C compiles to binary code, so you can disassemble a program if you want to debug it”. Being able to inspect the language using tools native to the language is a powerful thing; if you have to do the name mangling in your head in order to set a breakpoint in GDB you are raising the barrier to contribution way above the heads of many newcomers.

Being able to effect change means also being able to introduce change effectively and without fear. This means things like continuous integration and a full test suite heavily geared towards regression testing. The test suite in Vala is made of 210 units, for a total of 5000 lines of code; the code base of Vala (vala AST, codegen, C code emitter, and the compiler) is nearly 75 thousand lines of code. There is no continuous integration, outside of the one that GNOME Continuous performs when building Vala, or the one GNOME developers perform when using jhbuild. Regressions are found after days or weeks, because developers of projects using Vala update their compiler and suddenly their projects cease to build.

I don’t want to minimise the enormous amount of work that every Vala contributor brought to the project; they are heroes, all of them, and they deserve as much credit and praise as we can give. The idea of a project-oriented, community-oriented programming language has been vindicated many times over, in the past 5 years.

If I scared you, or incensed you, then you can still blame me, and my lack of tact. You can still call me an asshole, and you can think that I’m completely uncool. What I do hope, though, is that this blog post pushes you into action. Either to contribute to Vala, or to re-new your commitment to it, so that we can look at my words in 5 years and say “boy, was Emmanuele wrong”; or to look at alternatives, and explore new venues in order to make GNOME (and the larger free software ecosystem) better.

by ebassi at February 13, 2017 01:12 PM

February 11, 2017

Emmanuele Bassi

Epoxy

Epoxy is a small library that GTK+, and other projects, use in order to access the OpenGL API in somewhat sane fashion, hiding all the awful bits of craziness that actually need to happen because apparently somebody dosed the water supply at SGI with large quantities of LSD in the mid-‘90s, or something.

As an added advantage, Epoxy is also portable on different platforms, which is a plus for GTK+.

Since I’ve started using Meson for my personal (and some work-related) projects as well, I’ve been on the lookout for adding Meson build rules to other free and open source software projects, in order to improve both their build time and portability, and to improve Meson itself.

As a small, portable project, Epoxy sounded like a good candidate for the port of its build system from autotools to Meson.

To the Bat Build Machine!

tl;dr

Since you may be interested just in the numbers, building Epoxy with Meson on my quad-core Kaby Lake Core i7 with an NVMe SSD takes about 45% less time than building it with autotools.

A fairly good fraction of the autotools time is spent going through the autogen and configure phases, because neither is parallelised, and both create a ton of shell invocations.

Conversely, Meson’s configuration phase is incredibly fast; the whole Meson build of Epoxy fits in the time it takes the autogen.sh and configure scripts to complete their run.

Administrivia

Epoxy is a simple library, which means it does not need a hugely complicated build system set up; it does have some interesting deviations, though, which made the porting an interesting challenge.

For instance, on Linux and similar operating systems Epoxy uses pkg-config to find things like the EGL availability and the X11 headers and libraries; on Windows, though, it relies on finding the opengl32 shared or static library object itself. This means that we get something straightforward in the former case, like:

# Optional dependencies
gl_dep = dependency('gl', required: false)
egl_dep = dependency('egl', required: false)

and something slightly less straightforward in the latter case:

if host_system == 'windows'
  # Required dependencies on Windows
  opengl32_dep = cc.find_library('opengl32', required: true)
  gdi32_dep = cc.find_library('gdi32', required: true)
endif

And, still, this is miles better than what you have to deal with when using autotools.

Let’s take a messy thing in autotools, like checking whether or not the compiler supports a set of arguments; usually, this involves some m4 macro that’s either part of autoconf-archive or some additional repository, like the xorg macros. Meson handles this in a much better way, out of the box:

# Use different flags depending on the compiler
if cc.get_id() == 'msvc'
  test_cflags = [
    '-W3',
    ...,
  ]
elif cc.get_id() == 'gcc'
  test_cflags = [
    '-Wpointer-arith',
    ...,
  ]
else
  test_cflags = [ ]
endif

common_cflags = []
foreach cflag: test_cflags
  if cc.has_argument(cflag)
    common_cflags += [ cflag ]
  endif
endforeach

In terms of speed, the configuration step could be made even faster by parallelising the compiler argument checks; right now, Meson has to do them all in a series, but nothing except some additional parsing effort would prevent Meson from running the whole set of checks in parallel, and gather the results at the end.

Generating code

In order to use the GL entry points without linking against libGL or libGLES* Epoxy takes the XML description of the API from the Khronos repository and generates the code that ends up being compiled, using a Python script to parse the XML and generate header and source files.

Additionally, and unlike most libraries in the G* stack, Epoxy stores its public headers inside a separate directory from its sources:

libepoxy
├── cross
├── doc
├── include
│   └── epoxy
├── registry
├── src
└── test

The autotools build has the src/gen_dispatch.py script create both the source and the header file for each XML at the same time using a rule processed when recursing inside the src directory, and proceeds to put the generated header under $(top_builddir)/include/epoxy, and the generated source under $(top_builddir)/src. Each code generation rule in the Makefile manually creates the include/epoxy directory under the build root to make up for parallel dispatch of each rule.

Meson makes it harder to do this kind of spooky-action-at-a-distance build, so we need to generate the headers in one pass, and the source in another. This is a bit of a let down, to be honest, and yet a build that invokes the generator script twice for each API description file is still faster under Ninja than a build with the single invocation under Make.

There are still issues in this step that are being addressed by the Meson developers; for instance, right now we have to use a custom target for each generated header and source separately instead of declaring a generator and calling it multiple times. Hopefully, this will be fixed fairly soon.

Documentation

Epoxy has a very small footprint, in terms of API, but it still benefits from having some documentation on its use. I decided to generate the API reference using Doxygen, as it’s not a G* library and does not need the additional features of gtk-doc. Sadly, Doxygen’s default style is absolutely terrible; it would be great if somebody could fix it to make it look half as good as the look gtk-doc gets out of the box.

Cross-compilation and native builds

Now we get into “interesting” territory.

Epoxy is portable; it works on Linux and *BSD systems; on macOS; and on Windows. Epoxy also works on both Intel and ARM architectures.

Making it run on Unix-like systems is not at all complicated. When it comes to Windows, though, things get weird fast.

Meson uses cross files to determine the environment and toolchain of the host machine, i.e. the machine where the result of the build will eventually run. These are simple text files with key/value pairs that you can either keep in a separate repository, in case you want to share among projects; or you can keep them in your own project’s repository, especially if you want to easily set up continuous integration of cross-compilation builds.

Each toolchain has its own; for instance, this is the description of a cross compilation done on Fedora with MingW:

[binaries]
c = '/usr/bin/x86_64-w64-mingw32-gcc'
cpp = '/usr/bin/x86_64-w64-mingw32-cpp'
ar = '/usr/bin/x86_64-w64-mingw32-ar'
strip = '/usr/bin/x86_64-w64-mingw32-strip'
pkgconfig = '/usr/bin/x86_64-w64-mingw32-pkg-config'
exe_wrapper = 'wine'

This section tells Meson where the binaries of the MingW toolchain are; the exe_wrapper key is useful to run the tests under Wine, in this case.

The cross file also has an additional section for things like special compiler and linker flags:

[properties]
root = '/usr/x86_64-w64-mingw32/sys-root/mingw'
c_args = [ '-pipe', '-Wp,-D_FORTIFY_SOURCE=2', '-fexceptions', '--param=ssp-buffer-size=4', '-I/usr/x86_64-w64-mingw32/sys-root/mingw/include' ]
c_link_args = [ '-L/usr/x86_64-w64-mingw32/sys-root/mingw/lib' ]

These values are taken from the equivalent bits that Fedora provides in their MingW RPMs.

Luckily, the tool that generates the headers and source files is written in Python, so we don’t need an additional layer of complexity, with a tool built and run on a different platform and architecture in order to generate files to be built and run on a different platform.

Continuous Integration

Of course, any decent process of porting, these days, should deal with continuous integration. CI gives us confidence as to whether or not any change whatsoever we make actually works — and not just on our own computer, and our own environment.

Since Epoxy is hosted on GitHub, the quickest way to deal with continuous integration is to use TravisCI, for Linux and macOS; and Appveyor for Windows.

The requirements for Meson are just Python3 and Ninja; Epoxy also requires Python 2.7, for the dispatch generation script, and the shared libraries for GL and the native API needed to create a GL context (GLX, EGL, or WGL); it also optionally needs the X11 libraries and headers and Xvfb for running the test suite.

Since Travis offers an older version of Ubuntu LTS as its base system, we cannot build Epoxy with Meson; additionally, running the test suite is a crapshoot because the Mesa version is hopelessly out of date and will either cause most of the tests to be skipped or, worse, make them segfault. To sidestep this particular issue, I’ve prepared a Docker image with its own harness, and I use it as the containerised environment for Travis.

On Appveyor, thanks to the contribution of Thomas Marrinan we just need to download Python3, Python2, and Ninja, and build everything inside its own root; as an added bonus, Appveyor allows us to take the build artefacts when building from a tag, and shoving them into a zip file that gets deployed to the release page on GitHub.

Conclusion

Most of this work has been done off and on over a couple of months; the rough Meson build conversion was done last December, with the cross-compilation and native builds taking up the last bit of work.

Since Eric does not have any more spare time to devote to Epoxy, he was kind enough to give me access to the original repository, and I’ve tried to reduce the amount of open pull requests and issues there.

I’ve also released version 1.4.0 and I plan to do a 1.4.1 release soon-ish, now that I’m positive Epoxy works on Windows.

I’d like to thank:

  • Eric Anholt, for writing Epoxy and helping out when I needed a hand with it
  • Jussi Pakkanen and Nirbheek Chauhan, for writing Meson and for helping me out with my dumb questions on #mesonbuild
  • Thomas Marrinan, for working on the Appveyor integration and testing Epoxy builds on Windows
  • Yaron Cohen-Tal, for maintaining Epoxy in the interim

by ebassi at February 11, 2017 01:34 AM

January 11, 2017

Emmanuele Bassi

Constraints editing

Last year I talked about the newly added support for Apple’s Visual Format Language in Emeus, which lets you quickly describe layouts using a cross between ASCII art and predicates. For instance, I can use:

H:|-[icon(==256)]-[name_label]-|
H:[surname_label]-|
H:[email_label]-|
H:|-[button(<=icon)]
V:|-[icon(==256)]
V:|-[name_label]-[surname_label]-[email_label]-|
V:[button]-|

and obtain a layout like this one:

Boxes approximate widgets

Thanks to the contribution of my colleague Martin Abente Lahaye, now Emeus supports extensions to the VFL, namely:

  • arithmetic operators for constant and multiplication factors inside predicates, like [button1(button2 * 2 + 16)]
  • explicit attribute references, like [button1(button1.height / 2)]

This allows more expressive layout descriptions, like keeping aspect ratios between UI elements, without having to touch the code base.

Of course, editing VFL descriptions blindly is not what I consider a fun activity, so I took some time to write a simple, primitive editing tool that lets you visualize a layout expressed through VFL constraints:

I warned you that it was primitive and simple

Here’s a couple of videos showing it in action:

At some point, this could lead to a new UI tool to lay out widgets inside Builder and/or Glade.

As of now, I consider Emeus in a stable enough state for other people to experiment with it — I’ll probably make a release soon-ish. The Emeus website is up to date, as is the API reference, and I’m happy to review pull requests and feature requests.

by ebassi at January 11, 2017 02:30 PM

December 25, 2016

Tomas Frydrych

A Year in the Hills

A Year in the Hills

TL;DR: ~440 hours of running, 3,000km travelled, 118km ascended, an FKT set on the Assynt Traverse. Yet, the numbers don't even begin to tell the story ...

It's been a good year, with some satisfying longer days in the hills: an enjoyable two day round of Glen Affric & Kintail in April (still in full-on winter conditions), a two day loop around Glen Lyon in May (taking in the Lawers and Carn Mairg ridges), a round of the seven Crianlarich Munros (East to West in May, West to East in June), a two day trot through the Mamores in September, a three day run through the Cairngorms in October (with some of the most amazing light I have ever seen, and shoes turning into solid blocks of ice over night). There have also been many great shorter days, the Carn Eighe Horse Shoe and the Coigach Horse Shoe come to mind. But the highlight of my year, without any question, was the July Assynt Traverse, at 74km of largely off track running, and some 6,400m of vertical ascent, by far the most physically challenging thing I have ever attempted; setting a new FKT (23h 54min) was the icing on the cake.

The Assynt Traverse had been haunting me since the summer of 2013. During those three years I had gone through a random mixture of great enthusiasm, physical setbacks (from too much enthusiasm!), and self doubt (as the scale of the challenge had become clear). I came very close to not attempting it (again), thinking failure was inevitable. Fortunately, a brief, incidental, conversation with a friend helped me to refocus -- at this scale DNF is never a failure, just an attempt, the only real possibility of a failure is a DNS, a failure of the mind. From that point on the rest was just logistics, and some running in the most beautiful landscape I know!

The real significance of the Traverse for me, however, was neither in completing it, nor in setting the FKT. Rather, the Traverse turned out to be a condensed essence of the totality of my running experiences, neatly packaged into a single day. As such it brought much clarity into my understanding of why I run, and, in particular, what drives me into the hills.

Obviously, there are the views, at times but brief glimpses, at times sustained (and, far too often, none at all). There are the brief encounters with wildlife: the sense of awe over a golden eagle you nearly run into, the envy of a raven playing with the wind that to me, a supposedly superior species, is proving such a nuisance (and in Assynt, the ever present frogs and toads).

Then there is the whole mind over matter thing, like when merely three and a half hours into your twenty four hour venture the body declares it can't go any further, but your mind knows it's nothing but a load of BS, and you somehow manage to carry on. There is the simple enjoyment of running, six continuous hours of it negotiating the ridges of the Ben More Assynt massif, hopping from boulder to boulder under blue skies. There is that sense of complete physical and mental liberation as the dopamine high goes through the roof after fifteen hours of hard graft. There is the need to hold it together, sleep deprived in the wee hours on Quinag, simply because there is no other alternative, it's just you and the hills.

All of the above are reasons why I run hills. But the reason that exceeds all of the above is the time to think it affords me, time to reflect in the peace and quiet, senses sharpened by physical exertion -- that is the real reason why I run, and why I unashamedly enjoy running in my own company.

In a place like Assynt, in the midst of the seemingly immutable, aeons old landscape, it is impossible to escape the sense of one's own transience and insignificance. The knowledge that these hills have been around long before me, and will remain long after my brief intrusion somehow puts everything into perspective. The hills ask not merely 'what are you doing here?' but also 'what do you do when you are not here?', and 'why?'. They question our priorities, our commitments, or the lack thereof. They encourage us to look forward beyond the immediate horizon of tomorrow, of the next pay check.

There is much thinking to be done in twenty four hours, and on the back of that some decisions have been made in the weeks that followed, some plans laid, there are some changes on the horizon for the coming year. It's too early to say more for now, maybe in a couple of months.

As for my running, the Ramsay Round has been in my thoughts since the morning after Assynt -- I am toying with the unsupported solo option (I don't think I have it in me to meet the 24h limit anyway, so might just as well, and it simplifies the logistics), but I expect a realistic timetable for that is 2018. I am hoping for some more multiday runs, there is so much exploring to be still done in this wee country of ours, so little time.

Happy 2017!

by tf at December 25, 2016 11:16 AM

December 17, 2016

Emmanuele Bassi

Laptop review

Dell XPS 13 (Developer Edition 2016)

After three and a half years with my trusty mid-2013 MacBook Air, I decided to get a new personal laptop. To be fair, my Air could have probably lasted another 12-18 months, even though its 8GB of RAM and Haswell Core i7 were starting to get pretty old for system development. The reason why I couldn’t keep using it reliably was that the SSD had already started showing SMART errors in January, and I already had to reset it and re-install from scratch once. Refurbishing the SSD out of warranty is still an option, if I decide to fork over a fair chunk of money and can live without a laptop for about a month [1].

After getting recommendations for the previous XPS iterations by various other free software developers and Linux users, I waited until the new, Kaby Lake based model was available in the EU and ordered one. After struggling a bit with Dell’s website, I managed to get an XPS 13 with a US keyboard layout [2] — which took about two weeks from order to delivery.

The hardware out of the box experience is pretty neat, with a nice, clean box; very Apple-like. The software’s first boot experience could be better, to say the least. Since I chose the Developer Edition, I got Ubuntu as the main OS instead of Windows, and I have been thoroughly underwhelmed by the effort spent by Dell and Canonical in polishing the software side of things. As soon as you boot the laptop, you’re greeted with an abstract video playing while the system does something. The video playback is not skippable, and does not have volume controls, so I got to “experience” it at full blast out of the speakers.

Ubuntu’s first boot experience UI to configure the machine is rudimentary, at best, and not really polished; it’s the installer UI without the actual installation bits, but it clearly hasn’t been refined for the HiDPI screen. The color scheme has progressively gotten worse over the years; while all other OSes are trying to convey a theme of lightness using soft tones, the dark grey, purple, and dark orange tones used by Ubuntu make the whole UI seem heavier and more oppressive.

After that, you get into Unity, and no matter how many times I try it, I still cannot enjoy using it. I also realized why various people coming from Ubuntu complain about the GNOME theme being too heavy on the whitespace: the Ubuntu default theme is super-compressed, with controls hugging together so closely that they almost seem to overlap. There is barely any affordance for the pointer, let alone for interacting through the touchscreen.

All in all, I lasted half a day on it, mostly to see what the state of stock Ubuntu was after many years of Fedora [3]. After that, I downloaded a Fedora 25 USB image and re-installed from scratch.

Sadly, I still have to report that Anaconda doesn’t shine at all. Luckily, I didn’t have to deal with dual booting, so I only needed to interact with the installer enough to tell it to use the stock on disk layout and create the root user. Nevertheless, figuring out how to tell it to split my /home volume and encrypt it required me to go through the partitioning step three times, because I couldn’t for the life of me understand how to commit to the layout I wanted.

After that, I was greeted by GNOME’s first boot experience — which is definitely more polished than Ubuntu’s, but it’s still a bit too “functional” and plain.

Fedora recognised the whole hardware platform out of the box: wifi, bluetooth, webcam, HiDPI screen. On the power management side, I was able to wring out about 8 hours of work (compilation, editing, web browsing, and a couple of Google hangouts) while on wifi, without having to plug in the AC.

Coming from years of Apple laptops, I was especially skeptical of the quality of the touchpad, but I have to say I was pleasantly surprised by its accuracy and feedback. It’s not MacBook-level, but it’s definitely the closest anyone has ever been to that slice of fried gold.

The only letdowns I can find are the position of the webcam, which sits at the bottom left of the panel (making for very dramatic angles when doing video calls, and requiring that you never type if you don’t want your fingers in the way); and the power brick, which has its own proprietary connector. There’s a USB-C port, though, so there may be provisions for powering the laptop through it.

The good

  • Fully supported hardware (Fedora 25)
  • Excellent battery life
  • Nice keyboard
  • Very good touchpad

The bad

  • The position of the webcam
  • Yet another power brick with custom connector I have to lug around

Lenovo Yoga

Thanks to my employer I now have a work laptop as well, in the shape of a Lenovo Yoga 900. I honestly crossed off Lenovo as a vendor after the vast amounts of stupidity they imposed on their clients — and that was after I decided to stop buying ThinkPad-branded laptops, given their declining build quality and bad technical choices. Nevertheless, you don’t look a gift horse in the mouth.

The out of the box experience of the Yoga is very much on par with the one I had with the XPS, which is to say: fairly Apple-like.

The Yoga 900 is a fairly well made machine. It’s an Intel Skylake platform, with a nice screen and good components. The screen can fold and turn the whole thing into a “tablet”, except that the keyboard faces downward, so it’s weird to handle in that mode. Plus, a 13” tablet is a pretty big thing to carry around. On the other hand, folding the laptop into a “tent” and using an external keyboard and pointer device is a nice twist on the whole “home office” approach. The webcam is, thankfully, centered and placed at the top of the panel — something that Lenovo has apparently changed in the 910 model, when they realised that folding the laptop would put the webcam at the bottom of the panel.

On the software side, the first boot experience into Windows 10 was definitely less than stellar. The Lenovo FBE software was not HiDPI-aware, which posed interesting challenges to the user interaction. This is something that a simple bit of QA would have found out, but apparently QA is too much to ask when dealing with a £1000 laptop. Luckily, I had to deal with that only inasmuch as I needed to get and install the latest firmware updates before installing Linux on the machine. Again, I went for Fedora.

As in the case of the Dell XPS, Fedora recognised all components of the hardware platform out of the box. Even the screen rotation and folding works out of the box — though it can still get into inconsistent states when you move the laptop around, so I kind of recommend you keep the screen rotation locked until you actually need it.

On the power management side, I was impressed by how well the sleep states conserve battery power; I’m able to leave the Yoga suspended for a week and still have power on resume. The power brick has a weird USB-like connector to the laptop, which makes me wonder what on earth Lenovo engineers were thinking; on the other hand, the adapter has a USB port, which means you can charge it from a battery pack or from a USB adapter as well. There’s also a USB-C port, but I still haven’t tested if I can put power through it.

The keyboard is probably the biggest letdown; the travel distance and feel of the keys are definitely not up to par with the Dell XPS, or with the Apple keyboards. The 900 has an additional column of navigation keys on the right edge that invariably messes up my finger memory — though it seems that the 910 has moved them to Function key combinations [5]. The power button is on the right side of the laptop, which makes for unintended suspend/resume cycles when trying to plug in the headphones, or when moving the laptop. The touchpad is, sadly, very much lacking, with ghost tap events that forced me to disable the middle-click emulation everywhere [4].

The good

  • Fully supported hardware (Fedora 25)
  • Solid build
  • Nice flip action
  • Excellent power management

The bad

  • Keyboard is a toy
  • Touchpad is a pale imitation of a good pointing device

  1. Which may still happen, all things considered; I really like the Air as a travel laptop. 

  2. After almost a decade with US layouts I find the UK layout inferior to the point of inconvenience. 

  3. On my desktop machine/gaming rig I dual boot between Windows 10 and Ubuntu GNOME, mostly because of the nVidia GPU and Steam. 

  4. That also increased my hatred of the middle-click-to-paste-selection easter egg a thousandfold, and I already hated the damned thing so much that my rage burned with the intensity of a million suns. 

  5. Additionally, the keyboard layout is UK — see note 2 above. 

by ebassi at December 17, 2016 12:00 AM

December 02, 2016

Tomas Frydrych

Winter's upon us

Winter's upon us

It's that time of the year again when the white stuff is covering the hills. This year it's come early and without warning, one day still running in shorts, the next day rummaging for the winter gear (and, typically, by the time I have finished writing this, much of the snow is gone again). Winter hill running is a bit of an acquired taste, but taking on the extra challenges is, often, worth it.

The key to having an enjoyable time in the Scottish hills throughout the winter can, I think, be summed up in one word: respect. The winter hills are a serious place. Whereas getting benighted in split shorts and a string vest during the summer will earn one a (perhaps very) uncomfortable night and a bruised ego, the same scenario during the winter would quite likely end up with one's mates taking the piss over sausage rolls after the funeral (editor's note: mate quality varies). As hill runners, we tend to operate with smaller margins of comfort and safety, and in winter time it is critical to maintain those margins when things don't go to plan (which, among other things, means running the winter hills is not a sensible way to learn the rudiments of mountain craft; one should serve that apprenticeship in some other way first).

However, with that 'participation statement' out of the way, the winter does bring exciting running opportunities for those minded to take on the challenges, and one need not be called Kilian or Finlay to venture into the snow. So here are some of my thoughts on the matter, things I have learnt, at times through bad mistakes; it's stuff that works for me, it might not work for you, but perhaps it might help someone avoid some of my mistakes.

Planning

In the winter months careful planning is doubly important for two reasons: the limited daylight hours mean longer runs are always a tight squeeze, and the rate of progress along snow-covered ground is impossible to predict -- a firm neve is grin-inducing, foot-deep powder will make you work hard for a good pace, and a breakable crust on top of a foot of soft snow will turn the air blue and reduce one's pace to a crawl; I have been venturing into mountains for some four decades, but I still know of no way to reliably predict which you will find up there from down below. As such, it is important to have realistic expectations. My rule of thumb is to expect to need around 30% extra time compared to the summer, and to plan for 50% more -- so a route that takes six hours in summer gets roughly an eight-hour estimate and a nine-hour plan. Your adjustments might be quite different, but it pays to be conservative, and, always, to plan for the dark.

On the longer runs, it is also important to have a bailout plan. What do I do if the ground conditions make it impossible to complete the run? (In the winter, this is a perfectly normal scenario.) Is there a point of no return? What are the early exit lines beyond this? Looking for alternative options when the sh!t is about to hit the fan is a recipe for turning minor difficulties into an emergency, and (this should not need saying, but does) expecting to be lifted out by an MRT simply because 'too much snow' / 'getting tired' / etc., is not a responsible plan. Sometimes there is no convenient bailout possible; that too is worth knowing in advance.

It is also important to understand how winter conditions impact on the technical difficulty of terrain -- the basic rule of thumb is that even the easiest of scrambles turn into full-blown technical climbs that cannot be safely tackled with the sort of equipment we as runners might carry. Corries get corniced, and pose avalanche risk. Consequently, some excellent summer routes might require variations in the winter, and some are simply not suitable -- it is important to assess this beforehand to avoid getting caught out, and, yes, to have a bailout plan.

Finally, one should pay attention to the weather forecast, and in particular wind speed and direction at the target altitude (the MeteoEarth app is a great resource for this). Even a moderately strong headwind has a big impact on a runner's rate of progress; in the summer, one might wave that off as MTFU, but for the winter runner wind is a significant risk factor beyond the obvious windchill.

Emergency Kit

Anyone venturing into the winter hills should be equipped well enough to last safely for a few stationary hours on the open hill. This is nothing more than common sense. Say I find myself in a genuine emergency necessitating an MRT call out. And say I am lucky enough to be somewhere with sufficient mobile phone coverage to raise the alarm. It takes time for the call to go out, the team to mobilise, and to get to me. Even if I am literally in the MRT's back yard, I can expect to spend a couple of hours stationary on my own before the help can reach me, and possibly a lot more if I am in a remote area, or the weather conditions are poor. While in the summer MTFU can be a perfectly valid backup plan, the winter hills don't stand for such hubris.

As runners, we rely on our high energy expenditure to maintain our body temperature. In order to do that, we have to maintain a good pace, which in turn requires that we travel light. This works great, until it doesn't. While 'safe' is not the same as 'comfortable', it is fair to say, I think, that during the winter, running kit falls well below even the most optimistic view of a safety margin. In the summer I generally aim to only carry the stuff I intend to use; in the winter I always carry a few things I hope not to have to use (yes, it's a nuisance, but I just think of it as weight training):

  • Some sort of a shelter (for a solo runner a crisp-bag type of bivvy bag might be a reasonable option, for bigger parties a bothy bag is a much better alternative),
  • A properly warm layer well above what I anticipate to need for the run itself (in my case usually a North Face ThermoBall hooded jacket),
  • Very thick woollen socks (wet running shoes and socks facilitate rapid heat loss when not moving; if I had to use that crisp bag, I would want them off my feet),
  • Buffalo mitts -- these are the best warmth-to-weight gloves I know of, good to well below zero and working when wet; everyone should have a pair of these (they are a bit slippery and clumsy, which doesn't matter when running, and in any case they are perfect as a spare),
  • Extra emergency food (chocolate is a good carbohydrate / fat mix, with decent calorie density).

It is also worth keeping in mind, particularly when running solo in remote, quiet areas, that mobile phone coverage in the Scottish hills is still very patchy. There are technological answers to this, such as PLBs or satellite messengers, and if you spend a lot of time in the hills on your own, this might be worth considering (I wrote a bit about the Delorme InReach SE gadget previously). And, of course, there is the old-fashioned option of letting someone know your route and expected return time; technology is great, but sometimes the old-fashioned ways are even better.

Feet

One of the big challenges during the winter months is keeping the feet warm. Ironically, the mild Scottish weather tends to work against us, with runs often starting below the freezing line in a bog, then moving well above the freezing line higher up -- wet shoes are the inescapable reality of Scottish hill running, and a real nuisance in subzero temperatures (anyone who has had their running shoes freeze solid on their feet during a brief stop to have a bite to eat knows exactly what I mean).

These days specialist waterproof winter shoes with integral gaiters exist, but, being aimed at the European market, they seem to be impossible to get hold of in the UK, and, who knows, they might not be that great in our peculiar conditions. The basic approach to this problem that works quite well for me is:

  • Thickish woollen socks (I tend to wear two pairs of the inov-8 mudsock),
  • Waterproof socks; they are not really waterproof (the elastic membrane in these cannot take the sort of pressures running exerts in the toe box), and not too warm, but they prevent water from circulating freely, effectively creating a sort of wetsuit effect,
  • A 1/2 size bigger shoe than I use in the summer to accommodate the socks.

Traction

Fell shoes provide surprisingly good traction on snow-covered ground, in fact considerably better than any mountaineering boots I have ever owned, but they have their limits: they don't work on iced-up ground, or beyond a certain, not very steep, gradient. These days there are specialist winter shoes with tiny carbide spikes, but these are tiny indeed and don't work if the ice is dusted with snow -- the gain over a fell shoe is too marginal to make it worth the expense, IMO.

For a lot of my winter needs Kahtoola Microspikes provide the answer (other traction devices exist, but unless you plan to accessorise with a shopping bag on wheels, my advice would be to go with Kahtoola). The microspikes are easy to put on and take off, they don't impede running, and, in a limited range of conditions, the 9mm spikes provide excellent traction -- they work brilliantly on ice and neve (well worth the looks you get hammering it down an iced-up hill), but they have a gradient limit, roughly the angle at which you can consistently keep the entire forefoot on the slope, reduced further if a hard surface is covered with some fresh snow; how much depends on the type of snow, etc.

However, as much as I love the microspikes, it must be said emphatically: microspikes are not a crampon substitute, and they become positively lethal when the ground steepens enough to automatically switch into 'front pointing' mode. My simple rule is: if the terrain does not call for packing an ice axe, I take the microspikes.

In fact most of the winter Scottish hills do require bringing an ice axe (and knowing how to use it), and crampons. Normal crampons are, for good reasons, designed for stiff-soled boots, and can't be used with running shoes. The Kahtoola KTS Crampon can, as its linkage bar is made of a flexible leaf spring. The 23mm spikes are a bit shorter than on a typical walking crampon, but more than adequate for the sort of conditions I run in. The front points are quite steep, which makes them less of a trip hazard, and with a bit of practice it is possible to run in these quite well. By the same token, the steep front points make them unsuitable for very steep ground.

The most important thing to be aware of with these is that wearing flexible shoes means steep front pointing is difficult, and very strenuous on the calves; they are quite capable, but the experience is nothing like a normal crampon, that's for sure.

As for axes, there are many lightweight models on the market. Personally, I have great misgivings about any axe that doesn't have a solid steel head, as experiments have shown that aluminium picks are less effective in emergency self-arrest, and I don't trust a bit of steel riveted to an alloy head either. My axe of choice is the BD Raven Ultra, though I have to say the spike design is poor, making it hard work driving the shaft into snow.

Have fun!

by tf at December 02, 2016 09:55 PM

November 01, 2016

Emmanuele Bassi

Constraints (reprise)

After the first article on Emeus, various people expressed interest in the internals of the library, so I decided to talk a bit about what makes it work.

Generally, you can think about constraints as linear equations:

view1.attr1 = view2.attr2 × multiplier + constant

You take the value of attr2 on the widget view2, multiply it by a multiplier, add a constant, and apply the resulting value to the attribute attr1 on the widget view1. You don’t even need view2.attr2; for instance:

view1.attr1 = constant

is a perfectly valid constraint.

You also don’t need to use an equality; these two constraints:

view1.width ≥ 180
view1.width ≤ 250

specify that the width of view1 must be in the [ 180, 250 ] range, extremes included.
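
Expressed through Emeus’s programmatic API, this might look something like the following (a sketch only, not documented API surface: it assumes the GJS conventions from the earlier introduction post, with layout an EmeusConstraintLayout and view1 a widget already packed into it; the GE/LE relation names mirror the ge and le nicks used in the GtkBuilder format):

// Sketch: keep view1's width between 180 and 250 pixels
layout.add_constraints([
    // view1.width >= 180
    new Emeus.Constraint({ target_object: view1,
                           target_attribute: Emeus.ConstraintAttribute.WIDTH,
                           relation: Emeus.ConstraintRelation.GE,
                           constant: 180.0 }),
    // view1.width <= 250
    new Emeus.Constraint({ target_object: view1,
                           target_attribute: Emeus.ConstraintAttribute.WIDTH,
                           relation: Emeus.ConstraintRelation.LE,
                           constant: 250.0 }),
]);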

Layout

A layout, then, is just a pile of linear equations that describe the relations among its elements. So, if we have a simple grid:

+--------------------------------------------+
| super                                      |
|  +----------------+   +-----------------+  |
|  |     child1     |   |     child2      |  |
|  |                |   |                 |  |
|  +----------------+   +-----------------+  |
|                                            |
|  +--------------------------------------+  |
|  |               child3                 |  |
|  |                                      |  |
|  +--------------------------------------+  |
|                                            |
+--------------------------------------------+

We can describe each edge’s position and size using constraints. It’s important to note that there’s an implicit “reading” order that makes it easier to write constraints; in this case, we start from left to right, and from top to bottom. Generally speaking, it’s possible to describe constraints in any order, but the Cassowary solving algorithm is geared towards the “reading” order above.

Each layout has some implicit constraints already available. For instance, the “trailing” edge is equal to the leading edge plus the width; the bottom edge is equal to the top edge plus the height; the center point is equal to the leading or top edge, plus the width or height divided by two. These constraints help in solving the layout, as well as provide additional values to other constraints.
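
Written in the same equation form as above, these implicit constraints look like this (using the attribute names that appear in the equations further down):

view.end = view.start + view.width
view.bottom = view.top + view.height
view.centerX = view.start + view.width × 0.5
view.centerY = view.top + view.height × 0.5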

So, let’s start.

From the first row:

  • the leading edge of the super container is the same as the leading edge of child1, minus a padding
  • the trailing edge of child1 is the same as the leading edge of child2, minus a padding
  • the trailing edge of child2 is the same as the trailing edge of the super container, minus a padding
  • the width of child1 is the same as the width of child2

From the second row:

  • the leading edge of the super container is the same as the leading edge of child3, minus a padding
  • the trailing edge of child3 is the same as the trailing edge of the super container, minus a padding

From the first column:

  • the top edge of the super container is the same as the top edge of child1, minus a padding
  • the bottom edge of child1 is the same as the top edge of child3, minus a padding
  • the bottom edge of the super container is the same as the bottom edge of child3, minus a padding
  • the height of child3 is the same as the height of child1

From the second column:

  • the top edge of the super container is the same as the top edge of child2, minus a padding
  • the bottom edge of child2 is the same as the top edge of child3, minus a padding
  • the bottom edge of the super container is the same as the bottom edge of child3, minus a padding
  • the height of child3 is the same as the height of child2

As you can see, there are some redundancies; these are necessary to ensure that the layout is fully resolved, though obviously there are some properties of the elements of the layout that implicitly eliminate some results. For instance, if child3's height is the same as child1's, and child1 lies on the same row as child2 and is an axis-aligned rectangle, then it immediately follows that child3 must have the same height as child2 as well. It's important to note that, from a solver's perspective, there are only values, not boxes, and you could use the solver with any kind of geometric shape; only the constraints give us the information on what those shapes should be. It's also easier to start from a fully constrained layout and then remove constraints, than to start from a loosely constrained layout and add constraints until it's stable.

Representation

From the text description we can now get into a system of equations:

  • super.start = child1.start - padding
  • child1.end = child2.start - padding
  • super.end = child2.end - padding
  • child1.width = child2.width
  • super.start = child3.start - padding
  • super.end = child3.end - padding
  • super.top = child1.top - padding
  • child1.bottom = child3.top - padding
  • super.bottom = child3.bottom - padding
  • child3.height = child1.height
  • super.top = child2.top - padding
  • child2.bottom = child3.top - padding
  • child3.height = child2.height

Apple, in its infinite wisdom and foresight, decided that this form is still too verbose. After looking at the Perl format page for far too long, Apple engineers came up with the Visual Format Language, or VFL for short.

Using VFL, the constraints above become:

H:|-(padding)-[child1(==child2)]-(padding)-[child2]-(padding)-|
H:|-(padding)-[child3]-(padding)-|
V:|-(padding)-[child1(==child3)]-(padding)-[child3]-(padding)-|
V:|-(padding)-[child2(==child3)]-(padding)-[child3]-(padding)-|

Emeus, incidentally, ships with a simple utility that can take a set of VFL format strings and generate GtkBuilder descriptions that you can embed into your templates.
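
As a purely hypothetical sketch of what such a generated description might look like for the child3 row above (the attribute nicks and the sign convention are assumptions, modelled on the GtkBuilder format from the introduction post, with a padding of 8):

<constraints>
  <constraint target-object="child3"
              target-attr="start"
              relation="eq"
              source-object="super"
              source-attr="start"
              constant="8"/>
  <constraint target-object="child3"
              target-attr="end"
              relation="eq"
              source-object="super"
              source-attr="end"
              constant="-8"/>
</constraints>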

Change

We’ve used a fair number of constraints, or four lines of fairly cryptic ASCII art, to basically describe a non-generic GtkGrid with two equally sized horizontal cells on the first row, and a single cell with a column span of two; compared to the common layout managers inside GTK+, this does not seem like a great trade-off.

Except that we can describe any other layout without necessarily having to pack widgets inside boxes, with margins and spacing and alignment rules; we also don’t have to change the hierarchy of the boxes if we want to change the layout. For instance, let’s say that we want child3 to have a different horizontal padding, and a minimum and maximum width; we just need to change the constraints involved in that row:

H:|-(hpadding)-[child3(>=250,<=500)]-(hpadding)-|

Additionally, we now want to decouple child1 and child3 heights, and make child1 a fixed height item:

V:|-(padding)-[child1(==250)]-(padding)-[child3]-(padding)-|

And make the height of child3 move within a range of values:

V:|-(padding)-[child2]-(padding)-[child3(>=200,<=300)]-(padding)-|

For all these cases we’d have to add intermediate boxes in between our children and the parent container — with all the issues of theming and updating things like GtkBuilder XML descriptions that come with that.

Future

The truth is, though, that describing layouts in terms of constraints is another case of software engineering your way out of talking with designers; it’s great to start talking about incremental simplex solvers, and systems of linear equations, and ASCII art to describe your layouts, but it doesn’t make UI designers really happy. They can deal with it, and having a declarative language to describe constraints is more helpful than parachuting them into an IDE with a Swiss army knife and a can of beans, but I wouldn’t recommend it as a solid approach to developer experience.

Havoc wrote a great article on how layout management APIs don’t necessarily have to suck:

  • we can come up with a better, descriptive API that does not make engineers and designers cringe in different ways
  • we should have support from our tools, in order to manipulate constraints and UI elements
  • we should be able to combine boxes (which are easy to style) and constraints (which are easy to lay out) together in a natural and flexible way

Improving layout management should be a goal in the development of GTK+ 4.0, so feel free to jump in and help out.

by ebassi at November 01, 2016 05:07 PM

October 17, 2016

Emmanuele Bassi

Constraints

GUI toolkits have different ways to lay out the elements that compose an application’s UI. You can go from fixed layout management — perhaps best represented by the old ‘90s Visual tools from Microsoft; to the “springs and struts” model employed by the Apple toolkits until recently; to the “boxes inside boxes inside boxes” model that GTK+ uses to this day. All of these layout policies have their own distinct pros and cons, and it’s not unreasonable to find that many toolkits provide support for more than one policy, in order to cater to more use cases.

For instance, while GTK+ user interfaces are mostly built using nested boxes to control margins, spacing, and alignment of widgets, there’s a sizeable portion of GTK+ developers that end up using GtkFixed or GtkLayout containers because they need fixed positioning of child widgets — until they regret it, because now they have to handle things like reflowing, flipping contents in right-to-left locales, or font size changes.

Additionally, most UI designers do not tend to “think with boxes”, unless it’s for Web pages, and even in that case CSS affords a certain freedom that cannot be replicated in a GUI toolkit. This usually results in engineers translating a UI specification made of ties and relations between UI elements into something that can be expressed with a pile of grids, boxes, bins, and stacks — with all the back and forth, validation, and resources that the translation entails.

It would certainly be easier if we could express a GUI layout in the same set of relationships that can be traced on a piece of paper, a UI design tool, or a design document:

  • this label is at 8px from the leading edge of the box
  • this entry is on the same horizontal line as the label, its leading edge at 12px from the trailing edge of the label
  • the entry has a minimum size of 250px, but can grow to fill the available space
  • there’s a 90px button that sits between the trailing edge of the entry and the trailing edge of the box, with 8px between either edge and itself
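
Taken together, these four relationships describe a single horizontal row; in the Visual Format Language mentioned towards the end of this post, the row might be sketched (hypothetically, plugging in the values above) as:

H:|-(8)-[label]-(12)-[entry(>=250)]-(8)-[button(==90)]-(8)-|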

Sure, all of these constraints can be replaced by a couple of boxes; some packing properties; margins; and minimum preferred sizes. If the design changes, though, as it often does, reconstructing the UI can become arbitrarily hard. This, in turn, leads to pushback against design changes from engineers — and the cost of iterating over a GUI is compounded by technical inertia.

For my daily work at Endless I’ve been interacting with our design team for a while, and trying to get from design specs to applications more quickly, and with less inertia. Having CSS available allowed designers to be more involved in the iterative development process, but the CSS subset that GTK+ implements is not allowed — for eminently good reasons — to change the UI layout. We could go “full Web”, but that comes with a very large set of drawbacks — performance on low-end desktop devices, distribution, interaction with system services being just the most glaring ones. A native toolkit is still the preferred target for our platform, so I started looking at ways to improve the lives of UI designers with the tools at our disposal.

Expressing layout through easier to understand relationships between its parts is not a new problem, and as such it does not have new solutions; other platforms, like the Apple operating systems, or Google’s Android, have started to provide this kind of functionality — mostly available through their own IDE and UI building tools, but also available programmatically. It’s even available for platforms like the Web.

What many of these solutions seem to have in common is using more or less the same solving algorithm — Cassowary.

Cassowary is:

an incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities. Constraints may be either requirements or preferences. Client code specifies the constraints to be maintained, and the solver updates the constrained variables to have values that satisfy the constraints.

This makes it particularly suited for user interfaces.

The original implementation of Cassowary was written in 1998, in Java, C++, and Smalltalk; since then, various other re-implementations surfaced: Python, JavaScript, Haskell, slightly-more-modern-C++, etc.

To that collection, I’ve now added my own — written in C/GObject — called Emeus, which provides a GTK+ container and layout manager that uses the Cassowary constraint solving algorithm to compute the allocation of each child.

In spirit, the implementation is pretty simple: you create a new EmeusConstraintLayout widget instance, add a bunch of widgets to it, and then use EmeusConstraint objects to determine the relations between children of the layout:

simple-grid.js, lines 89–170:
        let button1 = new Gtk.Button({ label: 'Child 1' });
        this._layout.pack(button1, 'child1');
        button1.show();

        let button2 = new Gtk.Button({ label: 'Child 2' });
        this._layout.pack(button2, 'child2');
        button2.show();

        let button3 = new Gtk.Button({ label: 'Child 3' });
        this._layout.pack(button3, 'child3');
        button3.show();

        this._layout.add_constraints([
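            // NB: a constraint that omits target_object (or source_object)
            // is resolved against the layout itself; for instance, the first
            // constraint below reads as "layout.start = button1.start - 8",
            // i.e. "super.start = child1.start - padding" from the equations.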
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.WIDTH,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.WIDTH }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   constant: -8.0 }),
        ]);

A simple grid

This obviously looks like a ton of code, which is why I added the ability to describe constraints inside GtkBuilder XML:

centered.ui, lines 28–45:
            <constraints>
              <constraint target-object="button_child"
                          target-attr="center-x"
                          relation="eq"
                          source-object="super"
                          source-attr="center-x"
                          strength="required"/>
              <constraint target-object="button_child"
                          target-attr="EMEUS_CONSTRAINT_ATTRIBUTE_CENTER_Y"
                          relation="eq"
                          source-object="super"
                          source-attr="center-y"/>
              <constraint target-object="button_child"
                          target-attr="width"
                          relation="ge"
                          constant="200"
                          strength="EMEUS_CONSTRAINT_STRENGTH_STRONG"/>
            </constraints>
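
Note, incidentally, that the description above mixes the short enumeration nicks (center-x, eq, required) with the full C enumeration names (EMEUS_CONSTRAINT_ATTRIBUTE_CENTER_Y, EMEUS_CONSTRAINT_STRENGTH_STRONG); both spellings appear to be accepted by the parser.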

Additionally, I’m writing a small parser for the Visual Format Language used by Apple for their own auto layout implementation — even though it does look like ASCII art of Perl format strings, it’s easy to grasp.

The overall idea is to prototype UIs on top of this, and then take advantage of GTK+’s new development cycle to introduce something like this and see if we can get people to migrate from GtkFixed/GtkLayout.

by ebassi at October 17, 2016 05:30 PM

October 07, 2016

Tomas Frydrych

The MTB Impact Myth

The MTB Impact Myth

The mountain bike is a great iteration in the evolution of the bicycle, opening a whole new world of possibilities as well as challenges. There are places where this is undoubtedly more true than others, and Scotland is, unquestionably, such a place. Not simply because of our long standing tradition of access, but because much of our spectacular landscape lends itself well to what the mountain bike has to offer.

Folk who don't ride mountain bikes sometimes (often?) don't get it. I do. Over the years, I have done a fair bit of riding of all kinds: the trail centres and the local skooshes, the winter mud and darkness, the long days of multiday expeditions into some of the more remote parts of Scotland, sometimes in sunshine, sometimes in torrential rain, the alpine summer holidays in search of dust, the amazing Megavalanche extravaganza. I have even earned some titanium bits somewhere along the way.

And like many mountain bikers, I have for years believed that the environmental impact of a person on a mountain bike is more or less identical to that of a person on foot. Well, I have since lost faith in that entrenched dogma; it is not supported by my day-to-day experience, and, as it happens, it is not supported by the research either.

But There are Studies!

The reason most mountain bikers claim the bike has a similar environmental impact to a pedestrian is that back in the prehistoric times of mountain biking a couple of studies came to that conclusion, specifically a 1994 study from Montana by Wilson and Seney[1], and a 2001 Canadian study by Thurston and Reader[2]. The former of these has little relevance to the Scottish situation, or the issues as they stand today. It examines the impact of different users on pre-existing, hardened mountain trails -- in Scotland the equivalent might be recent hard-packed path restoration work, but in this context it perhaps makes sense to talk about the cost of maintenance, but not of environmental impact. In addition, the study is seriously methodologically flawed (see the critique in Pickering, 2010[3]).

The study by Thurston and Reader[2] looks at the impact of hikers and bikes on the pristine floor of a deciduous forest. The attempt to quantify the impact in this study is rigorous (they essentially count individual blades of grass), but unfortunately the very desire to exactly quantify the impact leads to a setup of the load test that does not at all reflect how bikes are ridden. The test areas are just 4m long (i.e., about two bike lengths), with an extra 0.5m buffer at either end, 1m wide, with a marked centre line to follow as the bikes are rolled down:

Bikers traveled at a moderate speed, usually allowing bicycles to roll down lanes without pedaling where the slope would allow. Brakes were applied as needed to keep bicycles under control. Over rough terrain, some firm braking, occasional skidding, and some side-to-side movement of the front tire was required to maintain balance until a path developed. Once participants reached the bottom of a lane, they would turn and circle around the nearest end of the block back to the top of the lane to make a second pass.

Considering the detail to which the experimental setup is documented up to this point, the failure to set parameters for the riding is striking. The 'moderate speed' declaration needs to be interpreted in the light of 'side-to-side movement of the front tire was required to maintain balance', and the fact that the runs are just 5m long on not extremely steep slopes -- it is reasonable to conclude that by today's mountain biking standards these test runs were extremely slow.

The 4m length of the test area is entirely inadequate to capture bike behaviour -- there is an implicit assumption that the bike impact is homogeneous along its run, which is simply not the case: mountain bike riding is extremely dynamic. There are no corners, yet the area immediately before and in a corner is where the greatest impact of a bike is generally observed (i.e., braking bumps and skid marks). Similarly there are no uphill runs, for 'bikers could not make uphill passes, even in the lowest of 21 gears' -- this happens to be a normal part of mountain biking, and a realistic scenario would have had the biker make a best effort and then push. There is no defined braking protocol, and the 'firm braking' has to be seen in the context of what has been said about the speed above. The study has also been restricted to dry days only, which eliminates the situation when the ground is most vulnerable.

Not to beat about the bush, this is not how an average mountain biker rides their bike; bikes are invariably ridden fast, braked and cornered hard, and wheels dug in or spun on climbs -- the best that can be said about this study is that it represents the most optimistic case of a cautious rider on a dry day, and that in itself should raise a few alarm bells.

Pickering's 2010[3] survey shows that up to 2010 there had been only one other study on the comparative impact of pedestrian and bike traffic, done in Australia (Chiu and Kriwoken, 2003[4]) -- this study suffers from similar limitations to its two predecessors; it takes place on a hardened track (a fire road), and the bike load application is not adequately defined beyond 'recreational riding'.

There is a more recent Australian study by Pickering, 2011[5], that looks at mountain biking impact on subalpine grass. It uses a very similar experimental setup to that used by Thurston and Reader (in spite of the earlier Pickering survey[3] noting the flaws in the methodology), on an 8 degree slope. This study has also been limited to dry days. As such it again represents the most optimistic case. This study is, however, interesting for two reasons. The results show that mountain biking impact on vegetation scales worse than that of a pedestrian, as by 500 passes the bike lanes were significantly worse off. It further shows that riding up or down hill is considerably more damaging than riding on the flat; I shall return to this point later.

So here you go, that's the studies, and, no, the results are not too encouraging. They show that at best mountain bikes can hope for an equivalent impact to pedestrians when ridden with the greatest care in the dry. Nevertheless, this does not stop the mountain biking press from peddling the myth, and worse. As recently as November of last year, the MBR claimed that 'Research reveals walkers do more damage to trails than mountain bikers'. This is complete BS. The basis for the claim is a 2009 USGS study[6], which, however, was not conducted in a way that would allow meaningful comparison between different user groups. This point is clearly made in the paper:

In contrast, mountain biking, at 3.5 m³/km, has the lowest estimated level of soil loss, about 30% as much as on hiking trails. This finding reflects a limited mileage of trails where mountain biking was the predominant use (3.1 km), and these trails received low to moderate levels of use. [emphasis mine]

This point is also reiterated in Pickering's 2010 review of the research[3], but neither of Pickering's papers is mentioned -- one can only wonder whether this is because the MBR writer is completely inept at Googling, or whether it has anything to do with the fact that the reality does not fit his agenda (and headline).

Empirical Observations from the Real World

As I have noted above, the main issue with all of the rigorous studies is that they fail to create realistic riding conditions for their experiments. This is understandable. In order to get realistic data the research would have to be carried out on a real trail, considerably longer than the 4m test lanes, using real riders, preferably unaware of the study. This, however, makes quantitative assessment very hard, never mind comparison with other types of use. What we are left with is having to make judgements based on empirical observations in the real world.

So here are some of mine, mostly from the Dumyat hill above Stirling (for those interested, I also have 65 images that capture virtually all of the erosion on the west side of the hill, with comments, at Flickr). About four years ago I picked up an injury that kept me off the bike for the better part of six months. During this time I started hill running to keep the weight from piling on, and quickly got hooked. Much of my initial running was done in the exact same places around Stirling I used to ride, and in the Ochils, with Dumyat, the closest reasonably sized hill to where I live, becoming a regular destination for my midweek runs.

Compared to its neighbours in the Ochils, Dumyat is quite a peculiar hill: a lump of loosely bonded volcanic rocks, covered in a thin layer of soil (in places no more than a couple of inches thick), which in turn is held together by vegetation, primarily grass and some heather. In its natural form, the hill is pretty resilient to erosion. However, when the vegetation is damaged, water quickly washes off the top soil, and once the rock is exposed, it starts to disintegrate rapidly into gravel, forming ever bigger and deeper water channels. By this point the erosion is well beyond the hill's ability to self repair. (Just to be clear on this point, the bulk of the erosion on the hill, i.e., the actual moving of soil, is caused by water run-off, not by boots or wheels. The character of the rock is such that once exposed it does not require any further mechanical disturbance to erode. This makes the pace and physical scale of the erosion rather large relative to the overall visitor numbers.)

The overall pattern of vegetation stripping followed by rapid water erosion is one of the main erosion patterns that can be observed on many of Scotland's more popular hills, except in somewhat accelerated form. Being heavily used by both walkers and bikes, Dumyat thus makes a possibly useful case study with a view toward the bigger picture.

The thing about running off road, particularly as a novice, is that you are very acutely aware of what is under your feet; far more than when either walking or cycling. I did not really set out at any point to investigate erosion on Dumyat; I simply started noticing more clearly than before where the erosion is, and, over time, also how it forms. Over the next few months I made a couple of observations which forced me to change my view on the bike impact.

The first of these was noticing that there are significant differences in the way in which pedestrian and wheeled traffic impacts the vegetation. A pedestrian exerts a primarily downward crushing force. When this force is applied repeatedly in the same location, the area eventually gets de-vegetated, creating a foot shaped hollow, or rather a series of such hollows forming staggered steps. As the pressure continues to be applied, these hollows enlarge, until they form a continuous erosion scar. Notably, while the hollows remain in their discrete state, the area continues to be fairly resistant to water erosion, which only kicks in in earnest once the individual hollows merge.

In contrast, a bicycle wheel primarily exerts a tangential drag force, generated either by accelerating (i.e., pedalling), decelerating (i.e., braking) or centrifugal force (i.e., cornering). Pickering, 2011, as mentioned above, noted that bikes cause considerably more damage when going up or down than on the level. I believe this is the reason why: the wheel does not simply crush vegetation, it pulls on it, tending to damage roots faster. In real off-road riding, the bike nearly always generates drag, for freewheeling in a straight line without any braking is a fairly rare occurrence (on Dumyat there is only one longer section that allows for this, and it also happens to be one that shows very limited signs of erosion). The other important, if self-evident, difference from the pedestrian impact is that the wheel generates a single continuous trace. This, unlike the pedestrian indentations, tends to be subject to additional water run-off immediately.

One of the obvious fall-outs from this is that at the start of, and well into, the erosion process we can reliably differentiate the erosion trigger. Over the last 3 years, I have only observed a handful of new areas of erosion that were clearly pedestrian triggered. At the same time, any erosion scar that is narrower than around 45cm can be unambiguously classed as bike-triggered. My conservative estimate is that bikes account for somewhere around 80% of the vegetation stripping on the hill that triggers subsequent water erosion, even though the pedestrian numbers seem considerably higher than the number of bike users.

(The actual numbers are hard to estimate without someone sitting there for a couple of weeks with a clicker. My mate Callum reckons bikes only account for around 11% of the hill users. I suspect this number is somewhat understated. My own experience shows that the user make up is very weather dependent, with the walkers being mainly fair-weather users; in good weather conditions, a 10% estimate for the bikes might be in the right ballpark. However, on a clagged out autumn day, the pedestrians tend to be reduced mainly to the local hill runners, whose numbers are fairly limited even compared to the local mountain bikers. The biker numbers are less affected by weather; they come for the exceptional quality of the riding, not the views, and tend to ride all year round as well as after dark. I think it is reasonable to say that the pedestrians outnumber bikes by several times. This, however, is not good news for the bikes, in view of what was said in the previous paragraph.)

The other observation I made was regarding the pace with which the erosion develops to the 'beyond self repair' state. To my surprise, erosion initiated by pedestrian traffic develops relatively slowly -- because the discrete hollows tend to resist water erosion, it takes a large number of repetitions before a continuous scar forms. In some cases I have observed on Dumyat, as well as elsewhere, the process from the first de-vegetated hollows appearing to the forming of a continuous scar can take a year or more.

The situation with bike-initiated erosion is very different. When the ground is saturated with water (which is 6-8 months of the year), it can take just a single, one-off bike track to remove most of the vegetation and kick-start the water erosion process. In one particular place I have observed how a single drag mark in the grass turned into a 'beyond self-repair' deep, two-inch-wide scar in a matter of weeks, and then rapidly progressed from there.

The empirical observations I have made over the recent years make me believe that the actual bike impact in the typical wet and soggy Scottish conditions is considerably worse than the best-case scenarios of the idealised studies. The relative damage is, of course, hard to quantify, but an educated guess can be made by noting the visual disturbance caused by repetitive use by each of the two user groups. Having observed the condition of the ground after both hill running races and mates' bike races taking place on similar types of ground, I am inclined to believe that the impact of a single bike on surface vegetation might be as much as an order of magnitude bigger than that of a pedestrian, i.e., that it takes a 100-strong field in a hill running race to have the impact of 10 bikes in the same sort of environment.

As far as I am concerned, the relative comparison does not really matter. I have simply stated my observations this way because that is how the mountain biking community chooses to frame the issue. Also, this is not about Dumyat, a rather insignificant hill used for grazing, on the boundary between an urban environment on one side and heavily over-grazed hills (a chunk of which has recently been turned over to commercial forestry) on the other. This puts the erosion somewhat into perspective. This is about discernible patterns of damage, which I am noting with concern at other, more environmentally and culturally significant sites (e.g., Ben Lawers, the Cairngorms). Riding big hills on bikes has in recent years become very popular, and I understand why; but this makes it important that the mountain biking community understands the reality of bike impact on the mountain environment, and that it commits to the 'responsible' in Responsible Access. Sometimes we have to adjust the way in which we access the 'wild' places. At times the bike is simply not appropriate for a given environment and/or conditions. At times the runner is not appropriate either (FWIW, I now tend to stay away from Dumyat during the worst of the wet season, when even the fell shoes are just too much).

PS: Well done for making it this far. :)

Literature Cited

[1] Wilson, J.P., Seney, J.P., 1994. Erosional impacts of hikers, horses, motorcycles, and off-road bicycles on mountain trails in Montana. Mountain Research and Development 14, 77–88.

[2] Thurston, E., Reader, R.J., 2001. Impacts of experimentally applied mountain biking and hiking on vegetation and soils of a deciduous forest. Environmental Management 27, 397–409.

[3] Pickering, C.M., Hill, W., Newsome, D., Leung, Y.-F., 2010. Comparing hiking, mountain biking and horse riding impacts on vegetation and soils in Australia and the United States of America. Journal of Environmental Management 91, 551–562.

[4] Chiu, L., Kriwoken, L., 2003. Managing recreational mountain biking in Wellington Park, Tasmania, Australia. Annals of Leisure Research 6, 339–361.

[5] Pickering, C.M., Rossi, S., Barros, A., 2011. Assessing the impacts of mountain biking and hiking on subalpine grassland in Australia using an experimental protocol. Journal of Environmental Management 92, 3049–3055.

[6] Olive, N.D., Marion, J.L., 2009. The influence of use-related, environmental, and managerial factors on soil loss from recreational trails. Journal of Environmental Management 90, 1483–1493.

by tf at October 07, 2016 12:58 PM

September 21, 2016

Emmanuele Bassi

Who wrote GTK+ and more

I’ve just posted on the GTK+ development blog the latest article in the “who writes GTK+” series. Now that we have a proper development blog, this is the kind of content that should be published there rather than on my personal blog.

If you’re not following the GTK+ development blog, you really should do so now. Additionally, you should follow @GTKToolkit on Twitter and, if you’re into niche social platforms, a kernel developer, or a Linus groupie, there’s the GTK+ page on Google Plus.

Additionally, if you’re contributing to GTK+, or using it, you should consider writing an article for the developers blog; just contact me on IRC (I’m ebassi on irc.gnome.org) or send me an email if you have an idea about something you worked on, a new feature, some new API, or the internals of the platform.

by ebassi at September 21, 2016 10:19 AM

September 08, 2016

Tomas Frydrych

Glen Affric: Carn Eighe Horseshoe

Glen Affric: Carn Eighe Horseshoe

The Loch Ness Marathon is approaching fast, and with it my turn to be the support crew. Because of the race logistics there is a fair bit of hanging around ... but with Glen Affric just down the road, I know the perfect way to 'kill' the time -- the Carn Eighe loop is just the right length to be back at the finish line in good time! A run along a great natural line, without any significant technical or navigational challenges, yet offering stunning views, on the edge of one of the more remote-feeling parts of Scotland.

From the carpark at NH 216 242 head up the track following the river, then just after crossing Allt Toll Easa take the path following its W bank. After about half a km a path not on the map heads up the Creag na h-Inghinn ridge -- take this to the Tom a Choinnich summit (NB: Allt Toll Easa is the last available water source for the whole of the horseshoe).

Glen Affric: Carn Eighe Horseshoe

Follow the obvious ridge line onto An Leath-chreag, Carn Eighe and Mam Sodhail. From here continue along the E ridge onto Sgurr na Lapaich, and descend along the SE ridge into the glen to pick up the path toward Affric Lodge, and from there along the track/road back to the start.

25km / 1,700m ascent

Creag Chaorainn Variation

The Sgurr na Lapaich descent is not the only option on this run, as there are two other inviting, even irresistible, ridges just a little farther to the SW of Mam Sodhail.

Glen Affric: Carn Eighe Horseshoe

When time (and legs) permit, it is well worth following the SW ridge over the 1108m, 1068m, and 1055m points as it curves around the spectacular Coire Coulavie, for it offers the single best view of Loch Affric there is to be had. The ground is a little more technical, as there is no path through the rocky, but generally enjoyable, terrain.

To reach the floor of Glen Affric, negotiate your way down Creag a'Chaorainn as the ridge curves NE onto the flat area at around the 750m contour line. From here a direct descent S is possible; however, the grassy slope is very steep, the ground pocketed and sodden, and a lot of care is required. I suspect a better option might be to descend N into the coire and then pick up the stalker's path (marked on the OS map) below An Tudair Beag.

The path on the floor of Glen Affric provides enjoyable running through a lovely Caledonian pine woodland, worth a visit in its own right. Follow this toward Affric Lodge and from there back to start along the road.

29km / 1,950m ascent

Glen Affric: Carn Eighe Horseshoe

[Updated 27/09/2016 -- I made it back with a whole 5 minutes to spare!]

by tf at September 08, 2016 07:36 PM

September 03, 2016

Tomas Frydrych

Round of Camban Bothy

Round of Camban Bothy

Bothies are, in my mind at least, a national treasure, capturing something of the very essence of the Scottish character and generous attitude to strangers. They are more than shelters; they provide for chance encounters of like-minded folk, sharing stories (and drams) the old-fashioned way, their logbooks a testimony to whatever it is that drives us out of our sterile urban existences. And for the runner, they are an excellent resource for multi-day trips, cutting down on the amount of kit required, as well as extending the window of opportunity beyond the summer months.

The Scottish north west is one of the most spectacular places I know, the Kintail area is no exception, and the Camban bothy is bang in the heart of it. The hills and glens around it offer a genuine sense of remoteness, rare in that it persists even on the summits, for obvious signs of human activity in this area are few. It is only when one registers the names on the map -- Kintail Forest, Inverinate Forest, Glenaffric Forest, West Benula Forest -- that the full extent to which our species has left its mark on this landscape dawns. Nevertheless, this area is a real paradise for walker and runner alike, offering some longer ventures to both.

The run described here is a two-day outing, taking in some of the iconic hills of this area, the awe-inspiring Falls of Glomach, and spending a night in the aforementioned Camban bothy. I should say at the outset that this is not a run for an outdoor novice; the ability to navigate reliably regardless of weather is a necessary prerequisite, as is being adequately equipped for two days and a night in these hills. At the same time, multiple manageable single-day outings in this area are perfectly feasible.

The starting point for this trip is the tiny hamlet of Camas-Luinie. My recommendation would be to book a place in the Hiker's Bunk House for the night before your run. The bunk house is small (IIRC sleeps 6), basic but cosy, with a wood burner, a well-equipped kitchen, and most friendly owners. Using the bunk house solves two big problems -- there is no public space where you could leave a car, and the Camas-Luinie area is completely unsuitable for wild camping.

Day 1

From the end of the public road take the path heading east through Glen Elchaig, crossing the Elchaig river via a bridge at a cluster of houses. Continue east along the undulating landy track past Loch na Leitreach to Carnach. From here take the path heading SE. After about a kilometre the path splits (NH 0311 279), only to rejoin later (NH 040 276) -- you can take either fork, the W one being the more scenic option -- then follow the path to where it ends near a small lochan at the foot of Creag Ghlas. (NB: if you do take the W fork, it is easy to get carried away and miss the NE turn, as the path heading S to Loch Lon Mhurchaidh runs very well!)

Round of Camban Bothy

Gain the ridge, and follow it in a southerly direction over Stuc Fraoch Coire, Stuc Mor and Stuc Beag to Sgurr nan Ceathramhnan; if Munros are your thing, you might want to make the detour to bag Mullach na Dheiragain, otherwise follow the ridge east onto Stob Coire na Cloiche. (However, please note that in poor visibility the descent from Sgurr nan Ceathramhnan requires care, as it is easy to pick up the NE ridge by mistake. Also, when the ridge is covered in snow, the section between the west and main summits, as well as the start of the descent, is fairly serious, with some steep front pointing down and exposed narrow ridge traversing -- once on this hill, your options are very limited; the significant altitude also means snow stays on these hills well into the spring.)

Round of Camban Bothy

Again, you might want to nip up An Socach, otherwise head directly down the excellent path to Alltbeithe (Youth Hostel), and from here take the path SW to Camban.

~28km / 1,700m ascent

Round of Camban Bothy

Day 2

Head directly up the hill behind the bothy to gain the ridge, and follow it over Sgurr a'Dubh Doire onto the summit of Beinn Fhada, enjoying the view of the Five Sisters of Kintail. From the summit descend along the N ridge to the bealach below (S of) Meall a' Bheallaich. From here the obvious, pure line is to continue over it, down to Bealach an Sgairne and up A Ghlas-bheinn, then follow the ridge over Meall Dubh to pick up the Bealach na Sroine path -- this is undoubtedly possible, but I have no idea what the ground is like ...

Round of Camban Bothy

While on this section in mid April this year, the weather deteriorated badly and I was forced to retreat from the high ground, taking a path W from the bealach and contouring the E side of Coire an Sgairne -- this path is excellent, and the coire is very scenic, so this might be a worthwhile alternative for getting into Bealach an Sgairne. I can also say with some authority that the short gap between the two most westerly forestry roads on the slopes of A' Mhac is not to be recommended: the ground is very steep, and there are some rusty fences and Sitka bashing involved -- it's not worth the 120m of altitude saved.

Whichever way you reach the Bealach na Sroine path, take it past the Falls of Glomach back onto the track alongside the River Elchaig, and on to Camas-Luinie.

~22km / 1,800m ascent

by tf at September 03, 2016 08:50 PM

August 27, 2016

Emmanuele Bassi

GSK Demystified (III) — Interlude

See the tag for the GSK demystified series for the other articles in the series.

There have been multiple reports after GUADEC about the state of GSK, so let’s recap a bit by upholding the long-standing tradition of using a FAQ format as a rhetorical device.


Q: Is GSK going to be merged in time for 3.22?

A: Short answer: no.

Long-ish answer: landing a rewrite of how GTK renders its widgets near the end of a stable API cycle, when a bunch of applications that tend to eschew GTK+ itself for rendering their content — like Firefox or LibreOffice — finally, after many years, ported to GTK+ 3, seemed a bit sadistic on our part.

Additionally, GSK still has some performance issues when it comes to large or constantly updating UIs; try running, for instance, gtk3-widget-factory on HiDPI using the wip/ebassi/gsk-renderer branch, and marvel at the 10 fps we currently achieve.

Q: Aside from performance, are there any other concerns?

A: Performance is pretty much the biggest concern we found. We need to reduce the number of rasterizations we perform with Cairo, and we need better ways to cache and reuse those rasterizations across frames; we really want all buttons with the same CSS state and size to be rasterized once, for instance, and just drawn multiple times in their right place. The same applies to things like icons. Caching text runs and glyphs would also be a nice win.
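To make the caching idea concrete, here is one plausible shape for such a cache -- a minimal sketch in plain GLib, keyed on CSS state and size. To be clear: the RasterKey type and the helper below are illustrative assumptions; nothing like this exists inside GTK+ today.

    #include <gtk/gtk.h>

    /* Hypothetical cache key: all buttons sharing the same CSS state
     * and size can share one rasterization. */
    typedef struct {
      GtkStateFlags state;
      int           width;
      int           height;
    } RasterKey;

    static guint
    raster_key_hash (gconstpointer data)
    {
      const RasterKey *key = data;
      return (guint) key->state ^ (key->width << 8) ^ (key->height << 20);
    }

    static gboolean
    raster_key_equal (gconstpointer a, gconstpointer b)
    {
      const RasterKey *ka = a, *kb = b;
      return ka->state == kb->state
          && ka->width == kb->width
          && ka->height == kb->height;
    }

    /* Maps a heap-allocated RasterKey to a cairo_surface_t: rasterize
     * once per (state, size) pair, then reuse the surface across
     * widgets and across frames. */
    static GHashTable *
    raster_cache_new (void)
    {
      return g_hash_table_new_full (raster_key_hash, raster_key_equal,
                                    g_free,
                                    (GDestroyNotify) cairo_surface_destroy);
    }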

The nice bit is that, with a fully retained render tree, now we can actually do this.

The API seems to have survived contact with the widget drawing code inside GTK+, so it’s a matter of deciding how much we need to provide in terms of convenience API for out-of-tree widgets and containers. The fallback code is in place, right now, which means that porting widgets can proceed at its own pace.

There are a few bugs in the rendering code, like blend modes; and I still want to have filters like blur and color conversions in the GskRenderNode API.

Finally, there’s still the open question of the mid-level scene graph API, or GskLayer, that will replace Clutter and Clutter-GTK; the prototype is roughly done, but things like animations are not set in stone due to lack of users.

Q: Is there a plan for merging GSK?

A: Yes, we do have a plan.

The merge window mostly hinges on when we’re going to start with a new development cycle for the next API, but we decided that as soon as the window opens, GSK will land. Ideally we want to ensure that, by the time 4.0 rolls around, there won’t be any users of GtkWidget::draw left inside GNOME, so we’ll be able to deprecate its use, and applications targeting the new stable API will be able to port away from it.

Having a faster, more featureful, and more optimized rendering pipeline inside GTK+ is a pretty good new feature for the next API cycle, and we think that the port is not going to be problematic, given the amount of fallback code paths in place.

Additionally, by the time we release GTK+ 4.0, we’ll have a more battle-tested API to replace Clutter and Clutter-GTK, allowing applications to drop a dependency.

Q: How can I help?

A: If you’re a maintainer of a GTK+ library or application, or if you want to help out the development of GTK+ itself, then you can pick up my GSK development branch, fork it off, and look at porting widgets and containers. I’m particularly interested in widgets using complex drawing operations. See where the API is too bothersome, and look for patterns we can wrap into convenience API provided by GTK+ itself. For instance, the various gtk_render_* family of functions are a prime candidate for being replaced by equivalent functions that return a GskRenderNode instead.
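To illustrate that last point: gtk_render_frame() below is existing GTK+ 3 API, drawing immediately into a Cairo context, while the node-returning variant is a name invented here purely to show the suggested shape -- no such function exists yet.

    #include <gtk/gtk.h>

    /* Today: gtk_render_frame() is existing GTK+ 3 API; it draws a
     * CSS frame immediately into the Cairo context it is handed. */
    static void
    draw_frame_today (GtkStyleContext *context,
                      cairo_t         *cr,
                      double           width,
                      double           height)
    {
      gtk_render_frame (context, cr, 0, 0, width, height);
    }

    /* Tomorrow: a node-returning equivalent would hand back a
     * description of the same drawing instead of performing it.
     * NB: gtk_render_frame_node() does not exist -- the name and
     * shape are invented here for illustration only. */
    static GskRenderNode *
    describe_frame_tomorrow (GtkStyleContext *context,
                             double           width,
                             double           height)
    {
      return gtk_render_frame_node (context, 0, 0, width, height);
    }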

Testing is also welcome; for instance, look at missing widgets or fragments of rendering.


Hopefully, the answers above should have provided enough context for the current state of GSK.

The next time, we’ll return to design and implementation notes of the API itself.

by ebassi at August 27, 2016 09:28 AM

August 26, 2016

Ross Burton

So long Wordpress, thanks for all the exploits

I've been meaning to move my incredibly occasional blog away from Wordpress for a long time, considering that I rarely use my blog and it's a massive attack surface. But there's always more important things to do, so I never did.

Then in the space of ten days I received two messages from my web host, one that they'd discovered a spam bot running on my account, and after that was cleared and passwords reset another that they discovered a password cracker.

Clearly I needed to do something. A little research led me to Pelican, which ticks my "programming language I can read" (Python), "maintained", and "actually works" boxes. A few evenings of fiddling later and I just deleted both Wordpress and Pyblosxom from my host, so hopefully that's the end of the exploits.

No doubt there's some links that are now dead, all the comments have disappeared, and the theme needs a little tweaking still, but that is all relatively minor stuff. I promise to blog more, too.

by Ross Burton at August 26, 2016 10:10 PM

August 18, 2016

Emmanuele Bassi

GUADEC/2

Writing this from my home office, while drinking the first coffee of the day.

Once again, GUADEC has come and gone.

Once again, it was impeccably organized by so many wonderful volunteers.

Once again, I feel my batteries recharged.

Once again, I’ve had so many productive conversations.

Once again, I’ve had many chances to laugh.

Once again, I’ve met both new friends and friends of long standing.

Once again, I’ve visited a new city, with interesting life, food, drinks, and locations.

Once again, thanks to everybody who worked very hard to make this GUADEC happen and be the success it was.

Once again, we return.

by ebassi at August 18, 2016 08:00 AM

August 10, 2016

Tomas Frydrych

On Running, Winning, and Losing

On Running, Winning, and Losing

I have a thing about hills. It goes back a long way. Aged five, my granny took me on a holiday in the mountains, and I have been drawn back ever since. Forty-plus years later, out there on the high ground, the inner child comes out just as wide-eyed as when, during those two weeks, I listened to tales of mountain creatures, real and mythical alike, and imagined the fairies and elves coming out after dark.

Over the years I have walked, climbed, skied and biked the hills. Now that I am wiser, I mostly run. A means to an end. Don't get me wrong, I enjoy running per se. I like the sense of rhythm and flow. But I run hills because of the convenience (I can travel farther in less time), and because of the sense of freedom (unencumbered by excess of kit, I can go pretty much where I like).

To scratch my itch, I try to sneak in a longer hill run most weekends. And so on this particular Saturday in October I find myself at the Ben Lawers Nature Reserve for the second weekend in a row, after a last minute change of plans late the previous night. I ran the Lawers main ridge the previous week, and, wanting to do something different, this time it is the Tarmachan.

As far as I am concerned, the Tarmachan is right up there with such Scottish classics as the Aonach Eagach; the setting is nothing short of stunning. But the ridge also provides steady, first-class running. The usual Tarmachan loop is a bit on the short side, but this is easily remedied by taking in Beinn Ghlas, Ben Lawers and Meall Corranaich to start with, and then ascending Meall nan Tarmachan off track along its North East ridge rather than via the usual walkers' path from the South.

This proves an excellent choice, hard work rewarded by solitude on an otherwise always busy hill, the otherworldly magic of Lochan an Tairbh-uisge, amplified by a low rolling cloud, taking the sting out of the steep push up onto the main ridge.

It's funny how our mind often works on different levels; how, while our conscious thoughts are preoccupied with this or that, we continue to function in a kind of semi-autonomous, semi-conscious way that gets us through much of life. As I finally emerge on the walkers' path heading for the summit, I am still thinking about the magic of the lochan below, and then, as I check my watch, about being half an hour behind schedule -- I will definitely not make my intended (wholly arbitrary) target time.

At the same time, in the background, I register that the summit is fairly busy (not surprising on a dry Saturday), people are grinning at me and getting out of my way (Scots are quite nice people, all in all), saying 'well done' without the usual Scottish self-deprecating, yet somehow still always slightly barbed banter (surprising), and even holding dogs on a short leash (definitely unusual!).

As I am approaching the summit cairn, I register a couple of unmistakable Mountain Rescue Team jackets, and having noticed a couple of fluorescent cagoules down on the walkers' path (looking very much like the Police) I am (not)thinking 'oh, a rescue in progress', and then, seeing that the two guys are very glad to see me, (not)thinking 'bummer, someone saw me plodding up the open hill and raised the alarm thinking me lost' (far fetched but not wholly inconceivable). All that, while still preoccupied with that half an hour behind schedule.

It is only when I am asked about my number for the second time that my consciousness finally takes over and all the pieces fall into place -- there is a race on, and all these people think I am in the lead (an impressive lead, I must add: I can see way down the path, and there is nobody else coming up; and yet, I realise later, by sheer coincidence not an unfeasible lead in the light of the standing course record).

I am not endowed with any particular athletic abilities. I have never won any sporting event in my whole life. I do the occasional Scottish hill race, and I consider it a success if I place in the top half (I often don't, so I don't race that often either). But suddenly, for this brief moment, I get a glimpse of what it feels like to be in the lead. And I tell you what -- I could get used to this!

But then clarifications are made, and I am again just another ordinary, anonymous, runner, and carry on along the ridge, briefly confusing another pair of race marshals as I pass the point where the race course descends the ridge early, while I stay on it.

I rejoin the course an hour or so later on my way back. I can see a handful of runners on the hillside above me, but from the footprints on the ground, and the fact that I can't see anyone at all in front of me, I surmise I am now near the end of the field. I settle into a comfortable jogging pace, my legs enjoying the rhythm in the knowledge that there are no climbs and no descents left. I am happy; it's been a great day.

As I approach the car park, I start encountering runners jogging in the opposite direction, returning from the finish back to their cars. Of course, they assume I am in the race, and so I get another round of 'well dones'. Funny how nominally identical phrases can be so semantically different. The one 'awesome, how the f* did you get here so fast -- well done!', the other 'poor sod, still running, but, good on him, still running; well done!'.

As a self-confessed child of Derrida and Mann, I know that meaning is a construct of my mind, and often has little to do with what might have been intended. I know these are genuine words of encouragement. I know that on some level the experience of a hill runner approaching the finish line is very similar whether we are at the front or the back -- we have pushed ourselves, we are suffering, and we are only running at this point because we make ourselves; that when the car door opens at the end of our journey home, we will fall out rather than step out, and that come Monday we will put immense effort into hobbling only when we think no one is looking. I know it is this shared experience that is behind those 'well dones'.

I know all of that. And if that was not enough, I am not even in the freaking race, I have no reason to feel irate, at myself or anyone else -- I have been on the go for five hours, covered over 25km, climbed 2000m and I am still going well and enjoying it. And yet, somehow, I do feel irate about being at the end of a field of a race that I am not taking part in!

There are some who believe that running itself has a sort of magical, life transforming quality; I don't know, perhaps it does, for some. What I do know is that at times, up there in the hills, I experience brief moments of extreme clarity and self understanding. Today was one of those days: I learnt that losing, more than anything, is a state of mind, that I can make myself lose even where there is nothing to be lost and everything to be won …

But then the moment passes and all I can think of is the burger I'll have in the Mhor-84 cafe on my way home -- all pretense aside, there lies the real reason I run!

P.S. The 2015 Tarmachan Hill Race (9.5km / 700m of ascent) was won by George Foster of fellicionado.com in a very respectable, though not earth-shattering, time of 00:54:13. Well done!

by tf at August 10, 2016 07:43 PM

Emmanuele Bassi

GUADEC

Speaking at GUADEC 2016

I’m going to talk about the evolution of GTK+ rendering, from its humble origins of X11 graphics contexts, to Cairo, to GSK. If you are interested in this kind of stuff, you can either attend my presentation on Saturday at 11 in the Grace Room, or you can just find me and have a chat.

I’m also going to stick around during the BoF days — especially for the usual GTK+ team meeting, which will be on the 15th.

See you all in Karlsruhe.

by ebassi at August 10, 2016 02:40 PM

GSK Demystified (II) — Rendering

See the previous article for an introduction to GSK.


In order to render with GSK we need to get acquainted with two classes:

  • GskRenderNode, a single element in the rendering tree
  • GskRenderer, the object that effectively turns the rendering tree into rendering commands

GskRenderNode

The usual way to put things on the screen involves asking the windowing system to give us a memory region, filling it with something, and then asking the windowing system to present it to the graphics hardware, in the hope that everything ends up on the display. This is pretty much how every windowing system works. The only difference lies in that “filling it with something”.

With Cairo you get a surface that represents that memory region, and a (stateful) drawing context; every time you need to draw you set up your state and emit a series of commands. This happens on every frame, starting from the top level window down into every leaf object. At the end of the frame, the content of the window is swapped with the content of the buffer. Every frame is drawn while we’re traversing the widget tree, and we have no control over the rendering outside of the state of the drawing context.
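For contrast, this is roughly what the immediate mode model looks like from the application side -- a minimal GTK+ 3 “draw” handler using only existing, stable API (the colour and geometry are made up for illustration):

    #include <gtk/gtk.h>

    /* Immediate mode: the widget emits drawing commands into a Cairo
     * context on every single frame; nothing is retained in between. */
    static gboolean
    on_draw (GtkWidget *widget,
             cairo_t   *cr,
             gpointer   user_data)
    {
      int width = gtk_widget_get_allocated_width (widget);
      int height = gtk_widget_get_allocated_height (widget);

      /* set up state on the (stateful) drawing context... */
      cairo_set_source_rgb (cr, 0.3, 0.5, 0.8);

      /* ...then emit the commands for this frame only */
      cairo_rectangle (cr, 0, 0, width, height);
      cairo_fill (cr);

      return FALSE;  /* keep propagating, so children get drawn too */
    }

    /* connected with:
     *   g_signal_connect (widget, "draw", G_CALLBACK (on_draw), NULL);
     */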

A tree of GTK widgets

With GSK we change this process with a small layer of indirection; every widget, from the top level to the leaves, creates a series of render nodes, small objects that each hold the drawing state for their contents. Each node is, at its simplest, a collection of:

  • a rectangle, representing the region used to draw the contents
  • a transformation matrix, representing the parent-relative set of transformations applied to the contents when drawing
  • the contents of the node

Every frame, thus, is composed of a tree of render nodes.

A tree of GTK widgets and GSK render nodes

The important thing is that the render tree does not draw anything; it describes what to draw (which can be a rasterization generated using Cairo) and how and where to draw it. The actual drawing is deferred to the GskRenderer instance, and will happen only once the tree has been built.

After the rendering is complete we can discard the render tree. Since the rendering is decoupled from the widget state, the widgets will hold all the state across frames — as they already do. Each GskRenderNode instance is, thus, a very simple instance type instead of a full GObject, whose lifetime is determined by the renderer.
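As a sketch of what building such a tree might look like in code: the constructor and setter names below are my assumptions, modelled on the description above -- the API on the development branch was still in flux at the time of writing.

    #include <gsk/gsk.h>  /* as on the wip/ebassi/gsk-renderer branch */

    /* Sketch only: gsk_render_node_new() and the setters are assumed
     * names modelled on the prose above, not a stable API. */
    static GskRenderNode *
    build_example_tree (void)
    {
      GskRenderNode *root = gsk_render_node_new ();
      GskRenderNode *child = gsk_render_node_new ();
      graphene_rect_t bounds;
      graphene_matrix_t transform;

      /* the rectangle used to draw the node's contents... */
      graphene_rect_init (&bounds, 0, 0, 100, 32);
      gsk_render_node_set_bounds (child, &bounds);

      /* ...and its parent-relative transformation */
      graphene_matrix_init_translate (&transform,
                                      &GRAPHENE_POINT3D_INIT (10, 10, 0));
      gsk_render_node_set_transform (child, &transform);

      /* nothing is drawn here: the tree merely describes the frame */
      gsk_render_node_append_child (root, child);

      return root;
    }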

GskRenderer

The renderer is the object that turns a render tree into the actual draw commands. At its most basic, it’s a simple compositor, taking the content of each node and its state and blending it on a rendering surface, which then gets pushed to the windowing system. In practice, it’s a tad more complicated than that.

Each top-level has its own renderer instance, as it requires access to windowing system resources, like a GL context. When the frame is started, the renderer will take a render tree and a drawing context, and will proceed to traverse the render tree in order to translate it into actual render commands.

As we want to offload the rendering and blending to the GPU, the GskRenderer instance you’ll most likely get is one that uses OpenGL to perform the rendering. The GL renderer will take the render tree and convert it into a (mostly flat) list of data structures that represent the state to be pushed on the state machine — the blending mode, the shading program, the textures to sample, and the vertex buffer objects and attributes that describe the rendering. This “translation” stage allows the renderer to decide which render nodes should be used and which should be discarded; it also allows us to create, or recycle, all the needed resources when the frame starts, and minimize the state transitions when doing the actual rendering.
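Purely as an illustration of that translation stage, the flat list might hold records along these lines; this is a guess at the shape of the data, not actual GSK internals:

    #include <glib.h>
    #include <graphene.h>

    /* A guess at the kind of flat draw-state record the GL renderer
     * could produce per surviving render node; not actual GSK code. */
    typedef struct {
      guint             blend_mode;  /* blending state to set          */
      guint             program;     /* shading program to bind        */
      guint             texture;     /* texture to sample from         */
      guint             vao;         /* vertex buffers and attributes  */
      graphene_matrix_t mvp;         /* resolved transformation        */
    } RenderOp;

    /* Traversal appends one RenderOp per node; a flat array is cheap
     * to sort and batch, which is what minimizes state transitions
     * during the actual rendering. */
    static GArray *
    render_ops_new (void)
    {
      return g_array_new (FALSE, FALSE, sizeof (RenderOp));
    }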

Going from here to there

Widgets provided by GTK will automatically start using render nodes instead of rendering directly to a Cairo context.

There are various fallback code paths in place in the existing code, which means that, luckily, we don’t have to break any existing out-of-tree widgets: they will simply draw themselves (and their children) on an implicit render node. If you want to port your custom widgets or containers, on the other hand, you’ll have to remove the GtkWidget::draw virtual function implementation or signal handler you use, and override the GtkWidget::get_render_node() virtual function instead.
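A porting sketch might therefore look like the following. Only the GtkWidget::get_render_node() name comes from this article -- the vfunc signature and the node calls are assumptions based on the development branch, and MyWidget is a hypothetical widget type:

    #include <gtk/gtk.h>

    /* MyWidget is a hypothetical GtkWidget subclass; the signature
     * of the vfunc and the node calls below are assumed. */
    static GskRenderNode *
    my_widget_get_render_node (GtkWidget   *widget,
                               GskRenderer *renderer)
    {
      GskRenderNode *node = gsk_render_node_new ();
      graphene_rect_t bounds;

      /* describe where the widget's contents go... */
      graphene_rect_init (&bounds, 0, 0,
                          gtk_widget_get_allocated_width (widget),
                          gtk_widget_get_allocated_height (widget));
      gsk_render_node_set_bounds (node, &bounds);

      /* ...the contents themselves can still be a Cairo
       * rasterization; the tree decides how they are composited */

      return node;
    }

    static void
    my_widget_class_init (MyWidgetClass *klass)
    {
      GtkWidgetClass *widget_class = GTK_WIDGET_CLASS (klass);

      /* no GtkWidget::draw implementation any more; the render-node
       * vfunc replaces it */
      widget_class->get_render_node = my_widget_get_render_node;
    }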

Containers simply need to create a render node for their own background, border, or custom drawing; then they will have to retrieve the render node for each of their children. We’ll provide convenience API for that, so the chances of getting something wrong will be, hopefully, reduced to zero.

Leaf widgets can remain unported a bit longer, unless they are composed of multiple rendering elements, in which case they simply need to create a new render node for each element.

I’ll provide more examples of porting widgets in a later article, as soon as the API has stabilized.

by ebassi at August 10, 2016 02:20 PM

July 24, 2016

Tomas Frydrych

Coigach Horseshoe

Coigach Horseshoe

The Coigach hills provide perhaps the single best short run in the entire Coigach / Assynt area. The running is easy on excellent ground (if in places exposed -- not recommended on a windy day!), the views are magnificent in all directions, and the cafe in the Achiltibuie Piping School provides excellent post-run cakes!

Start from the small parking area just before the road dips down toward Culnacraig. Where the public road ends, take the small footpath behind the houses as it contours the hill, aiming to cross Allt nan Coisiche at the apex of the E to S bend below the waterfall (NC 071 037; NB: there are no other suitable crossing points), then follow a steep, faint path on its S side until reaching easier-angled ground in the coire.

From the coire it is possible either to head directly for the edge of the W ridge of Garbh Choireachan (more hands-on), or slightly to the N, aiming for the right (S) edge of the obvious erosion on the NW-facing side of the ridge, where a faint path heads up onto the ridge proper.

Coigach Horseshoe

The high ridge is well defined and narrow. The drops are steep on both sides, but the S side in particular is in places vertigo-inducing. However, no scrambling is required, and the rockier crest beyond the 738m high point can be avoided by staying on a path running along its N side.

After reaching the summit of Ben More Coigach return to the now broader ridge and continue along it in an E direction to where the Speicein Coinnich ridge begins; but instead of heading onto Speicein Coinnich, descend the broad N-facing coire into the bealach below Sgurr an Fhidhleir (around NC 097 049), enjoying the dramatic views over the north-facing cliffs. From here continue W for a bit to ascend the Fhidhleir from the SW.

Coigach Horseshoe

The Fiddler summit once again offers excellent views (because Beinn an Eoin and the Fiddler ridge are of similar heights, the view changes quite dramatically along the different points of this outing). Once you have enjoyed the views from the summit, descend initially SW, then contour into the bealach that separates the Fiddler from Beinn nan Caorach; from here you get an excellent view of the Fiddler's north cliffs.

Coigach Horseshoe

Next head up near the N cliffs onto the 648m summit for more views, and then SW onto the main summit of Beinn nan Caorach. From here the easiest descent is to return to the bealach below the Fiddler, then contour along the 600m line to pick up the well-trodden walkers' path that descends along the Fiddler's SW ridge; take this back to the road.

14km / 1100m ascent

by tf at July 24, 2016 05:57 PM