Planet Closed Fist

November 17, 2018

Richard Purdie

Post exercise collapse

It’s no secret that I have some kind of energy problem. What is less known is how horrible the effects can be, or how plain weird the pattern is. People see me manage the motorcycling, the mountain biking or the cycling, but they don’t see what happens afterwards.

My current working model is that exercise generates some kind of toxin. That toxin builds up in the body and its effects peak around 24-28 hours after the exercise that generated it. The symptoms are best described as “flu like”: lethargy, feeling cold, brain fog, an inability to concentrate, feeling washed out, muscle and joint pains, tinnitus, teeth nerve pain and general mood changes. Those feelings can go on for around two weeks. There is no respiratory side to it and it never seems to be actual flu.

How serious is it? At its worst, after two days of motorcycling I collapsed semi-conscious and was out of it for 18 hours. My body had run so low on energy it lost the ability to regulate its temperature properly (thankfully I collapsed onto a duvet).

Whatever the toxin is, it appears to attack the nervous system, hence the various joint/muscle pains, tinnitus, teeth sensitivity and so on. When I’ve kept going and not given in to it (through sheer willpower or painkillers), I’ve ended up with a permanent hand/limb tremor. Thankfully I also discovered that biotin (vitamin B7) appears to accelerate healing of that, and after around a year I managed to stop shaking, a major win.

The closest thing in medical knowledge that matches is paracetamol overdose, the key match being the timeframe: you don’t see symptoms of that for 24-48 hours after the overdose. This led to the realisation that its treatment also seems to help me. It’s treated with NAC (N-acetyl-cysteine), which is thankfully freely available and is a non-essential amino acid, so it’s comparatively safe with low side effect risks.

Have I talked to a doctor about this? In short, yes, a lot. They have run a ton of tests over around a decade and there are few types of specialist I’ve not seen at some point. There are a ton of things we know it isn’t, and some abnormalities, but they don’t match anything the doctors recognise. The two interesting data points are that my liver is unhappy about something (always raised GGT and sometimes raised ALT/AST/ALP) and that prolactin is elevated. No idea about the prolactin (a story in its own right) but the liver fits the toxin poisoning model.

It’s taken me that decade to figure out the pattern and to come up with the current coping strategy, which cuts the recovery from two weeks to 2-3 days. At one point I felt like I was accumulating damage (the tremor in particular); I’m pleased to say that it feels much less so now.

There are swings and roundabouts, as the NAC appears to help a huge amount but may expose or cause other vitamin deficiencies (B2?).

I personally suspect there is some genetic glitch somewhere, not enough to threaten life but enough to mean that less efficient backup pathways are being used. There is solid science behind NAC increasing levels of glutathione, which is a master antioxidant and the way the body cleans up toxins; B2 is needed for recycling glutathione, and biotin is being studied in nervous system disease.

I guess I’m putting this out there in case it helps anyone else or that someone with knowledge of biochemistry could give any further insight into this.

by Richard at November 17, 2018 11:06 AM

November 15, 2018

Emmanuele Bassi

History of GNOME / Episode 1.4: Founding the Foundation

As is the case for many free and open source software projects, GNOME’s planning and discussions mostly happen online, on a variety of mediums, like IRC, mailing lists, and issue trackers. Some of those mediums fade out over time, replaced by newer ones, or simply abandoned because they become less efficient.

The first 10 to 15 years of GNOME can be clearly traced and reconstructed from mailing list archives, which is a very good thing; otherwise this podcast would not be possible without spending hundreds of hours compiling an oral history of the project, with the chance of getting things wrong—or, at least, wronger than I do—or simply omitting details that have long since been forgotten by the people involved.

Nevertheless, it’s clear that some of the planning and discussion cannot really happen over a written medium; the bandwidth is simply not there. Shared, physical presence is much more efficient for human beings, in order to quickly communicate or iterate over an idea; the details can be summarised later, once everyone is on the same page.

By the year 2000, the GNOME project was already more than two years old, had seen its first major release, and now encompassed various companies, as well as volunteers around the globe.

While some of those people may have ended up sharing an office while working for the same company, and of course everyone was on IRC pretty much 24/7, the major stakeholders and contributors had yet to meet in one place.

Through some aggressive fundraising, what was supposed to be a small conference organised by Mathieu Lacage in March for the benefit of the students of the École Nationale Supérieure des Télécommunications in Paris was scaled up to allow the attendance of 40 GNOME developers from around the world, for four days of presentations and discussions.

The main theme of the meeting was fairly ambitious: laying down the foundation for the next major release of GNOME, starting from the core platform libraries, with a new major version of GTK; the introduction of a comprehensive text rendering API called “Pango”; and a more pervasive use of Bonobo components in the stack.

Additionally, it allowed companies like Eazel, Ximian, and Red Hat to present their work to the larger community.

Owen Taylor and Tim Janik presented their plans for GTK 1.4, including a new type system and improved integration with language bindings; Owen also presented Pango, a library for text rendering in non-Latin localisations, with Unicode support, and support for bidirectional and complex text. GTK was also going to be available on Windows and BeOS, thanks to the efforts of Tor Lillqvist and Shawn Amundson, respectively. Havoc Pennington was working on a new text editing widget, based on Tk’s multi-line text entry. Language bindings, for C++, Python, and Ada, were also presented, as well as applications targeting the GNOME platform.

Out of the four days of presentations, planning, discussions, and hacking came two major results:

  • the creation of a “steering committee”, with the goal of planning and directing the development efforts for GNOME 2.0
  • the push for the creation of a legal entity capable of collecting donations on behalf of the GNOME project, and of acting as a point of reference between the community and the commercial entities that wanted to contribute to GNOME

As a side note, Telsa Gwynne’s report of GUADEC’s first edition is also the first time I’ve seen an explicit mention of the “Old Farts Club”, as well as the rule of being over 30 in order to enter it; I think we can add that to the list of major achievements of the conference.

After GUADEC, in May 2000, GNOME 1.2 was released, as part of the stabilisation of the 1.x platform, and GNOME 1.4 was planned by the steering committee to be released in 2001, with the 2.0 development happening in parallel.

The process of creating the GNOME Foundation would take a few additional months of discussions, and in July 2000 the foundation mailing list was created for various stakeholders to outline their positions. The initial shape of the Foundation was modelled on the Apache Software Foundation, as both a forum for the technical direction of the project, and a place for corporations to get involved with the project itself. The goals for this new entity, as summarised by Bart Decrem, an Eazel co-founder, were:

  1. Providing a forum to determine the overall technical direction of GNOME
  2. Promoting GNOME
  3. Fostering collaboration and communication among GNOME developers
  4. Managing funds donated to the GNOME project

There was a strong objection to corporations being able to dictate the direction of the project, so one of the stated non-goals was for the Foundation to not be an industry consortium, similar to The Open Group. The Foundation would also not hire developers directly.

In order to avoid corporate dominance, no company would be allowed to be a member of the Foundation: if a company wanted to have a voice in the direction of the project it could hire a Foundation member, and thus have a representative in the community. Additionally, there would be a limit on the number of board directors working for the same company.

As the Foundation was going to be incorporated in the US, one way to avoid both the under-representation of non-US members and the potential fragmentation into separate entities in each large geographical region was to be more open about both the membership and the board election process. GNOME contributors would be able to join the Foundation as long as they were actively involved with the project, and each member would be eligible to be elected as a director. Companies would be part of an advisory body, not directly involved in shaping the project.

The idea of having the Foundation be the structure for setting the technical direction of the project was dropped fairly quickly, replaced by its function to be the place for settling controversial decisions, leaving the maintainers of each module in charge of their project.

It is interesting to note that many of the discussions that were part of the Foundation’s initial push have yet to be given an answer, more than 15 years later. If the Foundation is meant to be a forum for module maintainers, how do we define which modules should be part of GNOME, and which ones shouldn’t? Is being hosted on GNOME infrastructure enough to establish membership of a module? And, if so, who gets to decide that a module should be hosted on GNOME infrastructure? Is GNOME the desktop, or is that just a project under the GNOME umbrella? Are applications part of GNOME? The GNOME project is, to this day, still re-evaluating those questions.

Alongside the push from Eazel, Red Hat, and Ximian to get the Foundation going came the announcement that Sun was going to support GNOME as the desktop for their Solaris operating system, in order to replace the aging CDE. To that end, Sun was going to share the resources of its engineering, design, and QA teams with the GNOME project. Additionally, IBM, HP, and Dell wanted to support GNOME through the newly created Foundation.

Surprisingly, the discussions over the Foundation proceeded quickly; the self-imposed deadline for the announcement was set for August 15, 2000, three years after the first announcement of the GNOME project, to be presented at the Linux World Expo, a trade fair with a fair amount of media exposure. The creation of the actual legal entity, an initial set of bylaws, and the election of a board of directors would follow.

Having a few of the hot startups in the Linux space, as well as well-established companies in the IT sector, come together and announce they were putting their weight behind the GNOME project would, of course, be spun in a way that was adversarial to Microsoft, and so it was. The press release at the LWE pushed the angle of a bunch of companies joining together to challenge Microsoft, using a bunch of free code written by hacker weirdos to do so.

The announcement of the GNOME Foundation did not impress the KDE project, which released a statement trying to downplay both the importance of GNOME and that of the companies that pledged resources to the GNOME project.

In November 2000, after finalising the initial set of bylaws for the Foundation and opening the membership to the people contributing to the project, GNOME held the first ever elections for the position of director of the board. With 33 candidates, a pool of 370 possible voters, and 330 valid ballots in the box, the first eleven directors were:

  • Miguel de Icaza (Helix Code)
  • Havoc Pennington (Red Hat)
  • Owen Taylor (Red Hat)
  • Jim Gettys (Compaq)
  • Federico Mena Quintero (Helix Code)
  • Bart Decrem (Eazel)
  • Daniel Veillard (W3C)
  • Dan Mueth (Eazel)
  • Maciej Stachowiak (Eazel)
  • John Heard (Sun Microsystems)
  • Raph Levien (Eazel)

Additionally, the advisory board was thus composed:

  • Compaq
  • Eazel
  • Free Software Foundation
  • Gnumatic
  • Helix Code
  • Henzai
  • IBM
  • Object Management Group
  • Red Hat
  • Sun Microsystems
  • VA Linux

After the election, the new board started working in earnest on the process of incorporating the foundation and registering it as a non-profit entity; this took until March 2001, after a couple of false starts. In the meantime, the main topics of discussion were:

  • the foundation bylaws, needed for the incorporation, the tax-exempt status, and for opening a bank account in order to receive membership fees from the advisory board
  • the GNOME 1.4 release management, handled by Maciej Stachowiak
  • the preparation for the 2nd edition of GUADEC, to be held in Denmark

Additionally, the GNOME Foundation was going to work on establishing a trademark for the project, both as the name GNOME and for the project’s logo.

Originally, the GNOME logo was not a logo at all. It was part of a repeating pattern for one of the desktop backgrounds, designed by Tuomas Kuosmanen, who also designed Wilber, the GIMP mascot. Tuomas reused the foot pattern as an icon for the panel, namely the button for the launcher menu, which contained a list of common applications, and let the user add their own.

In a typical free software spirit, and with a certain amount of bravery considering the typical results of such requests, Red Hat decided to host a competition for the logo, with a graphics tablet as the prize for the winning submission; they also asked contestants to use GIMP to create the logo, which, sadly, precluded the ability to get vector versions of it. In the end, many good submissions notwithstanding, the decision fell on a modified version of the original foot, also done by Tuomas—only instead of a right foot, it was a left foot, shaped like a “G”.

Leaving aside the administrivia of the Foundation for a moment, let’s go back to the technical side of the GNOME project, and take a small detour to discuss the tools used by GNOME developers. Over the years these tools have changed, in many cases for the better, but it helps to understand why these changes were made in the first place, especially for newcomers who did not experience how life was way back when developers had to bang rocks together to store the code, use leaves and twigs to compile it, and send pigeons to file bug reports.

GNOME code repositories started off using CVS. If you know, even in passing, what Git is, you can immediately think of CVS as anything that Git isn’t.

CVS was slow; complicated; obscure; unforgiving; not extensively documented; with a terrible user experience; and would fail in ways that could leave both the local copy of the code and the remote one in a sorry state for everyone.

No, hold on.

Sorry, that’s precisely like Git.

Well, except “slow”.

Unlike Git, though, all the operations on the source revisions were done on the server, which meant that you didn’t have access to the history of the project unless you were online, and that you couldn’t commit intermediate states of your work without sending them to the server. Branching was terrible, so it was only done when strictly necessary. These limitations influenced many of the engineering practices of the time: you had huge change log files in place of commit logs; releases were only marked as such by virtue of having generated an archive, as tagging was atrocious; the project history was stored per-file, so you could not see a change in its entirety unless you manually extracted a patch between two revisions; and conflicts between developers working on the same tree were a daily occurrence, which made integrating different work a pain.
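
To make the contrast concrete, here is a rough sketch of the day to day difference; the file name, revision numbers, and tag names below are invented for illustration.

    # CVS: everything talks to the server, and history is tracked per file
    cvs log main.c                    # history of a single file, fetched over the network
    cvs diff -r 1.41 -r 1.42 main.c   # a change, reconstructed one file at a time
    cvs tag GNOME_1_2_0               # tagging walks over every file in the tree

    # Git: the whole-tree history is local
    git log                           # full project history, available offline
    git show HEAD                     # one commit, with every file it touched
    git tag 1.2.0                     # a tag is a single, cheap reference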

It was not odd to have messy history in the revision control, as well as having to ask the CVS administrators to roll back a change to a previously backed up version, to compensate for some bad commit or source tree surgery.

Due to how GNOME components were initially developed — high level modules with shared functionality which were then split up — having commit access to one module’s repository allowed access to every other repository. This allowed people to work on multiple modules, and encouraged contributions across the whole code base, especially from newcomers. As a downside, it would lead to unreviewed commits and flames on mailing lists.

All in all, though, the “open doors” policy for the repositories worked well enough, and has been maintained over the years, across different source revision control software, and has led to not only many “drive by” patches, but also to a fair number of bugs being fixed.

Talking about bugs, the end of 2000 was also the point when the GNOME project moved to Bugzilla as their bug tracking system.

Between the establishment of the project and October 2000, GNOME used the same bug tracking software used by Debian to track bugs in the various modules. The Debian Bug Tracking System was, and still is, email based. You’d write an email, fill out a couple of fields with the module name and version, add a description of the issue and the steps to reproduce it, and then send it to submit@bugs.gnome.org. The email would be sent to the owner of the module, who would then be able to reply via email, add more people to the email list, and in general control the status of the bug report through commands sent, you guessed it, by email. The web interface at bugs.gnome.org would show the email thread, and let other people read and subscribe to it by sending an email, if they were experiencing the same issue, or were simply interested in it.
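
For illustration, a report sent to that address might have looked roughly like the following; the module, version, and description are made up, but the pseudo-header fields at the top of the message body are the kind of thing a Debian-style BTS expects:

    To: submit@bugs.gnome.org
    Subject: panel crashes when removing an applet

    Package: gnome-core
    Version: 1.0.5

    Removing an applet from the panel makes the panel crash;
    the session manager restarts it immediately.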

By late 2000, the amount of traffic was enough to make the single machine dedicated to the BTS keel over and die; so a new solution was being sought, and it presented itself in the form of Bugzilla, a bug tracking system originally developed by Mozilla as a replacement for the original Netscape in-house bug tracker, once Netscape published their code in the open.

The web-friendly user interface, the database-backed storage for bug reports, and the query system made Bugzilla a very attractive proposition for the GNOME project. Additionally, Eazel and Ximian were already using Bugzilla for their own projects, which made the choice much easier to make. Bugzilla went on to be the bug tracking system for GNOME for the following 18 years.

By the end of the millennium, GNOME was in a good position, with a thriving community of developers, translators, and documentation writers; taking advantage of the licensing woes of the “Kompetition”, and with a major release under its belt, the project now had commercial backing and a legal entity capable of representing, and protecting, the community. The user base was growing on Linux, and with Sun’s commitment to move to GNOME for their next version of Solaris, GNOME was one step away from becoming a familiar environment for millions of users.

This is usually when the ground falls beneath your feet.


While GNOME developers, community members, and companies were off gallivanting in the magical world of foundations and transformative projects, the rest of the IT world was about to pay its dues. The bubble that had been propelling the unsustainable growth of the previous three years was about to burst spectacularly.

We’re going to see the effects of the end of the Dot com bubble on the GNOME project in next week’s episode, “End of the road”.

by ebassi at November 15, 2018 04:00 PM

November 08, 2018

Emmanuele Bassi

History of GNOME / Episode 1.3: Land of the bonobos

With the GNOME 1.0 release, and the initial endorsement of the project by Linux distributors like Red Hat and Debian, it was only a matter of time before a commercial ecosystem would start to coalesce around the GNOME project. The first effort was led by Red Hat, with its Red Hat Advanced Development laboratories. Soon, others would follow.

Two companies, in particular, shaped the early landscape of GNOME: Ximian and Eazel.

Ximian was announced alongside GNOME 1.0, at the Linux Expo 1999, by none other than the project creator and lead, Miguel de Icaza, and his friend and GNOME contributor, Nat Friedman; it was meant to be a company that would work on GNOME, and provide support for it, in a model similar to Red Hat’s, but with a more focused approach than a whole Linux distribution. Initially, the name of the company was “International GNOME Support”, before being renamed first to Helix Code, and then to Ximian in 2001, when it proved impossible to secure a trademark on Helix Code. For the benefit of clarity, I’ll use “Ximian” throughout this episode, even if we’re going to cover events that transpired while the company was still called “Helix Code”.

Ximian’s initial effort was mostly spent on raising capital to hire developers to work on GNOME and its applications, using the existing community as a ready-made talent pool, like many other companies did, and still do to this day.

Ximian focused on:

  • providing GNOME as a commercially supported independent platform on top of existing Unix-like operating systems
  • jump-starting an application ecosystem that would target GNOME and focus on enterprise-oriented platforms

The results of these two areas of focus were Red Carpet and Ximian GNOME, for the former; and Evolution, for the latter.

Red Carpet was a way to distribute software developed at its own pace, on top of different Linux and Unix-like operating systems. Whether you were running Red Hat Linux, SuSE, Debian, Mandrake, or Solaris; whether your platform was Intel-based or PPC-based; whether you had a fully supported OS or you were simply an enthusiast and early adopter; you’d be able to download a utility that checked the version of a known list of components on your system; downloaded the latest version of those components and their dependencies from the Ximian server; installed the packages that would fit with your system; and kept them up to date every time a new version was released upstream. All of this was managed through a user-friendly interface, definitely nicer than the alternatives provided at the time by any other distribution, with clear indications of software sources; out of date packages; and suggested updates.

Through Red Carpet, Ximian created Ximian GNOME, a software distribution channel that delegated the maintenance of the desktop environment and selected applications to an entity outside the one providing the core OS, and allowed GNOME to be kept updated at the pace of upstream development, minus the time for QA.

Of course, this whole approach relied on the desktop being fully separate from the core OS, something that was possible only back in the early days of Linux as a competitive workstation operating system; additionally, it wasn’t something devoid of risks. Linux installations have always leaned towards heavy customisation post-installation, with the combinatorial explosion of packages available for every user and system administrator to install on top of a base system, and without a clear separation of namespaces between the core OS, the graphical environment, and the user applications. Bolting a whole new desktop environment on top of a fluid base system could be problematic at best, and it made upgrading the base system, the desktop environment, and the applications an interesting challenge—with “interesting” defined as “we’re all going to die”. Red Carpet did allow, though, for keeping a stable base underneath a fast moving target like GNOME, assuming that you only cared about upgrading GNOME.

Another advantage of Red Carpet was the ability to provide a standard upgrade path for a fleet of systems, which is what made it attractive for system administrators.

Outside of Red Carpet, Ximian was working on Evolution, an email client and groupware platform for enterprise environments.

Email clients for Linux, like text editors and IRC clients, are a dime a dozen, but mostly terminal-based, and a poor fit in an enterprise environment, as they would not integrate with groupware services, like calendars, events, or shared address books. Sure, you could script your way out of most anything, if you were playing at being a BOFH, but Carol in Human Resources would not be able to get away with not receiving her email because Charlie made an error writing a procmail rule, and sent all the notifications halfway to Siberia.

Groupware suites are pretty much a requirement in corporate environments, and since contracts with those corporations can make or break a company, having a client capable of not only providing the same features, but also integrating with existing infrastructure is a fundamentally interesting proposition.

While Ximian tried to break into the commercial space through the tried and tested route of support contracts and enterprise software, Eazel chose a different, and somewhat riskier, route.

Eazel was a case of building a company around an idea 10 years too early; the idea in question being: providing remote storage for files and applications, complete with browsing remote volumes and folders from your file manager as if they were on local storage. All of this would happen on the nascent Linux desktop, which meant creating a fair amount of the necessary infrastructure and shaping the design and user interaction at the same time.

Eazel was founded by many former Apple employees, including Andy Hertzfeld and Darin Adler, who were the technical leads of the original Macintosh team; and Susan Kare, the designer of the Macintosh icons and typefaces.

The main entry point for users of Eazel services was a new file manager, called Nautilus, coupled with a virtual file system abstraction designed specifically for graphical user interfaces, to avoid blocking the UI while file operations on resources with large latency, like a network volume, were in progress. Given the fuzzy licensing situation around KDE and Qt, Eazel decided to work with the GNOME community, and took the remarkable step of developing Nautilus in the open.

Eazel started working their way upstream with the GNOME community around late 1999 and the early months of 2000, thanks to Maciej Stachowiak, an early hire and a GNOME developer who had worked on Guile, as well as on various GNOME applications and components. Of course, the first thing the core GNOME developers did when approached by Eazel was to ask them to port their code from its early C++ incarnation to C, to fit in with the rest of the platform. The interesting thing that happened was that the Eazel developers complied with that request, and stuck with the project.

At the time, GNOME’s file manager was a GUI layer around Miguel de Icaza’s Midnight Commander, a Norton Commander clone; MC was mostly used as a terminal application, even if it could integrate with different GUI toolkits, like Tk and GTK, when running under X11. The accretion of various GUIs led to cruft accumulating as well, and some of the design requirements for a responsive UI in a desktop environment fit poorly with how a terminal UI would work. Additionally, maintenance was, as is its wont, mostly volunteer-based. Federico Mena spent time, while working at the RHAD labs, on the GNOME integration, and that was the closest thing to somebody being paid to work on the MC code base. As the work on Nautilus progressed, both from Eazel and the GNOME community, the scale slowly tipped in favour of the more integrated, desktop-oriented file manager with paid maintenance.

Of course, what we call Nautilus (or “Files”) today was a very different beast than what it was when Eazel introduced it to the GNOME community in February 2000. Folder views could have annotations, you could add emblems on files and folders, set custom icons, and even custom backgrounds. The file view was a canvas, with the ability to zoom in and out the grid of icons, or even stretch the icons and change their sizes, as well as their position. You could have live preview of files directly inside Nautilus without opening a different application, and thanks to the work done at Red Hat by Christopher Blizzard on integrating the Mozilla web rendering engine with GTK, Nautilus could also browse the web, or more likely your company’s intranet, or WebDAV shares.

While working on Nautilus, Eazel also provided functionality to the core GNOME platform. The GNOME VFS library was made available outside of Nautilus as a way to perform file operations beyond the simple POSIX API, and other applications quickly started using it to access remote volumes, or to integrate functionality like copying and moving files; libeel, a library of custom widgets used in Nautilus, and librsvg, a library for rendering SVG files (mostly the graphical assets used by Nautilus for its icons), also saw adoption in the GNOME stack.

Despite the differences in the originating companies, both Nautilus and Evolution shared a common design philosophy: componentisation. Not just plugins and extension modules, but whole components for integrating functionality into, and from, those applications.

The “object model environment” that begat the GNOME project’s acronym, and was simmering in the background, started getting into full swing between GNOME 1.0 and 1.2, with complex applications exposing components for other applications to reuse and re-arrange, as well as taking components themselves and adding new functionality without necessarily adding new dependencies.

Instead of raw CORBA, Eazel and Ximian worked on OAF, the object activation framework, a way to enumerate, activate, and watch components on a system; and Bonobo, a library that made it easy to write GUI components for GNOME applications to use, reducing the CORBA boilerplate.

The idea was that applications would mostly be “shells” that instantiated components, either provided by themselves or by other projects, and built their UI out of them. Things like menus, and the actions associated with those menus, could be described in XML, everyone’s favourite markup language, and exposed as part of the component. GNOME and its applications would be a collection of libraries and small processes, combined and recombined as the users needed them.
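
As a purely illustrative sketch, and not the actual Bonobo UI XML schema, a component might have described its menus along these lines, with the shell merging them into its own window and dispatching the named verbs back to whichever component provided them:

    <Root>
      <menu>
        <submenu name="File" label="_File">
          <!-- "verb" names an action implemented by the component -->
          <menuitem name="Open" verb="FileOpen" label="_Open..."/>
          <menuitem name="Print" verb="FilePrint" label="_Print"/>
        </submenu>
      </menu>
    </Root>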

Of course, the first real application of this design was to embed the Minesweeper game into Gnumeric, the spreadsheet application, because why wouldn’t you?

The effort didn’t stop there, though; Evolution was, in fact, a shell with email client, calendar, and address book components. Nautilus used Bonobo to componentise functionality outside the file management view, like the web rendering, or the audio playback and music album view.

This was the heyday of the component era of GNOME, and while its promises of shiny new functionality were attractive to both platform and application developers, the end result was by and large one of untapped potential. Componentisation requires a large initial effort in designing the architecture of an application, and it’s really hard to introduce after the fact without laying waste to working code. As an initial roadblock, it fits poorly with the “scratch your own itch” approach of free and open source software. Additionally, it requires not just a higher level of discipline in component design and engineering, but also comprehensive and extensive documentation, something that has always been the Achilles’ heel of many an open source project.

In practice, the componentisation started and stopped around the same time as GNOME 2 was being developed, and for a long while in the history of the project it was the proverbial dead albatross stuck around the neck of the platform.

For GNOME developers in mid-2000, though, this was a worthy goal to pursue, so they worked hard on stabilising the platform while applications iterated over it.

The project’s efforts moved along two prongs: minor releases for the 1.x platform, and planning towards the 2.0 release.

GNOME 1.2 was released as a minor improvement over the existing 1.0 platform in May 2000, and a similar 1.4 minor release was planned for mid-2001; meanwhile, the effort of integrating the functionality spun off in libraries like Bonobo and OAF, gnome-vfs and librsvg, into the core platform was well underway.


Ximian and Eazel helped shape GNOME’s future not just by creating products based on the GNOME desktop and platform, or by hiring GNOME developers; they also contributed to establishing two very important parts of the GNOME community that exist to this day: GUADEC, the GNOME conference; and the GNOME Foundation.

Next week we’re going to witness the birth of both GUADEC and the Foundation, and we’ll take a small detour to look at the tools used by the GNOME project to host their code and track the bugs contained in said code, in “Founding the Foundation”.

by ebassi at November 08, 2018 04:00 PM

November 01, 2018

Emmanuele Bassi

History of GNOME / Episode 1.2: Desktop Wars

The year is 1998.

In an abandoned warehouse in San Francisco, in a lull between the end of the previous rave and the beginning of the next, the volume of the electronica has been turned all the way down; the strobes and lasers have been turned off, and somebody has cracked open one of the black-tinted windows to let some air in. On one of the computers, made from parts scavenged here and there, with a long since forgotten beer near its keyboard, a script is running to compile a release archive of GNOME 0.20. The script barely succeeds, and the results are uploaded to an FTP server, just in time for the rave to start. There’s no need to write an announcement; the Universe will provide.

At the same time, somewhere in Europe, in a room dominated by large glass windows, white walls with geometric art hanging off of them, and lots of chrome finish, the hum of 50 developers with headphones working in concert quiets down as the project leader, like an orchestra conductor, rises to his feet. He looks at every young developer, from the 16-year-old newcomer with a buzz haircut to the 25-year-old grizzled old timer who will soon leave for his 5 years of mandatory military service; he then looks down, and presses a key that runs the build of a pre-release of KDE 1.0. The build will of course succeed without a hitch, and the announcement will be prepared later, in the common room, with a civilised discussion between all project members.

The stage is thus evenly divided.

The players are:

  • KDE, a commune of software developers using a commercially backed toolkit to write a free software desktop environment
  • GNOME, a rag tag band of misfits and university students, writing a free software toolkit and desktop environment

These are the desktop wars.

I jest, of course. The reality was wildly different from the memes. Even calling them “the desktop wars” is really a misnomer — after all, we don’t call the endless, tiresome arguments between Emacs and vi users “the text editor wars”, or the equally exhausting diatribe between spaces and tabs aficionados “the code indentation wars”. At most, you could call this a friendly competition between two volunteer projects in the same problem space, but that doesn’t make for good, click-bait-y, tribal division.

Far from being a clinical, cold, and corporate-like project, KDE was started by volunteers across the globe, even if it did have a strong European centre of mass; while it did use a version of Qt not released under the terms of a proper free software license, KDE had a strong ethos towards user freedom from the very beginning, being released under the terms of the GNU General Public License; and while it was heavily centralised, its code base was not a machine of perfect harmony that never failed to build.

GNOME, on the other hand, was not a Silicon Valley-by-way-of-Burning Man product of acid casualties and runaways; its centre of mass was nearer to the East coast of the US than to the West, even if GIMP was initially developed at Berkeley. GNOME was indeed perceived as the underdog, assembled out of a bunch of components developed at different paces, but its chaotic initial form was both the result of the first few months of alpha releases, and of the more conscious decision of supporting and integrating existing projects.

Nevertheless, the memes persisted, and the battle lines were drawn by the larger free and open source software community pretty much immediately, like it happened many times before, and will happen many times after.

The programming language was one of the factors of the division, certainly, bringing along the extant fights between C and C++ developers, with the latter claiming the higher technical ground by using a modern-ish language, compared to the portable assembly of the former. GNOME used the existence of bindings for other languages, like Perl, Python, Objective C, and Guile, as a way to counter-balance the argument, by including other communities and programming paradigms into the mix. Writing GNOME libraries surely was a game for C developers, but writing GNOME applications was supposedly to be all about choice. Or, at least, it would have been, once the GNOME libraries were done; while the project was in its infancy, though, the same people writing the libraries ended up writing the applications, which meant a whole lot of C being unleashed unto the unsuspecting world.

From a modern perspective, relying on C as the main programming language was probably the most contentious choice, but in the context of 1997 it would be hard to call it surprising. Sure, C++ was already fairly well known as a system level language, but the language itself was pretty much stuck at the 2nd edition of “The C++ Programming Language” book, published in 1989; the first ISO C++ standardisation came in 1998, followed by the next one in 2011, 13 years later. Additionally, programmers had been bitten by the binary incompatibilities across compilers, as well as across different versions of the same compiler, while the language evolved; and the standard library was found lacking in both design and performance, to the point that any major C++ library, like Qt or Boost, ended up re-implementing large chunks of the same basic data types. In 1997, writing a complex, interdependent project like a desktop environment in C++ was the “edgy” effort, for lack of a better word, comparable to writing a desktop environment in, say, Rust in 2018.

Another consideration, tied into the support for multiple languages, was that basically all high level languages exposed the ability to interface their internals using the C binary interface, as it was needed to handle the OS-level integration, like file and network operations.

We could debate forever if C was the right choice — and there are people that still do that to this day, so we would be in good company — but in the end the choice was made, and it can’t be unmade.

By and large, though, the deciding factor that put somebody in either the KDE or the GNOME camp was social and political; fittingly, as the free and open source software movement is a social and political movement. The argument boiled down to a very simple fact: the toolkit chosen by the KDE project was, at the time, not distributed under a license that fit the definition of free software, and that made redistributing KDE a pain for everyone that was not the KDE project themselves.

The original Qt license, the Qt Free Edition License, required that your own project never depended on modifications of Qt itself, and that you licensed your own code under the terms of the GPL, the LGPL, or a BSD-like license. Writing libraries depending on Qt also required jumping through additional hoops, like sending a description of the project to Trolltech.

Of course, that put the KDE project in the clear: they were consuming Qt mostly as a black box, and they were releasing their own code under the terms of the GPL. It did place the distributors of KDE binaries on less certain grounds, though, with Debian outright refusing to package KDE as it would put the terms of the GPL used by KDE in direct conflict with the terms of the Qt Free Edition License; the license itself was really not conforming to the Debian Free Software Guidelines, so distributing Qt itself (as well as any other project that decided to use its license) was out of the question. If you wanted to use pre-built packages for KDE 1.0 on Debian, you had to download and install them from a third party repository, maintained by KDE themselves.

Other distributions, such as SuSE and Mandrake, were less finicky about the details of the interaction between different licenses, and decided to ship binary builds of KDE as part of their main package repositories.

The last big name in the Linux distributions landscape at the time was Red Hat, and things were afoot there.

Just like Debian, Red Hat was less than enthused by the licensing issues of Qt and KDE, and saw a fully GPL and LGPL desktop environment as a solution for their commercial contracts. Of course, GNOME was mostly alpha quality software at the time, and had to catch up pretty quickly if it wanted to be a viable alternative to, well, shipping nothing and supporting roughly every possible combination of window managers and utilities.

Which is why, in 1998, Red Hat created the Red Hat Advanced Development Laboratories.

The RHAD labs were “an independent development group to work on problems of usability of the Linux operating system”: a few developers embedded in upstream communities, tasked with both polishing the many, many, many rough edges of GNOME, and taking over some of the unglamorous aspects of maintenance in a large project.

Under the watchful eye of Mark Ewing and Michael Fulbright, RHAD labs hired Elliot Lee, who wrote the CORBA implementation library ORBit, and worked on the componentisation of GNOME; Owen Taylor, who co-maintained GTK and shepherded the 1.0 release; Carsten “the rasterman” Haitzler, who wrote the Enlightenment window manager and worked on specifying how other window managers should integrate with GNOME; Jonathan Blandford, who worked on the GNOME control centre; and Federico Mena, who worked on GIMP, GTK, and on the GNOME GUI for the Midnight Commander file manager, as well as writing the first GNOME calendar application. In time, the RHAD would acquire the talents of other well-known GNOME contributors, like Havoc Pennington and Tim Janik, to work on GTK 2.0; Christopher Blizzard, to work on the then newly released Mozilla web browser; and David Mason, to work on the GNOME user documentation.

In September 1998, GNOME released version 0.30, to be shipped by Red Hat Linux 5.2 as a technology preview, thanks to the work of the people at the RHAD labs. Of course, it was not at all smooth sailing, and everyone there had to fight to keep GNOME from getting cut — mostly by convincing the Red Hat co-founder and CEO, Robert Young, that the project was in a much better state than it looked. The now infamous “Project Bob” was a mad dash of 36 hours for all the members of the RHAD labs to create and present a demo of GNOME, making sure it would work — at least, within the strict confines of the demo itself. Additionally, Carsten Haitzler wrote a cool new theme using the newly added loadable module support in GTK, to show off the capabilities of the toolkit in its upcoming 1.2 release. Up until then, GTK had looked fairly similar to Motif, but drawing on his experience with the Enlightenment window manager, Haitzler added the ability to customise the appearance of each UI element in a variety of ways.

Of course, in the end, it was the theme that won over Red Hat management, and without it, “Project Bob” may have failed and spelled doom for the whole RHAD labs and thus the commercial viability of the GNOME project.

In December 1998, Trolltech announced that Qt would be released under the terms of a new license, called the “Q Public License”, or QPL. The QPL was meant to comply with the Debian Free Software Guidelines and lead to Debian being able to ship packages of Qt, but it did not really solve the issue of GPL compatibility, so, in essence, nothing changed: Debian and Red Hat would not ship KDE until its 2.0 release, 2 years later, when Trolltech relented, and decided to ship Qt 2.0 under the terms of both the QPL and of the GNU GPL.

By the beginning of 1999, KDE was about to release version 1.1, whereas GNOME locked down the features for a 1.0 release, which was announced in March 1999. In April 1999, Red Hat Linux 6.0 shipped with GNOME as its default desktop environment. The 1.0 release was presented at the Linux Expo, in May; the presentation was hectic, and plagued by the typical first release issues; GMC, the file manager, for instance, would crash when opening a new directory, but the session manager was modified to restart it immediately, so all you’d see was a window closing and re-opening in the desired location.

The splash of the first major release attracted attention; it was a signal that the GNOME developers were ready for a higher form of war, one that involved commercial products that could integrate with, and be based off of GNOME.

The desktop wars were entering a new phase, one where attrition and fights by proxy would dominate the following years.


Next week we’re going to zoom into the nascent ecosystem of companies that were born around GNOME, and focus on two of them: Ximian, or Helix Code as it was called at the time; and Eazel. Both companies defined GNOME’s future in more ways than just by adopting GNOME as a platform; they worked within the GNOME community to create products for the Linux desktop, and they shaped the technical and social decisions of the GNOME project well after the first chapter of its history.

Alongside that, we’re also going to look at the effort to bring about the era of components that was initially outlined with the GNOME acronym: a desktop and an object model. We’re going to see what happened to that, once we step into “The land of the bonobos”.

by ebassi at November 01, 2018 05:00 PM

October 25, 2018

Emmanuele Bassi

The History of GNOME

I’ve done a thing which may be of interest if you’re following the GNOME community.

As I said on Twitter, I have spare time, and I like boring people to death by talking about things that matter to me a lot; one of the things that matter to me is GNOME and its community—and especially its history.

Of course, I had to go and make it about liminal spaces and magic rituals, because that’s what makes it fun. This, though, is a magic ritual. I’m holding a seance, and I’m calling forth the past of the GNOME project for the people that live down its light-cone.

GNOME has the luxury of having a lot of people that stuck around—some even from the early days when there was no GNOME; there are also other people, though, some of them born after Miguel’s announcement, that are now starting to contribute to GNOME. I guess that means that it’s time to look back a bit, and give some more context to the history of the project.

I hope I won’t bore you that much with this; I hope that people will learn something new, or re-discover something that was forgotten. In general, I do hope people will have fun with it.

by ebassi at October 25, 2018 06:00 PM

History of GNOME / Episode 1.1: GNOME

It is a long-running joke in the GNOME community that the acronym, or, more accurately, the backronym, that serves as the name of the project does not apply any more.

The acronym, though, has been there since the very first announcement of the project: GNOME, the GNU Network Object Model Environment.

The history of GNOME begins on August 15th, 1997.

NASA landed the Pathfinder on Mars just the month before.

Diana, Princess of Wales, would tragically die at the end of the month in Paris.

The number one song in the US charts is “I’ll be missing you”, by Puff Daddy.

On the 15th of August, Miguel de Icaza, a Mexican university student, announced the creation of the GNOME project, an attempt at creating “a free and complete set of applications and desktop tools, similar to CDE and KDE but based entirely on free software”.

To understand each part of that sentence, I’m afraid we will have to go back to a time forgotten by the laws of gods and men: the ‘80s.

In 1984, Richard Stallman started the GNU project. Don’t bother trying to expand the acronym: it’s one of those nerdy things for which the explanation is just not as clever when said out loud as it feels in one’s head. Incidentally, GNU is the reason why the G in GNOME is not silent. The history of GNU is an interesting topic, but we’ll avoid covering it here; if you want to, you can read all about it on the Free Software Foundation’s website.

GNU was, and still is, an attempt at creating a Unix-compatible system that is completely free as in “software freedom”; the freedom in question is actually four different freedoms:

  • the freedom to use this operating system on whatever machine you want
  • the freedom to study it, down to its source code
  • the freedom to share it with others, without going through a single vendor
  • the freedom to modify it, if it does not do something you want

These four freedoms are enshrined in the “Copyleft” movement, which uses distribution licenses such as the GNU General Public License and Lesser General Public License to create software programs and libraries that respect those four freedoms.

GNU is a collection of tools, like the compiler suite and Unix-like command line utilities, in search of a kernel capable of running them — the attempt at creating a working, mainstream GNU kernel is, shall we say, still in progress to this day. In 1991, though, Linus Torvalds, a student from Finland, created the Linux kernel, which was quickly adopted as the major platform for GNU, and the rest, as they say, is history — though, like GNU’s history, also not the one you’re going to hear in this podcast.

While the development tools and console utilities were mostly taken care of, the GUI landscape on Linux was composed of a heterogeneous set of tools, typically starting with a window manager; some task management tool, like a list of running programs and a way to switch between them; and smaller utilities, like launchers for common applications or monitoring tools for local and network resources.

Each environment was typically built from the ground up and customised within an inch of its life, a system tailored to the levels of micro-management and pain tolerance of each Linux user, and in those days, the levels of both were considerably high.

Large, integrated desktops were the remit of commercial Unix systems, like SunOS, AIX, HP/UX. One of those environments was the Common Desktop Environment, or CDE.

Created in 1993 by the Open Group, an industry consortium founded by the likes of IBM, Sun, and HP, CDE was built around Motif, a commercial widget toolkit written in the late 80s by HP and DEC for the X display server, the mainstream graphics architecture on Unix and other Unix-compatible operating systems.

The Open Group quickly standardised CDE, and until the early 2000s, when Linux had started eating most of their lunches, it was considered the de facto standard desktop environment for commercial Unix platforms.

One of the things that Linux and the commercial Unix systems running on the Gibsons had in common was the X graphics system. This meant that you could write an application on your Unix system at university, or at work, and then run it on Linux at home, and vice versa.

As is often the case, Linux users wanted to bring some of the tools used on the Big Irons to their little platform, and in 1996 a group of C++ developers led by Matthias Ettrich created the KDE project, a desktop environment in the same vein as the CDE project, as the name implies. Since Motif was written in C and released under an expensive proprietary license, they used a different widget toolkit as the basis for the desktop: “Qt” (pronounced “cute”), written in C++ and made by a Norwegian company called Trolltech.

Qt was released under a license called “The Qt Free Edition License”, which limited the redistributability of the code: you could get the source of the toolkit, but you could not redistribute modified versions of it. While this was good enough if you wanted to download and build the source on your personal computer, it put some strain on the people distributing Linux and its software, and it went against the Copyleft ethos of the GNU operating system that was the basis of Linux distributions — to the point that an effort to reimplement a Qt-compatible widget toolkit and release it under a free software license, called “Project Harmony”, was started by the GNU project.

In the meantime, Motif was proving to be a hurdle for other free software projects as well.

In 1996, Spencer Kimball and Peter Mattis, two students of the University of California at Berkeley, wrote a raster graphics image editing tool using the C programming language, as part of a university project, and called it the “GNU Image Manipulation Program”, or GIMP, as many have come to regret when searching the name on the Internet. As university students, they had access to Motif for free, so the initial implementation of GIMP used that toolkit, but redistribution to the world outside the university, as well as technical issues with Motif, led them to write an ad hoc GUI widget library for their application, called “The GNU Image Manipulation Program Toolkit”, or “GTK” for short.

A community of software developers, including people who would be influential to GNOME, like Federico Mena and Owen Taylor, started to coalesce around GIMP; they saw the value of an independent, free software widget toolkit, so GTK was split off from the main GIMP code repository in early 1997, and began a new life as an independent project, released under the terms of the GNU Library General Public License. The licensing terms of GTK allowed it to follow the four software freedoms — use, study, share, and modify — but also allowed the creation of less-free software on top of it: as long as you distributed any eventual changes you made to GTK under the same license, your application’s code could be released under any other license you wanted. This major distinction from Motif and Qt pushed the newer, volunteer-driven GTK forward as it closed the gaps with the older, commercially supported toolkits.

GTK had a fairly lean API, and its use of C quickly allowed developers to write “bindings” that let other programming languages use the underlying C API with a minimal translation layer to pass values around. Soon after, programmers of Perl, Python, C++, and Guile, an implementation of a dialect of the LISP programming language called Scheme, could use GTK to write plugins for GIMP, or complete, stand-alone applications. Compared to KDE’s choice of Qt, which only supported C++ and Python, it was a clear advantage, as it exposed GTK to different ecosystems.
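
To give a sense of what that lean C API looked like, here is a minimal sketch in roughly the GTK 1.x idiom; it only creates a window containing a button, but the flat functions and string-named signals are exactly the kind of surface that a binding can wrap with little effort.

    #include <gtk/gtk.h>

    /* called when the button is clicked */
    static void on_clicked (GtkWidget *button, gpointer data)
    {
      g_print ("Hello, GNOME!\n");
    }

    int main (int argc, char *argv[])
    {
      GtkWidget *window, *button;

      gtk_init (&argc, &argv);                      /* initialise the toolkit */

      window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      button = gtk_button_new_with_label ("Hello");

      /* signals are addressed by name and dispatched to plain C callbacks */
      gtk_signal_connect (GTK_OBJECT (button), "clicked",
                          GTK_SIGNAL_FUNC (on_clicked), NULL);
      gtk_signal_connect (GTK_OBJECT (window), "destroy",
                          GTK_SIGNAL_FUNC (gtk_main_quit), NULL);

      gtk_container_add (GTK_CONTAINER (window), button);
      gtk_widget_show_all (window);

      gtk_main ();                                  /* run until the window is closed */
      return 0;
    }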

What was GNOME like, back in late 1997/early 1998?

The answer to that question is: a heterogeneous collection of tools, mostly sharing dependencies, and developed together, that occasionally got released and even more rarely built out of the box without resorting to random snapshots from source revision control.

You had a panel, with launchers for common applications, and with a list of running programs. There were the beginnings of a set of core applications: a help browser; a file manager; a suite of small utilities, mostly GUI ports of command line tools; games; an image viewer; a web browser based on an embeddable HTML renderer; a text editor. Notably, and unlike KDE with its own KWM, not a window manager.

The question of what kind of window manager should be part of a GNOME environment was punted to the users — it’s actually the first instance of a controversial thread on the GNOME mailing list, with multiple calls for an “officially sanctioned” window manager, typically opposed by people happy to let everyone use whatever they wanted — the externally developed Enlightenment was the most common choice at the time, but you could literally run GNOME with WindowMaker; GNUStep; or XF Window Manager 2; and you could still call it “GNOME”.

From a code organization perspective, GNOME started off as a single blob, which got progressively spun into separate components:

  • gnome-libs, a series of core libraries based off of GTK
  • gnome-core, which contained the session manager and the panel
  • gnome-graphics, which contained the Electric Eyes image viewer
  • gnome-games, a mixed bag of simple games
  • gnome-utils, a mixed bag of GUI utilities, like gtop
  • gnome-admin, which contained an SNMP monitoring tool and Gulp, the line printer configuration utility

While GNOME at the time was still a loosely connected set of components, the overall direction of the design was far more grandiose. If you remember the GNOME acronym from the announcement, it mentioned a “network object model”.

But what is an “object model”?

One of the things that Microsoft and Apple did with much fanfare, back in the early ‘90s, was introducing the concept of embeddable components provided by both the OS and applications.

You could create a spreadsheet, then take a section of it, embed it into your word processor document, and edit the table from within the word processor itself, instead of flip flopping between the two applications on the limited screen real estate available. The idea was that documents and applications would just be built out of blocks of data and controls, shared across the whole operating system. Those components were available to third party developers in programming suites like Visual Basic, Visual C++, or Delphi, and you could quickly create a well integrated application out of them.

Sun tried to push Java as the foundation for this design; Apple called their short-lived implementation of this technology “OpenDoc”; Microsoft called its much more successful version “OLE”, or Object Linking and Embedding; and, of course, any self-respecting desktop environment competing with Windows needed something similar to match features.

In order to implement an infrastructure of objects, and of remote procedure calls that could be invoked on those objects, you needed a communication system; OLE used COM, the Component Object Model; GNOME decided to use CORBA, or the Common Object Request Broker Architecture, which was not only meant to be used on local systems but also worked across the network. As a CORBA implementation, GNOME started by using MICO, a C++ library, and then replaced it with ORBit, a C library written specifically for the project to address some of MICO’s shortcomings.

While ideally GNOME would provide components for everything, the initial beneficiary of this architecture was the only recognisable bit of the desktop environment that was visible to the user all the time: the panel.

The contents of the panel were small, self-contained applications that would use CORBA as a mechanism to negotiate being embedded into the panel’s own window to display their state, or to pop up things like menus and other windows on user request.

The core applications started getting CORBA-based components, but we’ll have to wait until after GNOME 1.0 to see widespread adoption of this architecture.

Between August 1997 and May 1998, GNOME released various 0.10 snapshots to mark the progress of the project. By June 1998, GNOME 0.20 was released as a “pseudo-beta”, followed, in September 1998, by 0.30, the first named release of GNOME, called “Bouncing Bonobo”.

Between 0.20 and 0.30, though, something had happened: Red Hat, a Linux distribution vendor, founded the Red Hat Advanced Development Labs and hired a bunch of software developers who, amongst other things, happened to contribute to GNOME; the benevolent corporate overlords had started taking notice of this Linux desktop.

Nobody really knew it at the time, but the First Desktop Wars had begun.

by ebassi at October 25, 2018 03:27 PM

History of GNOME / Prologue

History is one of my passions; more than offering a narrative of the past, it gives us context, and with context comes a better understanding of the people that made history, and of their choices.

This is true for every human endeavour, and so it’s also true of software projects.

Free and Open Source Software is a social and political movement before it is a technological one; as such, the history of a free software project should help us contextualise not just the technological decisions, but the personalities, the goals, and the constraints that influenced those decisions.

It’s often the case, especially in the field of software development, that newcomers to a specific project, or a subset of a project, end up facing some code that does not work, or that barely works some of the time. The immediate reaction is typically to try to understand that code, that technology, that whole stack, in order to fix it or replace it. Just learning what the code does in that specific instant, though, gives only a partial picture of how that code, that technology, that whole stack came into being.

It’s said that programmers think that bad code comes from bad programmers; in reality, bad code often comes from good people operating under different constraints than ours. Understanding those constraints, and how they changed over time, is the first step in understanding how to write better code.

This is especially true of projects that involve hundreds of developers, operating under wildly variable constraints, for decades.

Projects like GNOME.

GNOME is a free software project that aims to create an environment for computer users that is

  • easy to use
  • accessible for everyone
  • under a license that allows everyone to modify and contribute to it

As far as free and open source software projects go, it’s fairly old; it’s old enough to vote, and to buy a drink in the US. GNOME, as a community, attracts not only volunteer contributions, but commercial ones as well, and it provides the technological underpinnings for both volunteer and commercially driven products. It is a recognised brand, for better or worse, and it comes with opinionated decisions, as well as an ethos that not only drives the individuals that make it, but also the ones that use it.

More importantly, GNOME comes with a varied history, and a lot of baggage, deeply rooted in the choices made by thousands of people. It’s something that newcomers to the project often deal with by asking questions of whoever appears to be knowledgeable — but it’s not something that the project itself offers.

So, I wanted to work on recontextualising GNOME.

Why me, though?

My history with GNOME does not date as far back as the project’s origins. I did start using Linux in 1997, around the time GNOME was created, but for the first couple of years I mostly used the VT for emails, IRC, Usenet, and the occasional Napster download (kids, ask your parents); a GUI was something I used to launch Netscape Navigator. I experimented with KDE, but I ended up settling on WindowMaker, because I enjoyed the non-Windows-y look of the desktop.

As a user, I switched to GNOME full time by the tail end of 1.x series, and went through the 2.0 transition with a stronger interest in the direction of the project — to the point that I started contributing to it.

GNOME is the reason I became a passionate believer not just in the freedom of releasing the source code of your work, but also in the necessity of a working environment for the people using a server, a workstation, a laptop, or even a phone. I learned principles of software development; architecture; security; privacy; hardware enablement; design and user experience; quality assurance and testing; accessibility. I got my first job through GNOME, and I met wonderful people that I consider friends, peers, and inspirations. I lived through a huge chunk of the history of the GNOME project, and in some cases I can kid myself into thinking I helped shape that history. This does not make me the most neutral or dispassionate person to lend his perspective on GNOME, but you’ll have to make do with me, I guess.

In the beginning this was going to be a presentation, but Jonathan Blandford did an amazing keynote on the history of GNOME in Manchester, at GUADEC 2017, for the 20th anniversary of the project, where he went through most of the milestones and events of the previous two decades. I realised, at that point, that it would take somebody like me, who’s less talented than Jonathan, much more than an hour to do the same, let alone go through GNOME’s history in depth.

I started writing a series of blog posts, but halfway through I realised I was using the same kind of voice I use when writing presentation notes, which is when I realised I should probably just read them. Thus, a podcast.

So, let’s lay down some of the ground rules for this.

The first question is: who is this podcast for?

When I write, I have three broad categories of people that I want to reach; the first is composed of people who are new, or relatively new, to the GNOME project, and want to add more context to the history of the project and of the community they just joined. In a large, established project like GNOME, there’s a lot of insider information that you typically have to suss out in person, or that you end up gleaning if you hang out in the social channels long enough. Why is using flags in a UI such a problem? Why are settings expensive? What’s up with the clocks? There is a lot of established knowledge that is only evident through its final results, but the context for those results is missing. It’s already hard to get started in a new community, so this podcast aims at flattening the learning curve for newcomers at least a bit.

The second audience is made up of people who are not familiar with the GNOME project beyond having heard about it; they are familiar enough with Linux, but lack the “inside baseball” knowledge of both the community and the technology. They may know that GNOME is a desktop environment, but have no idea what a desktop environment is made of.

The third audience is the GNOME developers themselves; people that have been embedded in the community long enough to know some, or many, of the ins and outs of the decisions made in the past, but who may be unfamiliar with the whole history of the project. I hope they’ll forgive me for going on tangents, especially at the beginning.

The podcast is divided into chapters, roughly corresponding to each major version of the desktop. There will be one episode per week, with breaks between chapters. I’m not the fastest, nor the most adept at this medium, so I want to allow some leeway to maintain both quality and quantity.

Each episode will go through events in the history of the GNOME project, but I’ll try to take some time to expand on the context in the Free Software world, as well as the rest of the happenings around the same time.

In some cases, I may have to give some technological definitions, mostly to ensure that we have a common vocabulary, especially for people who are not heavily invested in the stack itself, or for people who learned English as a second language.

The primary sources for this podcast are public mailing list archives; additionally, I’ll refer to blogs from GNOME developers, as well as other public sources, like interviews and presentations given at conferences. As I said earlier, I do have direct experience of a chunk of the GNOME timeline, but I’ll try my best to stick to public sources for the events that involved me as well; this is not a “tell-all” kind of podcast. Using only public sources has the advantage that I can refer to specific information, but has the downside that I might miss some of the backstory; I don’t want to create an oral history of the project, as that would be its own endeavour. I’m sure people involved in some of the events will have their own version, and I’ll gladly accept corrections.

Each episode will have a companion article, which will contain the script and the sources I’ve used.

Hopefully, this podcast will be interesting for both newcomers to the GNOME project, and for old timers as well. Looking back to what happened helps us shape what’s to come; and having a better understanding of the past can give us confirmation of our choices, or a chance to revisit them.

Finally, I wanted to have an excuse to say out loud, with apologies to Mike Duncan:

Hello, and welcome to the history of GNOME.

by ebassi at October 25, 2018 03:26 PM

October 07, 2018

Tomas Frydrych

Dr Beeching’s Unicorns

Glen Ogle. Most of the time a place on the way to somewhere else, somewhere more exciting. Yet, for me also a special, magical place where years ago my inner eye first really glimpsed the beauty of this land.

We had just got our first car, a decade-old Astra Belmont, and were on our first road trip out of the Central Belt. Up to Loch Ness and Skye, if I recall correctly—truth be told, I no longer recall much of it: a vague memory of the Nessie exhibition, a boat trip on the loch, zero memories of Skye. But I do remember Glen Ogle. The way it opened up in front of my eyes for the first time as we drove up from Lochearnhead. It cast a spell over me, one that has lasted throughout the years. Looking back, I think this was the moment this land became My Scotland.

We returned to explore the old railway line. Back then there was no path to speak of, the old embankment shrouded in trees. Nature doing its best to erase man’s intrusion. Years later I watched with dismay as so many of those trees were cleared during the Route 7 construction—a valuable project for sure, but such chainsaw enthusiasm.

But before that it was like being transported into another, wild, mythical world, half expecting a ghost train to appear at any moment from around the corner underneath the dark canopy.

It didn’t. Instead we ran into a giant locked-up gate worthy of Alcatraz. A gate clearly meant to keep people, not animals, out. (Not such an uncommon sight in the days before the Land Reform Act; the ‘good old days’ weren’t really.) Undeterred, we climbed over for a wee bit more exploring, till an industrial-size manure heap finally stopped us.

‘That would be one of Dr Beeching’s, then’, said my mother-in-law as we discussed our trip over the dinner table.

Dr Beeching, 1913 - 1985. An overpaid political appointee to the chair of the British Railways Board. His lasting legacy the decimation of UK’s railway infrastructure. All in all some 6000 miles of railway track, over a third of the UK total, decommissioned.

The construction of the Callander to Oban railway involved 13,000 workers and took fifteen years; undone by the stroke of a pen. A story repeated over and over across the land. Political expediency masquerading as fiscal prudence. A lack of long-term strategic vision, the consequences of which are felt acutely more than half a century later, and will be for much longer, perhaps for ever.

Today Linda’s doing her recce for the Glen Ogle race, and I am out for a wander with a camera on the other side of the glen. A blustery autumnal day. The colours are beautiful, the showers frequent, the wind gusts strong enough to knock my tripod over and send my bag of lenses rolling down the steep hill. I spend most of my time ‘waiting for the light’, left to my thoughts.

Another heavy patch of rain has painted a bright rainbow across the top of the glen. Below its arch a steady stream of relentless traffic slowly making its way up the road, the frustration of the drivers almost palpable in the air. Dr Beeching’s unicorns.

My Glen Ogle. A place of internal inter-generational conflict. There on the other side, standing on the old viaduct, the younger me protesting the tree felling, the loss of (perceived) wildness. On this side the present me, wishing the working railway was back. Not a change of values as such, merely of perspective.

Later today when I develop the film I will find out that nothing worthwhile came out of the pictures. But it never is about the pictures.

by tf at October 07, 2018 09:59 AM

September 14, 2018

Tomas Frydrych

A Prince Holding Court

I saw him as soon as I came over the rise. Hard not to, perched on a rock some hundred yards ahead, completely out of scale in this landscape.

I had heard of them being seen around here from time to time, but this is the first time I have laid my eyes on one, the first time I have come close to one at all. A young bird judging by the colouring, just sitting there, that unmistakable long neck and large beak, the pronounced shoulders -- the royal posture of a sea eagle.

Of course, I have neither binoculars nor a camera, not even a phone. I am three hours into a short run that got a bit out of hand when, on the spur of the moment, I decided to wade through the loch into the no-man's land. But,

Don't need no Real-to-Reel
    Recorder
to tell me I've been there,
I ken that fine.*

And so I forget about the midges (and the dozens of ticks crawling over me) and just stand here watching, in awe.

The thing that takes me by surprise the most is not the eagle, but the other birds. There are about a dozen, perhaps more, mostly crows (hooded and not) sitting in a circle around it. A tight circle, two, three yards away, quite unconcerned, preening their feathers. Like a scene from the Jungle Book: a Prince holding Court.

I am seen. The eagle flaps lazily and flies in my direction, passing maybe thirty yards away, checking me out. An about turn, back at my eye level, this time no farther than fifteen yards. You read about the two-plus meter wing span, but o' my, pictures and numbers didn't prepare me for the reality.

The eagle lands back at its perch, the other birds settling once more around it; the Court is back in session. Eventually the Prince gets bored and takes off once more in the direction of the sea, and the Court rises again.

One of the carrion crows follows it, trying to peck its back.

Keep your friends close but your enemies closer.

The difference in size makes it look rather comical! The eagle seems unperturbed and, with a nonchalance of an apex predator, keeps rising and rising until it's just a dot high up in the sky.

And me? I'll be back here an hour later, with two cameras and five lenses, and the immutable Time laughing behind my back. Never mind, I ken that fine. (And you? You get a picture of a sunset instead.)

--

[*] Andrew Greig, Men on Ice.

by tf at September 14, 2018 08:50 AM

September 10, 2018

Tomas Frydrych

Of Eagles and Men

There are three of them up there, and what a racket! Correction: the racket, that’s just the two of them. She is soaring silent, near motionless, regal; aloof. Her path seemingly unalterable. On a mission permitting no distractions.

--

I am transported years back to a different place, studying a black and white photograph above a fireplace. Old Mr Thornburn shuffling around a table behind me. ‘Such a camaraderie I had not known before, nor have since,’ is all he (ever) said about it.

Up on the mantelpiece a young Mr Thornburn. In a glass bubble on the tail end of a (seemingly immovable) Lancaster in a cloudless sky. To me, a casual observer two generations removed, a picture of peace and tranquility, of adventure.

See? The camera does lie! And what an illusion! For this is an image of nothing less than Life Itself suspended. For a few hours? For eternity? For a mission permitting no distractions.

--

The screeching grows louder. Lot of posturing, then just one brief clash. More posturing, but we (me, her, the two of them) know it’s all over. Age and experience triumphs over the virility of youth. The upstart, minus a few feathers, retreating; he will not be back.

Yet, we (me, her, the two of them) know it’s far from over. Merely the beginning of the end. The upstart will return, perhaps next year, perhaps the year after. Bigger, craftier, having the upper claw.

I watch the two of them soar higher and higher. And we (me, her, him) know that when that day comes it will be the other she takes back to her nest. Her mission allows no sentiment. But, for now at least, the inevitability of the future has been deferred.

[Though we (you and me, if not her and them) know chances are one or the other, if not all three, will get shot, trapped, poisoned, or just fly miles out to sea to drown—Scottish eagles seem to prefer such a fate to longevity, some (useless wastrels) opine.]

--

Mr Thornburn is gone now, and with him the memories he never spoke of. I didn’t then, but I understand now that some experiences are too profound to trivialise by telling, in turn making other experiences too trivial to merit it. My great grandfather’s journal, from yet an earlier war, contains such stories—stories that could never be shared with those who couldn’t understand, and didn’t need telling to those who did.

The younger me, too preoccupied with the (untold) tales of heroic deeds. The older me, too late, wanting to ask the (heavily pregnant) question that back then was hanging in the air.

Mine is a sheltered generation, collectively short on stories of substance. And so we have become compulsive tellers of trivialities, serial manufacturers of pseudo-heroic deeds, dressed up in multicoloured cloaks of fake profundity. Entertaining distractions from the imperative of the (ultimate) mission.

Tall tales of denial.

--

All that is left of the eagles are their distant calls, leaving me alone to my thoughts. ‘Generation goes, and generation comes, but the earth lasts forever.’

Perhaps.

The Earth is groaning underneath us. We can count on the going, but can we on the coming? How many more cycles are there left? Two, three?

If only the eagles could talk.

by tf at September 10, 2018 07:16 AM

September 04, 2018

Emmanuele Bassi

Reference counting in modern GLib

Reference counting is a fairly common garbage collection technique used in many projects; the core GNOME platform uses it pretty much all the time, from container data types to GObject.

Implementing reference counting in C is typically fairly easy. Add a field to your data type:

typedef struct {
  int ref_count;

  char *name;
  char *address;
  char *city;

  int age;
} Person;

Then initialise it when creating a new instance:

Person *
person_new (void)
{
  Person *res = g_new0 (Person, 1);

  res->ref_count = 1;

  return res;
}

Instead of a person_copy() and a person_free() pair of functions, you need to write a person_ref() and a person_unref() pair:

Person *
person_ref (Person *p)
{
  // Acquire a reference
  p->ref_count++;

  return p;
}

static void
person_free (Person *p)
{
  // Free the data
  g_free (p->name);
  g_free (p->address);
  g_free (p->city);

  // Free the instance
  g_free (p);
}

void
person_unref (Person *p)
{
  // Release a reference
  p->ref_count--;

  // If this was the last reference, free the data
  // associated to the instance and then the instance
  // itself
  if (p->ref_count == 0)
    {
      person_free (p);
    }
}

Of course, trivial doesn’t mean correct. For instance, the code above assumes that all reference acquisition and release operations will happen from the same thread; if that’s not the case, you’ll have to use atomic integer operations to increase and decrease the reference count. Additionally, the code above does not check for overflows in the reference counting, which means that the value could saturate and lead to leaks.
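
For the sake of completeness, here is a minimal sketch of what the thread-safe variant of the two functions above could look like, using GLib’s existing atomic integer operations; note that it still does nothing about overflow:

Person *
person_ref (Person *p)
{
  // Atomically acquire a reference; safe to call from
  // multiple threads at the same time
  g_atomic_int_inc (&p->ref_count);

  return p;
}

void
person_unref (Person *p)
{
  // Atomically release a reference; g_atomic_int_dec_and_test()
  // returns TRUE only for the call that drops the count to zero,
  // so the data is freed exactly once
  if (g_atomic_int_dec_and_test (&p->ref_count))
    {
      person_free (p);
    }
}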

Reimplementing all the checks and behaviours is not only boring, but it inevitably leads to mistakes along the way. For this reason, GLib 2.58 introduced two new types:

  • grefcount, to implement simple reference counting
  • gatomicrefcount, to implement atomic reference counting

Both come with their own API, which leads to this code:

typedef struct {
  grefcount ref_count;
  // or: gatomicrefcount ref_count;

  // same as above
} Person;

Person *
person_new (void)
{
  Person *res = g_new0 (Person, 1);

  g_ref_count_init (&res->ref_count);
  // or: g_atomic_ref_count_init (&res->ref_count);

  return res;
}

Person *
person_ref (Person *p)
{
  g_ref_count_inc (&p->ref_count);
  // or: g_atomic_ref_count_inc (&p->ref_count);

  return p;
}

void
person_unref (Person *p)
{
  if (g_ref_count_dec (&p->ref_count))
  // or: if (g_atomic_ref_count_dec (&p->ref_count))
    {
      person_free (p);
    }
}

The grefcount and gatomicrefcount make it immediately obvious that the field is used to implement reference counting semantics; the API checks for saturation of the counters, and will emit a warning; the atomic operations are verified. Additionally, the API checks if you’re trying to pass an atomic reference count to the grefcount API, and vice versa, so you have a layer of protection there during eventual refactoring, even if the types are both integer aliases.

We did not stop there, though.

Adding a reference count field to a structure is not always possible; for instance, if you have ABI compatibility restrictions, or if the type definition is public, adding a field may just not be something you can do within the same API version. By way of an example: you may have a type meant to be typically placed on the stack, so it needs a public, complete declaration in order to have a well-defined size at compile time. You may also want to pass around an instance of the type as the argument for a GObject signal, or as a property — but it may be expensive to copy the data around, so you really want to have an optional reference counting mechanism that is invisible to the vast majority of the use cases.

This is why we also added an allocator function that adds reference counting semantics to the memory areas it returns, called GRcBox:

typedef struct {
  char *name;
  char *address;
  char *city;

  int age;
} Person;

Person *
person_new (void)
{
  return g_rc_box_new0 (Person);
}

Person *
person_ref (Person *p)
{
  return g_rc_box_acquire (p);
}

void
person_unref (Person *p)
{
  // person_free() is copied from the code above; we use
  // g_rc_box_release_full() because we have data to free
  // as well, but there's a variant for structures that
  // do not have internal allocations
  g_rc_box_release_full (p, person_free);
}

As you can see, this cuts down the boilerplate and repetition considerably.

The data returned to you by g_rc_box_new() and friends is exactly the same as you’d get from g_new(), so you can pass it around to your API exactly the same — but it is transparently augmented with reference counting semantics. You acquire and release references without having an explicit reference counter. The only restriction is that you cannot reallocate the data, so calling realloc() is not allowed; and, of course, you cannot free the memory directly with free() — you need to release the last reference.

Similar to GRcBox, there’s a GAtomicRcBox, which provides atomic reference counting semantics, with a similar API.

Both GRcBox and GAtomicRcBox are Valgrind-safe, so you won’t get false positives or unreachable memory if you run your code under Valgrind.

You don’t even need to have a structure to allocate: you can use GRcBox to allocate any memory area with a non-zero size. Incidentally, this is how we implemented an oft-requested feature: reference counted strings, which are also available in GLib 2.58.
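
Before getting to strings, here is a minimal sketch of what allocating a plain, structure-less buffer with reference counting semantics could look like; the size here is arbitrary:

// Allocate 256 bytes with reference counting semantics;
// g_rc_box_alloc0() would additionally clear the contents
guint8 *buf = g_rc_box_alloc (256);

// Acquire a second reference to the same buffer
g_rc_box_acquire (buf);

// Release both references; the buffer is freed when the
// last reference is dropped
g_rc_box_release (buf);
g_rc_box_release (buf);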

Reference counted strings work exactly like any other C string, but instead of copying them and freeing them, you acquire and release references to them:

char *s = g_ref_string_new ("hello, world!");

// "s" is just like any other C string
g_print ("s['%s'] length = %d\n", s, strlen (s));

g_ref_string_release (s);

Reference counted strings can also be interned, which means that calling g_ref_string_new_intern() with the same string twice will give you a new reference to the same string the second time around, instead of allocating a new one; once the last reference to the interned string is dropped, the string is un-interned, and the resources allocated for it are freed.
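
A minimal sketch of how interning behaves; the pointer comparison is the whole point of interning:

char *a = g_ref_string_new_intern ("hello");
char *b = g_ref_string_new_intern ("hello");

// The second call returns a new reference to the same string,
// so the two pointers compare equal
g_assert (a == b);

// Each reference still needs to be released
g_ref_string_release (b);
g_ref_string_release (a);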

Since you may be wary of passing around char * for reference counted strings, there’s a handy GRefString C type you can use to improve readability, like we have GStrv for char **:

GRefString *s = g_ref_string_new ("hello");

// ...

g_ref_string_release (s);

GRefString also has an autocleanup function, so you can do:

{
  g_autoptr(GRefString) s = g_ref_string_new ("hello!");

  // ...
}

and the string will automatically be released once it goes out of scope.

by ebassi at September 04, 2018 03:35 PM

August 18, 2018

Tomas Frydrych

M4/3: The Outdoor Camera System

It’s been 10 years since the birth of the M4/3 camera system. I got my first M4/3 (Lumix GF2) in 2010 and never looked back. Indeed, I am about to argue that during that decade M4/3 has become the best camera system for the landscape photographer on the move and the wildlife photographer alike, hitting the sensor size sweet spot. And yet, it’s completely overlooked by the outdoor movers and shakers!

There are reasons for that. Some historical (there is a large contingent of outdoor photographers who switched to digital when it was being grafted into legacy 35mm systems). Some to do with misunderstanding of what determines digital picture quality and where the technology is. And let’s not be shy, there is some pure ‘mine is bigger than yours’.

Compared to all the other interchangeable lens systems out there, M4/3 has a whole bunch of things in its favour; here are some of them off the top of my head.

  • M4/3 is light on the back. My ‘complete’ outdoor camera kit, which consists of a camera, 24-80mm eq. zoom, a fisheye, 200-800mm eq. ‘bird’ lens, a full-sized tripod, spare batteries and a handful of filters, weighs 5.5kg. Apart from the tripod it fits inside a BYOB 10 insert – my mother-in-law goes around with a (considerably) bigger handbag than that! Now try that with a full-frame system; the equivalent ‘bird’ lens alone weighs more than the whole lot and measures over 50cm.

  • M4/3 is light on the wallet. At £1,200 the aforementioned Leica ‘bird’ zoom is not cheap by any means, but it’s not out of the reach of us mere mortals. And that’s basically as expensive as it gets. Equivalent full-frame lenses start at ~£6,500 ... Similarly, top, professional grade, M4/3 cameras such as the Lumix G9 will set you back by ~£1,200 ... need I say more?

  • The M4/3 dual image stabilisation (5-axis in camera + in lens) is second to none. It means not only that I can comfortably hand-hold shots down to 1/25s, but that, if required, I can hand-hold that 800mm eq. beast, and that is priceless.

  • If you want a camera that can take both excellent stills and video, M4/3 is without competition, and most of the lenses have been designed with this in view (i.e., zooms that hold a constant aperture while zooming).

  • Ask yourself why is there such an abundance of sensor cleaning kits for full-frame cameras, but nothing for M4/3? As it happens, M4/3 comes with built-in sensor cleaning as standard. Taking pictures outdoors? Invaluable!

‘But the image quality, the bokeh ...’, I hear you say. OK, let’s talk some geekery.

Size Matters

Megapixels sell cameras, for the belief that image quality is to do with pixel count is more deeply entrenched in the psyche of the digital-era photographer than anything else. Yet the reality is a lot more complex, and in fact more pixels are not necessarily better (back to that in a moment).

On its own, the MP value has a bearing only on how much the image can be enlarged. To get a good quality photographic print you need around 200 dpi (dots per inch); if you are looking for truly bespoke work, and have a capable enough printer (a person, not a gadget), then 300 dpi. But that means an old 6MP camera is enough for an excellent print at around 10"-15" wide, while 20MP is good enough up to 17"-25" – very few, even professional, photos will ever be printed at anything near that size.
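
As a rough sanity check (assuming a 3:2 6MP image of about 3000 × 2000 pixels and a 4:3 20MP image of about 5200 × 3900 pixels; the exact figures vary by camera):

\[
\frac{3000\ \text{px}}{200\ \text{dpi}} = 15\ \text{in}, \qquad
\frac{5200\ \text{px}}{300\ \text{dpi}} \approx 17\ \text{in}, \qquad
\frac{5200\ \text{px}}{200\ \text{dpi}} = 26\ \text{in}
\]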

But what do I know? Here is what Colin Prior has to say on the subject of megapixels:

What do you need a 100MPs for? ... What do we need that resolution for? Nobody can answer that question! Anything more than 30MPs is total bonkers (An interview on the The Togcast Podcast, episode #26)

So, if you do the sort of photography that requires regularly producing prints at A2 or bigger, then M4/3 might not be a good fit, but do you? Most of us don’t; the vast majority of today’s imagery is intended for digital consumption, which puts a much lesser demand on pixel density. A 27" iMac retina display has a resolution of 200 dpi, a typical consumer monitor 120 dpi, ‘4k’ means 4000 pixels wide – a 20.3MP M4/3 image exceeds all of those.

‘But lot of pixels means you can crop a lot!’ I prefer to compose my pictures on location instead, don’t you?

In contrast, the physical size of the sensor matters a great deal. M4/3 cameras have a sensor that has (a clue in the name) a 4:3 ratio, and a diagonal exactly 1/2 of a 35mm film negative (what in the digital age, to the amusement of medium and large format photographers everywhere, has come to be known ‘full frame’, FF for short). The relationship between the FF and M4/3 diagonals is referred to as a crop factor of 2.

The physical size of the sensor determines three things in particular: the total amount of light that falls on the sensor at any given light conditions, the focal length of a lens needed to achieve a given angle of view, and the perceived depth of field.

Equivalent Aperture

Because the M4/3 sensor area is a little bit over 1/4 of the FF sensor area, the total amount of light (the number of photons) falling on the sensor is ~1/4 of what would fall on a FF sensor if both lenses were set at the same f-stop, or the same amount of light that would fall on the FF sensor if its lens was closed down by 2 f-stops. This relationship is referred to as equivalent aperture.

The amount of light that falls on the sensor has bearing on noise. There is a whole field of science dedicated to the subject (signal theory), but in essence, in any real-world system that transmits information, the useful bits (signal) are always mixed with bits that should not be there (noise; think of it as the crackling of an old vinyl). The closer the size (amplitude) of the signal is to that of the noise, the more intrusive the latter becomes (quiet music, loud crackling). This relationship is described by signal-noise ratio (SNR).

In a digital sensor the amplitude of the noise is largely given by the quality of the circuitry, while the amplitude of the useful signal is given by the intensity of the light. When it’s dark, the signal is small and has to be amplified. This is where the ISO number comes in. With a digital camera the ISO number represents the amplification factor: the bigger, the more amplification. But the problem with amplification is that it increases not only the amplitude of the signal, but also of the noise.

But the important thing to understand, and I get the distinct impression that a lot of photographers don’t, is that when it comes to digital, noise is not determined by the amount of light that falls on the sensor (indicated by the equivalent aperture) but by the light falling on an individual pixel on the sensor. In other words, noise depends not only on the equivalent aperture, but also on the overall pixel count.

So, at the same f-stop a pixel on a 45MP FF sensor receives not 4x (as the equivalent aperture would suggest) but only ~1.7x the amount of light that a pixel on a 20MP M4/3 sensor does, i.e., all other things being equal, the M4/3 lens needs to be opened up by less than a stop to achieve the same performance – the fact that the current generation of M4/3 cameras has stuck with sensible sensor pixel counts, whereas FF sensors are chasing ‘bonkers’ pixel densities, works hugely in M4/3’s favour.
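
As a rough back-of-the-envelope check (assuming a 36 × 24 mm full-frame sensor of about 864 mm², a 17.3 × 13 mm M4/3 sensor of about 225 mm², and 45MP versus 20.3MP; differences in pixel architecture are ignored):

\[
\frac{\text{light per FF pixel}}{\text{light per M4/3 pixel}} \approx
\frac{864\ \text{mm}^2}{225\ \text{mm}^2} \times \frac{20.3\ \text{MP}}{45\ \text{MP}}
\approx 3.8 \times 0.45 \approx 1.7
\]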

In practical terms, the noise of the system determines two things: how quickly the image degrades when the ISO value is pushed up, and the dynamic range of the sensor (the difference between the lightest and darkest light level the sensor can capture). So what does this look like in the real world?

Regarding high ISO usability, it looks pretty good. The current crème de la crème of M4/3 cameras, the Lumix G9, doesn’t show significant amounts of noise until above ISO 3200. In terms of outdoor photography this is very decent. And even my now somewhat ageing GX8 is, in my experience, perfectly fine up to at least ISO 800.

At the same time, M4/3 lenses tend to be fast. A typical zoom lens will come at f2.8, good enough to shoot any daytime landscape at ISO 100, and it gets even better with primes. The Olympus M.Zuiko fisheye opens up to f1.8, which is good enough for night time photography at ISO 100. Personally, I rarely need to set my M4/3 cameras to anything other than ISO 100, the main exception being bird photography with the big telephoto at fast shutter speeds (1/2500s or so); but even for that ISO 3200 is usually enough.

Then, of course, there is the excellent performance of current noise reduction algorithms that can be used in post-processing. In other words, it’s just like with the megapixels – in terms of low light noise the M4/3 cameras are well beyond what is necessary to take that perfect landscape image.

What about the dynamic range? The active range of the human eye is estimated to be ~10-14EV (the absolute dynamic range is about 24EV, but this is not actively usable since the eye takes the better part of an hour to fully adjust to a transition from light to darkness). Top end FF (as well as MF) sensors come in a bit under 15EV, while top end M4/3 cameras deliver a bit over 13EV.

‘Aha!’, you say, ‘the FFs are almost 1.5 stops better!’

My answer to that is twofold: on the one hand, 13 and a bit EV is more than enough in most landscape scenarios, and when it is not, it is easily remedied (either by an old fashioned graduated filter, or by that wonderful feature of modern digital cameras, exposure bracketing). On the other hand, 1.5 stops is not enough of a difference; I routinely use a 3EV GND filter, which means I’d still have to do something even with a top end FF (or even MF) camera.

So to sum up, I don’t believe the ‘better image quality’ argument for FF sensors holds. It’s not that sensor size doesn’t come at a cost of quality, it does. But once the quality of the hardware gets beyond a certain point, it no longer matters, and the quality of the image comes down purely to what we do with the camera. And today’s M4/3 cameras are, in my personal view at least, well beyond that point.

Equivalent Focal Length

For a given angle of view, the focal length of the lens is proportional to the crop factor. So, if the ‘normal’ angle of view for FF sensor is created by a 50mm lens, for M4/3 the same is achieved with a 25mm lens, which would be said to be a ‘50mm equivalent’.

The big consequence of this, and the real strength of M4/3 as a system for the photographer on the move, as I already noted, is that the lenses are much smaller and lighter (not to mention cheaper). As Jim Frost once put it, ‘you don’t need a wheelbarrow to carry it around’. Nothing to add to that really.

Depth of Field

While equivalent focal length is useful to compare the angle of view, two lenses of such equivalent focal length are not equivalent in other important respects. The most notable difference is the depth of field. Lenses of equivalent focal length have the same depth of field at apertures proportionate to the crop factor, i.e., the depth of field of a M4/3 25mm lens at f2.8 is the same as that of a 50mm FF lens at f5.6. In other words, M4/3 has a noticeably greater depth of field than a FF camera for the same angle of view and aperture.

The increased depth of field is the single biggest, and I think the only practically significant, difference between modern M4/3 and FF cameras; whether the M4/3 is deemed to be a good fit will depend on how much value shallow depth of field adds to one’s images.

There are photographic disciplines where a shallow depth of field is definitely a must, but personally I don’t think outdoor photography is necessarily one of them. First of all, it’s not so much of an issue with closeups, for at short distances even the M.Zuiko f1.8 fisheye will produce a decent bokeh (plus closeups can, more often than not, be done using a longer focal length). At long focal lengths this is not an issue at all, e.g., the depth of field on the Leica 100-400mm zoom is perfect for wildlife photography.

So in practical terms the depth of field only makes a noticeable difference at mid foreground distances with normal to wide angle lenses. To be specific, with an M4/3 12mm lens (24mm eq.) at f2.8 and the far focus limit stretching to infinity, the near focus limit will never be more than ~3.5m. A FF 24mm lens at f2.8 can push that near limit to around 6.5m. But how much difference does this really make to me? If nothing else, if I really want that extra 3m of unfocused foreground, and I can’t achieve the desired composition at a longer focal length, I can fix that in post-processing.
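
A rough sketch of where those numbers come from, using the standard hyperfocal distance approximation H ≈ f²/(Nc) and the commonly assumed circle-of-confusion values of 0.015 mm for M4/3 and 0.030 mm for full frame (with the lens focused at infinity, the near limit of acceptable sharpness is roughly H):

\[
H_{\text{M4/3}} \approx \frac{(12\ \text{mm})^2}{2.8 \times 0.015\ \text{mm}} \approx 3.4\ \text{m}, \qquad
H_{\text{FF}} \approx \frac{(24\ \text{mm})^2}{2.8 \times 0.030\ \text{mm}} \approx 6.9\ \text{m}
\]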

In Conclusion

As far as I am concerned the M4/3 does it all. It’s a great system for outdoor photography, whether it’s landscape or wildlife, and after eight years of using it, I can’t envisage ever wanting to ‘step up’ to a full-frame system.

It’s not that I don’t understand the benefits of a bigger medium, having shot thousands of pictures in years gone by on 35mm film; I do. But for the vast majority of my pictures M4/3 is the optimal system. And when it’s not? Well, that’s where medium format comes in, of course. Full frame? But a relic of disposable cameras and holiday snaps from a long gone era of film. 😉


A Few Notes on M4/3 Cameras

Just a few comments on the cameras and lenses I have owned, with a random selection of pictures for illustration. Probably worth saying, the pictures have had a varied degree of post processing applied in Lightroom, but no Photoshop involved (life is too short for that).

Lumix DMC-GF2

My first M4/3 camera. Introduced in 2010 as a somewhat dumbed down successor to the hugely popular GF1, the GF2 delivered 12.1MP resolution with a dynamic range of 10.3. By today’s standards not much, but in combination with a 14mm pancake lens it nevertheless proved to be an excellent setup for my outdoor activities. The RAW output made it amenable to a decent amount of post processing, compensating somewhat for the limited dynamic range, and with a little care it could produce pretty decent pictures.

Loch Voil (Lumix DMC-GF2, Lumix G 20mm/f1.7)

Larch Tree (Lumix DMC-GF2, Lumix G 14mm/f2.5)

Lumix DMC-GM5

The main limitation of the GF2 for me was that it did not have a viewfinder, only a fixed LCD screen. This made it hard to use in bright light, and also to compose images properly in general. Eventually I had enough ‘if only this was a bit to the left’ pictures to look for a replacement. The GM1 ticked all the boxes, but by the time I made up my mind (the GM1 was quite dear), its successor, the GM5, came out in 2014.

To me the GM5 was a true marvel of technology. It was much smaller and much lighter than the GF2; at 260g (with the 14mm pancake lens) this was a camera I could fit into a shoulder strap pouch and run with. Yet it addressed what I thought were the main deficiencies of the GF2: it has a decent viewfinder and even some external mechanical controls, e.g., for exposure settings. And the 16MP sensor had a much improved dynamic range of 11.7 – this was a compact-like camera packing some serious punch.

In spite of being four years old now, which in digital sensor technology is a lifetime, I still use the GM5, and indeed over the years it has produced some of the pictures that made me most happy. In many ways it remains the perfect outdoor camera for me. These days I usually refer to it as ‘the running camera’ on account of its size and weight, for it’s what I take when weight matters. Nevertheless, the dynamic range is enough for most outdoor photography scenarios, and with minimal post-processing the GM5 is capable of producing excellent quality prints of up to ~18" or so.

Sgurr an Fhidleir and Lochan Tuath (Lumix DMC-GM5, Lumix G 14/f2.5)

Bynack More (Lumix DMC-G5, Lumix G 14/2.5)

Lumix DMC-GX8

The Lumix GX series came as the spiritual successor of the GF1, a class of camera aimed at the serious amateur. The GX8, introduced in 2015, is arguably the pinnacle of that series (for reasons known only to Panasonic, the GX9 is a dumbed down version of the GX8, lacking some of the things that make the GX8 so great, not least the mechanical controls and weatherproofing).

The GX8 has well thought out external controls (including a dedicated EV compensation dial big enough to work in winter gloves), a decent resolution tilting viewfinder and a tilting screen, is weatherproof, and is all in all a pleasure to use. The in-body 5-axis stabilisation works extremely well, and the 20.3MP sensor has an excellent dynamic range of 12.6 – more often than not, the GX8 produces great looking images straight out of the camera.

The main limitation of the GX8 is the presence of an anti-aliasing filter on the sensor, which results in some loss of sharpness in fine detail. Nevertheless it produces excellent prints up to ~24".

Loch Leven (Lumix DMC-GX8, Olympus M.12-40mm / f2.8)

Bealach Bhearnais (Lumix DMC-GX8, Olympus M.12-40mm / f2.8; hand-held at ISO 800)

Lumix DC-G9

The Lumix G series is aimed at the professional user. The much improved 20.3MP sensor has a dynamic range of over 13EV, no AA filter, and produces very sharp images that hold their own even alongside today’s best FF cameras. Like the GX8, the G9 is weatherproof, but the body is somewhat bigger and heavier (mostly due to an enlarged hand grip).

Some of the nice features of the GX8 are missing (no dedicated EV compensation dial, the viewfinder doesn’t tilt), but the number of customisable buttons is greater, and the little on-camera LCD is a nice touch. The viewfinder resolution has gone up, its image very bright and clear. The 5-axis stabilisation has been further improved, there is a sensor shifting high resolution mode, delivering 80MP (cool, but awkward in real life), and 4k / 6k mode for shooting stills.

I haven’t had the G9 for long, but the step up from the GX8 is palpable; while the GX8 is a contender for a serious use camera, the G9 is that camera without a question.

Kolla, Iceland (Lumix DC-G9, Olympus M.12-40mm / f2.8)

Ísafjörður, Iceland (Lumix DC-G9, Olympus M.12-40mm / f2.8)

A Few Notes on Lenses

Over the years, I have accumulated a collection of M4/3 lenses, but the ones I most often use are the following.

Lumix H-H014 14mm Pancake

When weight really matters, this is the sole lens I’ll pack, together with the GM5. At f2.5 it is reasonably fast, light, and very low profile.

Olympus M.Zuiko Digital ED Pro 12-40mm

This is my default lens, combined with either the GX8 or G9. It is weatherproof, f2.8, constant aperture while zooming (great for videos), and, as the whole M.Zuiko series, of excellent optical quality.

Shenavall (DMC-GX8, Olympus M.12-40mm / f2.8)

Beinn a'Chlaidheimh (DMC-GX8, Olympus M.12-40mm / f2.8)

Olympus M.Zuiko Digital ED Pro 8mm Fisheye

A true, 180 degree, fisheye. At f1.8, this is a great lens for both night time photography and daytime landscapes.

Lochain Coire an Lochain, Breariach (DMC-GX8, Olympus M.8mm / f1.8; exposure bracketed)

Bealach Beinn an Eoin (Lumix DC-G9, Olympus M.8mm / f1.8)

Leica DG Vario Elmar 100-400mm Zoom

The aforementioned ‘bird’ lens, brilliant for wildlife photography. F4-6.3, in-lens image stabilisation, excellent optical quality. At 1kg, not the lightest but, more often than not, worth it.

Tawny Owl (DMC-GX8, Leica DG 100-400 / f4.0-6.3; shot at dusk from a monopod at ISO 6400, with noise cleaned up in post processing)

Greenshank (Lumix DC-G9, Leica DG 100-400 / f4.0-6.3; hand-held at ISO 400)

by tf at August 18, 2018 12:11 PM

August 14, 2018

Tomas Frydrych

More on the Triad & Decagon Stoves

I have mentioned the Vargo Triad and Decagon in an earlier post on Cooking with Alcohol. I have now had a chance to use both stoves for real, and the experience was, unfortunately, not so good.

The Triad

The Vargo Triad is a beautifully made pressurised titanium stove with a capacity of ~40ml. I bought it specifically with a view to a two week trip to Iceland this summer. When it arrived, it was love at first sight; it is a real thing of beauty. And as you can see from the earlier post, it performed well both in the initial tests in our kitchen and during experimental snow melting on a very nice, calm day in the Ochils -- I felt confident the Triad was going to be perfect for us.

However, before going to Iceland I happened to take it on an overnight outing in the hills. It was during the heatwave, and so, unusually for me, I decided to just bivvy rather than take a tent, and, somewhat unexpectedly, it turned out to be a fairly windy evening, hitting around 40mph during the night on the high ground.

Not a big deal. I chose relatively sheltered locations to cook in, and also used some large stones as an external windbreak, with the stove itself inside a home made metal windshield. This should have been fine (and would have been with my Speedster stove). But both in the evening and in the morning a gust of wind lasting a few seconds caused the stove to flare up quite uncontrollably, with flames of about 30cm. In both instances the stove would not return to a normal burn until the fuel burned out, and the second time this happened it was so bad that I decided to have my porridge cold rather than risk setting the hill on fire.

While I suspect this behaviour might have, to some extent at least, been exacerbated by the high and tight-fitting windshield I was using, it is I think primarily caused by the low profile of the stove (more on this below).

Based on that experience I would never use this stove unless it was on a very large non-flammable surface (at least a meter in diameter), and certainly not inside a tent. We ended up taking a Trangia to Iceland instead (and it really grew on me during those two weeks, but that's for another time).

The Decagon

The Vargo Decagon is another beautifully made, but unfortunately very poorly designed stove. It seems that the overriding objective was to create a stove that would be indestructible, but in achieving this, Vargo created one that fails miserably at its primary function. The design suffers from three flaws, each one bad enough on its own.

Priming

The Decagon is impossible to prime. The packaging suggests that the stove will prime in 1-2 minutes, but that is just wishful thinking. Even in the controlled environment of my kitchen, it takes on average 4-5 minutes to bloom. Vargo suggest that a faster alternative priming method is to splash some alcohol around the base -- I find it hard to believe that they really suggest this, for while it indeed speeds things up, it results in a large uncontrollable flame, so it is only possible if your stove sits inside a large, completely non-flammable area.

This is a pity, because this flaw can be easily remedied by adding a priming ring; the picture above shows the stove with a priming ring made of glass fibre cable insulation, with which it can be primed in under 1 min.

Excessive Burner Cooling by the Pot

To make the Decagon indestructible, Vargo did not include any pot supports. Instead the pot sits on three small bumps on the top of the burner. The result is that the pot conducts heat away directly from the burner itself, with a clear impact on the gasification of the fuel; the amount of heat generated by the burner visibly grows as the pot warms up (just about the very opposite of ideal burner behaviour, I think).

One of the consequences of this design is that the pot cannot be placed on until the burner has bloomed (as is noted in the instructions). But in fact the heat loss is so bad that if you put a pot of ice cold water on even once the stove is in full bloom, it will de-bloom in a matter of seconds, and go completely out shortly after. (So no, this is not a stove that could be used to melt snow, as I hoped.)

This flaw can, again, be worked around by using external pot supports that lift the pot slightly above the burner, but this can lead to the burner overheating and flaring up, so I would not recommend it. Which brings me to the biggest problem with the Decagon.

Safe Fill Level

The thing that really appealed to me about this stove is the large capacity of just under 60ml. Unfortunately, the stove cannot be safely filled to this level. If you do, liquid fuel will spew out of the jets during priming. This then creates a very strong positive feedback loop, causing the stove to get into an uncontrollable rocket-like flare; throw a tight windshield into the mix, and it will generate enough heat to not only pulverise silicone matting but also, literally, melt aluminium (and unlike the Triad, the Decagon doesn't need a windshield to get there).

When in this state the stove cannot be blown out, you either have to cover it with a large pot to deprive it of oxygen, or let it burn out. My experiments suggest that the Decagon safe fill is only about 40ml (but this makes it impossible to prime without a priming ring!).

In spite of all this, we took the Decagon with us to Iceland as a secondary stove, and used it a couple of times with a frying pan. Assuming it is not filled with more than the 40ml, and precautions are taken so that it can be safely left to burn out just in case, it is OK for that; I might use it again that way now that I have it, but, as I expect you have guessed by now, I'd not buy it again.

Lessons Learnt

I think the basic problem with both of these stoves is that the jets are too close to the fluid level. It's worth noting that the Trangia instructions say not to fill their burner more than 3/4 of the depth of the bath, which means there is at least a 1cm gap between the fluid and the jets. In contrast, the instructions for the Vargo stoves say they must be filled to capacity to prime (and, indeed, will not prime otherwise), but that puts the jets very close to the fuel surface, and makes them susceptible to spewing out liquid fuel when the alcohol boils. Whichever way you look at it, and whatever the burner design, this can never be considered a good thing.

So my quest for a winter alcohol stove goes on -- I think I am going to give the Trangia a shot in spite of its bulk. But in any case, I am definitely going to avoid any pressurised alcohol stove where there is not a sufficient gap between the jets and the fluid level.

by tf at August 14, 2018 12:44 PM

July 28, 2018

Tomas Frydrych

The Wind

I saw the front in a distance. A solid wall of water, just obscuring where Kings House once stood (a view improved, I dare say). It was upon me before I had the tent up, a scramble to get inside, wait it out by candle light.

Half an hour or so and it’s over. A tiny light on Beinn Bheag, a kindred spirit I imagine. But there is no sign of the blood moon (unlikely as it was, the reason I rushed here this evening after work). Though the cloud has cleared somewhat, so there is still a tiny glimmer of hope, some time left.

Alas, it’s not to be. I notice a white cloud oozing from Lairig Gartain, then a rather menacing black one emerges from behind the Buchaille. But the real weather is behind my back. The wind has done a half circle turn and another wall of rain just passed Beinn Fhada and is advancing my way, fast. The torrents hit just as I am unzipping the tent. More time for idle thoughts by the candle light; much to be recommended.

The first lightning takes me by surprise. Sure, it was forecast, but out there just now it did not feel like a stormy sort of a day. One elephant, two elephants, three elephants, four elephants. One elephant, two elephants, ... eight elephants. Good. One elephant, ... three elephants. Less good; glad I didn’t camp any higher up, sparing a thought for the tiny light on Beinn Bheag.

It lasts another half hour. Time to put the boots on again, just in case the moon has appeared; there is still a little time left. But, naturally, there is no moon, just some sheet lightning far to the east. For some reason the road is very busy just now, so I decide to take some cliche long exposure pictures, but the rain returns before I am ready, so back into the tent. More lightning. This is it, out of time, turn in for the night.

When I wake up in the morning, the first thing I see are two unfamiliar blotches. As my eyes manage to focus, I make out two birch seeds stuck to the outside of the inner tent. They must have come on the wind during the night, possibly a long way – I can’t recall any birches around here.

They make me smile. Symbols of new life, of change; of the very possibility of change. But they make me sad at the same time, for, unwittingly, by my being here, their progress came to a sudden halt. Another hard to escape symbolism.

I know, it’s just a couple of birch seeds. But symbolism matters. As every religion ever understood, symbolism makes it possible to participate in what we do not see, in what is not (yet) here. Symbols turn the abstract and theoretical, into personal and tangible; they turn thoughts into surrogate experiences.

Take the plastic straw. It’s been pointed out (unhelpfully) that if you, and I, and everyone else, give up on plastic straws, nothing much will change, for most of that plastic pollution in our seas comes from fishing nets. Perhaps, but that is to miss the point.

The moment I decide ‘no more plastic straws’ is the moment I, personally, start owning the bigger problem, the moment I accept my complicity. The symbolic value of this act is immense, for without such an ownership, both individual and collective, real change cannot happen.

Perhaps that is also the reason we need the lynx back in Scotland.

I will be honest, I have read the NGO materials, the economic forecasts, the description of the methodologies used to derive these. They leave me cold – they do not read like economic forecasts from people rushing to make money out of the lynx. They read like statements from people who believe it is the right thing to do, but the only way to achieve that is to convince the world at large there is money to be made. I, for one, am not sold on this Lynx-the-tourist-attraction, not least because should the economic case fail, the whole project is doomed, and doomed for a long time after.

But the argument that, ecologically, and I think also morally, it's the right, even necessary, thing to do, seems to me very strong. Lynx as a symbol of change, of accepting that our natural resources cannot be managed purely from the perspective of primitive market economics, that repairing damage done in the past rather than merely maintaining the status quo has to be part of a modern environmental ethos, that is, I think, very potent and could have ramifications beyond the few roe deer that will not need to be culled.

Perhaps. I am neither an ecologist, nor a sheep farmer, but, FWIW, I know a bit about religion and symbolism.

I forgot all about my two seeds until just now, drying the tent in the garden. They are no longer there, just two small stains on the fabric. And now I am not even sure they were there in the first place. Did my eyesight deceive me? It would not be the first time.

It doesn’t matter. Just now there are birch seeds everywhere I look. There are thousands of them in the air, in my hair, they fall behind my shirt, and land in my pockets, my shoes are full of them. And the wind? The wind is picking up again!

by tf at July 28, 2018 06:23 PM

April 30, 2018

Tomas Frydrych

Six Months with Cotton Analogy®

Six Months with Cotton Analogy®

When the row over the National Trust for Scotland trademarking the name ‘Glencoe’ erupted last summer, I had never heard of a company called Hilltrek. But for a while I had been on the lookout for some clothes for pottering about the woods with binoculars and a camera during the winter months, and had not seen anything that would be well suited to the (sodden) Scottish conditions. And I liked what I saw at the Hilltrek website.

Hilltrek are a tiny Scottish company offering a range of clothing made from Ventile®. If you, like me, have not heard of Ventile® before, it's a cotton fabric developed in the 1930s essentially for fire hoses. When subjected to water, its very dense weave swells so much that it prevents water penetration. The swelling is not instantaneous, so a single layer of the material is not enough to keep a wearer completely dry when subjected to a lot of water, but two layers, so called Double Ventile®, are.

Hilltrek make clothes in three fabric options: Single Ventile®, Double Ventile®, and Cotton Analogy®. The latter is a single layer of Ventile combined with the Nikwax Analogy® lining, also used by the Paramo® range of clothes. It was this that caught my attention, for the Nikwax Analogy® lining is well proven, and while I have never owned any Paramo® clothing (I am more of a Buffalo man myself), I know many who swear by it, and I have seen it perform excellently in some ‘real’ Scottish and Welsh weather. Unlike Paramo®, Cotton Analogy® offers the natural feel of cotton, and the lack of the irritating rustling of nylon — I was sold.

The Conival Trousers

While I was looking for something to wear about the woods, with the Conival Trousers I got more than I bargained for — without exaggeration, these are the best outdoor trousers I have ever owned. Over the last six months I have spent somewhere in the region of thirty five days wearing them, from sodden days in the woods, to numerous big full on days in the hills, including multi-day camping trips in the snow and temperatures dropping below -10C. In all of this they performed impeccably.

The Conivals have a no-nonsense cut, can be customised at the point of ordering, and if you have special requirements, all you need to do is to lift the phone (the great thing about dealing with small companies). There are two zipped pockets on the back, and two front hand pockets; cargo pockets can be ordered as an extra.

Unlike typical waterproof fabrics, the Analogy® lining is pleasant enough to wear next to skin, so these really are trousers rather than over-trousers, and they breathe very well. I tend to sweat fairly heavily, and so I normally avoid wearing waterproofs until it is really raining — these are the first waterproof trousers I have owned that don’t feel like being inside a banya and that I am happy to wear all the time.

The two layer construction is quite warm. I have found them good down to a few degrees C below zero on their own, and with a pair of thin merino long johns in temperatures down to -10C. On the upper end, I find them fine to about 12C, beyond that they are too warm for me (but then I don't usually wear waterproofs in those sort of temperatures anyway, and I am so impressed I am saving up for the Single Ventile® version Hilltrek make).

I have heard it said of Paramo® trousers that if you kneel on wet ground the water gets through. I have knelt in the Conivals in mud and snow on numerous occasions, pitching a tent or resting calves on long steep front pointing stints, and I have not found that to be the case; perhaps it is the benefit of the Ventile® itself being shower proof (or perhaps it was just an evil rumour about Paramo®).

The Ventile® fabric is quite heavy compared to ‘modern’ ‘technical’ kit, but I am really growing sick and tired of this current obsession with weight, which invariably translates into equipment that lasts a season or two. Indeed, the Conivals have shown themselves to be (I admit, surprisingly) hard wearing. I have done a fair bit of sliding about in them, sometimes on quite coarse icy ground, without noticeable surface wear. Some of the stitching where the front pockets merge into the side seam is starting to come undone, but that's easily fixed.

The main wear-related issue with the Conivals is the Ventile® dye, which does not seem to penetrate deep into the fibre, so where the fabric creases regularly it starts reverting to the natural colour of cotton. This happens so easily that, somewhat disconcertingly, the trousers started showing these whitish marks from the very first short walk in them, and it gets progressively worse, though it does appear purely cosmetic.

Six Months with Cotton Analogy®

The biggest drawback of Ventile® is that, according to the manufacturer's recommendation, it is supposed to be dry cleaned. For a jacket this might be OK, but for outdoor trousers this is not practical. A closer look at the Ventile® site shows that the fabric can be washed with soap. I have been washing mine at 30C using the Nikwax® Tech Wash, and can report no ill effects.

(It's worth noting that, as with all waterproof fabrics, the special care requirements have naught to do with the fabric per se, but with the DWR coating that is applied to it, which has largely worn off before the first wash. I have tried Nikwax® Cotton Proof per the manufacturer's recommendation; it does not produce the same sort of beading the original DWR did. It does seem to slow the water absorption a bit, but I am not entirely convinced it merits the expense.)

The Assynt Jacket

The Assynt jacket is billed as ‘ideal for field sports, nature watching and photography’. It has a corresponding cut with a waist level draw cord, two voluminous, low down, front pockets with stud closures, two chest level hand warming pockets, and a 5” high collar, with a stowaway hood.

In terms of size, based on the official size chart I am bang on for S, and indeed, have found the chest size to allow for adequate layering for winter use. But the sleeves are a different story. If anything, the nominal size suggests these should be too long for me, but in fact they are well on the short side (1-2" shorter than on any other jacket of a comparable size I own), which becomes very noticeable with more layers underneath.

The snug fitting collar is the jacket's best feature, keeping the dreich weather at bay. The stowing of the hood works better than is usual with such an arrangement, but unavoidably results in a hood of a low volume. This is the jacket's main limitation. I have used it on a couple of fairly full on mountain days to see what it would be like, and the hood is not up to the task (this is not the intended use, and there are other jackets in the Hilltrek range that come with big volume, helmet-compatible, hoods).

Another minor drawback is that the hand warming pockets don't have any closures, and, as they are not Ventile® lined, this makes them draughty in a moderately strong side-on wind. This feels like a bit of an oversight within the overall well thought out design.

All in all, I have found the jacket to be excellent within the parameters for which it was intended. I do wish the hood were bigger; I find I keep it out most of the time, simply because Scotland, and a bigger, non-stowable hood would make this a much more versatile garment.

None of the Hilltrek clothing is cheap, especially if you decide to do some customisation, but the prices are not incomparable to those of some big brand mass produced outdoor kit. On the other hand, I expect it to last longer. I own a very nice Gore-Tex jacket from a big brand name that cost a similar amount as the Assynt jacket. It's my ‘special occasions’ jacket, for from past experience I know that in intensive use it wouldn't last more than a season. I have no such quibbles with the Hilltrek clothing; there is a sturdy feel to it, and it is obvious that it was not only made in Scotland, but also for Scotland.

by tf at April 30, 2018 09:57 AM

April 16, 2018

Tomas Frydrych

Cooking with Alcohol

Cooking with Alcohol

In the last couple of years I have become a great fan of alcohol stoves. For three reasons. On short trips they are very weight-efficient. Alcohol is a much more environmentally friendly fuel than gas. And alcohol stoves are cheap to run!

As I have mentioned before, through my childhood and teenage years outdoor cooking involved an open fire. My first real stove was an MSR WhisperLite™, purchased in the Mountain Equipment Co-op in Vancouver in '96. I still have it, with the original seals and all, though I haven't used it for some years. The truth is that petrol stoves really come into their own on long remote trips, and I don't do those. And they take a bit of getting used to; the priming can easily get out of hand!

(On one particularly memorable occasion in Glen Brittle in the late '90s the WhisperLite™ got me invited to cook in the kitchen of a giant luxury mobile home by a kind German couple who thought my stove was broken when I misjudged the volume of the priming fuel, resulting in a flare worthy of Grangemouth. The trick, I learnt eventually, is to use a little cotton wool and meths, but by then I had also realised that this, excellent, stove was a poor match for my needs.)

And so, like everyone else, I switched to gas.

Gas stoves, without a question, win on the convenience front. There is no risk of spilling stinky fuel, no priming. But they have their drawbacks, not least that the fuel is expensive and environmentally unfriendly — the LPG gas brings with it the whole oil industry baggage, the cartridges are manufactured in the Far East then shipped around the globe, and, being non-refillable, they end up in landfill (or left in a bothy); these things increasingly bother me.

Gas stoves are also rather weight inefficient. I didn’t fully appreciate this until I started thinking of multi day running trips, and was forced to rationalise the weight of my kit. My first move was, of course, a lighter gas stove, the 25g BRS-3000T. It only took a couple of trips to realise this was a dangerous piece of crap (mine flares uncontrollably sideways at any attempt to reduce the flame; sometimes we really get what we pay for).

In any case, if the objective is to reduce weight, even the lightest of gas stoves doesn’t help much, for the fundamental problem lies with the canister: on the one hand, I have very little control over how much fuel I take, and on the other the canister is far too heavy. So if my requirement is, say, for 60g of gas, I have to take 110g, plus the 120g of the canister; if I need 120g, I have to take 220g of it, plus 180g of the canister, etc.

Just as petrol/kerosine stoves beat gas in the weight game for long trips, alcohol stoves do so for short trips. Alcohol has two big advantages: it is very easy to store and transport, with minimal weight overheads, and it is very easy to burn, making it possible to create simple, light stoves.

Of course, burning alcohol produces only about half the amount of energy per weight as gas. But for short trips this is more than offset by the weight of the canister: if you need 60g of gas you have to pack 230g of fuel + canister; for an alcohol stove the equivalent will come to ~140g. Broadly speaking the weight game works out in favour of alcohol, or at least level pegging, until you need enough fuel to take the big 460g gas canister. How long a trip that is will depend on your cooking style, but in my case that is 3+ solo nights when snow melting, and something like 10+ nights in the summer.
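
To spell out the arithmetic behind those figures (the gas and canister weights are the ones quoted above; the ~20g for a small plastic bottle to carry the alcohol is my own rough estimate):

60g of gas needed: 110g canister of fuel + ~120g of canister = ~230g carried
same energy from alcohol: ~120g of fuel (double the weight, at half the energy density) + ~20g bottle = ~140g carried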

And alcohol is cheap, and the environmental footprint is much smaller. There are, of course, downsides, most notably that cooking with alcohol takes longer; how much longer depends on the actual stove, so let's talk about the stoves.

Alcohol burners come in two basic types: pressurised and unpressurised. An unpressurised stove is really just a small bowl holding the fuel, burning the vapours as they rise from the surface. While this works perfectly fine, such an open bath stove is potentially quite dangerous because of the risk of spilling the burning fuel; this is easily remedied by filling the bowl with some kind of a fireproof porous material. The simplicity means unpressurised stoves are usually home, or cottage, made.

In the case of a pressurised stove, the fuel vapour is expelled under pressure from an enclosed fuel reservoir through a series of small holes, resulting in discrete jets of flame. Unlike with petrol/kerosene, this pressure is created simply by heating up the fuel in the reservoir, and is not very high. It does, however, mean that the stove has to have some way of priming. Most often this comes in the form of an open bath in the centre of the burner. The best known pressurised alcohol stove is undoubtedly the Trangia, but this type of burner can also be made fairly easily at home from a beer can, e.g., the famous Penny Stove — beer can stoves are neat and really fun to experiment with (but they are also quite large and fragile).

(Before going any further, it is worth saying that alcohol stoves always need a windshield; the flame is just too feeble to cope with even a slight breeze. The cheapest, and also most lightweight, option is to make one from a double layer of kitchen foil. If you look after it, it will last quite a while, but it is too light for use in real wind, though perfectly fine for in-tent use. Of course, alternatives, commercial or otherwise, exist.)

Back to stoves. So, which one is better, pressurised or not? The clued-up reader, who undoubtedly now expects a detailed discussion of the fuel efficiency of the different designs, is going to be most disappointed in me. As exciting as carefully measuring the fuel burnt by different models to find the Ultimate Stove is, when it comes to alcohol such comparisons are of very limited value.

The fuel efficiency of any stove really comes down to a single thing: is the vapourised fuel mixed with enough oxygen to allow complete combustion? In the case of all alcohol stoves the mixing happens above the burner, and so is given more by the size of the pot, its distance from the burner, and the airflow provided by the windshield, than the design of the burner itself. Consequently any comparison is only valid for the one specific testing configuration, and you will almost certainly be able to come up with a different setup to produce quite different results.

OK, but, which is better? They both have advantages and disadvantages. The great thing about unpressurised burners is that you can put in as little or as much fuel as you want, and if there is any left, you screw the lid on, and it will keep till the next time. Also, the variety with the absorbent material is the safest alcohol stove there is (and one shouldn't really underestimate the danger of spilling the burning fuel, as the flames are nigh invisible).

The main advantage of a pressurised stove is a higher rate of burn, i.e., it cooks faster. But it is quite difficult to make a really tiny one, because below a certain size the priming/gasification doesn't work very well. Also, the usual method of priming from an open bath on the top of the burner is super inefficient, and for there to be enough fuel in the bath, the stove generally needs to be filled near to capacity. For the smaller stoves, this will be around 30ml — if, like me, you only use 50ml per day and less than 15ml at a time, this is a nuisance, as there is always a significant, unavoidable loss to continued evaporation while the stove cools down before you can pour the excess out, and there is always spillage draining it.

If the burner doesn't sit directly on the ground, it is possible to prime from below, using a small vessel, e.g., a bottle cap. This is much more efficient and needs just a few drops of fuel. But it is quite tricky to get right and requires practice — if you use too much fuel, you get a flare up, not ideal in a tent!

An unpressurised stove is great for solo summer use. Mine is of the makeup case variety; if you look carefully, you can see it among my other cooking paraphernalia in the title picture (taken on an unexpectedly cold autumn morning in the Cairngorms; the -4C meant I had to boil an extra pot that morning to pour into the running shoes to soften them up).

The stove came from redspeedster on eBay (you could easily make your own, but it's not worth sourcing the materials for just one). It has a 30ml capacity, and with the nice pot supports he also makes it comes to 24g. In my setup, using a 1/2l pot, it will bring 400ml from 8C to a rolling boil in about 12min, using 11g of fuel — yes, it's slow, but then I rarely need a rolling boil, so my actual ‘boil’ time is ~8 min, and really, I have all the time in the world; after all, I am escaping the time-obsessed rat race.

But once you start looking at cooking for more than one person the unpressurised stove becomes impractical. I still want something small, i.e., not the family sized Trangia, but nevertheless something faster.

The Vargo Triad fits the bill. It's a nicely made little gadget, and has about double the rate of burn of my makeup stove, bringing 0.8l of water from 8C to a rolling boil in just under 13min, using 23g of fuel. This will do nicely for our next summer trip, I reckon. It's a pity the burner does not have a cap, but I have cut a circle from a silicone baking sheet to cover it, which reduces fuel evaporation after the stove is extinguished and is cooling down.

[Updated 14/8/18 -- I have run into significant issues in real use of this stove, see this post]

Cooking with Alcohol

My current quest is to find an alcohol stove I’d be happy with for winter use. During winter I tend to heat up about twice the amount of water than I do during the summer (~3l), while at the same time snow melting roughly doubles the energy requirements (I am sure I could cut this down by manning up, but TBH, the winter brings enough misery as it is). The Triad at near-full fill of 35g of fuel will just bring 1/2 litre of water from snow to boil in 20min — for winter solo use I think this is borderline, I’d prefer something that would do about 0.8l at a time and a bit faster.

The Vargo Decagon looks like a possible option. The 60ml capacity should be enough to melt 0.8l from snow, and it appears to have a considerably higher burn rate than the Triad. But by all accounts, the Decagon is very slow priming, and unlike the Triad the pot can't go on until the priming is finished (the pot sits directly on the top of the burner, so it conducts heat away from the burner); it also doesn't lend itself as well to bottom priming as the Triad, nor can it be so easily capped. Nevertheless, I am keen to give it a try, preferably while there is still some snow in the local hills.

[Updated 14/8/18 -- I have run into significant issues in real use of this stove, see this post]

by tf at April 16, 2018 12:46 PM

March 15, 2018

Emmanuele Bassi

pkg-config and paths

This is something of a frequently asked question, as it comes up every once in a while. The pkg-config documentation is fairly terse, and even pkgconf hasn’t improved on that.

The problem

Let’s assume you maintain a project that has a dependency using pkg-config.

Let’s also assume that the project you are depending on loads some files from a system path, and your project plans to install some files under that path.

The questions are:

  • how can the project you are depending on provide an appropriate way for you to discover where that path is
  • how can the project you maintain use that information

The answer to both questions is: by using variables in the pkg-config file. Sadly, there’s still some confusion as to how those variables work, so this is my attempt at clarifying the issue.

Defining variables in pkg-config files

The typical preamble stanza of a pkg-config file is something like this:

prefix=/some/prefix
libdir=${prefix}/lib
datadir=${prefix}/share
includedir=${prefix}/include

Each variable can reference other variables; for instance, in the example above, all the other directories are relative to the prefix variable.

Those variables can be extracted via pkg-config itself:

$ pkg-config --variable=includedir project-a
/some/prefix/include

As you can see, the --variable command line argument will automatically expand the ${prefix} token with the content of the prefix variable.

Of course, you can define any and all variables inside your own pkg-config file; for instance, this is the definition of the giomoduledir variable inside the gio-2.0 pkg-config file:

prefix=/usr
libdir=${prefix}/lib

…

giomoduledir=${libdir}/gio/modules

This way, the giomoduledir variable will be expanded to /usr/lib/gio/modules when asking for it.

If you are defining a path inside your project’s pkg-config file, always make sure you’re defining it relative to other variables, rather than hard-coding an absolute location!
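
For instance, a hypothetical project-b that ships loadable modules would want something along these lines in its pkg-config file (the variable name is purely illustrative):

# good: can be remapped by redefining libdir
projectbmoduledir=${libdir}/project-b/modules

# bad: hard-coded, can never be remapped
projectbmoduledir=/usr/lib/project-b/modules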

We’re going to see why this is important in the next section.

Using variables from pkg-config files

Now, this is where things get complicated.

As I said above, pkg-config will expand the variables using the definitions coming from the pkg-config file; so, in the example above, getting the giomoduledir will use the prefix provided by the gio-2.0 pkg-config file, which is the prefix into which GIO was installed. This is all well and good if you just want to know where GIO installed its own modules, in the same way you want to know where its headers are installed, or where the library is located.

What happens, though, if your own project needs to install GIO modules in a shared location? More importantly, what happens if you’re building your project in a separate prefix?

If you’re thinking: “I should install it into the same location as specified by the GIO pkg-config file”, think again. What happens if you are building against the system’s GIO library? The prefix into which it has been installed is only going to be accessible by the administrator user; or it could be on a read-only volume, managed by libostree, so sudo won’t save you.

Since you’re using a separate prefix, you really want to install the files provided by your project under the prefix used to configure your project. That does require knowing all the possible paths used by your dependencies, hard coding them into your own project, and ensuring that they never change.

This is clearly not great, and it places additional burdens on your role as a maintainer.

The correct solution is to tell pkg-config to expand variables using your own values:

$ pkg-config \
> --define-variable=prefix=/your/prefix \
> --variable=giomoduledir \
> gio-2.0
/your/prefix/lib/gio/modules

This lets you rely on the paths as defined by your dependencies, and does not attempt to install files in locations you don’t have access to.

Build systems

How does this work, in practice, when building your own software?

If you’re using Meson, you can use the get_pkgconfig_variable() method of the dependency object, making sure to replace variables:

gio_dep = dependency('gio-2.0')
giomoduledir = gio_dep.get_pkgconfig_variable(
  'giomoduledir',
  define_variable: [ 'libdir', get_option('libdir') ],
)

This is the equivalent of the --define-variable/--variable command line arguments.

If you are using Autotools, sadly, the PKG_CHECK_VAR m4 macro won’t be able to help you, because it does not allow you to expand variables. This means you’ll have to deal with it in the old fashioned way:

giomoduledir=`$PKG_CONFIG --define-variable=libdir=$libdir --variable=giomoduledir gio-2.0`

Which is annoying, and yet another reason why you should move off from Autotools and to Meson. 😃

Caveats

All of this, of course, works only if paths are expressed as locations relative to other variables. If that does not happen, you’re going to have a bad time. You’ll still get the variable as requested, but you won’t be able to make it relative to your prefix.

If you maintain a project with paths expressed as variables in your pkg-config file, check them now, and make them relative to existing variables, like prefix, libdir, or datadir.

If you’re using Meson to generate your pkg-config file, make sure that the paths are relative to other variables, and file bugs if they aren’t.
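
For reference, this is roughly what that looks like with Meson's pkgconfig module (a minimal sketch with made-up names; libproject_b is assumed to be a library target defined elsewhere in the build):

pkg = import('pkgconfig')
pkg.generate(
  libraries: libproject_b,
  name: 'project-b',
  description: 'A hypothetical project shipping loadable modules',
  version: '1.0',
  variables: [ 'projectbmoduledir=${libdir}/project-b/modules' ],
)

The generated file contains the usual prefix, libdir, and includedir definitions, so the custom variable stays relative to them and can be remapped with --define-variable as shown earlier.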

by ebassi at March 15, 2018 04:45 PM

March 06, 2018

Ross Burton

Rewriting Git Commit Messages

So this week I started submitting a seventy-odd commits long branch where every commit was machine generated (but hand reviewed) with the amazing commit message of "component: refresh patches". Whilst this was easy to automate, the message isn't acceptable to merge, and I was facing the prospect of copy/pasting the same commit message over and over during an interactive rebase. That did not sound like fun. I ended up writing a tiny tool to do this and thought I'd do my annual blog post about it, mainly so I can find it again when I need to do it again next year...

Wise readers will know that Git can rewrite all sorts of things in commits programmatically using git-filter-branch, and this has a --msg-filter argument which sounds like just what I need. But first a note: git-filter-branch can destroy your branches if you're not careful!

git filter-branch --msg-filter has a simple behaviour: give it a command to be executed by the shell, the old commit message is piped in via standard input, and whatever appears on standard output is the new commit message. Sounds simple but in a way it's too simple, as even the example in the documentation has a glaring problem.

Anyway, this should work. I have a commit message in a predictable format (component: refresh patches) and a text editor containing a longer message suitable for submission. I could write a bundle of shell/sed/awk to munge from one to the other, but I decided to simply glue a few pieces of Python together instead:

import sys, re

# First argument: file containing a regular expression to match the old message.
# Second argument: file containing the template for the new message.
input_re = re.compile(open(sys.argv[1]).read())
template = open(sys.argv[2]).read()

# The old commit message arrives on standard input.
original_message = sys.stdin.read()
match = input_re.match(original_message)
if match:
    # Matched: feed the named groups into the template and emit the new message.
    print(template.format(**match.groupdict()))
else:
    # No match: pass the original message through untouched.
    print(original_message)

Invoke this with two filenames: a regular expression to match on the input, and a template for the new commit message. If the regular expression matches then any named groups are extracted and passed to the template which is output using the new-style format() operation. If it doesn't match then the input is simply output to preserve commit messages.

This is my input regular expression:

^(?P<recipe>.+): refresh patches

And this is my output template:

{recipe}: refresh patches

The patch tool will apply patches by default with "fuzz", which is where if the
hunk context isn't present but what is there is close enough, it will force the
patch in.

Whilst this is useful when there's just whitespace changes, when applied to
source it is possible for a patch applied with fuzz to produce broken code which
still compiles (see #10450).  This is obviously bad.

We'd like to eventually have do_patch() rejecting any fuzz on these grounds. For
that to be realistic the existing patches with fuzz need to be rebased and
reviewed.

Signed-off-by: Ross Burton <ross.burton@intel.com>
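
Before letting filter-branch loose on the whole branch, it is worth sanity-checking the rewrite on a single message by hand; something like the following, assuming the script above is saved as rewriter.py next to the input and output files (the component name is just an example):

$ echo "zlib: refresh patches" | python3 rewriter.py input output

If the regular expression matches, the full replacement message is printed; anything that doesn't match comes back unchanged.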

A quick run through filter-branch and I'm ready to send:

git filter-branch --msg-filter 'rewriter.py input output' origin/master...HEAD

by Ross Burton at March 06, 2018 05:00 PM

March 02, 2018

Emmanuele Bassi

Recipes hackfest

The Recipes application started as a celebration of GNOME’s community and history, and it’s grown to be a great showcase for what GNOME is about:

  • design guidelines and attention to detail
  • a software development platform for modern applications
  • new technologies, strongly integrated with the OS
  • people-centered development

Additionally, Recipes has become a place in which to iterate on design and technology for the rest of the GNOME applications.

Nevertheless, while design patterns, toolkit features, Flatpak, and portals are part of the development experience, without the content provided by the people using Recipes there would not be an application to begin with.

If we look at the work Endless has been doing on its own framework for content-driven applications, there’s a natural fit — which is why I was really happy to attend the Recipes hackfest in Yogyakarta, this week.

Fried Jawanese noodles make a healthy breakfast

In the Endless framework we take structured data — like a web page, or a PDF document, or a mix of video and text — and we construct “shards”, which embed the content, its metadata, and a Xapian database that can be used for querying the data. We take the shards and distribute them through Flatpak as a runtime extension for our applications, which means we can take advantage of Flatpak for shipping updates efficiently.

During the hackfest we talked about how to take advantage of the data model Endless applications use, as well as its distribution model; instead of taking tarballs with the recipe text, the images, and the metadata attached to each, we can create shards that can be mapped to a custom data model. Additionally, we can generate those shards locally when exporting the recipes created by new chefs, and easily re-integrate them with the shared recipe shards — with the possibility, in the future, of a whole web application that lets you submit new recipes, and lets the maintainers review them without necessarily going through Matthias’s email. 😉

The data model discussion segued into how to display that data. The Endless framework has the concept of cards, which are context-aware data views; depending on context, they can have more or less details exposed to the user — and all those details are populated from the data model itself. Recipes has custom widgets that do a very similar job, so we talked about how to create a shared layer that can be reused both by Endless applications and by GNOME applications.

Sadly, I don’t remember the name of this soup, only that it had chicken hearts in it, and that Cosimo loved it

At the end of the hackfest we were able to have a proof of concept of Recipes loading the data from a custom shard, and using the Endless framework to display it; translating that into shareable code and libraries that can be used by other projects is the next step of the roadmap.

All of this, of course, will benefit more than just the Recipes application. For instance, we could have a Dictionary application that worked offline, and used Wiktionary as a source, and allowed better queries than just substring matching; we could have applications like Photos and Documents reuse the same UI elements as Recipes for their collection views; Software and Recipes already share a similar “landing page” design (and widgets), which means that Software could also use the “card” UI elements.

There’s lots for everyone to do, but exciting times are ahead!

And after we’re done we can relax by the pool


I’d be remiss if I didn’t thank our hosts at the Amikom university.

Yogyakarta is a great city; I’ve never been in Indonesia before, and I’ve greatly enjoyed my time here. There’s lots to see, and I strongly recommend visiting. I’ve loved the food, and the people’s warmth.

I’d like to thank my employer, Endless, for letting me take some time to attend the hackfest; and the GNOME Foundation, for sponsoring my travel.

The travelling Wilber


Sponsored by the GNOME Foundation

by ebassi at March 02, 2018 12:50 AM

February 25, 2018

Tomas Frydrych

Coille Coire Chuilc

Coille Coire Chuilc

It’s been a long time since Linda and I climbed Beinn Dubhchraig. Just another couple of Munros bagged. Not a very memorable day of drizzle and nay views, leaving a lingering impression of a long trot through a bog punctuated by spindly pine trees, and no urge to return. One that persisted for a couple of decades. But today couldn’t be more different: the sky is blue, the air is crisp, the ground is frozen. And those spindly trees? They are no more.

Instead I find myself at the edge of a delightful Caledonian pine forest inviting me to step in. And so I do, walking along the east bank of Allt Gleann Auchreoch to the dilapidated bridge higher up the glen, then wandering about the woodland south of Allt Coire Dubhchraig, before following it up the hill. There are some magnificent pine specimens here, framing the views over to Ben Challuim and Beinn Dorrain. And higher up the pines are replaced by young birches that are rapidly continuing to self-seed, the purple hue of their twigs striking against the snow-covered ground.

Beinn Dubhchraig is in magnificent winter condition; there is much more snow than I expected, all perfect firm neve. I enjoy the views: Beinn Dorrain, Ben Challuim, the Crianlarich hills, Ben Oss, Ben Lui. As I nip up the rather windy Ben Oss, Ben Lui looks particularly majestic — I imagine it will be very busy on a day like this.

Coille Coire Chuilc

On the way down I sit under a large pine for a bite to eat, enjoying the afternoon sunshine. A perfect day. It is rare for my days out to bring together the two places where I feel most at home, the hills and the woods. I usually have to choose the one over the other. It needn’t be this way, nor should it. Here in the midst of Coille Coire Chuilc I am reminded that, given will, a real change is possible in less than a lifetime. And just now I can smell it coming on the breeze.

Coille Coire Chuilc

by tf at February 25, 2018 11:08 AM

February 15, 2018

Tomas Frydrych

Pine Seeds

Pine Seeds

Over the twenty something years since the National Trust for Scotland took over the Mar Lodge Estate, the upper Glen Lui (or, Gleann Laoigh Bheag, as it is properly called), has become a real gem of a place. But today is not exactly a gem of a day. There might be fluffy fresh snow on the ground, but it's breezy, and visibility is limited indeed. Some might think it outright miserable!

Or, a natural black and white scene, you might say. In any case, the sort of a day nobody goes out for The Views. I am on my way down to the Bob Scott bothy for a lunch before heading back to civilisation. An end to three days in the hills. Carefully planned in rough outlines, then (even more) carefully improvised, to match the reality of the winter Cairngorms.

A brief promise of sunshine blown away somewhere below the summit of Derry Cairngorm on Sunday morning, leaving just the wind and thick cloud. The map came out there and then, and pretty much stayed out since. Careful navigation over the summit and onto the 1053 point bealach, then down to Loch Etchachan, in hope the cliffs surrounding it will provide some shelter from the strong westerly for the night.

Down at the loch it is indeed much calmer, though you wouldn't know there is a loch down here under all that snow. Care is needed not to pitch inside a possible avalanche path, not just in view of what the conditions are like now, but of what they could be in the morning. And so I dig myself a nice rectangular platform, about a foot or so deep, nearly on the loch shore. Not as sheltered as it might have been, but safe.

I am done just as the light starts fading. A coffee. While the snow is melting, a couple of messages exchanged with Linda using my InReach, then dinner. One of Ian Rankin's (audio)books for company by candlelight, followed by undisturbed sleep.

Monday morning starts with porridge, then digging myself out of the tent, glad to have kept the shovel inside. I am surprised by the amount of snow drift, my neat rectangular platform all but gone, and the kit I left in its corner buried under a good two feet of snow. A scarily compacted, fresh, foot-thick windslab capping it all.

Pine Seeds

Beinn Mheadhoin teases me with some lovely pink tones, but barely long enough to get the camera out -- time to get moving.

The (careful) plan was to camp here for two nights, but it is obvious that if I leave the tent here I will have a hard time finding it later, and, more importantly, this place is too exposed for the 70mph southerly forecast for tonight. And so I pack my stuff, all 24kg of it (minus some food, plus some snow), put the snowshoes on, and set off into the clag for Ben Macdui, selecting my route carefully, mindful of the windslab I saw down there.

The wind picks up in no time, and while this is familiar ground, I need a map and compass to keep me on track. I am comfortable with being here; the conditions are challenging, but, I dare say, within my comfort zone. And yet on a day like this, the plateau is one scary place (as it should be). Navigating here is hard, errors easy to make, opportunities to spot and correct them few and far between. Escape routes are limited even in the summer, for cliffs abound in all directions, and in the present winter conditions some of them, if not most, are unsafe.

The spindrift is heading along the surface directly against me, flowing around my boots like a fast river. It is making me feel dizzy, even seasick, yet my eyes are irresistibly drawn to it. A new experience. Keep looking forward, above it, rather than at it; that does the trick.

The ruin, then after a while the summit. Too windy to hang around. I take a bearing for the 'corner' of the Sron Riach ridge, pace and follow it religiously, using Allt Clach nan Taillear as a tick off point. A couple of jets flying repeatedly overhead, or perhaps just the wind swirling around in my hood; I can't tell. I reach the rocky corner bang on, pleased with myself.

As I am taking my next bearing from the map, there is a brief rupture in the cloud offering a glimpse of the cornices lining the ridge -- they are some of the biggest cornices I have ever seen, meters of overhanging snow. Back into the clag. I back off a good 30m from the edge before daring to follow my bearing, and even then nervously (the lack of photos is my witness). Visibility is 5-10m; I make a point of always keeping some visible rocks peeking out of the snow to my left.

I finally emerge from the whiteness at around the 1100m contour line with a sigh of relief and sight of the Devil's Point, the first real 'view' of the day. Even better, I can also see that my preferred option for today, descending down the line of Caochan na Cothaiche is viable, for its eastern side is fully scoured, and poses no avalanche risk. In contrast, lower down the western side of the narrow gully has a huge build up of snow on it, and cornices, with some fresh debris lower down.

Pine Seeds

The floor of the glen is not entirely calm, but it will do. I dig another platform, pitch the tent. It's early, but this spot is as good as it will get. From here there is a direct line of sight under the clouds down Glen Lui onto the Glen Shee Munros -- it's sunny over there, and I feast my eyes on the vista, nursing a cup of coffee. Dinner (not much gas left), message to Linda, then time for some John Le Carre.

The wind arrives at 9.30pm, as the forecast promised. The usual moment of anxiety -- will the pegs stay in? Should I go out and check? I don't. I dug right down to the frozen turf and double pegged all the lines, they are going nowhere, or rather, I can't do any better anyway (I double peg as a matter of course, 20cm Y paracord extensions permanently on all the guylines). I briefly toy with sticking the anemometer out of the tent, but can't be bothered looking for it, I guess somewhere around the 40+mph mark. It's over as suddenly as it started not long past midnight (again just as forecast), and I sleep soundly after that.

Tuesday morning. I give the tent a good shake. The porch is covered by an inch of the finest powder I have ever seen, and I curse myself for not tidying up more last night, rummaging through it looking for my spork. At least I covered the tops of my boots with bags. I drain the gas to the very last drop (thank God for upside down canister stoves!); there is, just, enough for my porridge and a litre of warm water. Outside it's windy and snowing.

As I pack, the snow is depositing on the tent faster than I am sweeping it away, and after a couple of minutes I give up and just roll it in. Snowshoes on and into the blizzard. Goggles would have been useful, but they are too wet inside to be any use, and no amount of wiping is helping. At least there is no navigating to be done, just follow the stream down the narrow glen.

And so here I am on the nice path in Gleann Laoigh Bheag. It stopped snowing a while ago, and there is but a breeze, four or five inches of fluffy snow covering everything. The pines are looking very Christmassy, in a It's a Wonderful Life sort of B&W way. Pristine scenery, no footprints, fresh or old.

My eye catches the sight of a small brown speck on the undisturbed snow, then another. I bend a bit to take a closer look. A pine seed. They are all around me, they have come from heaven down to earth gliding on their little wings. In the midst of this bleak, inhospitable day, life is being, not born, but hewn out by the gale from the cones; life against the odds. A promise of a brighter, greener, future, one hearkening back to the days before the axe and saw laid this landscape barren.

The bothy is warm. A bit of food, a bit of banter. Then I step outside ... into a different world. The cloud has broken, the sky is blue, the sunlit landscape postcard perfect -- The Views. But the views, they come and go. The pine seeds, I expect some of them I will see again in the years to come. From now on, every time I see a seedling in Gleann Laoigh Bheag, I'll be wondering, is it you?

But 'nough idle musings. The most pressing existential question of today is this: will the Glen Shee snow gates be open? For I am back in the 'real' world.

Pine Seeds

by tf at February 15, 2018 09:48 AM

February 05, 2018

Tomas Frydrych

A Lesson from the Wee Hills

A Lesson from the Wee Hills

Days like these don’t come around that often. After a couple of brief snow flurries the sun banished the cloud, and now the early morning light glitters on the pristine slopes of Beinn Challuim. It is nearly exactly twenty years since I’ve been up here last, in very different conditions; a memorable day, though not for the best of reasons.

When I first arrived in Scotland I was by no means new to the outdoors or the hills. I am fortunate enough to have spent much of my free time in the open since early childhood, exploring the woods, hiking, wild camping, ski touring. From my mid teens treks in the Tatras, and farther afield, became a regular feature of the summer holidays — half a dozen friends of similar age, minimal equipment, high level camps mostly just under the stars.

Over those years there had been a few #microepics, including a couple of close shaves, and by the time I landed in Scotland as a postgraduate student in the mid ‘90s I had gained a healthy respect for the mountains, summer and winter alike. But compared to even the smaller continental ranges Scotland’s ‘wee hills’ — their summits barely reaching the altitudes of Alpine valleys — seemed innocuous and benign.

It didn't take long to get disabused of that idea. Looking back, some of the incidents we now laugh at. Like when, having ignored Heather the Weather's warning of 70mph winds, I left Linda a few hundred yards short of the summit of Meall Ghaordaidh, weighed down by a large stone, while I crawled on all fours to touch the summit cairn (all I can say is, we were young and weekends were precious). But even after all that time, the Beinn Challuim day is still not that funny.

As a research student I discovered that clearing my head with a midweek day in the hills much improved my overall productivity, and so Wednesday outings became a regular part of my studies. Even nowadays the hills tend to be fairly quiet midweek, but back then I never ever met anyone. Indeed, tales were circulating of injured midweek hill walkers surviving a couple of days on biscuits until someone turned up at the weekend.

This might seem far fetched, but in those days mobile phones were almost a novelty, cellular signal virtually nonexistent outside of the Central Belt, and consumer GPS units still a few years away — those who got lost in the hills were on their own until someone reported them missing; self reliance was, necessarily, a part of essential hillcraft.

As I expect you have guessed, this particular Wednesday in late February I was heading up Beinn Challuim. I have never been much of a fan of there-and-back outings, and so decided to leave the car at the Auchtertyre farm, and do a horseshoe starting with Beinn Chaorach.

It was not a very nice day, with an unpleasant westerly, sleeting heavily. Having experienced similar conditions a few weeks earlier in the Drummochter hills, I invested into a pair of goggles (not a negligible expense), which on this day didn’t come off my face (sadly, the sleet was so saturated that the glue between the double lens failed in the course of the day).

Visibility gradually deteriorated and by the time I reached Beinn Challuim, I was in a complete whiteout. I wasn't put out by any of it. I had an excellent Berghaus GoreTex jacket that kept me dry (which I was just about able to afford thanks to James Leckie of Falkirk) and carried two big flasks of hot drink and plenty of food — really, I was in my element, relishing the adversity. But by this point I was also beginning to feel quite tired; it was turning out to be a longer day than I planned.

Fortunately all that was left was the descent back to the car. This should have been quite straightforward, and such was my confidence in my ability to navigate that I didn't feel the need to get the compass out. I was sure the map alone was going to be enough to follow the ‘obvious’ broad ridge. And indeed, the ridge was easy to follow, but somehow progress was slow.

Too slow. I emerged from the cloud eventually but alas, things were not as they should have been. I should have been near the Auchtertyre farm, or at worst near Kirkton, and certainly near a railway track, but I saw no houses and no track. I ended up somewhere in the Lochan a’Craoi area above Inverhaggernie — to this day I am unsure of the exact location — and I was in for a long walk back, with not much of the day left.

I was spared some of it by a couple of ghillies on a quad bike, two hinds on a small trailer behind it. They offered me a lift to Inverhaggernie, ‘if you don't mind sitting on the deer’. I didn't mind in the slightest.

That day was the end of the ‘wee hills’ mentality, for I knew I got lucky. The careless navigation mistake per se was not super serious, at least I ended up on the right side of the hill, but I understood that I could have easily made a similar one earlier in the day and ended up further north in the Forest of Mamlorn — that would have been a whole different proposition. I started taking the weather a lot more seriously from there on, and I also updated my personal Freeserve page about the Scottish hills with a dire warning to the foreign visitor about the deceptiveness of their size, and the nastiness of their inclement climate.

Today Beinn Challuim summit offers views for miles in any direction, and there is no wind, not even a breeze. There are three of us lingering up here, none feeling like leaving. Eventually I descend the W-NW spur toward Bealach Ghlas-Leathaid — that wasn’t my original plan, but twenty years on I still don’t like there-and-back days. It proves to be a good choice. The lower part of Gleann a’Chlachain is a kaleidoscope of colours, their tones striking in the low afternoon light. I stroll leisurely back to Kirkton basking in the sun. There is no hurry, and like I said, days like these don’t come around that often.

by tf at February 05, 2018 09:31 PM

February 03, 2018

Tomas Frydrych

Mountain Star

Mountain Star

It was love at first sight. Those smooth curves, precision crafted from a solid block of stainless steel, the needle-sharp point, the smooth black, fully rubberised, shaft on which big red letters proudly declared:

Stubai — Made in Austria

She (for to my teenage mind that ice axe was definitely she) hung proudly in the window of the small climbing equipment shop. It wasn’t that I needed an ice axe, I wasn’t a mountaineer. But there was something deeply symbolic about it that made me pine for it.

It wasn’t just the smooth curve that drew the eye, there wasn’t that much else to look at in the window of the state-owned shop. A few steel carabiners, a Czech-made rope — in Czechoslovakia of my teenage years climbing gear wasn’t something you bought, it was something you made.

And so my first clumsy attempts to learn how to self-arrest were with a slater's hammer, belonging to a friend and re-purposed by a blacksmith, an acquaintance of an acquaintance. Another friend, a machinist, who unlike me was a proper climber, made a pair of technical axes at his work, based on some pictures from a foreign magazine he managed to get hold of. He showed them to me with great pride, excited about the inverse picks.

(I learnt later that the pair failed on their first trip to the Tatras; my friend over-tempered the steel and the tips snapped off; such is the nature of progress. But not in my wildest dreams would I have imagined I would one day reminisce on this in Scotland, where the technical ice axe was born, and I expect Hamish McInnes suffered some similar teething problems as my friend did when trying to follow in his footsteps.)

But back to that Stubai. There was yet another reason why that axe stood out. The price tag of 700 Kčs — three weeks of moderately decent wages — put it well out of my reach. And not just mine; it hung there for years, an object of unrequited lust, while in real life tools were improvised and borrowed.

When a decade later I walked into The New Heights in Falkirk to get the first ice axe I could call my own and saw a Stubai Mountain Star hanging there, that was it, there was no other choice I could make. Not the same axe, not as refined, more mass produced. Not the cheapest option either. But the pedigree unmistakable, I wasn’t buying a tool, I was buying into a dream.

It’s a fine winter day today, sandwiched between two ugly fronts, and so I am making most of it. The views from An Caisteal are stunning, with just enough cloud to create an ambience on the ridges. As I lean on the ice axe, all these memories flood in.

Over the years there have been others. Some leaner, some meaner, some definitely prettier. Some long gone, some still around. The Mountain Star among the latter, after twenty something years a trusted companion. It does everything I have ever wanted from a walking axe, and does so perfectly. The chromoly requires no care except for the occasional sharpening, the length is perfect to offer support on easy ground, and I like the reassuring weight, the feeling a tool was made for life rather than a season.

I eat my lunch on the summit and contemplate how to return. The Stob Glas ridge is irresistible. It’s not very cold and as I approach Bealach na Ban Leacainn, my crampons are starting to ball up; time to take them off, once I reach a safe place. In the meantime, a practised, near effortless, flick of the wrist to tap them with my axe, and they are clear again. None of my other axes can do that, they are either too light, or too short, or both — the Mountain Star is going to stay for a while yet, I think. And tonight I shall raise a glass to whoever designed it all those years back. Prost!

by tf at February 03, 2018 11:55 AM

January 21, 2018

Tomas Frydrych

Discovering Snowshoes

Discovering Snowshoes

I have thought about getting a pair of snowshoes a few times over the years, but never did. The copious quantities of snow at the tail end of last year finally gave me the needed nudge. Of course, as invariably happens, all that early snow summarily thawed away on the very day the snowshoes arrived, and I haven't had a chance to play with them until this week.

Having never snowshoed before, I thought an easy potter around the Ochils might provide just the right sort of an introduction, and it did (in fact I was having so much fun, I pottered around for over six hours till the last light). Perhaps it is the fact that I Telemark, and so am used to things dangling underfoot, but I found walking in snowshoes to be an entirely natural, zero learning curve, sort of a thing.

I was pleasantly surprised by the huge reduction in effort snowshoes provide. It does not come so much from not sinking so deep, as I imagined it would have, but rather from the way in which the snowshoes glide. Even when sinking half a meter or so, you don't need to lift your foot out, rather, as the foot starts moving forward, the snowshoe floats up to the surface. I'd go as far as to say that in deep snow this requires lot less effort than skinning would have, particularly with today's wide skis.

But where the snowshoes really come into their own is coming down hill. In nice deep soft snow I am able to move at a pace that is considerably faster than I would be walking down in the summer, indeed, not far off my running pace (though, admittedly, as a runner I am a slow descender). By the same token, I now understand why snowshoers feature so prominently in avalanche victim statistics -- it's really easy to get carried away (not unlike skiing, but skiers have had avalanche awareness drilled into them for decades, and it is paying off).

When I was shopping for the snowshoes, I had a set of fairly specific requirements:

  • Mountaineering-type, so they could cope with steeper terrain (means to an end),
  • Not too heavy, so as not to be too much pain to carry when things get more interesting,
  • Suitable for the Scottish conditions, with our variable depth windblown snow cover, which means making contact with the rock beneath it from time to time (i.e., steel, rather than plastic / aluminium, and not a design where the membrane is attached by wrapping it around the frame).

In the end, in spite of the eye-watering cost, I settled on the MSR Lightning Ascent, which ticks all the boxes: the flat steel frame promises all-round traction, and the membrane is attached inside it, rather than wrapped. Also, they get good reviews.

I was so encouraged by my wee Ochils potter earlier in the week that yesterday I took my snowshoes for their first proper outing up and over Beinn Each onto Stuc a'Chroin and back. Ideal conditions, snow in places waist deep, and excellent fun. But also an opportunity to test the snowshoes in some more challenging terrain, including patches of steeper névé. All in all just over eight hours of true winter wonderland, of which I wore the snowshoes for at least seven (they only came off for the short steep descent from Beinn Each, and the final 50m of the Stuc).

That they work well in soft snow I already knew, but I was impressed with the traction provided by the frame and the crampon when going up firm névé. The main limiting factor here is that beyond a certain gradient the toe of the boot, rather than the rotating crampon, starts making contact with the ground, at which point the traction is compromised. The angle at which this happens is quite steep, steep enough to be stabbing the slope with the pick of an ice axe, rather than the spike, once the real crampons come on.

I got caught out this way on a short section of the Stuc. The main problem was not so much that I wasn't wearing real crampons, but that I was still using poles, while on a gradient that really called for an ice axe. Awkward shuffling off to a gentler slope to get the proper tools out followed (obviously, this is not a fault of the snowshoes, but a simple error of judgement).

Similarly, traction descending on firm névé is excellent, and broadly speaking, I found that I can descend a comparable gradient to what I can sensibly ascend. In deep snow, however, the snowshoes become problematic on very steep ground; they have a tendency to run away more easily than just boots, and you can't really bum slide very well with them on. (And again, you will quite likely find yourself with poles rather than an axe.)

The main limitation of the MSR Lightning Ascent is poor lateral rigidity; this is a feature of the frame design (though the bindings don't help; more on that below), and it makes traversing a firm slope very awkward. I quickly realised that for short sections it is much more efficient to sidestep such ground, facing into the slope, but best of all is to pick a different line where possible, or to put the crampons on.

The bindings I am not hugely impressed with. They are designed to fit a variety of boots (I expect I could make them fit the Sorels I use to clear the drive), but really the best thing I can say about them is that they are easy to get out of fast. They are hard to tension right when putting them on, and two or three stops were needed each time to make adjustments. This does not improve the lateral stiffness either -- I am thinking for this sort of a technical snowshoe it would make sense to have crampon-style step-in bindings.

But all in all, would I buy them again? Definitely! Should have done so a long time ago.

by tf at January 21, 2018 03:52 PM

January 16, 2018

Tomas Frydrych

Pinto Bean Soup

My love of lentils and legumes of all sort goes as far back as I can remember. In recent years, the pinto has become my firm favourite among the beans, for it's a versatile legume of a gentle flavour that is easy to work with. The burrito use aside, the pinto is an excellent foundation for a bean salad, great in chili, and once you taste it baked with tomatoes, you will never want to eat Heinz again. And then there is the soup.

While I enjoy cooking, I don't always have the time for elaborate and time-consuming recipes. Fortunately good homemade food doesn't necessarily mean hours over the stove, and the pinto soup is an example of that -- it takes me under half an hour to make. The ingredients are simple, only the pinto beans, onion and chillies (fresh or crushed) are required, plus some stock; if you have carrots around, then they make a good addition, as does a bit of garlic, but you will get an excellent soup with just onion and chillies.

Being of the 'cooking is an art, not science' school of thought, I consider quantities mere minutiae dictated by taste. But as a rough guideline, 400g of dry beans will make around three litres of the soup. For that I use two medium onions, and maybe a couple of larger carrots; chillies to personal taste.

Soak the beans overnight (you can get away with less, but it impacts on the cooking time), then cook till soft -- using a pressure cooker hugely speeds this up. You will have to work out the exact timing for your pressure cooker yourself, but in ours, at 0.4 bar, pre-soaked pinto beans take 8min. Now, the secret to a good pinto bean soup is not to drain the cooking liquid, i.e., you should cook the beans in about as much water as you want in the final soup.

While the beans are cooking, chop the onion, not too fine, and fry it off with the chillies until nice and soft (I use rapeseed oil, I find the gentle flavour works well with the subtle flavour of the pinto). Add any garlic to the onion near the end.

Once the beans are ready, mix in the onions, and any other ingredients, then add stock to taste (I quite like the Knorr stock pots, usually use one vegetable and one herb pot, but you might prefer something more wholesome and homemade instead). Bring to the boil and cook (not pressure-cook!) for about 5min, or if using carrots, until they are soft.

That's it. As with many foods, the flavour will develop if it sits for a time rather than being served immediately. It will keep in the fridge for a couple of days, or it can be easily sterilised in a Kilner-type jar if you want to keep it longer, or there is not enough space in the fridge.

by tf at January 16, 2018 10:56 AM

January 13, 2018

Tomas Frydrych

And Time to Back Off

Forecast is not great -- high winds, increasing in the course of the day, temperature likely above zero regardless of altitude, and precipitation arriving by early afternoon. The sort of day when it's not worth carrying a tripod, or driving too far, yet at the same time not bad enough to just stay at home all weekend and brood (as I know I would).

So here I am at Inverlochlarig; before first light, in the hope of beating the worst of the weather. In recent years this has become my preferred way into the 'Crianlarich' hills. I like tackling these seven Munros in a single continuous run -- just about the only enjoyable route I have been able to come up with near me that has a climbing-to-distance ratio comparable to some of the bigger rounds. But that would be in the summer, and on a cracker of a day.

Today the plan is less ambitious: head up Beinn Tulaichean, and then, depending on conditions and time, onto Cruach Ardrain, and perhaps Stob Garbh, one way or another returning via Inverlochlarig Glen. I have not been up Beinn Tulaichean from this side for some two decades, and my memories from the last time are rather vague, so this outing has a degree of (welcome) novelty.

In view of the SE wind I decide to give the usual walkers' path a miss, and instead head up the western flank of the hill, in the shelter of the SW spur. This turns out to be a good choice, with only light wind. I eventually join the main ridgeline somewhere around the 600m contour line. Here my pocket anemometer registers just over 40mph (I carry one having realised I tend to overestimate wind speeds and hence underestimate forecasts). And spindrift. Time for some extra layers and the goggles.

Visibility is dropping rapidly with height, and by the time I reach the flatter area around the 750m contour line, it's down to ten yards. The compass comes out; from here on I am moving on a bearing as visibility continues to drop further. The terrain is quite complex here, a lot of large boulders, with big gaps between them, now covered -- but not necessarily filled -- with snow. I narrowly avoid falling into a large hole that appears out of nowhere right in front of me just before the gradient steepens again.

There are two sets of fairly recent footprints here -- mine was the first car in the car park, so I am guessing they are from yesterday. I follow them cautiously, while keeping an eye on the needle, one can't be too careful; I lose them somewhere along the way.

I have reached a point where the ground starts descending again. I know I am near the summit now, but in view of the complex terrain I need to get an accurate location fix. An altimeter would have been useful in these conditions, but I forgot to reset it earlier (a rare, and annoying, omission). I get the phone out; I prefer the map and compass, it sharpens the mind, but I am not a Luddite. I am 120m from the summit cairn, just a bit off the little col below it.

I get a bearing, reach the col. The light is so flat now that even in the goggles it is impossible to adequately judge the gradient under my feet. There is a step down, but I can't tell how big. I get on my knees; only with my face this close to the ground can I see it's not too steep, and it shouldn't be more than a couple of feet down. I descend gingerly.

On the other side the ground starts rising steeply -- the final 30m of ascent to the summit cairn. As I start climbing up I catch sight of what I think at first to be a small cornice above; in fact it's the fault line from an avalanche -- I am taken aback, the ground under my feet does not feel like avalanche debris, but for a short while I can see the fault clearly enough, including the poorly bonded layers within it. I realise that what I thought was an old line of obscured footprints a couple of metres to the left of me is a track made by some more recent debris coming from above.

I retreat back over the dip in the col to a safe place to assess the situation. The limited visibility is debilitating: though I am sure I am not more than twenty yards from it, the fault line is just a fuzzy shadow, if that, and I have no idea what the ground above it is like. The part of the slope I was on is unlikely to avalanche again, but on what I have seen so far, it is not unlikely that if I load the ground above the fault line it could release; I can't take the risk.

I don't mind not reaching the summit, but I hate giving up. I study the map. It seems it might be possible to contour a hundred or so metres to the east and gain the summit from there. I take a bearing and start pacing the 100m, and voila, here are the two sets of footprints I saw earlier, heading the same way. But after only 50m or so they disappear under what this time is unmistakably avalanche debris; the whole eastern aspect of the summit is covered by it, as far down as I can see, while above me the same fault line continues beyond the limit of visibility.

I decide to pace the entire 100m. From here I can see that the avalanche is delimited by a rocky rib, but it seems too steep to climb. I retrace my steps back to my safe spot. It's only now I notice that, inexplicably, there is almost no wind at this altitude. I must be in the lee of the Stob Binnein ridge, which also explains the heavy snow deposits on the ground above me.

On a windy day like today, one must not waste an opportunity like this. The flask comes out, I eat my piece; I am quite content now. Then a back bearing -- while I should be able to follow my footprints back the way I came, you never know.

I descend the usual tourist route, mostly following the two pairs of footprints I saw earlier. I can see the pair were conscious of the avalanche risk, taking a sensible line; indeed, a bit lower down they dug a snow pit on their way up.

I reach the snow line, with views of Loch Doine and Loch Voil. Time to take some pictures, and shed some layers; I am overheating. The jacket goes in the bag ... and the rain starts almost immediately. But who cares? I am glad of yet another good day in the hills.

by tf at January 13, 2018 09:08 PM

December 31, 2017

Tomas Frydrych

If Running were Everything ...

As a lad I used to spend Hogmanay with my friends at some remote and basic cabin, far away from the noise and clutter of the city. There were two customs we invariably welcomed the New Year in with. We chucked one of our mates into the nearest pond to mark his birthday (which meant cutting a hole through the ice the evening before). And then we sat down and each wrote a letter to ourselves, reflecting on the year just gone by, hoping for the future, with one of the more responsible lads charged with keeping the gradually thickening envelopes from Hogmanay to Hogmanay.

I still have that old envelope full of my teenage dreams somewhere, though it’s been many years since I’ve added a page; different times, different place. Yet, I was reminded of it yesterday reflecting on 2017, recalling with unexpected clarity how every year, reading the previous year’s letter, I was struck by how differently it had panned out; indeed, how often those very aspirations were swept away by the flow of time.

If running were everything, and running stats something to worry about, with a mere 1,353km run and just 47,100m ascended, this would have been a decidedly poor year. But running is not everything, and I couldn’t care less about stats.

It kicked off pretty well, with a late February trip to the remote Strathfarrar hills, providing minimal support to John Fleetwood on his Strathfarrar Watershed challenge. It was the first bigger outing I was able to do since October of the year before, and one which exceeded expectations—well worth the Achilles injury I picked up along the way, even if it kept me out of the hills for the next couple of months.

As always, our two-week holiday in Assynt didn’t disappoint. The highlights included an extended variation on the Coigach Horseshoe, a run from Inchnadamph to Kylesku over the Stack of Glencoul (something I had wanted to do for years, but never got to) and, what ultimately turned out to be my best, most memorable, day in the hills this year, a run taking in the south ridge of Ben More Assynt. That too came at a cost, another foot injury, one that, unfortunately, plagued me for the rest of the year.

June brought the West Highland Way Race, on which Linda and I were crewing for our friend David, and while the whole trail running / ultra scene is not my kind of a thing, this was a truly special experience, and all in all possibly the most memorable weekend of our year.

In July Linda and I spent an excellent weekend fast-packing in the Cairngorms, and I also managed to squeeze in the Eastern Mamores and Grey Corries that eluded me last year, plus a couple of fun days on the south side of Glen Etive. But by the end of July I could no longer ignore the nasty plantar fasciitis I picked up in Assynt. By mid September the foot seemed back to normal, but it only lasted a couple of trail runs while on a visit to Portland, OR, and I haven’t run since.

I admit, over the last five months I have really missed running, not least because of the inescapable loss of fitness and the sniggering bathroom scale. There really is nothing like it, the simplicity, the lack of faffing, the fact I can run seven days a week from my front door if I want to.

Yet, at the same time, that gap created new opportunities. I have been spending more time in the woods, with no objectives, just binoculars and/or a camera. In many ways this has been very liberating, bringing back memories, and reminding me how much I miss proper forests in Scotland.

Then there have been numerous wee camping trips. I much prefer these to just single day hikes. I like the peace and quiet of the night in the hills you get even in the middle of a raging storm, the uninterrupted time, to think, to listen to audio books (having reached an age where reading glasses have become a necessity, I avoid reading in the tent). The early mornings, the first light. (But also, during the single days out I always find myself wishing I was running, knowing that most of the time I could travel farther, along a more interesting route.)

And then, of course, after some nine months of planning, last May we launched runslessepic.scot, offering bespoke guiding services, navigation courses, as well as a rudimentary Hillcraft for Runners course. We are planning some guided hillrunning weekends in the summer, watch this space ...

So yes, that was my year. 2018? All I know is, it starts tomorrow, and I am going for a run first thing!

by tf at December 31, 2017 10:02 AM

December 23, 2017

Tomas Frydrych

The Crew that Slept in

The West Highland Way Race, with its 30+ year history, can only be described as an iconic classic. So when earlier this year our friend David got a place, Linda and I enthusiastically volunteered to join Gita (his partner) and McIver (their collie) to do the crewing. Little did we know what we were letting ourselves in for ...

For those who do not know, the West Highland Way is Scotland's premier long distance walking route that goes from Milngavie near Glasgow to Fort William. It is some 96 miles long, and involves nearly 15,000 feet of vertical ascent. Each year many thousands of people walk it, typically taking around a week to finish. The competitors in the Race, run since 1988, must complete the route in no more than 35 hours, and for that they receive a coveted commemorative crystal goblet.

What sets the Race apart from most other running events is that the prize giving ceremony only takes place after all runners finish, so that all the runners, and crews, can be present; this makes for a very special occasion with a unique, hard to describe, atmosphere. But I am jumping ahead here.

Let's rewind to Friday evening, 23 June 2017. Linda and I arrive in Milngavie about an hour before the 1am start. We made no special arrangements to meet David and Gita here, which immediately shows our lack of grasp of the scale of the event -- there must be over a thousand folk milling around the railway station! We wander about for a while, and make a couple of visits to the registration point, but there is no sign of our friends.

Having more or less reconciled ourselves to not finding David, we bump into him by sheer chance just before the briefing. He seems in good spirits. Gita has already left to get some sleep, and we wander off to High Street, leaving David to his own thoughts.

There is a visible Polis presence, for whom I expect tonight makes a change from the typical Friday night in Milngavie. I am hoping to get some pictures of David as they set off, but, of course, I fail to spot him.

Then off to Balmaha for a little sleep. It is only at this point, as we make steady progress in a column of hundreds of vehicles, that I begin to appreciate the importance of the 1am start. Our arrangement is to get together with Gita at 3am, so I get up about that time to go to the loo -- to my dismay the visitor centre and its toilets are closed, my already low opinion of the way the Loch Lomond and Trossachs National Park is run sinking even more. In contrast, the Oak Inn has opened specially at 2am, but with all the good will in the world its toilet simply can't cope.

Linda calls Gita and we are told to look for the annoying orange flashing lights. It's a recovery van, with three laddies trying to fix Gita's headlights, which both blew on her drive here. This is not great news. The laddies are nice enough, but I am sceptical of success when one of them confides in me that the Kangoo uses 'strange giant bulbs' they've never seen before (referring to an H4!), and which, obviously, they don't have with them. At this point the most important thing is to shoo them away, because David should be arriving shortly, and there is nothing to be gained by him knowing about any of this.

He arrives bang on time, on good form, has some food and is off again. Gita stays behind waiting for daylight, while Linda and I set off in hope of finding H4 bulbs somewhere at 5am on a Saturday morning; we succeed eventually at Dumbarton Euro Garages, after no luck in the, rather fortified, BP garage in Alexandria.

The next crew stop is Ben Glas farm. Here only one vehicle per crew is allowed, so we regroup first, make ourselves some cooked breakfast among the midges, change Gita's bulbs ... a lot of time to kill, so a visit to the Falls of Falloch, deserted at this early hour.

At Ben Glas we don't have long to wait as David arrives at the check point slightly ahead of time, but convinced he is going too slow (we are aiming for a sub 24h finish).

By the time we arrive at Tyndrum the lack of sleep is beginning to catch up with us. We are operating on our own time, where everything is measured from a zero at Milngavie to (hopefully) just under 24 in Fort William. We have completely lost any sense of how that might relate to 'normal' time. In this private timezone it is the middle of an afternoon, and it comes as a bit of a shock that we can't get three fish suppers from the Real Food Cafe, because they only put on the fryers after breakfast! Fortunately it's not a long wait till 11am, and, with our fish suppers in Gita's car, we are off to the Auchtertyre check point.

We don't have long to wait. David arrives on schedule, but the effort is beginning to show. Some food, change of clothes, and he is off. For some reason, I decide that since the stove is out I might just as well make a flask of coffee and some soup for the next stop -- I don't know why, with hindsight this does not make that much sense, but by now none of us are operating at full mental capacity, so I am faffing about for a bit with the food before we head on.

Next stop Bridge of Orchy. By the time we get there we are all properly knackered. The girls decide to get some sleep, but I don't sleep well in daylight and tend to wake up with a nasty headache, so I go for a walk instead. It starts raining almost immediately -- the 'weather' we knew was to come for the second half of the race is nearly with us.

A 45min walk does my brain good, but also stirs my bowels, so a quick trip to the hotel is due. My conscience doesn't allow me to just use the facilities, so I sit at the bar for a bit nursing a pint of lemonade, before making good on why I really came here (I am fairly certain I fell asleep in the cubicle, for I do not think I was that long but by the time I step out there is a long queue, and everyone is giving me the evil eye).

Outside the sun is back out, which is good. As I am about to turn down the road toward the bridge, I catch a glimpse of a runner that moves lot like David. Nah, the clothes are wrong; except then I vaguely remember him changing at Auchtertyre ... sh!t, it's David right enough, a long, very long, time ahead of our schedule.

He is glad to see me, thinking I have been waiting for him here on the corner! Should I tell him??? I excuse myself and sprint down the hill where both girls are still soundly asleep. There are some muffled words from inside the cars, which I can't hear clearly, but can venture a guess, then a lot of commotion. At the same time, there are car shenanigans taking place, parking is very tight here and with our tail gate open the other crew can't open theirs or something. A lot of our stuff falls out onto the road in the process. David does not stop long, and the only reason this pit-stop is not a disaster is that the coffee and soup are already made from Auchtertyre!

By the time we get to Glencoe ski centre the weather has arrived in earnest: it's cold, windy and pissing down. My head feels like one giant hangover, I try to sleep for a bit, but it's not helping and neither coffee nor sugar are making any difference. Time to stop feeling sorry for myself; the way the weather is just now, the organisers might well insist that the runners are accompanied from now on, so I go to get changed.

But there is no sign of David, and we are all getting rather nervous. He arrives some twenty minutes later than we expected him to, visibly exhausted, soaked to the skin and very cold. He is a sorry sight, and all three of us are thinking this is it, but nobody wants to broach the subject.

Eventually, in a roundabout way, I ask 'do you want me to come along?', fully expecting him to say he was calling it, but instead he simply says 'yes'. There are lots of guts in those three letters, and this, ultimately, will become the moment that in the following weeks and months we will keep returning to.

And so we are off, walking, rather slowly, down the road. By the time we get to Kings House my headache is gone, and I am operating quite normally again (nothing like a bit of exercise!), keeping an eye on the pace, doing the math. I am aware I am talking too much, but conditions are so crap I feel I need to, so neither of us has time to think about that.

Up on the high ground above Kinlochleven it's very windy and our feet are in an inch or two of freezing water more or less constantly. We are moving slower than we need to be, and I am dreading the prospect of getting changed in this weather in a car park. But we pick up the pace a bit on the descent, even overtaking a few people who overtook us earlier on.

Just as we reach the village the sun comes out briefly, blowing some of the bleakness away. And to my great relief Linda and Gita managed to find some space inside the sports centre where the check point is. We don't have time to hang about here, the last two legs were both slower than the 24h pace, claiming back the buffer David built up to Bridge of Orchy. So just getting changed, a bite to eat, hot tea, an official kit check (from here on a support runner is mandatory).

We manage a good pace on the climb out, but less so once the route starts descending the other side; I am reminded of the old fellrunner's wisdom, it's not the climbs that get you. The ground here is rough, and after 80 miles David's feet are hurting.

I am not much company, it takes all my effort to concentrate on setting the pace. At times I feel quite bad about pushing him, but I am determined not to let him finish in 24:02; we are either going to make it under 24h, or blow up properly, and just now it could go either way. Another runner joins us out of nowhere on the climb, and he makes up for my lack of conversation.

As we are approaching Lundavra I am glad to hear David saying that if the ground was a bit better he feels he could still do some running, so when we hit the good path beyond, I pick up the pace a bit, but there is no response from behind me. At this point I think that's it, the 24h dream is gone. But in fact David perks up not much later. I turn around at the bottom of the big descent -- it's an amazing sight, a line of bobbing head torches as far as I can see.

I am concerned about the climb out, but it turns out David is still climbing well, and as we start the final descent to Fort William he gets a proper second wind. We are running about 6-7min/km, overtaking quite a few people, and I am having a hard time keeping up with him. We lose some of the energy on the final stretch of the road, which feels much longer than it should be, but that no longer matters, we are going to make it, and David eventually finishes in 23:42:31.

And then it's the prize giving the next day. This is hard to describe, it really needs to be experienced. 2017 was a particularly special year, with Rob Sinclair setting a new race record of 13:41:08. This is a truly amazing feat.

But as I sit there that morning, to my mind the new record is not as amazing as Nicole Brown, the last finisher, coming in just a few minutes earlier, in 34:40:28. Having been out the previous night in the awful weather for just five hours or so, I can honestly say I would not have stuck it out for another twenty hours of the same if you were paying me. And this, I think, is what the West Highland Way Race is ultimately about.

So yes, if you get a chance to crew on the West Highland Way, do so, it is worth it, unique, and unforgettable.

PS: The organisers recommend using two crew teams, and with hindsight this is wise. We just could not resist the temptation of seeing the start of the race, and did underestimate the fatigue that would bring.

by tf at December 23, 2017 06:07 PM

December 20, 2017

Chris Lord

My Impossible Story

Keeping up my bi-yearly blogging cadence, I thought it might be fun to write about what I’ve been doing since I left Mozilla. It’s also a convenient time, as it coincides with our work being open-sourced and made public (and of course, developed in public, because otherwise what’s the point, right?) Somewhat ironically, I’ve been working on another machine-learning project, though I’m loath to call it that, as it uses no neural networks so far, and most people I’ve encountered consider those to be synonymous. I did also go on a month’s holiday to the home of bluegrass music, but that’s a story for another post. I’m getting ahead of myself here.

Some time in March I met up with some old colleagues/friends and of course we all got to chatting about what we’re working on at the moment. As it happened, Rob had just started working at a company run by a friend of our shared former boss, Matthew Allum. What he was working on sounded like it would be a lot of fun, and I had to admit that I was a little jealous of the opportunity… But it so happened that they were looking to hire, and I was starting to get itchy feet, so I got to talk to Kwame Ferreira and one thing led to another.

I started working for Impossible Labs in July, on an R&D project called ‘glimpse’. The remit for this work hasn’t always been entirely clear, but the pitch was that we’d be working on augmented reality technology to aid social interaction. There was also this video:

How could I resist?

What this has meant in real terms is that we’ve been researching and implementing a skeletal tracking system (think motion capture without any special markers/suits/equipment). We’ve studied Microsoft’s freely-available research on the skeletal tracking system for the Kinect, and filling in some of the gaps, implemented something that is probably very similar. We’ve not had much time yet, but it does work and you can download it and try it out now if you’re an adventurous Linux user. You’ll have to wait a bit longer if you’re less adventurous or you want to see it running on a phone.

I’ve worked mainly on implementing the tools and code to train and use the model we use to interpret body images and infer joint positions. My prior experience on the DeepSpeech team at Mozilla was invaluable to this. It gave me the prerequisite knowledge and vocabulary to be able to understand the various papers around the topic, and to realistically implement them. Funnily, I initially tried using TensorFlow for training, with the thought that it’d help us to easily train on GPUs. It turns out re-implementing it in native C was literally 1000x faster and allowed us to realistically complete training on a single (powerful) machine, in just a couple of days.

My take-away for this is that TensorFlow isn’t necessarily the tool for all machine-learning tasks, and also to make sure you analyse the graphs that it produces thoroughly and make sure you don’t have any obvious bottlenecks. A lot of TensorFlow nodes do not have GPU implementations, for example, and it’s very easy to absolutely kill performance by requiring frequent data transfers to happen between CPU and GPU. It’s also worth noting that a large graph has a huge amount of overhead that will be unrelated to the actual operations you’re trying to run. I’m no TensorFlow expert, but it’s definitely a particular tool for a particular job and it’s worth being careful. Experts can feel free to look at our repository history and tell me all the stupid mistakes I was making before we rewrote it 🙂
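
To make that concrete, here is a minimal sketch of the sort of check being described (not the glimpse code, just an illustration using the TensorFlow 1.x API of the time): asking the session to log device placement makes any ops that silently land on the CPU in an otherwise GPU graph easy to spot, and those are the ones forcing the host/device copies mentioned above.

    # Minimal illustration (TF 1.x), not from the glimpse project: log where
    # each op is placed, so CPU-only ops in an otherwise GPU graph stand out.
    import tensorflow as tf

    a = tf.random_normal([1024, 1024], name="a")
    b = tf.random_normal([1024, 1024], name="b")
    c = tf.matmul(a, b, name="matmul")

    config = tf.ConfigProto(log_device_placement=True,  # print op -> device mapping
                            allow_soft_placement=True)  # fall back to CPU where needed
    with tf.Session(config=config) as sess:
        sess.run(c)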

So what’s it like working at Impossible on a day-to-day basis? I think a picture says a thousand words, so here’s a picture of our studio:

Though I’ve taken this from the Impossible website, this is seriously what it looks like. There is actually a piano there, and it’s in tune and everything. There are guitars. We have a cat. There’s a tree. A kitchen. The roof is glass. As amazing as Mozilla (and many of the larger tech companies) offices are, this is really something else. I can’t overstate how refreshing an environment this is to be in, and how that impacts both your state of mind and your work. Corporations take note, I’ll take sunlight and life over snacks and a ball-pit any day of the week.

I miss my 3-day work-week sometimes. I do have less time for music than I had, and it’s a little harder to fit everything in. But what I’ve gained in exchange is a passion for my work again. This is code I’m pretty proud of, and that I think is interesting. I’m excited to see where it goes, and to get it into people’s hands. I’m hoping that other people will see what I see in it, if not now, sometime in the near future. Wish us luck!

by Chris Lord at December 20, 2017 01:00 PM

December 18, 2017

Tomas Frydrych

Strathfarrar Watershed (A View from the Sidelines)

I suspect most of those reading this have never heard of John Fleetwood. Recently someone described John as 'quietly getting on with doing extraordinary mountain journeys with zero fanfare', which about sums him up. Behind that 'extraordinary' hide a few other adjectival phrases, of which perhaps the most important is 'preferably in winter', yet his accounts of these ventures are a bit understated. So here is one mortal's peripheral story of the Strathfarrar Watershed.

I first met John some fifteen years ago in the Christian Rock and Mountain Club. Hillrunning wasn't yet on my personal radar, the shared passion was mountains and climbing. John was a determined (some might even say driven!) winter climber and an alpinist, and though to my recollection I only ever climbed with him on the same rope once (he was climbing much harder stuff than I even aspired to), there were many shared trips, drams, songs and stories (and vegetarian curries; John was about the only vegetarian I knew in those days, and so always volunteering to take care of the food).

As all of my friends, present and past, would undoubtedly readily confirm, I am not very good at keeping in touch, and so we lost contact for a number of years. Time passes rather fast, bringing with it some significant birthdays among the old CRMC crowd, and a reunion meet in the Yorkshire Dales a couple of years ago.

By then, hillrunning had become my main passion, and I was (still/again) training for the Assynt Traverse. John was just back from a rather epic traverse of the Alps, and there was much to talk about. I never talked running with John before, and realised quickly that we share a very similar take on it, though we practise it on quite different levels. And he was the first (and last) person that I came across who knew exactly what the Assynt Traverse was!

Consequently, when John got in touch at the start of this year about his plans to attempt a winter traverse of the Strathfarrar watershed, I readily agreed to go along. All we needed was a good dump of snow, which a storm at the end of February helpfully provided. And so on the morning of the 27th we find ourselves at the gate on the Glen Strathfarrar private road. (And if you intend to read any further, I suggest you read John's account before carrying on, what follows will make more sense.)

There was never any question of me accompanying John. Even at the peak of my physical condition this outing would be well beyond my limits, and I am not even remotely at my peak. And so as John heads up the little farm track to gain the hills north of the glen, I assemble my bike and set off along the road. The plan is to cycle to Monar Lodge, run along Loch Monar to gain the high ground over Creag na Gaoithe, eventually joining John's route at Bidean an Eoin Deirg; follow it to the Maol-bhuidhe bothy where we will meet, for some warm food, dry clothes and spare batteries. Then, perhaps, I'll accompany John for a bit up to Sgurr na Lapaich, before making my own way down to Monar Lodge to pick up the bike ...

But that's all still ahead of us. It is a crisp morning, promising a clear sunny day ahead. An inch or so of snow on the road makes the cycle quite arduous, though the stunning scenery is more than making up for that. But soon my feet are freezing, and I can't think of any explanation why I packed SIDI racing shoes rather than Specialized Defroster winter boots. By the time I reach the far side of Loch a'Mhuillidh, I can't feel my toes and have to stop to put on an extra pair of socks, which helps a bit.

The glen is full of deer, there must be thousands of them, feeding on the hay the estate provides. They are somewhat unpredictable, particularly the younger bucks, and so care is needed, especially where the road splits the herd. I slowly gain height, there is more snow, and some pushing to be done, before I reach the lodge -- the 25k or so takes me some three hours, a lot longer than I expected.

After a quick early lunch basking in sunshine, I put on my mudclaws and set off along the loch. The sky is blue, no cloud to speak of, the loch like a mirror -- the centre of the high pressure must be bang on the top of here.

The jog is very pleasant, though the temperature has crept up a bit, melting the snow, and so for the entire 10km I run in an inch or two of ice-cold water. I don't mind, days like these don't come around often, and I think the cold feet a price worth paying.

When I eventually stop for some oatcakes and cheese at the foot of Creag na Gaoithe, a wisp of cloud appears from somewhere and it suddenly gets rather cold. I don't hang around and start plodding up.

The snow on the sun-exposed hillside is saturated with water, and my cold feet are doing my head in: I start worrying about the inevitable temperature drop on the higher ground, about how far I still have to go today ... in this game, the head is everything.

Nevertheless, the sun is back out on the ridge, surface frozen and runnable, my feet warming up quickly. I pause briefly at the foot of the arête that leads to the summit of Bidean an Eoin Deirg, wondering if I need to put on crampons, settling for an ice axe only, and quickly regretting it. Conditions are tricky, and exposure on both sides considerable. And, of course, now I am in a place I can't put them on ... I carefully backtrack onto a small platform lower down -- how many times over the years have I got caught out like this?

A bite to eat on the summit, then over to the Sgurr a'Chaorachain trig point. There are some footprints here, and I nearly descend down the N ridge by mistake following them. But I realise quickly enough. The compass comes out to double check, for in the afternoon light the climb out of Bealach Coire Choinnich onto Sgurr Choinnich seems improbably steep and monolithic. I even briefly contemplate dropping down into Coire Choinnich to avoid it, but the slope there is obviously heavily loaded, the risk of triggering an avalanche high.

As it happens, the ascent is straightforward, the ground at an amenable gradient (as the map clearly shows), but the snow is deep, at times waist deep. I don't even pause on Sgurr Choinnich, I am well behind schedule, reaching Bealach Bhernais exactly at sunset.

There are decisions to be made. The next section of John's route is difficult navigation-wise, and the deep snow will make progress hard. I have no idea how far behind me John is, but I do know that should he catch up with me I could not keep up with him. But more than anything, I am tired, have been on my feet for over nine hours, and my lack of fitness is beginning to show.

I decide to take the bailout route -- there really is only one such option today, and it's here in Bealach Bhernais. I should say, this is not something I am desperately devising here in the dropping temperature while watching the stunning sunset. Rather, it is something we discussed over a vegetable curry the previous night in the warmth of the (most excellent, would recommend to a friend) Black Isle Berries Bunkhouse. On these big winter ventures planning is key to survival, and John's planning is nothing if not meticulous.

The bailout route means heading west to pick up the stalker's path that leads into Coire Beithe, following this past Loch an Laoigh to eventually pick up the path into Coire na Sorna, past Loch Calavie and down to the bothy over open hill. It's still 17km or so to go, but an easy 17km compared to the watershed line.

I enjoy the sunset, then get the head torch out and reset the altimeter. The initial descent is awkward, and the stalker's path hard to locate, but once I do, it is a decent pony track, and to my surprise I am running rather well down the gentle gradient. Once I get beyond Loch an Laoigh, I find a huge track where the map indicates a path.

After a short while my head torch beam starts picking up some strange, spooky aberrations ahead. This turns out to be heavy machinery, with high vis jackets left hanging on the operator seats. Even in the darkness, I am saddened by the intrusion, we do not value landscape anywhere near enough in this wee country of ours.

There is only one small snag with the bailout route: when I printed out my map at home, I didn't print enough of it. I am missing perhaps no more than 1/2 km, but unfortunately it includes the place where the Coire na Sorna path leaves the track I am on. To avoid dropping too low and off my map, I decide to leave the unenjoyable track early and head for the open hillside. The Sorna path is, in fact, yet another big track, which I intersect at around the 300m contour line.

A short climb, then the track levels out, Loch Calavie should be on my right. Yet my (reasonably powerful BD Icon Mk I) head torch beam is not picking it up. There seems to be just a black bottomless abyss there. This is disconcerting. I stop to check the map. It turns out I am standing no further than a foot away from the edge of the water, I can hear it splashing when I am still, but somehow if I shine my beam further out, there is no reflection whatsoever. Spooky.

I carry on, and, not being able to see the loch that well, I miss its eastern end, where another path I want to take branches off. But I am sufficiently alert to realise almost immediately when the main track starts climbing again. The foot of the loch is a proper peat bog, and it takes me a while to negotiate it, before a brief spell on the new path. Then on a bearing down to Loch Cruoshie. There are a few obvious re-entrants here, serving as useful tick offs, and my navigation is bang on.

The final unknown is whether the outflows from Loch Cruoshie will be manageable to cross or not. There is an alternative, but it means a fair detour which I would prefer to avoid. They are freezing, knee deep, but very mercifully slow flowing. Not far to go now, perhaps the reason why I become somewhat complacent about navigation even though there is a thick mist hanging around. As a result locating the bothy takes me longer than it should have, not ideal after wading through the icy water. I am much relieved when I finally spot its outline at the far reach of my head torch beam.

It's 10.45pm and I am glad this 15h day is at an end. I have a quick look around. There is a wheelbarrow in the 'utility room', with what looks like a sack of coal in it -- if wishes were horses ... I don't know who gets the bigger fright, me or the mice sheltering in it; alas no coal. I make my dinner, stick a candle into the window for John, and promptly fall asleep.

At 4.30am, roughly the time I think John might be arriving, I get up to scan the hillside for light. There is none, so I replace the nearly burnt out candle with a fresh one, and crawl back into the sleeping bag till 7.30am. Time for porridge as the day is breaking. Still no sign of John, but I am not concerned, not yet. Another glorious day is beginning, and there are pictures to be taken.

Nevertheless, as time progresses I am aware there is a cut-off point beyond which I can't stay here. I have only a small amount of food left, and a bit extra I left with the bike, but not much. I need to leave here no later than noon. But if John doesn't arrive by then, we have a more serious situation anyway, I suspect. I decide not to worry prematurely, and John appears half an hour later.

He is visibly exhausted and the bottoms of his walking poles have turned into giant ice balls. Yet, he doesn't stay long, just enough to eat, change some clothes, and have a go at the ice balls with an ice axe. In spite of the very hard snow conditions he is determined to carry on. Having been up there just a few hours earlier, I know exactly what he is up against, and I find the level of mental stamina required to carry on quite astonishing.

There is no question of me joining him for Sgurr na Lapaich. I am too spent, and my right heel is rather tender, and has been since midday of the previous day. I suspected a giant blister to start with, but as there was nothing to be done about that until I got to the bothy, I didn't bother. But to my surprise, when I took my socks off the previous evening, there was no external damage, which is more disconcerting than a giant blister would have been. So I need an easy option.

The relatively easy exit route takes me into the bealach between An Cruachan and An Soccach, down a stalker's path along Allt Riabhachain and then through the Drochaid nam Meall Bhuidhe bealach to pick up the path leading into Gleann Innis an Loichel. It's been overcast since midday, but the sun comes out for a bit in the afternoon just as I enter the glen. Then a bit of road running to get back to Monar Lodge. All in all just under six hours.

The pain in my right heel has got progressively worse during the day, and for some reason is particularly acute on the bike. But there is no snow left on the road, and I can see some serious weather coming in, so I push hard, taking just seventy-five minutes to get back to the car. But not before the weather arrives, the last part of the cycle in freezing rain and stinging hail. I spare a thought for John up there on the high ground, better him than me.

Off to Beauly where I devour a fish supper. I hope I'll be able to stay in the Bunkhouse again -- John was hopeful of a 36h finish, so we did not book another night. I am in luck. Early start next day, back at Struy for 5:30am as agreed. I try to sleep in the car for a bit more, but it gets too cold without the engine running, so I get up and go for a walk. I get a call from John at 6.30am -- he has only six miles left, but says he is moving slowly.

The big question is what is 'slowly' in the Fleetwood parlance? I expect it's more like my fast than my slow, so I start walking up the forest track John will come down, to meet him. Another nice cold morning. Just before the track emerges from the forest, there is a giant, iced over, and hard to avoid, puddle and I lament not wearing walking boots.

Once on the open hillside I can see quite far, but there is no sign of John. I wonder if we might have somehow missed each other, and hurry back. As I do, I register another, quite appealing, forestry track going off to the right, which under different circumstances I'd explore, but the last thing I want is for John to have to wait for me.

No sign of John at the car, so I get the camera out and head back to the village, to take pictures of snowdrops and study the gravestones, as you do. There is a lot of history here, but not much life. I chat to a couple of drivers of forestry trucks waiting for the time when they are allowed into the forest.

Time moves on, three hours and counting since the call. I keep an eye on the track on the hill, but no sign of John. Then a rather dishevelled figure emerges down the road; I do a double take, the direction is wrong, but yes, it is John, with tales of dead-end forestry tracks and dense sitka. I am very glad to see him; for the last couple of hours I had been beginning to worry about him for the first time since he set off.

by tf at December 18, 2017 08:42 PM

November 03, 2017

Tomas Frydrych

Regarding Microspikes

Recently there has been some chatter about using lightweight footwear in the winter hills, and in that context microspikes have been mentioned. As someone who uses microspikes a lot, I'd really like to warn quite emphatically against taking microspikes into the hills as a substitute for crampons -- in some ways wearing microspikes can be considerably more dangerous than just wearing boots without crampons.

Don't get me wrong, I really like microspikes; they are an excellent tool for winter running.

But they only work in a very limited range of conditions. Specifically, they are only suitable for moderately steep slopes, roughly speaking, slopes on which you can consistently keep the entire sole of your foot on the ground, and they only work well on pure, exposed ice and hard névé. They do not work if the hard surface is covered by even a fairly small amount of loose, non-compacting, snow (e.g. blown-on dry powder), and they do not work on the cruddy snow that much of Scottish winter is made of -- the 9mm spikes are too short to find purchase.

But the real problem with microspikes is not that they have limits, all tools do, but rather that (a) they go from a superb secure grip to zero traction in a fraction of a second, and (b) this tends to happen on much steeper ground than it would if I were just wearing boots. With boots the loss of traction tends to be gradual, and I get plenty of warning to get the crampons out, or just to back off. In contrast, the microspikes will happily, and effortlessly, take me onto ground where in boots alone I would have long been aggressively kicking steps. This means that slipping with microspikes is likely to be a much more serious proposition than slipping with just my boots on. What gradient are you comfortable self arresting on? 10°? 30°? 45°?

This is not just some theoretical musing, it's something I have learnt the hard way. One January some years back I was doing my regular training run which takes in Ben Vorlich and Stuc a'Chroin from Braeleny Farm. The hills were in early winter conditions, and as was my habit at the time, I brought my standard winter gear of ice axe, microspikes and crocs (the latter for the several river crossings along the way). Ben Vorlich was nicely iced up and windswept, and the micros were working a treat. From the distance the Stuc a'Chroin 'Nordwand' did not look too bad, plenty of bare rock, and so I decided (to use a technical climbing term) 'to take a look at it'.

I gained height fairly quickly, and as I did the snow conditions, and my traction, progressively deteriorated, until I reached an awkward steep groove where it was obvious that if I carried on any further I would not be able to back off. As I started down-climbing the true limitations of the microspikes became painfully obvious: if my traction going up was poor, it was nothing compared to going down. The next half hour, spent kicking in short step after step, was some of the tensest time I have ever experienced in the winter hills (I once had a few awkward minutes in the Man Trap, nowhere near as bad, I dare say).

I made it safely to the foot of the buttress eventually and headed over to the broad corrie that in the summer is used to avoid the Nordwand. The snow conditions there were superb. The iced-up névé put a big grin on my face as I made rapid progress up, though the upper section was way too steep for the micros, and I had to make a great effort to keep at least my toebox on the slope over the final metres. But my axe placements were bombproof, the sun was shining, and my previous escapade was promptly forgotten.

It is perhaps the sunshine, so rare in Scottish winter, that explains why a month later I am back, again wearing the micros. By now the winter is full on, and the Nordwand is plastered with snow -- I have no intention of heading up there, I have learnt my lesson. Or so I think.

The first warning signs come on the descent to Bealach an Dubh Choirein. There is more snow, and a short steep section that needs to be down-climbed proves very awkward. It is a sign of things to come. The conditions in the NE corrie are much changed as well, the line I took out of here last time is topped by a steep wall and a cornice, and is out of the question both because of the gradient and the avalanche risk. At the same time there is no sign of the perfect neve, and as I make my way up along the north edge of the corrie, I am struggling for any sort of a grip in the cruddy snow. I weave my way up through a series of awkward traverses and rocky steps, kicking and cutting, at times down to the vegetation. All that in the full knowledge that had I been wearing crampons, I wouldn't have given this sort of ground a second thought.

That day I decided to have a simple policy for my winter runs -- if the terrain is serious enough to require carrying an ice axe, I take crampons. No exceptions. At times it is tempting not to, all that extra weight. Indeed there have been times on a run when I wished I had micros instead of crampons; it is almost invariably followed by relief that I have brought the crampons, when a few miles on conditions change. And so when I am packing my gear and that temptation comes, I just think back to those days and the temptation goes away. Life is too precious, and the winter hills don't stand for hubris.

P.S. As I have mentioned elsewhere, I use the Kahtoola KTS crampon for running.

by tf at November 03, 2017 02:35 PM

October 17, 2017

Tomas Frydrych

To Eat or not to Eat (contd)

The disillusionment with the M&S curry aside, the biggest factor that forced me to rethink camping food was running. While Scotland's hills provide superb playground from short jogs to long days, it is the linking of multiple days together that opens up, literally, whole new horizons. Alas, none of my previous approaches to cooking was suited to self-supported multiday runs.

The problem is twofold. On the one hand, running is far too much impacted by the load we carry. I have never obsessed about weight, not beyond eliminating the unnecessary ('light weight' is a synonym of 'short lasting', and I prefer durable), but for running the elimination approach was not enough. I found out that a load of up to about 6kg impacts my pace, but generally not the quality of my running. However, once it gets above 9kg or so, there is very little genuine running taking place. I managed to cut the base kit, including 0.5l of water carried, to about 6.5kg. That leaves about 2kg for food ... and brings me to the other issue.

The energy burn while running is just that little bit higher. At the same time I don't like running over multiple days on a large calorific deficit: feeling hungry takes away from the fun, impacts one's mental capacity, and makes subsequent recovery longer. Yet running I can easily burn more than 6,000 kcal per day, while the theoretical (and unreachable) limit of what I can pack into 1kg of food is ~9,000 kcal (pure oil). In other words, I'll never carry enough food not to incur a deficit, which means I need to pay attention to the calorific density of the food I take to make the most of it.

Since we are talking calories and running, there is an additional issue to be aware of. The ultra-runner experience seems to suggest that while on the move we can only absorb ~250kcal/h. This is worth keeping in mind when planning the menu: the bulk of the calories needs to come from the evening meal, while during the day small but frequent food intake is the best strategy.
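
To put rough numbers on all of the above, here is a small back-of-the-envelope sketch (in Python). The daily burn, the ~2kg food allowance and the ~250kcal/h absorption figure come from the text; the two-day trip length, the assumed 5,000 kcal/kg density of a realistic mixed menu and the eight moving hours are purely illustrative.

    # Back-of-the-envelope calorie budget; the density, trip length and moving
    # hours are illustrative assumptions, the other figures are from the text.
    TRIP_DAYS = 2                # e.g. a two-night, self-supported run
    FOOD_ALLOWANCE_KG = 2.0      # what is left of the total load budget
    FOOD_DENSITY_KCAL_KG = 5000  # assumed mixed menu; pure oil would be ~9,000
    DAILY_BURN_KCAL = 6000       # easily exceeded on a long day in the hills
    ABSORB_KCAL_PER_H = 250      # rough cap on useful intake while moving
    MOVING_HOURS = 8             # assumed length of the running day

    kcal_per_day = FOOD_ALLOWANCE_KG / TRIP_DAYS * FOOD_DENSITY_KCAL_KG
    deficit_per_day = DAILY_BURN_KCAL - kcal_per_day
    on_the_move = ABSORB_KCAL_PER_H * MOVING_HOURS
    in_camp = kcal_per_day - on_the_move

    print(f"carried per day: {kcal_per_day:.0f} kcal")
    print(f"daily deficit:   {deficit_per_day:.0f} kcal")
    print(f"usable on the move: ~{on_the_move:.0f} kcal; "
          f"the remaining ~{in_camp:.0f} kcal belong to breakfast and the evening meal")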

Doing it on the Cheap

Breakfast is easy -- 2 packets of plain instant porridge; no milk required, just add boiling water and 75g or so of 60% chocolate for extra calories. Stir thoroughly, let it sit for a couple of minutes.

During the day my staple food is nut and raisin mix (I like the Tesco Finest variety, but it's too expensive; you can make a nearly identical mix from the nuts and berries Lidl sells, at about half the price), and oatcakes and hard cheese (I am particularly fond of the rough Orkney oatcakes, and Comte). The benefit of oatcakes is a lower GI, which means a steadier supply of energy, plus they are relatively high in fat (the mentioned ones are about 120 kcal per oatcake). Hard cheese has probably the highest calorific density of any normal food, it does not perish quickly, and I happen to like it. If I need a sugar hit, I take Jelly Babies -- not as good as a gel in terms of the hit, but a lot cheaper, and more fun (4 Jelly Babies correspond to ~1 gel).

The evening meal is where the main challenge, but also the opportunity for eating well, lies. It takes no genius to realise that the M&S curry and Uncle Ben's rice combo fails badly on the calorific density count, for much of the content of both the rice packet and the tin is water, and water is dead weight, i.e., negative calories. Yet it is easy to prepare a good, cheap, home-made meal that is also a lot lighter.

My firm favourite is to make a tomato-based sauce, usually with chorizo, some olives, pine nuts, or whatever else I have around / take a fancy to. I reduce this to a thick paste and simply pack it into a zip-lock bag. The trick is to use as little fat/oil as is necessary for the cooking process, and then take some nice olive oil in a small bottle instead. This reduces the mess in case the zip-lock bag fails you (I confess, I double-bag, just in case). Nalgene make small leak-proof utility bottles perfect for the oil; I find the 30ml bottle is about right for a single meal, and the 60ml for two (adding ~250/500 kcal respectively).

I normally tend to have this sauce with Chinese-style noodles. Ultimately, I want something that requires as little cooking as possible, for if I can reduce the amount of cooking that I do, I can significantly reduce the weight of the cooking paraphernalia (more on that below). After much searching, I have settled on Sainsbury's own-brand noodles in round nests; they only require 3 min of boiling (which can be cut further if I leave them to sit for a bit), and they fit neatly inside a Toaks 0.5l pot, which is just big enough to cook two of them.

(As far as reducing the cooking time goes, couscous is the best option, but while I love it, I find it does not fill me up, so I prefer some form of pasta.)

I don't bother heating up the sauce, I simply mix it with the noodles in my food bowl, and add the extra oil, depending on the sauce maybe bringing some Parmesan to sprinkle on the top (if you are anything like me, you will realise quickly that draining the noodles is an awful waste ... makes a great soup instead).

The main shortcoming of this approach is that food prepared this way does not keep very long; how long will depend on the ingredients (one of the reasons I like using chorizo) and the ambient temperature. Personally, I am happy with this approach for a two-night trip in the usual Scottish temperatures, but one needs to use common sense, and if in doubt, reheat everything thoroughly. The other issue is that I still end up carrying quite a bit of water in the food, making it hard to get more than a couple of days of food out of my 2kg allowance.

The answer to both of these problems lies in dehydration, which I shall come to in a third instalment of these posts of my camping food 'journey'.

A Side Note: The Kitchen Sink

I always take a 'bowl' to eat from; it means the pot is free for making coffee while eating -- the bottom of an HDPE milk carton makes a superb camping bowl: it is lightweight, it folds flat, the HDPE withstands boiling water, and it gets simply recycled at the end of the trip (for two nests of noodles, you will need the bottom of a six-pint carton).

I don't bother with a cup. I carry a 0.5l Nalgene wide-mouth HDPE bottle: during the day this is my water bottle (I make it a 'policy' not to carry more than 0.5l at any time during the day; in Scotland it is rare that more is needed, particularly if I take the Sawyer mini filter), and in the evening it becomes my cup. It is fine with boiling water, the screw top means I don't spill it by accident in the tent, it holds heat rather well, and it can double up as a hot water bottle during the night.

Once I realised that I only need a 0.5l pot for one person (0.7l for two), it became obvious that the ubiquitous gas camping stove is a lot of dead weight (and bulk) to lug about. The smaller canister weighs around 230g for 110g of gas, while a decent small stove weighs around 80g (there are smaller stoves on the market, e.g., the 25g Chinese BRS-3000T; mine flares out so dangerously when reducing the flame once it's hot that I will not use it again, and would advise against buying it -- the 55g saved compared to a proper stove from a reputable manufacturer is not worth it). There is also the high cost of gas, exacerbated by the accumulation of partially empty canisters after each trip (that these canisters are not refillable is an ugly blot on the outdoor equipment industry's green credentials).

I find that the most weight-, as well as cost-, efficient solution for short trips is cooking on alcohol. Alcohol stoves come in different shapes and forms, but my favourite is the 30ml burner made by this guy. It is spill-proof (the alcohol is soaked up into some sort of foam), and weighs 14g; together with the small stand he also sells, and a homemade aluminium foil windshield, it comes to around 30g. I need around 50g of alcohol per day, plus 50g extra to give myself a margin for spilling my coffee (or for pouring boiling water into my shoes when they freeze solid overnight). Small plastic bottles seem to invariably weigh 20g regardless of their size up to about 0.25l, so for a one-night outing this translates to about 160g less in weight (and about £4 less in cost) than gas (so I can treat myself to more chocolate!).

The things to be aware of regarding alcohol cooking:

  • It stops being weight-efficient after about 3-4 days (alcohol contains about 1/2 the energy of gas per weight; the savings come from being able to take only what you need and the low weight of the bottle).
  • It takes longer to boil water on the above linked stove than it does on a good quality gas stove, and you really need a windshield; but time to cook is something I am never short of on my trips.
  • Most importantly, alcohol stoves can produce fairly high amounts of CO if the oxygen supply is restricted by, e.g., a windshield, so always make sure there is enough oxygen getting through to the flame and the tent is adequately ventilated (the latter applies to all stoves, some gas stoves are considerably worse than others).

To be continued ... (on dehydrating food)

by tf at October 17, 2017 05:34 PM

October 16, 2017

Tomas Frydrych

To Eat or not to Eat (Well)

To Eat or not to Eat (Well)

I have always liked my food; perhaps it's because I come from a place that obsesses over wholesome home cooking. I also like my food now more than I once used to; perhaps it's because my adoptive homeland doesn't do food particularly well (doesn't really 'get' food).

A good meal is one of those little, simple, pleasures that can put a smile on your face when there isn't much else to smile about, and this fully applies to eating in the outdoors.

My overnight ventures into the woods started at a time and a place where camping stoves did not really exist, a camping mat was something that two muscle-bound men carried to a lake for kids to float on, and a good warm (fur) kidney belt was one's most treasured possession. I think of those days with bemusement as I mentally survey my current weekend camping kit list -- we were unwitting practitioners of 'extreme ultralight' (except there was nothing particularly light about the coveted US Army issue rucksack, the cotton tarp, or the draughty sleeping bag). But back to food and eating.

My standard fare during those days was a half-kilo shop-bought tin of meat and sauce, cooked on an open fire, in the tin, with bread on the side. All in all, it made a pretty decent evening meal. (I can't remember what we ever ate for breakfast, but my lunch was invariably a tin of Soviet-made sardines in tomato sauce; it became a running joke, for they did not agree with me, but I couldn't resist them.)

In my early teens one of my friends found a WWII Wehrmacht issue petrol stove in his loft. It was bulky, heavy, and caused much excitement when he brought it along one weekend. It roared mightily, and promptly burned a neat finger sized hole through the bottom of his tin -- it amused us greatly, as we stirred our own tins on the fire, watching him trying to salvage what he could from his dinner.

But that was an exception. The only readily available stove on the market was a clone of the folding German Esbit. The flame was feeble, it was impossible to keep the hygroscopic fuel tablets dry, and the moisture in them made them explode and shoot burning bits all around. Every so often some younger lad would turn up with one, and we would happily munch on our warm food watching him fight it before he gave up and learned to cook the 'normal' way. The only time these solid fuel stoves came into play was on our summer treks through the Tatras (and farther); there open fires were banned, and/or there was no natural fuel.

The week or so long treks required a different approach to food. Tins were out of the question because of the weight, and the silly stoves forced us to keep the boiling of water to a minimum. Our rations for the week came to a loaf of bread and a foot or so of salami for lunches (the culinary highlight of each day), oats and raisins for breakfast, and pasta (usually with sugar and raisins) for tea. The oats were pre-soaked overnight to reduce the cooking time, and the pasta was only just brought to the boil and left to sit till it was soft enough, meaning it was never very warm when we ate it. (We had some savoury option on the menu as well, but I can't recall what it was; I suspect my mind blocked it out for sanity's sake. It might have been pasta with sardines.)

When I came to Scotland in the mid '90s, I had a brief fling with ready-made camping food -- all in all three dates, as I recall; we broke up quietly, we were not a good match for each other. I did not like the food and could not afford the prices. It made me realise I like my food too much to suffer for no good reason. These foil packets offered nothing that the tins of my childhood did not offer, except with less flavour and at a premium price. And so I reverted to type. For a number of years my basic camping food became a tin of M&S curry, cooked in the tin, and a packet of Uncle Ben's microwavable rice (a trick I learnt from a friend -- it needs no cooking, just a little hot water to warm it up).

Then one day, after a cancelled trip, Linda away, I made the mistake of heating up the tin of curry for my tea at home. It was terrible. I decided there and then that I deserved better, and so began my quest for good, home made, food on the go.

To be continued ... (with the stuff this post was meant to be about in the first place)

by tf at October 16, 2017 11:04 AM

October 13, 2017

Emmanuele Bassi

GLib tools rewrite

You can safely skip this article if you’re not building software using enumeration types and signal handlers; or if you’re already using Meson.

For more than 15 years, GLib has been shipping with two small utilities:

  • glib-mkenums, which scans a list of header files and generates GEnum and GFlags types out of them, for use in GObject properties and signals
  • glib-genmarshal, which reads a file containing a description of marshaller functions, and generates C code for you to use when declaring signals

If you update to GLib 2.54, released in September 2017, you may notice that the glib-mkenums and glib-genmarshal tools have become slightly more verbose and slightly more strict about their input.

During the 2.54 development cycle, both utilities have been rewritten in Python: glib-mkenums from fairly ancient Perl, and glib-genmarshal from C. This port was done to address the proliferation of build time dependencies on GLib; the cross-compilation hassle of having a small C utility built and used during the build; and the move to Meson as the default (and hopefully only) build system for future versions of GLib. Plus, the port introduced colorised output, and we all know everything looks better with colors.

Sadly, none of the behaviours and expected inputs or outputs of either tool was ever documented, specified, or tested in any way. Additionally, it turns out that lots of people either figured out how to exploit undefined behaviour, or simply cargo-culted the use of these tools into their own projects. This is entirely on us, and I’m going to try and provide better documentation for both tools in the form of a decent man page, with examples of integration inside Autotools-based projects.

In the interest of keeping old projects building, both utilities will try to replicate the undefined behaviours as much as possible, but now you’ll get a warning instead of the silent treatment, and maybe you’ll get a chance at fixing your build.

If you are maintaining a project using those two utilities, these are the things to watch out for, and ideally to fix by strictly depending on GLib ≥ 2.54.
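
For Autotools users, the natural way to express that requirement is a pkg-config check in configure.ac; a minimal sketch (the GLIB variable name is just a common convention, and your project may already have an equivalent check whose version you can simply bump):

    dnl the new glib-genmarshal and glib-mkenums behaviour arrived in GLib 2.54
    PKG_CHECK_MODULES([GLIB], [glib-2.0 >= 2.54 gobject-2.0 >= 2.54])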

glib-genmarshal

  • if you’re using glib-genmarshal --header --body to avoid the “missing prototypes” compiler warning when compiling the generated marshallers source file, please switch to using --prototypes --body. This will ensure you’ll get only the prototypes in the source file, instead of a whole copy of the header.

  • Similarly, if you’re doing something like the stanza below in order to include the header inside the body:

    foo-marshal.h: foo-marshal.list Makefile
            $(AM_V_GEN) \
              $(GLIB_GENMARSHAL) --header foo-marshal.list \
            > foo-marshal.h
    foo-marshal.c: foo-marshal.h
            $(AM_V_GEN) (
              echo '#include "foo-marshal.h"' ; \
              $(GLIB_GENMARSHAL) --body foo-marshal.list \
            ) > foo-marshal.c
    

    you can use the newly added --include-header command line argument instead (see the sketch after this list).

  • The stanza above has also been used to inject #define and #undef pre-processor directives; these can be replaced with the newly added -D and -U command line arguments, which work just like the GCC ones.

  • This is not something that came from the Python port, as it’s been true since the inclusion of glib-genmarshal in GLib, 17 years ago: the NONE and BOOL tokens are deprecated, and should not be used; use VOID and BOOLEAN, respectively. The new version of glib-genmarshal will now properly warn about this, instead of just silently converting them, and never letting you know you should fix your marshal.list file.
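
Putting those pieces together, a modernised version of the stanza above might look something like the following. This is only a sketch: the foo-marshal names are the same placeholders used above, and the -D FOO_COMPILATION flag is a hypothetical stand-in for whatever symbol the old echo trick used to inject.

    foo-marshal.h: foo-marshal.list Makefile
            $(AM_V_GEN) $(GLIB_GENMARSHAL) --header \
              foo-marshal.list > foo-marshal.h
    foo-marshal.c: foo-marshal.list foo-marshal.h Makefile
            $(AM_V_GEN) $(GLIB_GENMARSHAL) --body \
              --include-header foo-marshal.h \
              -D FOO_COMPILATION \
              foo-marshal.list > foo-marshal.c

If you would rather not include the header from the generated source, swapping --include-header foo-marshal.h for --prototypes gives you just the prototypes instead, as per the first point above.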

If you want to silence all messages outside of errors, you can now use the --quiet command line option; conversely, use --verbose if you want to get more messages.

glib-mkenums

The glib-mkenums port has been much more painful than the marshaller generator one; partly because there are many, many more ways to screw up code generation when you have command line options and file templates, and partly because the original code base relied heavily on Perl behaviour and side effects. Cargo-culting Autotools stanzas is also much more of a thing when it comes to enumerations than marshallers, apparently. Imagine what we could achieve if the tools that we use to build our code didn’t actively work against us.

  • First of all, try and avoid having mixed encodings inside source code files that are getting parsed; mixing UTF-8 and ISO-8859 encodings is not a great plan, and C does not have a way to specify the encoding to begin with. Yes, you may be doing that inside comments, so who cares? Well, a tool that parses comments might.

  • If you’re mixing template files with command line arguments for some poorly thought-out reason, like this (a template-only alternative is sketched after this list):

    foo-enums.h: foo-enums.h.in Makefile
            $(AM_V_GEN) $(GLIB_MKENUMS) \
              --fhead '#ifndef FOO_ENUMS_H' \
              --fhead '#define FOO_ENUMS_H' \
              --template foo-enums.h.in \
              --ftail '#endif /* FOO_ENUMS_H */' \
            > foo-enums.h
    

    the old version of glib-mkenums would basically build templates depending on the phase of the moon, as well as some internal detail of how Perl works. The new tool has a specified order:

    • the HEAD stanzas specified on the command line are always prepended to the template file
    • the PROD stanzas specified on the command line are always appended to the template file
    • the TAIL stanzas specified on the command line are always appended to the template file
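
A simpler way out, where possible, is to avoid the mixing altogether and keep all the boilerplate in the template file itself. The following is only a sketch of what a header template could look like: the /*** BEGIN ... ***/ sections and the @enum_name@-style substitutions are the standard glib-mkenums template keywords, while the FOO names are the same placeholders used in the stanza above.

    /*** BEGIN file-header ***/
    #ifndef FOO_ENUMS_H
    #define FOO_ENUMS_H

    #include <glib-object.h>

    G_BEGIN_DECLS
    /*** END file-header ***/

    /*** BEGIN file-production ***/

    /* enumerations from "@filename@" */
    /*** END file-production ***/

    /*** BEGIN value-header ***/
    GType @enum_name@_get_type (void) G_GNUC_CONST;
    #define @ENUMPREFIX@_TYPE_@ENUMSHORT@ (@enum_name@_get_type ())
    /*** END value-header ***/

    /*** BEGIN file-tail ***/
    G_END_DECLS

    #endif /* FOO_ENUMS_H */
    /*** END file-tail ***/

With that in place the rule boils down to a single $(GLIB_MKENUMS) --template foo-enums.h.in $(foo_headers) > foo-enums.h invocation (where $(foo_headers) stands for whatever list of headers your project scans), and the question of prepend/append ordering never arises.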

Like with glib-genmarshal, the glib-mkenums tool also tries to be more verbose in what it expects.


Ideally, by this point, you should have switched to Meson, and you’re now using a sane build system that generates this stuff for you.

If you’re still stuck with Autotools, though, you may also want to consider dropping glib-genmarshal, and use the FFI-based generic marshaller in your signal definitions — which comes at a small performance cost, but if you’re putting signal emission inside a performance-critical path you should just be ashamed of yourself.
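
In practice that just means passing NULL where the generated marshaller used to go, which makes GObject fall back to the libffi-based g_cclosure_marshal_generic(). A minimal sketch, with a made-up signal name, signals array, and argument list:

    /* the "changed" signal, the signals[] array, and the argument types are
     * purely illustrative; passing NULL as the marshaller makes GObject use
     * g_cclosure_marshal_generic() under the hood
     */
    signals[CHANGED] =
      g_signal_new ("changed",
                    G_TYPE_FROM_CLASS (klass),
                    G_SIGNAL_RUN_LAST,
                    0,            /* class offset */
                    NULL, NULL,   /* accumulator and its data */
                    NULL,         /* marshaller: use the generic one */
                    G_TYPE_NONE,  /* return type */
                    2,            /* number of parameters */
                    G_TYPE_INT,
                    G_TYPE_STRING);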

For enumerations, you could use something like this macro, which I tend to employ in all my projects that have just a few, small enumeration types, and where involving a whole separate pass at parsing C files is overkill. Ideally, GLib would ship its own version, so maybe this will become unnecessary in a future release.


Many thanks to Jussi Pakkanen, Nirbheek Chauhan, Tim-Philipp Müller, and Christoph Reiter for the work on porting glib-mkenums, as well as fixing my awful Parseltongue.

by ebassi at October 13, 2017 03:21 PM

October 09, 2017

Tomas Frydrych

Of Camera Bags

Of Camera Bags

There is no end to acquiring them; the search for the perfect camera bag seems endless. Here are some of mine, and some thoughts on them.

Ortlieb Protect

The now discontinued (looks like Ortlieb stopped making camera bags altogether), but still available, Protect is a compact, waterproof bag in the tradition of Ortlieb robustness, with a slider closure which is easy to operate in big gloves. The inside of the bag is made of a thick closed-cell foam that gives it rigidity, but, unusually for a camera bag, is not lined with fabric. It is officially IP54 rated (though I am fairly certain that when I first got mine it was sold as IP67; I believe there were issues with the slider seal in cold temperatures). Size wise it is just big enough for my old Lumix GF-2 with a 14-70mm kit lens.

The great thing about this bag is that it can be comfortably hung with a couple of carabiners on backpack shoulder straps, providing fast and easy on-the-go access. This makes it an excellent mountain biking and skiing solution for smaller cameras.

I got the Protect on a recommendation of a friend about a decade ago, and it has served me faithfully ever since. I love its simplicity and wish it was just a little bit bigger to accommodate my Lumix GX-8 camera, which brings me to the next bag ...

Ortlieb Compact-Shot

The Compact-Shot is yet another great, but discontinued, bag from Ortlieb. It is slightly bigger than the Protect, just enough for my Lumix GX-8 with a 12-40mm zoom, but unlike the Protect, the internal padding is lined with a soft cloth, as is normal for camera bags, and there is a small internal pocket. The zip closure is not as easy / fast to open as the Protect slider, and is quite awkward to close fully, but when closed the bag is IP67 rated.

The Compact-Shot has become my default bag of choice when I don't need to carry any extra lenses, and, chest mounted with a couple of carabiners, the bag I use for ski touring.

Thule Perspektive Compact Sling

The Perspektive CS is a roomy bum-bag. It is made from a water-repellent fabric, uses water-resistant zippers, and comes with a detachable stowaway rain cover. It is big enough to take my Lumix GX-8 together with 12-40mm and 40-150mm lenses (with either lens fitted), has a padded iPad Mini-sized pocket inside, as well as a phone pocket on the outside of the lid, and comes with plenty of adjustable dividers for the inner space.

The waist strap with side stabilisers makes the bag very stable, enough to jog with. The bag is compact enough to combine with a small, high sitting, backpack, up to something like the OMM Adventure Light 20, which makes a good combination for fastpacking trips. The only thing I'd change on the belt is to extend the padding fully under the D-rings, as this would make it more comfortable (I have done a couple of very long fastpacking days with this bag, and was beginning to curse the D-rings near the end).

The one issue I have run into with this bag is that the rain cover is too easy to detach, and the connecting strap will often self-detach when the cover is on -- this makes it easy to lose when taking it off in windy conditions. But overall, this is a well thought out and well made bag.

LowePro Flipside Trek BP 350 AW

The Flipside is my 'pottering in the woods' backpack, but also the camera bag I am most ambivalent about. On the upside, it is very comfortable to carry, the camera compartment is spacious enough when I want to bring the big lens and more, and the through-the-back access is handy.

But there are some, to me at least, fairly significant design flaws. The non-gear storage space is very limited, enough for a sandwich, a small water bottle, a lightweight jacket and perhaps an extra thin layer. The lack of internal space is aggravated by the mesh side pockets being both small (i.e., too small for the likes of a litre Nalgene bottle) and rather shallow (the bottom half of the pockets is made from a non-stretchy material to make it more durable, but there is not enough of it, so, e.g., a normal 0.5l drinks bottle cannot be inserted all the way to the bottom). It is possible to strap things, such as a tripod, on the outside of the bag, but then you have to forgo the built-in rain cover, which is rather snug fitting.

Had there been another 3+ or so litres of non-gear space in this bag, this would have been my ideal camera day-bag. As is, I have strapped an external 5 litre pouch on the back of it, but like I said, that makes the rain cover useless, which is sub-optimal in the normal Scottish weather.

Tenba BYOB 9

Tenba get around the basic problem with camera backpacks (they never really work well enough; see above) by providing a range of minimal padded camera inserts that you put into a bag of your choice. The model number is the depth of the bag in centimetres, and the BYOB 9 is just big enough for my Lumix GX-8 with the 12-40mm lens + another lens of a similar size, and either another pancake prime or a few extra bits and bobs, such as a remote control and a blower.

The great thing about the BYOB is how the sizes of the bags in the range were chosen -- for a given camera size you get an optimally low-profile bag that is easy to place at the top of a normal-sized backpack. The main downside is that the padding is inexplicably thin (about half of that on my other camera bags); I'd prefer more protection for my kit. Also, although the fabric is water-repellent, the zip is not, so I always feel it necessary to put this inside a dry bag.

Crumpler Light Delight 200

My default running camera is a Lumix GM-5 with a 14mm pancake prime lens, and it's proven rather difficult to find a good pouch for it that could be shoulder mounted. The closest I have come is the Light Delight 200. It's slightly wider than ideal for the GM-5, so I padded it with a strip of an old sleeping mat to stop it from moving about when I run. On the upside, the depth is just enough for a 20mm pancake fitted.

Overall this pouch is well made and well padded. The back has a Velcro strap for attaching it to Crumpler backpacks, but it can be attached quite well to OMM packs with a bit of a string, and some creative knotting.

The main downside is that the bag is not even remotely rain proof. Also, the top zip has two sliders which annoyingly rattle when running, so I promptly removed one of them. With that modification, I have happily run hundreds of miles with it.

by tf at October 09, 2017 01:39 PM

September 07, 2017

Tomas Frydrych

Thoughts on the Dumyat Path

Thoughts on the Dumyat Path

If, like me, you thought we had seen the last of the heavy machinery on Dumyat, you were wrong. In the last few days diggers have arrived again to (at the expense of SP Energy Networks) graciously bestow upon us a new path from the Sheriff Muir road car park to the very summit.

Updated 9/9/2017, 09:15; see the end. Formal complaints to be addressed to SPEN on customercare@spenergynetworks.com

In broad strokes, the situation as it emerged is this: when SPEN was granted permission for the Beauly power line, it came on the condition that they would do some 'good work' for the locals in return; in the Stirling case this happens to include work on the Dumyat path.

That the main path is in need of some attention, and has been for some time, is not something I would dispute. There is a significant amount of erosion taking place, which I have written about at length before (complete with 60+ images documenting the erosion patterns). But what is happening on Dumyat just now is not the answer. As I see it, there are two big problems here: the contractor's approach, and the lack of understanding how the hill is used.

The contractor is rather heavy-handed and appears ill-prepared. There is an apparent lack of proper planning (let's just bring a big digger, that will do it), a lack of understanding of the geology of the hill (didn't expect it to be this 'rocky', doh), and a lack of any sympathy for the natural features of the landscape (levelling uneven sections of the exposed bedrock, really?!). SNH has guidelines on how upland paths should be constructed, and this is not it.

The extent, and progress, of the erosion on the hill varies along its length, depending on the gradient and what is found immediately below the surface. On the steep sections, in some cases the erosion exposes very loosely bound rock and/or gravel deposits, which then suffer from bad water run off. These are the places that most require some stabilisation and mitigation, but in fact this is mainly limited to two locations, both on the upper part of the hill (to be precise, around NS 8278 9772 and NS 8352 9763). These places would benefit from some drainage work, and perhaps relocation of the path, but it needs to be done sensitively and with care, not with a bulldozer.

In other instances of steep ground, the erosion relatively quickly exposes bare, but solid, bedrock. While it's not pretty when it is happening, it simply stops there once the rain cleans up the rock. Yes, if you have been going up this hill for many years, the path has changed dramatically in these places. But it is questionable whether any intervention will achieve anything meaningful here. For example, the contractors seem set on evening out the exposed, level section of bedrock around NS 8157 9788 with loose soil. It is not clear to me what the objective of that is, and why such resurfacing is needed at all -- this part has remained stable for many years.

On the easier angled sections the path suffers limited water run-off. The damage here falls into two main classes. There are some boggy areas in the vicinity of natural springs (notably NS 8270 9777 and NS 8319 9774). These would benefit from boardwalks being constructed; the alternative is re-routing the path, but on that see below.

Apart from these natural bogs, the damage on the easier angled parts of the path is almost entirely due to soil being moved by feet and wheels; the effect of water run-off is minimal. As such the path tends to broaden (see the link above for more on this), but remains stable in terms of its depth. It is, again, questionable whether these parts will benefit from the work being done in any way. The only answer here would be confining the traffic to a narrow corridor, and that brings me to the second problem with the work being undertaken.

It would appear that whoever approved the current solution has absolutely no understanding of how the hill is used. There are many of the regular local users who are quite happy with the hill in its rugged semi-natural state, and they will more likely than not avoid the new path. This will include many of the local hill runners, for whom the current landscape offers excellent and easily accessible training ground. And it will include the mountain bikers who really happen to like the hill the way it is, and even the way in which it evolves (mountain biking nowadays is generally not a means to travel distances, but rather it is about the challenge 'under the wheels').

And here lies the main problem. If the objective of this exercise is to provide the good citizens of Stirling with easy, all-ability access to Dumyat, then the contractors are following the wrong line. There is much to be said for such a path, but it would need to follow the natural contours of the hill. Such a path would have the benefit of splitting the descending bike traffic from those on the path. But sanitising the current path along its length will simply result in much of the current traffic being shifted to its immediate sides, and the erosion will continue spreading.

When path work is done without understanding, or while ignoring, the mountain bike use case, it will fail to relieve the erosion pressures; there are examples of this emerging elsewhere (Ben Lawers, the Cairngorms). On Dumyat the bike use is well established; people have been riding bikes here for as long as mountain bikes have existed, i.e., for over 20 years. It is wholly unjustifiable not to take them into account.

More to the point, like it or not, mountain bikes are part of Scotland's outdoor landscape, and they are here to stay, accounting for a significant chunk of Scotland's tourism revenue. Dumyat is a fairly insignificant knoll above Stirling, but it foreshadows issues that are emerging elsewhere in Scotland's bigger hills. Mountain biking is no longer the niche pursuit it once was, and we need to start seriously talking about how it fits into the outdoor pursuit family and into our hills.

Update 8 Sep 2017

So, SPEN has now released a PR statement about the work, which includes a picture of a short segment of a path upgrade on Ben Vorlich, and states:

SP Energy Networks is undertaking works to sensitively restore the existing Dumyat Hill and Cocksburn Reservoir Paths. This project will employ established upland path techniques to create a naturally formed route allowing areas of erosion to organically regenerate.

The works will form an entirely natural upland path developed in soil and stone ... This will help to prevent further severe erosion ... The aim is not to create a formal path but to replicate the existing path using the same materials in a form that will support ever increasing users and user groups visiting the area.

(Emphasis mine.)

I'll simply invite the reader to compare the PR speak and the imagery with what is, in fact, happening on Dumyat:

Sensitive use of heavy machinery:
Thoughts on the Dumyat Path

Established upland path techniques:
Thoughts on the Dumyat Path

Not a formal, but entirely natural, path:
Thoughts on the Dumyat Path

Controlling severe erosion (things are looking just great after one afternoon of rain):
Thoughts on the Dumyat Path

This needs to stop now. If, like me, you are concerned about what is happening on Dumyat, please send a formal letter of complaint with your concerns to SPEN on customercare@spenergynetworks.com.

Update 8 Sep 2017, 23:34

I have just returned from a brief visit to Dumyat, and, as hard as it is to believe, things have taken a further turn for the worse during today, as the following images will illustrate.

The first image shows the start of the track. An attempt has been made to neaten it up by laying down bits of turf along its sides. However, it should be noted that there is no topsoil present here. The area was exposed down to bedrock, which during the Denny - Beauly construction was levelled out using grey industrial hardcore. The orange path in this picture is barely a couple of inches of soil that appears to have been scraped from the hollow on the left of the image, and the bits of turf were removed from elsewhere and simply laid on the old hardcore:

Thoughts on the Dumyat Path

The next image shows the old hardcore and how the turf has been laid onto it. Considering we are now outside the growing season, it is very unlikely that much of this will survive the wet winter months.

Thoughts on the Dumyat Path

The hollow that seems to have been used to excavate the soil for this section of the path, covered in badly damaged turf:

Thoughts on the Dumyat Path

The next image shows the start of the first rise. The original surface here was bedrock, part of which is still visible left of the path, and a thin layer of intermittent turf, forming a pleasant green slope. The turf has been stripped, and the bedrock covered with a thin layer of topsoil brought from elsewhere:

Thoughts on the Dumyat Path

Alongside the entire length of the track being worked on, there is extensive damage to the turf, which in places has been intentionally stripped for no obvious reason. The area in the first picture is of particular concern, because the loosely bonded gravel underneath has been exposed and will be subject to rapid water erosion -- this is the primary erosion pattern on the hill.

Thoughts on the Dumyat Path

Looking back down the initial rise:

Thoughts on the Dumyat Path

And the damage to the side of the track:

Thoughts on the Dumyat Path

This area originally contained a natural rock step. This has been incomprehensibly levelled out with a large amount of material excavated from the left of the track:

Thoughts on the Dumyat Path

The next image shows the excavation area. As a result of the excavation, the side of the hill has been exposed to water run-off, and will deteriorate rapidly during the winter months:

Thoughts on the Dumyat Path

Another natural rocky feature being levelled out; the material for this seems to have been simply dug up from immediately beside it, leaving deep ditches on both sides. The track at this point is somewhere in the region of 7-8m wide:

Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path

The next image shows the same area as the second image I posted earlier today; in the course of the day the contractor piled up a large amount of topsoil into this area, obliterating the natural step at the end of this section:

Thoughts on the Dumyat Path

Detail of the fill, this is well over a foot in depth:

Thoughts on the Dumyat Path

Just for reference, this is the old path that we are fixing here:

Thoughts on the Dumyat Path

And the area the topsoil for the fill was excavated from:

Thoughts on the Dumyat Path

The contractor is McGowan Ltd, and they seem to have a track record:

Thoughts on the Dumyat Path

In case you find these images disturbing, let me assure you that they in fact don't do justice to the ugly reality; you might want to see for yourself if you are local.

Update 9 Sep 2017, 9:15

It appears the local representative of Cycling UK was given access to the plans for the path, available here. What is clear is that the work undertaken is not in keeping with the agreed plans. Notably, the section covered by the following images was supposed to be 'hand-built only' using stone pitching; what a mess:

Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path Thoughts on the Dumyat Path

I am also concerned that the plan for the natural bog area around NS 8309 9761 is 'raised hardcore'.

Formal complaints to be addressed to SPEN on customercare@spenergynetworks.com.

by tf at September 07, 2017 10:18 AM

September 04, 2017

Tomas Frydrych

GPS Accuracy and the Automation Paradox

GPS Accuracy and the Automation Paradox

It's been a busy summer for the UK's MRTs. Not a week has gone by without someone getting lost in our hills, without yet another call to learn how to use a map and compass and not to rely on phone apps. This in turn elicits other comments that the problem is not in the use of digital tools per se, but in not being able to navigate. True as this is, the calls for learning traditional navigation should not be dismissed as Luddite, for not being able to navigate competently and the use of digital technologies are intrinsically linked.

GPS Accuracy

Before getting onto the bigger problem, the question of GPS accuracy is perhaps worth digressing into. Our perception of what the GPS in our phone can do for us is skewed by our urban experience. We use mapping applications daily to locate street addresses, and we have got used to how accurate these things are in that context.

However, many of us do not appreciate that because GPS does not work well at all in cities, mobile phones use so-called Assisted GPS. With A-GPS the accurate location is derived from the known positions of mobile phone masts and the presence of domestic wifi IDs, which street mapping vehicles collect and store in massive databases. And, obviously, A-GPS only works in cities and with a working Internet connection (which is why your phone will complain when you use the GPS while in airplane mode).

So how accurate is GPS alone?

First, there is the accuracy of the GPS service per se. This is the simple part: the US Government undertakes to operate the service in such a manner that a user in the worst location relative to the current position of the satellites can achieve grid accuracy of ±17m and altitude accuracy of ±37m in 95% of cases. You can often get better results, but need to allow for even bigger error 5% of the time.

Then there is how well the device on the ground can access and process that service. The above numbers assume a clear view of the sky down to 5° above horizon, allowing for the acquisition of 6 different satellite signals. They assume no weather interference, and a good quality receiver that makes full use of all the available information.

In the hills the real conditions often are nowhere near optimal, and the tiny GPS devices in watches and phones, with their tiny aerials, are not of the requisite standard. The real errors will be, possibly significantly, bigger (e.g., I have seen an error of some 200m on an iPhone 5 on one occasion in the upper Glen Nevis).

So how good are these numbers from the perspective of mountain navigation?

The altitude resolution error is potentially around a major contour line difference and deteriorates rapidly as the number of visible satellites drops. As such GPS estimated altitude is not much use for accurate navigation. (Altitude is very useful for mountain navigation, and a much better resolution is achievable with a barometric altimeter, when used correctly.)

But if you compare the location accuracy to what a moderately competent navigator in moderately challenging circumstances will be able to estimate without the GPS, the GPS wins hands down; this is what it's designed for. Nevertheless, the GPS based location cannot be assumed to be pinpoint accurate. In complex terrain the errors can be navigationally significant, and are not good enough to keep me safe -- there are many locations in the mountains where if I overshoot my target by 30m I will die.

This is, of course, no different than following a compass bearing. Neither the compass nor the GPS are magic bullets that will keep me safe. But with GPS we seem to be conditioned to trust the technology more than it merits. Competent navigation comes down not to the tools, but to making sound judgements based on the information provided by the tools, whether it's map, compass, or GPS. And that brings me to the Automation Paradox.

The Paradox of Automation

The Automation paradox can be formulated in different ways, but it comes down to this:

Automation leads to degradation of operator skills, while, at the same time, the skills required to handle automation failures are frequently considerably higher than average.

In an industrial field, the introduction of automation largely replaces a workforce of skilled craftsmen/women with a low skilled one. This is unavoidable; the craft skills come from practice of the craft. The automation of the process does away with the practice, and doing so removes the opportunities for practising the skills.

But the bigger problem with automation is this: when automated processes fail and require manual intervention, they tend to do so in atypical, complex corner cases which require a higher level of skill to handle, skill that the workforce does not have. In industrial fields this leads to the development of a small number of exceptionally skilled (and highly paid) experts who get called in when the automatic process fails.

Navigating by GPS is subject to the Automation Paradox; it takes away the grind of reading maps, taking bearings, pacing and timing distances. This is great while it goes well, for it leaves more time to enjoy the great outdoors, and so we do. But in doing so it deprives us of the opportunities to develop the rudimentary navigation skills.

But when it fails, there is every chance it will not be on a nice sunny day with cracking visibility. It will be when the weather is awful enough to interfere with the radio waves, or in a location where no satellites are visible. The competent navigator will simply turn the unit off and carry on, for she has other tools in the bag and knows how to use them. The rest will have to call in the experts to get them off the hill (assuming their phone has a signal).

And this is the problem with the GPS. It's not that it's not a useful tool, it is, in the hands of a competent navigator. But that competency is developed through deliberate and ongoing practice of the basics. What it does not come from is the following of a GPS track downloaded from somewhere on the Internet, and let's not delude ourselves, that's how the GPS gets generally used.

P.S. If you live in the Central Belt and don't know where to start, I offer Basic Mountain Navigation and Night-Time Navigation courses.

by tf at September 04, 2017 11:05 AM

August 11, 2017

Emmanuele Bassi

GUADEC 2017

Another year, another GUADEC — my 13th to date. Definitely not getting younger, here. 😉

As usual, it was great to see so many faces, old and new. Lots of faces, as well; attendance has been really good, this year.

The 20th anniversary party was a blast; the venue was brilliant, and watching people going around the tables in order to fill in slots for the raffle tickets was hilarious. I loved every minute of it — even if the ‘90s music was an assault on my teenage years. See above, re: getting older.

The talks were, as usual, stellar. It’s always so hard to choose from the embarrassment of riches that is the submission pool, but every year I think the quality of what ends up on the schedule is so high that I cannot be sad.

Lots and lots of people were happy to see the Endless contingent at the conference; the talks from my colleagues were really well received, and I’m sure we’re going to see even more collaboration spring from the seeds planted this year.


My talk about continuous integration in GNOME was well-received, I think; I had to speed up a bit at the end because I lost time while connecting to the projector (not enough juice when on battery to power the HDMI-over-USB C connector; lesson learned for the next talk). I would have liked to get some more time to explain what I’d like to achieve with Continuous.

Do not disturb the build sheriff

I ended up talking with many people at the unconference days, in any case. If you’re interested in helping out the automated build of GNOME components and to improve the reliability of the project, feel free to drop by on irc.gnome.org (or on Matrix!) in the #testable channel.


The unconference days were also very productive, for me. The GTK+ session was, as usual, a great way to plan ahead for the future; last year we defined the new release cycle for GTK+ and jump-started the 4.0 development cycle. This year we drafted a roadmap with the remaining tasks.

I talked about Flatpak, FlatHub, Builder, performance in Mutter and GNOME Shell; I wanted to attend the Rust and GJS sessions, but that would have required the ability to clone myself, or be in more than one place at once.

During the unconference, I was also able to finally finish the GDK-Pixbuf port of the build system to Meson. Testing is very much welcome, before we bin the Autotools build and bring one of the oldest modules in GNOME into the future.

Additionally, I was invited to the GNOME Release Team, mostly to deal with the various continuous integration build issues. This, sadly, does not mean that I’m one step closer to my ascendance as the power mad dictator of all of GNOME, but it means that if there are issues with your module, you have a more-or-less official point of contact.


I can’t wait for GUADEC 2018! See you all in Almería!

by ebassi at August 11, 2017 01:33 PM

August 10, 2017

Emmanuele Bassi

Dev v Ops

In his talk at the 2017 GUADEC in Manchester, Richard Brown presented a set of objections to the current trend of new packaging systems — mostly AppImage, Snap, and Flatpak — from the perspective of a Linux distribution integrator.

I’m not entirely sure he managed to convince everybody in attendance, but he definitely presented a well-reasoned argument, steeped in history. I freely admit I went in not expecting to be convinced, but fully expecting to be engaged, and I can definitely say I left the room thoroughly satisfied, and full of questions on how we can make the application development and distribution story on Linux much better. Talking with other people involved with Flatpak and Flathub, we already identified various places where things need to be improved, and how to set up automated tools to ensure we don’t regress.

In the end, though, all I could think of in order to summarise it when describing the presentation to people that did not attend it, was this:

Linux distribution developer tells application and system developers that packaging is a solved problem, as long as everyone uses the same OS, distribution, tools, and becomes a distro packager.

Which, I’m the first to admit, seems to subject the presentation to impressive levels of lossy compression. I want to reiterate that I think Richard’s argument was presented much better than this; even if the talk was really doom and gloom predictions from a person who sees new technologies encroaching in his domain, Richard had wholesome intentions, so I feel a bit bad about condensing them into a quip.

Of course, this leaves me in quite a bind. It would be easy — incredibly easy — to dismiss a lot of the objections and points raised by Richard as a case of the Italian idiom “do not ask the inn-keeper if the house wine is good”. Nevertheless, I want to understand why those objections were made in the first place, because it’s not going to be the last time we hear them.

I’ve been turning an answer to that question over in my head for a while now, and I think I’ve finally come up with something that tries to rise to the level of Richard’s presentation, in the sense that I tried to capture the issue behind it, instead of just reacting to it.


Like many things in tech, it all comes down to developers and system administrators.

I don’t think I’m being controversial, or exposing some knowledge for initiates, when I say that Linux distributions are not made by the people that write the software they distribute. Of course, there are various exceptions, with upstream developers being involved (by volunteer or more likely paid work) with a particular distribution of their software, but by and large there has been a complete disconnect between who writes the code and who packages it.

Another, I hope, uncontroversial statement is that people on the Linux distribution side of things are mostly interested in making sure that the overall OS fits into a fairly specific view of how computer systems should work: a central authority that oversees, via policies and validation tools that implement those policies, how all the pieces fit together, up to a certain point. There’s a name for that kind of authority: system administrators.

Linux distributions are the continuation of system administration policies via other means: all installed Linux machines are viewed as part of the same shared domain, with clear lines of responsibility and ownership that trace from a user installation to the packager, to the people that set up the policies of integration, and which may or may not involve the people that make the software in the first place — after all, that’s what distro patches are for.

You may have noticed that in the past 35 years the landscape of computing has been changed by the introduction of the personal computer; that the release of Windows 95 introduced the concept of a mass marketable operating system; and that, by and large, there has been a complete disintermediation between software vendors and users. A typical computer user won’t have an administrator giving them a machine with the OS, validating and managing all the updates; instead of asking an admin to buy, test, and deploy an application for them, users went to a shop and bought a box with floppies or an optical storage — and now they just go to online version of that shop (typically owned by the OS vendor) and download it. The online store may just provide users with the guarantee that the app won’t steal all their money without asking in advance, but that’s pretty much where the responsibility of the owners of the store ends.

Linux does not have stores.

You’re still supposed to go ask your sysadmin for an application to be available, and you’re still supposed to give your application to the sysadmin so that they can deploy it — with or without modifications.

Yet, in the 25 years of their history, Linux distributions haven’t managed to convince the developers of

  • Perl
  • Python
  • Ruby
  • JavaScript
  • Rust
  • Go
  • PHP
  • insert_your_language_here

applications to defer all their dependency handling and integration to distro packagers.

They have just about managed to convince C and C++ developers, because the practices of those languages are so old and entrenched, the tools so poor, and because they share part of the same heritage; and TeX writers, for some weird reason, as you can witness by looking at how popular distributions package all the texlive modules.

The crux is that nobody, on any existing major (≥ 5% of market penetration) platform, develops applications like Linux distributors want them to. Nobody wants to. Not even the walled gardens you keep in your pocket and use to browse the web, watch a video, play a game, and occasionally make a phone call, work like that, and those are the closest thing to a heavily managed system you can get outside of a data center.

The issue is not the “managed by somebody” part; the issue is the inevitable intermediary between an application developer and an application user.

Application developers want to be able to test and have a reproducible environment, because it makes it easier for them to find bugs and to ensure that their project works as they intended; the easiest way to do that is to have people literally use the developer’s computer — this is why web applications deployed on top of a web browser engine that consumes all your CPU cores in a fiery pit are eating everybody’s lunch; or why software as a service even exists. The closest thing application developers have found to shipping their working laptop to the users of their applications, without physically shipping hardware, is to give them a read-only file system image that they have built themselves, or a list of dependencies hosted on a public source code repository that the build system will automatically check out prior to deploying the application.

The Linux distribution model is to have system administrators turned packagers control all the dependencies and the way they interact on a system; check all the licensing terms and security issues, when not accidentally introducing them; and then fight among themselves on the practicalities and ideologies of how that software should be distributed, installed, and managed.

The more I think about it, the less I understand how that ever worked in the first place. It is not a mystery, though, why it’s a dying model.

When I say that “nobody develops applications like the Linux distributions encourage and prefer” I’m not kidding around: Windows, macOS, iOS, Electron, and Android application development is heavily based on the concept of a core set of OS services; parallel installable blocks of system dependencies shipped and retired by the OS vendor; and a bundling system that allows application developers to provide their own dependencies, and control them.

Sounds familiar?

If it does, it’s because, in the past 25 years, every other platform (and I include programming languages with a fairly comprehensive standard library in that definition, not just operating systems) has implemented something like this — even in free and open source software, where this kind of invention mostly exists both as a way to replicate Linux distributions on Windows, and to route around Linux distributions on Linux.

It should not come as a surprise that there’s going to be friction; while for the past two decades architects of both operating systems and programming languages have been trying to come up with a car, Linux distributions have been investing immeasurable efforts in order to come up with a jet fueled, SRB-augmented horse. Sure: it’s so easy to run apt install foo and get foo installed. How did foo get into the repository? How can you host a repository, if you can’t, or don’t want to host it on somebody else’s infrastructure? What happens when you have to deal with a bajillion, slightly conflicting, ever changing policies? How do you keep your work up to date for everyone, and every combination? What happens if you cannot give out the keys to your application to everyone, even if the application itself may be free software?

Scalability is the problem; too many intermediaries, too many gatekeepers. Even if we had a single one, that’s still one too many. People using computers expect to access whatever application they need, at the touch of a finger or at the click of a pointer; if they cannot get to something in time for the task they have to complete, they will simply leave and never come back. Sure, they can probably appreciate the ease of installing 30 different console text editors, 25 IRC clients, and 12 email clients, all in various state of disrepair and functionality; it won’t really mean much, though, because they will be using something else by that time.

Of course, now we in the Linux world are in the situation of reimplementing the past 20 years of mistakes other platforms have made; of course, there will be growing pains, and maybe, if we’re careful enough, we can actually learn from somebody else’s blunders, and avoid falling into common traps. We’re going to have new and exciting traps to fall into!

Does this mean it’s futile, and that we should just give up on everything and just go back to our comfort zone? If we did, it would not only be a disservice to our existing users, but also to the users of every other platform. Our — and I mean the larger free software ecosystem — proposition is that we wish all users to have the tools to modify the software they are using; to ensure that the software in question has not been modified against their will or knowledge; and to access their own data, instead of merely providing it to third parties and renting out services with it. We should have fewer intermediaries, not more. We should push for adoption and access. We should provide a credible alternative to other platforms.

This will not be easy.

We will need to grow up a lot, and in little time; adopt better standards than just “it builds on my laptop” or “it works if you have been in the business for 15 years and know all the missing stairways, and by the way, isn’t that a massive bear trap covered with a tarpaulin on the way to the goal”. Complex upstream projects will have to start caring about things like reproducibility; licensing; security updates; continuous integration; QA and validation. We will need to care about stable system services, and backward compatibility. We will not be shielded by a third party any more.

The good news is: we have a lot of people that know about this stuff, and we can ask them how to make it work. We can take existing tools and make them generic and part of our build pipeline, instead of having them inside silos. We can adopt shared policies upstream instead of applying them downstream, and twisting software to adapt to all of them.

Again, this won’t be easy.

If we wanted easy, though, we would not be making free and open source software for everyone.

by ebassi at August 10, 2017 05:55 PM

July 26, 2017

Tomas Frydrych

The Unfinished Business of Stob Coir an Albannaich

I have a confession to make: I find great, some might think perverse, pleasure at times in bypassing Munro summits. It is the source of profound liberation -- once the need to 'bag' is overcome, a whole new world opens up in the hills, endless possibilities for exploring, leading to all kinds of interesting and unexpected places. Plans laid out in advance become mere sketches, to be refined and adjusted on the go and on a whim.

But I prefer to make such impromptu changes because I can rather than because my planning was poor, or because of factors beyond my control (of course, the latter often is just a euphemism for the former!); my ego does not relish that. Which is why today I have a firmer objective in mind than usual, namely Stob Coir an Albannaich.

Let me rewind. Some time back, while poring over the maps, the satisfying line of Aonach Mor (the north spur of Stob Ghabhar) caught my eye. And so just over a week ago, I set off from Alltchaorunn in Glen Etive with the intention to take Aonach Mor onto Stob Ghabhar, and then follow the natural ridge line over Stob a' Bhruaich Leith to Meall Odhar, Meall nan Eun, Meall Tarsuinn and onto Stob Coir an Albannaich, and then back over Beinn Ceitlein. About 27km with 2,100m vertical, so I am thinking 6 hours.

But for the first half of the day there is thick low cloud hanging about, and above 500m visibility is very poor, wind quite strong. I don't mind being out in such conditions. That is often when the hills are at their most magical, the brief glimpses of the hidden world down below more memorable than endless blue sky.

Also, it's not just me who can't see, and I generally find I have many more close encounters with wildlife in conditions such as these; today is no exception, and I get to see quite a few small waders, and even a curlew. But I end up moving by the needle all the way to Meall Odhar, making slow progress.

The cloud clears just as I am having my lunch on the boundary wall below Meall Odhar. The second half of the day is a cracker. I quickly pick up the walkers' path and jog to the Meall nan Eun summit where four seniors are enjoying the sunshine. They tell me not to stop, that the sight of me is too demoralising; I am thinking to myself that I hope I'll still be able to get up the hills when I reach their age, they must have thirty years on me. The usual Scottish banter; an inherent part of the hill experience, as much as the rain, the bog and the midges.

From here onwards the running is good and flowing. As I am about to start the climb onto Albannaich, I do some mental arithmetic. I am five hours in, the planned descent from Albannaich looks precarious from here, and I have no suntan lotion. I decide to cut my losses, run down the glorious granite slabs below Meall Tarsuinn, and return via the Allt a' Chaorainn glen.

It turns out to be a good call, as it still takes me two hours to get back, and I pick up a touch of sun on my neck. Nevertheless, it leaves me with a sense of unfinished business.

And so this week I am back. Not the same route, obviously. Rather, I set off from Victoria Bridge, gain the natural ridge line via Beinn Toaig's south west spur, planning to descend Albannaich either over Cuil Ghlas, or Sron na h-Iolaire. It's a wee bit bigger outing than last week (I am expecting eight hours), but there is nothing like responding to a 'failure' with a little bit more ambition!

The weather is glorious, if anything just a bit too hot, views all around. Yet, looking over Rannoch Moor it's impossible not to reflect on how denuded of trees this landscape is. Just a small woodland around the Victoria Bridge houses, a couple of small sitka plantations, and an endless sea of bright green grass. During the autumn and winter months there is a little bit more colour, but this time of the year the monotonous green drives home to me how little varied the vegetation here is.

I make good progress along the ridge (no navigation required), skip (with the aforementioned degree of satisfaction) Meall nan Eun summit and arrive on Stob Coir an Albannaich in exactly five hours. The views are breathtaking; in spite of the heat there is no haze, and Ben Nevis can be seen clearly to the north.

After a brief chat with a couple of fellow hillgoers I decide to descend down the Cuil Ghlas spur, where slabby granite promises fun.

The heat is beginning to get to me, and I can't wait to take a dip in the river below. The high tussocky grass that abounds on these hills makes the descent from the ridge to Allt Coire Chaorach awkward, and the stream is not deep enough for a dip, but I at least get some fresh water and soak my cap. Not much farther down, the river bed becomes a long section of granite slab; the water level is low, and so a lot of it is dry to run on.

As an unexpected bonus, at the top of the slabby section is a beautiful pool: waist deep, with a smooth granite bottom, and even a set of steps in. I sit in for a while cooling down, then, refreshed, jog down the slabs. When they run out, I stay in the riverbed; hopping from boulder to boulder is a lot more fun than the grassy bank, even if it's not any faster.

The floor of the glen is boggy and covered in stumps of ancient Caledonian pine; on a day like this, it is hard to not pine for their shadow. There is some new birch planted on the opposite side of the glen; perhaps one day there will be more.

34km / 2,300m ascent / 8h

by tf at July 26, 2017 05:41 PM

July 10, 2017

Tomas Frydrych

Eastern Mamores and the Grey Corries

The Mamores offer some exceptionally good running. The landscape is stunning, the natural lines are first rate, and the surface is generally runner-friendly. The famed (and now even raced) Ring of Steall provides an obvious half day outing, but I dare say the Mamores have a lot more to offer! On the western end it is well worth venturing all the way to Meall a'Chaorain for the remarkable change in geology and the unique views of Ben Nevis, but it is the dramatic 'loch and mountain' type of scenery (of a quality rare this far south) of the eastern end that is the Mamores' true crown jewel.

I have two days to play with, and rather than spending them both in the Mamores, I decide to combine the eastern Mamores with the Grey Corries that frame the other side of Glen Nevis. The forecast is not entirely ideal: it's looking reasonable for Saturday, but the forecasters cannot agree on Sunday, most predicting heavy rain and some no rain at all. Unfortunately, the eastern Mamores are subject to stalking, and this is probably my last chance of the summer to visit them -- I decide (as you do) to bet on the optimistic forecast.

Day 1 -- Eastern Mamores (20km, 2,200m ascent, 6h)

As I set off from Kinlochleven, it's looking promising. It is not raining, I can see all the way down Loch Leven, and (at the same time!) the tops of the hills above me. The sun even breaks through occasionally, reminding me I neither put on, nor brought with me, suntan lotion. I follow the path up Coire na Ba, enjoying the views. My solitude is interrupted by a single mountain hare. This is red deer country, and after recent trips to Assynt and the Cairngorms, it is impossible not to notice the relative paucity of life in these hills.

As I climb higher, the wind starts picking up, but it is not cold, and once I put on gloves I am comfortable enough in shorts and a shirt. The two summits of Na Gruagaichean are busy, the views braw. I carry on toward Binnein Mor, enjoying the cracking ridge line.

At Binnein Mor summit the well-defined path comes to an end, the Munroists venture no further. The short section beyond is the preserve of independent minds (and the Ramsay Round challengers). I take a faint path descending precariously directly off the summit, but this turns out to be a mistake. It is steep, there is poor grip, and slipping is not an option. I expect a better (and much faster) way would have been to take the north ridge, then cut down into the coire.

I end up traversing out of the precarious ground into the coire. There is another runner coming up the other way. We stop for a brief chat; he was planning to attempt the Ramsay Round solo this weekend, but decided to postpone in view of the overnight forecast. I have a lot of time for anyone taking on Ramsay solo, that is one very serious undertaking; I hope the guy gets his weather window soon.

I stop at the two small lochans to get some water and to have a couple of oatcakes and a chunk of Comte for lunch (my staple long run food), then carry on past the bigger lochan up Binnein Beg. The wind has picked up considerably, and near the summit is probably exceeding 40mph. I do not linger.

The gently sloping old stalkers' path leading eventually into Coire an Lochain is delightful, my eyes feasting on the scenery on offer -- I cannot imagine anywhere else I would rather be just now. I follow the narrow track down to Allt Coire a'Bhinnein.

The large coire formed by the two Binneins and the two Sgurrs is a classic example of glacial landscape. The retreating glacier left behind huge moraine deposits, forming spectacular steep ridges and dykes, the true nature of which is exposed at the couple of places where the vegetation has eroded, and the very ancient record of history is laid bare for all to see. The gravelly river itself too reminds me more of the untidy watercourses of the Alps than your typical Scottish mountain stream.

Rather than following the zigzag path, I take one of the moraine ridges into Coire an Lochain, then head up Sgurr Eilde Mor. The geology changes, the red tones of the hill due to significant presence of red granite. I find a small hollow sheltered from the worst of the wind and have another couple of oatcakes and Comte, then brace myself for the wind on the summit.

The gently sloping north east ridge of Sgurr Eilde Mor again lies outwith the Munroist lands, and there is no path on it to speak of. It's a glorious afternoon, the sun is out, the sky is blue, and as I descend the sodden and slimy slopes of Meall Doire na h-Achlais toward the river, I spot a couple of walkers sunning themselves on the beach below near the river junction. We say hello as I wade across and follow the watercourse up north.

It's time to look for a suitable campsite. My original plan was to camp a bit further on in the bealach between Meall a'Bhuirich and Stob Ban, but it's far too windy for a high level camp. The opportunities down here are limited, the ground is saturated with water and the grass is very tussocky, but I find a good spot on a large flat sandy deposit inside the crook of the river. It's about 18 inches above the river level, and from the vegetation it would appear it does not flood too often, perhaps only during the snow melt.

The rain arrives not much later, and I have an early night. After getting an updated forecast on the InReach SE (heavy rain throughout the night and tomorrow), I make a slight revision to tomorrow's plans (deciding to leave out the two Sgurr Choinnichs), then listen to Michael Palin narrating Ian Rankin's Knots and Crosses; the Irish-sounding fake Scottish accents are mildly amusing, and I wonder why they could not get a native speaker to do the reading, but the story is too good to be spoilt by that.

It rains steadily all night and the sound of the river gradually changes. I eventually poke my head out to check on the water level, but there are no signs of it spilling out yet, and so I continue to sleep soundly till half seven.

Day 2 -- Grey Corries (27km, 1,750m ascent, 8h)

I start the day by knocking over the first pot of boiling water, but eventually get my coffee and porridge. The river rose by about six inches during the night. The rain has turned into light drizzle, cloud base is at no more than 600m. It is completely still, and as I faff about the midges are spurring me on.

I soon head into the clouds, up Meall a'Bhuirich, where I come across a family of ptarmigan, apart from the numerous frogs, the first real wildlife since the hare I saw yesterday morning. The chicks are quite grown up by now, and they seem unfazed by my presence. At the summit of the roundish hill I have to get the compass out, the visibility is no more than 30 yards and it would be easy to descend in a completely wrong direction.

More ptarmigan on Stob Ban, but no views from the summit. I was planning to descend directly into the bealach above Coire Rath, but this side of the hill is very steep and in the poor visibility I am unable to judge if this is, in fact, possible. I decide to take the well defined path heading east instead, and then traverse the north side of the hill at around the 800m contour line.

This proves to be a good choice and I even pick up a faint, runnable, path traversing at around 780m; perhaps an old stalkers' path. The cloud has lifted a bit and I am now just around its bottom edge; the views down into the coire are absolutely magical. While there is much to be said for sunshine, I have found over the years that some of the most precious moments in the hills come when there is none.

I stop at the wee lochan for a bite to eat, and to have a drink, as this is the last water source until after the Grey Corries.

The Grey Corries quartzite ridge line is very fine and dramatic. The low cloud makes it even more so. While it would be hard to get lost here, I realise this is a great opportunity to practice some micro navigation, and so carefully track my progress along the ridge. The running is hard and very technical, and the wet quartzite is rather slippery. I make a mental note that if I ever decide to attempt the Ramsay solo, I should do so in a clockwise direction, so as to get the most serious and committing part of it out of the way on fresh legs (sod the aesthetics of finishing with the Ben!).

My plan is to go as far as the bealach before Sgurr Choinnich More and then descend through Coire Easain into Glen Nevis. The final part of the Stob Coire Easain south west ridge proves quite tricky. Pure, white, blocky quartzite abounds, and in the present conditions it is as slippery as a Zamboni-smoothed ice rink -- I am taking my time not to inadvertently reach the coire head first. An unkindness of six or so ravens is circling noisily around me, and somehow in these eerie conditions, that group name seems more than appropriate.

Near the very end of the ridge, the quartzite seems to turn pink, almost red. On close inspection, the colour is due to a fine layer of lichen (my uninformed guess is Belonia nidarosiensis, but I could be wrong, not least since the alternative name Clathroporina calcarea suggests a somewhat different habitat). Under one of these large blocks there is a tiny spruce seedling, trying to carve a life for itself. I wonder where it came from, perhaps the seed arrived on the wind, perhaps attached to the sole of someone's boots; either way, it would have come some distance. I wish it good luck, for it will need it.

A quick stop for water and the last of my oatcakes and cheese, and then down Coire Easain. Up close and personal this turns out to be a real gem of a place. Whereas from above it has the appearance of a simple bowl, it is in fact a series of wet, bright green, terraces (sphagnum moss and frogs abound), with a myriad of small pools from which numerous quickly growing streams start their life, eventually joining up into the mighty Allt Coire Easain. There are a couple of large quartzite escarpments, the lower of which is clearly visible when looking up from Glen Nevis. With a little care, wet feet notwithstanding, this can be descended through the grassy break next to the east bank of the main stream.

I cross the stream, and head south on a gently descending traverse, more or less aiming for Tom an Eite. The ground runs well, until it becomes more tussocky near the floor of the glen. I negotiate the edge of the peat hags by Tom an Eite, cross the beginnings of the Water of Nevis and start following Allt Coire a'Bhinnein along its western bank.

Initially, before a faint deer track appears, this is a bit of a trot through bog grass and peat hags strewn with numerous stumps of ancient Caledonian pine. I try to picture what it would have looked like before man came with an axe and a saw. It would have been quite a different, better, more vibrant, landscape. I am, inevitably, thinking of the previous weekend with Linda, the lovely running among the pines in upper Glen Quoich, and the encouraging progress of regeneration on Mar Lodge Estate. Perhaps one day we will see the pines returning here as well.

Nevertheless, this is yet another gem of a landscape. Midway up into the coire, for two, maybe three hundred yards, the river, at this point at least five yards wide already, is squeezed into a channel between two vertical plates of quartzite only a couple of feet apart. The effect is dramatic; I am peeking over the edge with a degree of unease, falling in would not bode well. I spot a ringed plover, and a short while later a ring ouzel -- it's good to see some other life than the red deer which abounds.

Just above this constriction in the river, Allt a' Gharbh Coire is coming down on the right in a spectacular waterfall. I am taken aback by the sight, thinking this is, by far, the most dramatic waterfall I have seen in Scotland -- it is only a few weeks back since I watched the famous Eas a' Chual Aluinn from the summit of the Stack of Glencoul, but the waterfall I am looking at just now is in a different league. (I am so gobsmacked by the sight it never occurs to me to take a photograph!)

Soon I am climbing up to Coire an Lochain for the second time this trip -- this time I take the zigzags, my legs are beginning to feel tired. But soon I am on the path that skirts Sgorr Eilde Beag, which provides a most enjoyable run back down to Kinlochleven. As I come around the 'corner' formed by the south ridge, I bump into a group of youngsters heading up to the coire on an (DoE?) expedition.

'Does it get flatter soon?!'

They are the first people I have seen, never mind met, since yesterday afternoon -- on reflection there are benefits to weather less than perfect after all!

by tf at July 10, 2017 09:13 PM

June 30, 2017

Chris Lord

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

by admin at June 30, 2017 12:02 AM

June 29, 2017

Tomas Frydrych

The Debt of Magic

My gran married young, and was widowed young, my current age. I have a very few regrets in life, but not getting to know grandpa is one of them. He was a great lover of nature, a working man with little spare time, escaping into the woods with binoculars and a camera whenever he could. A passion borne out by countless strips of film left behind. As I am getting older I too am drawn into the woods, increasingly not for 'adventure', but for the tranquility and the sense of awe it invariably brings. I sense we were kindred spirits, but I can only imagine, he died before my third birthday and I have no memories of him at all.

That in itself I find strange, for I have some very early memories. Hiding a hairbrush in the oven, not older than two, the commotion as it was 'found' (providing extra aroma for the Sunday roast). Sitting on a potty on the balcony of our new flat, not yet three, scaffolding all around, watching a cheery brickie in a red and black striped shirt at work (gone now, a scaffolding collapse some years later). Only just turned three, standing in front of the maternity hospital with my dad, waving, my sister just born.

These are all genuine enough, but at the same time mere fragments, without context and continuity. My first real memories come from the summer after my fifth birthday: it's August and my gran and I are going on a holiday in the Krkonoše mountains. A train, a bus, then a two hour hike to Petrova Bouda mountain refuge, our home for the next two weeks, wandering the hills, armed with a penknife and walking sticks gran fashioned out of some dead wood.

For me this was the time of many firsts. I learned my first (and only) German, 'Nein, das ist meine!', as my gran ripped out our sticks from the hands of a lederhosen-wearing laddie a few years older than me; he made the mistake of laying claim to them while we were browsing the inside of a souvenir shop (I often think somewhere in Germany is a middle aged man still having night terrors). It was the only time ever I heard my gran speaking German; she was fluent, but marked by the War (just as I am marked by my own history).

Up there in the hills I had my first encounter with the police state, squatting among the blaeberries, attending to sudden and necessary business, unfortunately, in the search for a modicum of privacy, on the wrong side of the border. The man in uniform, in spite of sporting a scorpion sub machine gun (the image of the large leather holster forever seared into my memory) stood no chance, and retreated hastily as gran rushed to the rescue. She was a formidable woman, and I was her oldest grandchild. Funnily enough, I was to have a similar experience a decade later in the Tatras, bivvying on the wrong side of the border only to wake up staring up the barrel of an AK-47, but that was all in the future then, though perhaps that future was already being shaped, little by little.

It was also the first time I drank from a mountain stream. A crystal clear water springing out of a miniature cave surrounded by bright green moss, right by the side of the path. Not a well known beauty spot sought after by many, but a barely noticeable trickle of water on the way to somewhere 'more memorable'. We sat there having our lunch. Gran hollowed out the end bit of a bread stick to make a cup and told me a story about elves coming to drink there at night. We passed that spring several times during those days, and I was always hoping to catch a glimpse of that magical world. I still do, perhaps now more than ever.

Gran's ventures into nature were unpretentious and uncomplicated: she came, usually on her old bicycle, she ate her sandwiches, and she saw. And I mean, really saw. Not just the superficially obvious, but the intricate interconnections of life, the true magic. One of her favourite places was a disused sand pit in a pine wood a few miles from where she lived. We spent many a summer day there picking cranberries and mushrooms, and then, while eating our pieces, watched the bees drinking from a tiny pool in the sand. Years later, newly married, I took Linda there, and I recall how, for the first time, I was struck by the sheer ordinariness of the place. Where did the magic go?

The magic is in the eye of the beholder. It is always there, it requires no superhuman abilities, no heroic deeds, no overpriced equipment. But seeing is an art, and a choice. It takes time to develop, and determination to practise. Gran was a seer, and she set me on the path of becoming one; I am, finally, beginning to make some progress.

In the years to come there were to be many more mountain streams. Times when the magic once only imagined by my younger self became real, tangible, perhaps even character-building. Times more intense, more memorable in the moment, piling on like cards, each on the top of the other, leaving just a corner here and corner there to be glimpsed beneath. The present consuming the past with the inevitability we call growing up.

Yet, every so often it is worth pausing to browse through that card deck. There are moments when we stand at crossroads we are not seeing, embarking on a path that only comes into focus as time passes. Like a faint trace of a track on a hillside hard to see up close, but clearly visible from afar in the afternoon light, I can see now that my passion for the hills and my lifelong quest for the magic go back to the two weeks of a summer long gone in a place I have long since stopped calling home.

Gran passed away earlier this year. Among her papers was an A6 card, a hiking log from those two summer weeks. A laconic record of the first 64km of a life long journey she took me on; an IOU that shall remain outstanding.

by tf at June 29, 2017 03:27 PM

June 28, 2017

Chris Lord

Goodbye Mozilla

Today is effectively my last day at Mozilla, before I start at Impossible on Monday. I’ve been here for 6 years and a bit and it’s been quite an experience. I think it’s worth reflecting on, so here we go. Fair warning: if you have no interest in me or Mozilla, this is going to make pretty boring reading.

I started on June 6th 2011, several months before the (then new, since moved) London office opened. Although my skills lay (lie?) in user interface implementation, I was hired mainly for my graphics and systems knowledge. Mozilla was in the region of 500 or so employees then I think, and it was an interesting time. I’d been working on the code-base for several years prior at Intel, on a headless backend that we used to build a Clutter-based browser for Moblin netbooks. I wasn’t completely unfamiliar with the code-base, but it still took a long time to get to grips with. We’re talking several million lines of code with several years of legacy, in a language I still consider myself to be pretty novice at (C++).

I started on the mobile platform team, and I would consider this to be my most enjoyable time at the company. The mobile platform team was a multi-discipline team that did general low-level platform work for the mobile (Android and Meego) browser. When we started, the browser was based on XUL and was multi-process. Mobile was often the breeding ground for new technologies that would later go on to desktop. It wasn’t long before we started developing a new browser based on a native Android UI, removing XUL and relegating Gecko to page rendering. At the time this felt like a disappointing move. The reason the XUL-based browser wasn’t quite satisfactory was mainly performance issues, and as a platform guy, I wanted to see those issues fixed, rather than worked around. In retrospect, this was absolutely the right decision and led to what I’d still consider to be one of Android’s best browsers.

Despite performance issues being one of the major driving forces for making this move, we did a lot of platform work at the time too. As well as being multi-process, the XUL browser had a compositor system for rendering the page, but this wasn’t easily portable. We ended up rewriting this, first almost entirely in Java (which was interesting), then with the rendering part of the compositor in native code. The input handling remained in Java for several years (pretty much until FirefoxOS, where we rewrote that part in native code, then later, switched Android over).

Most of my work during this period was based around improving performance (both perceived and real) and fluidity of the browser. Benoit Girard had written an excellent tiled rendering framework that I polished and got working with mobile. On top of that, I worked on progressive rendering and low precision rendering, which combined are probably the largest body of original work I’ve contributed to the Mozilla code-base. Neither of them are really active in the code-base at the moment, which shows how good a job I didn’t do maintaining them, I suppose.

Although most of my work was graphics-focused on the platform team, I also got to do some layout work. I worked on some over-invalidation issues before Matt Woodrow’s DLBI work landed (which nullified that, but I think that work existed in at least one release). I also worked a lot on fixed position elements staying fixed to the correct positions during scrolling and zooming, another piece of work I was quite proud of (and probably my second-biggest contribution). There was also the opportunity for some UI work, when it intersected with platform. I implemented Firefox for Android’s dynamic toolbar, and made sure it interacted well with fixed position elements (some of this work has unfortunately been undone with the move from the partially Java-based input manager to the native one). During this period, I was also regularly attending and presenting at FOSDEM.

I would consider my time on the mobile platform team a pretty happy and productive time. Unfortunately for me, those of us with graphics specialities on the mobile platform team were taken off that team and put on the graphics team. I think this was the start of a steady decline in my engagement with the company. At the time this move was made, Mozilla was apparently trying to consolidate teams around products, and this was the exact opposite happening. The move was never really explained to me and I know I wasn’t the only one that wasn’t happy about it. The graphics team was very different to the mobile platform team and I didn’t feel I fitted in as well. It felt more boisterous and less democratic than the mobile platform team, and as someone that generally shies away from arguments and just wants to get work done, it was hard not to feel sidelined slightly. I was also quite disappointed that people didn’t seem particularly familiar with the graphics work I had already been doing and that I was tasked, at least initially, with working on some very different (and very boring) desktop Linux work, rather than my speciality of mobile.

I think my time on the graphics team was pretty unproductive, with the exception of the work I did on b2g, improving tiled rendering and getting graphics memory-mapped tiles working. This was particularly hard as the interface was basically undocumented, and its implementation details could vary wildly depending on the graphics driver. Though I made a huge contribution to this work, you won’t see me credited in the tree unfortunately. I’m still a little bit sore about that. It wasn’t long after this that I requested to move to the FirefoxOS systems front-end team. I’d been doing some work there already and I’d long wanted to go back to doing UI. It felt like I either needed a dramatic change or I needed to leave. I’m glad I didn’t leave at this point.

Working on FirefoxOS was a blast. We had lots of new, very talented people, a clear and worthwhile mission, and a new code-base to work with. I worked mainly on the home-screen, first with performance improvements, then with added features (app-grouping being the major one), then with a hugely controversial and probably mismanaged (on my part, not my manager – who was excellent) rewrite. The rewrite was good and fixed many of the performance problems of what it was replacing, but unfortunately also removed features, at least initially. Turns out people really liked the app-grouping feature.

I really enjoyed my time working on FirefoxOS, and getting a nice clean break from platform work, but it was always bitter-sweet. Everyone working on the project was very enthusiastic to see it through and do a good job, but it never felt like upper management’s focus was in the correct place. We spent far too much time kowtowing to the desires of phone carriers and trying to copy Android and not nearly enough time on basic features and polish. Up until around v2.0 and maybe even 2.2, the experience of using FirefoxOS was very rough. Unfortunately, as soon as it started to show some promise and as soon as we had freedom from carriers to actually do what we set out to do in the first place, the project was cancelled, in favour of the whole Connected Devices IoT debacle.

If there was anything that killed morale for me more than my unfortunate time on the graphics team, and more than having FirefoxOS prematurely cancelled, it would have to be the Connected Devices experience. I appreciate it as an opportunity to work on random semi-interesting things for a year or so, and to get some entrepreneurship training, but the mismanagement of that whole situation was pretty epic. To take a group of hundreds of UI-focused engineers and tell them that, with very little help, they should organise themselves into small teams and create IoT products still strikes me as an idea so crazy that it definitely won’t work. Certainly not the way we did it anyway. The idea, I think, was that we’d be running several internal start-ups and we’d hopefully get some marketable products out of it. What business a not-for-profit company, based primarily on doing open-source, web-based engineering, has making physical, commercial products is questionable, but it failed long before that could be considered.

The process involved coming up with an idea, presenting it and getting approval to run with it. You would then repeat this approval process at various stages during development. It was, however, very hard to get approval for enough resources (both time and people) to finesse an idea long enough to make it obviously a good or bad idea. That aside, I found it very demoralising to not have the opportunity to write code that people could use. I did manage it a few times, in spite of what was happening, but none of this work I would consider myself particularly proud of. Lots of very talented people left during this period, and then at the end of it, everyone else was laid off. Not a good time.

Luckily for me and the team I was on, we were moved under the umbrella of Emerging Technologies before the lay-offs happened, and this also allowed us to refocus away from trying to make an under-featured and pointless shopping-list assistant and back onto the underlying speech-recognition technology. This brings us almost to present day now.

The DeepSpeech speech recognition project is an extremely worthwhile project, with a clear mission, great promise and interesting underlying technology. So why would I leave? Well, I’ve practically ended up on this team by a series of accidents and random happenstance. It’s been very interesting so far, I’ve learnt a lot and I think I’ve made a reasonable contribution to the code-base. I also rewrote python_speech_features in C for a pretty large performance boost, which I’m pretty pleased with. But at the end of the day, it doesn’t feel like this team will miss me. I too often spend my time finding work to do, and to be honest, I’m just not interested enough in the subject matter to make that work long-term. Most of my time on this project has been spent pushing to open it up and make it more transparent to people outside of the company. I’ve added model exporting, better default behaviour, a client library, a native client, Python bindings (+ example client) and most recently, Node.js bindings (+ example client). We’re starting to get noticed and starting to get external contributions, but I worry that we still aren’t transparent enough and still aren’t truly treating this as the open-source project it is and should be. I hope the team can push further towards this direction without me. I think it’ll be one to watch.

Next week, I start working at a new job doing a new thing. It’s odd to say goodbye to Mozilla after 6 years. It’s not easy, but many of my peers and colleagues have already made the jump, so it feels like the right time. One of the big reasons I’m moving, and moving to Impossible specifically, is that I want to get back to doing impressive work again. This is the largest regret I have about my time at Mozilla. I used to blog regularly when I worked at OpenedHand and Intel, because I was excited about the work we were doing and I thought it was impressive. This wasn’t just youthful exuberance (he says, realising how ridiculous that sounds at 32), I still consider much of the work we did to be impressive, even now. I want to be doing things like that again, and it feels like Impossible is a great opportunity to make that happen. Wish me luck!

by Chris Lord at June 28, 2017 11:16 AM

June 20, 2017

Ross Burton

Identifying concurrent tasks in Bitbake logs

One fun problem in massively parallel OpenEmbedded builds is that tasks with bad dependencies, or just plain bugs, can end up failing due to races on disk.

One example of this happened last week when an integration branch was being tested and one of the builds failed with the tar error "file changed as we read it" whilst it was generating the images. This means that the root filesystem was being altered whilst tar was reading it, so we've a parallelism problem. There's only a limited number of tasks that could be having this effect here, so searching the log isn't too difficult; but as they say, why do something by hand when you can write a script to do it for you?

findfails is a script that will parse a Bitbake log and maintain the set of currently active tasks, so when it finds a task that fails it can tell you what other tasks are also running:

$ findfails log
Task core-image-sato-dev-1.0-r0:do_image_tar failed
Active tasks are:
 core-image-sato-sdk-ptest-1.0-r0:do_rootfs
 core-image-sato-dev-1.0-r0:do_image_wic
 core-image-sato-dev-1.0-r0:do_image_jffs2
 core-image-sato-dev-1.0-r0:do_image_tar
 core-image-sato-sdk-1.0-r0:do_rootfs

We knew that there were changes to do_image_wic in that branch, so it was easy to identify and drop the patch that was incorrectly writing to the rootfs source directory. Sorted!
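
For anyone curious about the mechanics, the gist is a simple state machine over the log: add a task to the active set when it starts, drop it when it finishes, and dump the set when something fails. Below is a minimal sketch of that idea (not the actual findfails script); the regular expression encodes an assumption about the knotty console log format ("NOTE: recipe <recipe>: task <task>: Started/Succeeded/Failed"), which may need adjusting for your Bitbake version.

#!/usr/bin/env python3
# Sketch only: track which tasks are active in a Bitbake console log and,
# when a task fails, print everything else that was running at that moment.
import re
import sys

# Assumed log line format; adjust to match your Bitbake release.
STATUS = re.compile(r"NOTE: recipe (\S+): task (\S+): (Started|Succeeded|Failed)")

def main(logfile):
    active = set()
    with open(logfile, errors="replace") as log:
        for line in log:
            match = STATUS.search(line)
            if not match:
                continue
            recipe, task, state = match.groups()
            name = "%s:%s" % (recipe, task)
            if state == "Started":
                active.add(name)
            elif state == "Succeeded":
                active.discard(name)
            else:
                # Report the failure along with everything else in flight
                print("Task %s failed" % name)
                print("Active tasks are:")
                for t in sorted(active):
                    print(" " + t)
                active.discard(name)

if __name__ == "__main__":
    main(sys.argv[1])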

by Ross Burton at June 20, 2017 02:24 PM

June 13, 2017

Ross Burton

Dynamic source checksums in OpenEmbedded

Today we were cleaning up some old bugs in the Yocto Project bugzilla and came across a bug which was asking for the ability to specify a remote URL for the source tarball checksums (SRC_URI[md5sum] and/or SRC_URI[sha256sum]). We require a checksum for tarballs for two reasons:

  1. Download integrity. We want to be sure that the download wasn't corrupted in some way, such as truncation or bad encoding.
  2. Security. We want to be sure that the tarball hasn't changed over time, be it the maintainer regenerating the tarball for an old release but with different content (this happens more than you'd expect, with non-trivial changes too), or alternatively a malicious attack on the file which now contains malware (such as the Handbrake hack in May).

The rationale for reading remote URLs for checksums was that for files that are changing frequently it would be easier to upgrade the recipe if the checksums didn't need to be altered too. For some situations I can see this argument, but I don't want to encourage practices that nullify the security checksums. For this reason I rejected the bug but thanks to the power of Bitbake I did provide a working example of how to do this in your recipe.

The trick is to observe that the only time the SRC_URI[md5sum] is read is during do_fetch. By adding a new function to do_fetch[prefuncs] (the list of functions that will be executed before do_fetch is executed) we can download the checksums and write the variable just before the fetcher needs it. Here is a partial example that works for GNOME-style checksums, where each upload generates foo-1.2.tar.bz2, foo-1.2.tar.xz, foo-1.2.sha256sum, and foo-1.2.md5sum. To keep it interesting the checksum files contain the sums for both compression types, so we need to iterate through the file to find the right line:

SRC_URI = "https://download.gnome.org/sources/glib/2.52/glib-2.52.2.tar.xz"
SHASUM_URI = "https://download.gnome.org/sources/glib/2.52/glib-2.52.2.sha256sum"

do_fetch[prefuncs] += "fetch_checksums"
python fetch_checksums() {
    import urllib.request
    for line in urllib.request.urlopen(d.getVar("SHASUM_URI")):
        (sha, filename) = line.decode("ascii").strip().split()
        if filename == "glib-2.52.2.tar.xz":
            d.setVarFlag("SRC_URI", "sha256sum", sha)
            return
    bb.error("Could not find remote checksum")
}

Note that as fetch_checksums is a pre-function for do_fetch it is only executed just before do_fetch and not at any other time, so this doesn't impose any delays on builds that don't need to fetch.

If I were taking this beyond a proof of concept and making it into a general-purpose class there's a number of changes I would want to make:

  1. Use the proxies when calling urlopen()
  2. Extract the filename to search for from the SRC_URI
  3. Generate the checksum URL from the SRC_URI

I'll leave those as an exercise to the reader though. Patches welcome!
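
As a rough illustration only (not part of the original proof of concept), those three changes might look something like the hypothetical class below; bb.utils.export_proxies() handles the proxy side, and the derivation of the checksum URL assumes a GNOME-style layout where foo-1.2.tar.xz sits next to foo-1.2.sha256sum.

# remote-checksum.bbclass (hypothetical): a sketch of generalising the above.
do_fetch[prefuncs] += "fetch_checksums"
python fetch_checksums() {
    import os
    import urllib.request

    # 1. Make urlopen() honour the proxy settings from the datastore
    bb.utils.export_proxies(d)

    # 2. Extract the tarball filename from the first SRC_URI entry
    uri = d.getVar("SRC_URI").split()[0].split(";")[0]
    tarball = os.path.basename(uri)

    # 3. Generate the checksum URL by swapping the compression suffix
    #    (assumes GNOME-style naming; other layouts will need their own rule)
    stem = tarball
    for suffix in (".tar.xz", ".tar.bz2", ".tar.gz"):
        if stem.endswith(suffix):
            stem = stem[:-len(suffix)]
            break
    shasum_uri = "%s/%s.sha256sum" % (uri.rsplit("/", 1)[0], stem)

    for line in urllib.request.urlopen(shasum_uri):
        (sha, filename) = line.decode("ascii").strip().split()
        if filename == tarball:
            d.setVarFlag("SRC_URI", "sha256sum", sha)
            return
    bb.error("Could not find remote checksum for %s" % tarball)
}

All the usual caveats apply: automating the checksum fetch like this trades away exactly the security property the checksums exist to provide, which is why the original bug was rejected.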

by Ross Burton at June 13, 2017 01:43 PM

Tomas Frydrych

The Case for 'Make No Fire'

I agree with David Lintern that we (urgently) need a debate about the making of fires in our wild spaces, and I am grateful that he took the plunge and voiced that need. But while I think David's is, by far, the most sensible take on the matter among some of the other advice dished out recently, I want to argue that we, the anonymous multitude of outdoor folk, need to go a step further and make the use of open fire in UK wild places socially unacceptable. Not making a fire is the only responsible option available to us. Not convinced? Here is my case.

There are three key issues that need to be addressed when it comes to the responsible use of fire: 1. the risk of starting a wildfire, 2. the immediate damage a controlled fire causes, and, 3. the paucity of fuel and the damage caused by foraging for it.

Wildfire risk

There are two main ways in which a controlled fire can start a wildfire: an underground burn and an overground spark. The former happens when a fire is located on top of material that is itself combustible. Such material can smoulder underground for days, and travel some distance before flaring up. The most obvious risk here comes from peat, which happens to be the second largest carbon store on the planet (after the Amazonian rain forest), i.e., it burns extremely well, and much of it is in Scotland; the majority of our wild places are covered in it -- it might seem obvious to some not to start a fire on peat, but I suspect many don't know what peat actually looks like, particularly when dry, or don't realise how ubiquitous it is.

Peat is not the only problem: underground roots can smoulder away for ages thanks to their high resin content, and are very hard to put out; a couple of guys pissing into the fire pit is nowhere near enough. (I once spent over an hour extinguishing a fire someone built on the top of an old stump; it wouldn't stop sizzling in spite of copious amounts of water repeatedly poured onto it, and it scared the hell out of me.)

Then there is the flying spark igniting stuff outside your controlled fire pit. Open fire always generates sparks, even wood burning stoves do, when the pot is not on. In the right conditions it takes a very tiny spark to get things going. Sparks can fly considerable distances, and might well jump any perceived safe buffer zone around your fire. The amount and size of sparks generated grows with the size of the fire, plus the bigger the fire, the bigger the updraft and the less control you have over where your sparks land.

In practice, the risk of wildfire can be reduced, but is hard to eliminate. It's ultimately a numbers game. If the individual chances of unwittingly starting a wildfire from a small controlled fire are 1 in 1000, then 1,000 people each making a fire once will, on average, start one wildfire. Whatever the actual numbers, the growth in participation works against us. Let's be under no illusion: an outdoor culture that accepts fire in wild spaces as a part of the game will start wildfires. It's not a question of whether, just of how often. Is that something we are happy to accept as a price worth paying? How often is OK? Once a year, once a decade? Once a month?

Immediate Damage

Fire is the process of rapid release of energy, and that energy has to go somewhere; in our wild places it goes somewhere it should not, where it is not expected, and in doing so it effects an irreversible change. Fire kills critters in the soil. The rising and radiating heat damages vegetation in the vicinity (it takes surprisingly little heat to cause lasting damage to trees; I reckon irreversible damage happens out to at least three times the distance at which a human face can comfortably bear the radiation). Such damage is not necessarily immediately obvious, but is there, and adds up with repeated use. A single fire under the Inchriach pines might seem to do no harm, but tomorrow there's someone else's fire in its place. (Next time you pass Inchriach, look up directly above the ignominious fire pit and compare the two sides of the pine.)

There are other, more subtle issues. Ash is a fertiliser; it is also alkaline, affecting soil acidity. The repeated dumping of ash around a given locus will inevitably change that ecosystem, particularly if it's naturally nutrient poor and/or acidic. Dumping ashes into water suffers from the same problem. Individually, these might be minute, seemingly insignificant changes but they are never isolated. We might feel like it, but we are not lonely travellers exploring vast swathes of wilderness previously untouched by human foot. I am but one of many, and increasingly more, passing through any given one of the UK's wild places. The numbers, again, work against us.

A lot of folk seem to think that if they dig a fire pit, then replace the turf next day they are 'leaving no trace' -- that's not no trace, that's a scar with a little superficial make-up applied to it. It does not work even on the cosmetic level; it might look good when you are leaving, but doesn't last long. The digging damages the turf, particularly around the edges, as does the fire. The fire bakes the ground in the pit, making it hard for the replaced turf to repair its roots, and it will suffer partial or even complete dieback as a result. Even if the turf catches eventually, it takes weeks for the damaged border to repair -- the digging of single-use fire pits, particularly at places that are used repeatedly by many, is far more damaging than leaving a single, tidy fire ring to be reused. Oh, the sight of it offends you? The issues surrounding fire go a lot deeper than the cosmetics.

Paucity of fuel

In the UK we have (tragically) few trees. It takes a surprisingly large quantity of wood to feed even a small fire just to make a cup of coffee. It is possible to argue that the use of stoves, gas or otherwise, too has a considerable environmental impact, just less obvious, less localised, and that the burning of local wood is more environmentally friendly. It's a good argument, worth reflecting upon, but it only works for small numbers; it doesn't scale. Once participation gets to a certain level, wood burns a lot quicker than it grows, and in the UK we have long crossed that line.

There are many of us heading into the same locations, and it is always possible to spot straight away the places where people make fires, by how denuded of dead wood they are (been to a bothy recently?). This is not merely cosmetic, the removal of dead wood reduces biodiversity. Fewer critters on the floor mean fewer birds in the trees, and so on. Our 'wild' places suffer from the lack of biodiversity as is, no change for the worse is insignificant. If you have not brought the fuel with you, your fire is not locally sustainable, it's as simple as that. If it's not locally sustainable, it has no place in our wild locations.

The Fair Share

It comes down to the numbers. As more of us head 'out there', the chances of us collectively starting a wildfire grow, as does the damage we cause locally by having our fires. We can't beat the odds; indeed, as this spring has shown, we are not beating the odds. There is only one course of action left to us and that's to completely abstain from open fires in our wild places. I use the word abstain deliberately. The making of fire is not a necessity, not even a convenience. It's about a brief individual gratification that comes at a considerable collective price in the long run.

As our numbers grow we need the personal discipline not to claim more than our fair share of the limited and fragile resource our wild places are. The aspirations of 20 or 30 years ago are no longer enough, what once might have been acceptable no longer can be. We must move beyond individual definitions of impact and start thinking in combined, collective, terms -- sustainable behaviour is not one which individually leaves no obvious visual trace, but one which can be repeated over and over again by all without fundamentally changing the locus of our activity. I believe the concept of fair share is the key to sustainable future. Any definition of responsible behaviour that does not consciously and deliberately take numbers into account is delusory. And fire doesn't scale.

by tf at June 13, 2017 10:05 AM

June 12, 2017

Tomas Frydrych

Eagle Rock and Ben More Assynt

The south ridge of Ben More Assynt has been on my mind for a while, ever since I laid eyes on it a few years back from the summit. It's a fine line. Today is perhaps not the ideal day for it, it's fairly windy and likely to rain for a bit, but at least for now the cloud base is, just, above the Conival summit. I dither whether to take the waterproof jacket, it will definitely rain, but it's not looking very threatening just now, and it's not so cold. In the end common sense prevails and I add it to the bag, then set off from Inchnadamph up along Traligill river.

The rain starts within a couple of minutes, and by the time I reach the footbridge below the caves it is sustained enough for the jacket to come out of the bag. In a moment of fortuitous foresight I also take the camera out of its shower resistant shoulder pouch and put it in a stuff sack, before I carry on, past the caves, following Allt a'Bhealaich.

This is familiar ground I keep returning to, fascinated by the stream which, on the plateau above Cnoc nan Uamh, runs mostly underground yet leaves a clearly defined riverbed on the surface; a good running surface. Not far after the Cnoc there is a rather large sinkhole, new since the last time I was here. It's perhaps five meters in diameter, and about as deep, the grass around its edges beginning to subside further. I wonder where it leads, how big the underground space might be; I would love to have seen it forming.

Another strange thing, strewn along the grassy river bed are large balls of peat, about the size of a medicine ball, and really quite round. I don't remember seeing these before either, and wonder how they formed and where they came from, presumably they were shaped, and brought down, by a torrent of water from higher up; the dry riverbed is a witness to significant amounts of water at least occasionally running through here on the surface.

As Allt a' Bhealaich turns south east into the steepsided reentrant below Bealach Trallgil, I start climbing up the eastern slopes of Conival to pick up the faint path that passes through the bealach, watching the cloud oozing out of it. The wind has picked up considerably as it channels through the narrow gap between Conival and Braebag, it doesn't bode well for the high ground above.

The bealach provides an entry into a large round natural cauldron, River Oykel the only break in its walls. On a good clear day its circumnavigation would provide a fine outing. Today it's filled with dense cloud, there is no chance of even catching sight of the dramatic cliffs of Breabag, though I can see briefly that the Conival summit is above the clouds, and I wonder if perhaps I might be lucky enough to climb above them later.

The rain hasn't let up, and as I descend toward Dubh Loch Mor I chastise myself (not for the first time) for not reproofing my jacket. But there is no time to dwell on such trivialities as being wet. The southwest bank of the loch is made of curious dunes, high and rounded just like sand dunes, but covered in short grass. I have seen nothing like this before in Scotland's hills. I have an inkling; this entire cauldron shows classic signs of glaciation and I expect under the thin layer of peat and vegetation of these dunes is the moraine the retreating glacier left behind.

I reset my altimeter, there is some navigation to be done to reach the bealach north of Eagle Rock, and I am about to enter the cloud. As I climb, the visibility quickly drops to about fifteen yards. Suddenly I catch the sight of a white rump, then another. Thinking these are sheep I carry on ... about ten yards up wind from me is a herd of a dozen or so deer, all facing away from me. I am spotted after a few seconds and we all stand very still looking at each other for what seems like ages. I have the distinct sense I am being studied as a strange curiosity, if not being outright mocked for being out in this weather.

I break the stalemate carrying on up the hill to the 600m line, then start contouring. The weather is truly miserable now, the wind has picked up some more and I wish I had brought the Buffalo gloves, they are made for days like these. The compass and map come out so I can track my progress on the traverse until the slope aspect reaches the 140 degrees I am looking for. I consider giving Eagle Rock a miss today, but decide to man up.

Perhaps it's the name, but I am rather surprised, dare I say, disappointed, by the tame character of this hill. It can be fairly accurately described as a rounded heap of coarse aggregate, with little soil and some vegetation filling up the cracks, I suspect it's a bigger brother of the smaller dunes below, a large moraine deposit from a long time ago. It is quite unpleasant to run on in the fell shoes, there is no give in it, and on every step multiple sharp edges are making themselves felt through the soles.

I reach the trig point, take a back bearing (if anything, the visibility is even worse) and start heading down. Suddenly a female ptarmigan shoots out from a field of slightly bigger stones just to the side of me, and starts running tight circles around me, on no more than a three feet radius. The photographer in me has a brief urge to get the camera out, but it seems unfair. I admire her pluckiness, no regard for her own safety, she repeatedly tries to side step me and launch herself at me from behind, and I expect had I let her, I'd have been in for some proper pecking. But she will not take me head on, for which I am grateful as we dance together.

I have no idea in which way to retreat, for I haven't caught sight of her young; I assume they are somewhere to my right where she came from, so head away from there. She continues to circle around frantically for some twenty yards or so, and I begin to wonder whether I might in fact be heading toward her hatchlings. Then her tactic changes. She runs for about five yards ahead of me in the direction I am moving in, then crouches down watching until I get within three feet or so, then runs on another five yards, and so on. We travel this way some three hundred yards, then she flies off some ten yards to the side; for the first time she stands upright, tall and proud, wings slightly stretched, watching me carry on down the hill; her job is done. A small flock of golden plovers applaud, her textbook performance deserves nothing less.

This brief encounter made me forget all about the miserable weather and my cold hands, a moment like this well outweighs hours of discomfort, and is perhaps even unachievable without them. The rain has finally eased off, but it's clear now the whole ridge will be in this thick cloud.

The line is fine indeed, narrow, for prolonged sections a knife edge. Surprisingly, above 750m or so the wind is relatively light, nothing like in the cauldron below, which is just as well. On a different day this would be an exhilarating outing. But I am taken aback by how slippery the wet gneiss is, even in the normally so grippy fell shoes. Along the ridge are a number of tricky points; they are not scrambles in the full sense of the word, just short awkward steps and traverses that in the dry would present little difficulty, but are very exposed. Slipping on any of these is not just a question of getting hurt; a fall either side of the ridge would be measured in hundreds rather than tens of meters. I have done a fair amount of 'real' climbing in the past, but I am struggling to recall the last time I felt this far out of my comfort zone. There are no escape routes, and after negotiating a couple of particularly awkward bits, I realise I am fully committed to having to reach Ben More.

I am glad when the loose quartzite eventually signals I am nearly there. Normally quite lethal in the wet, today it feels positively grippy compared to the gneiss back there. I still can't get my head around it. As I start descending from the summit, a person, in what from a distance looks like a bright orange onesie, emerges from the fog. I expect we are both surprised to meet anyone else on the hill. We exchange a few sentences; I mention how slippery the ridge is, thinking today I'd definitely not want to be on it in heavy boots.

I jog over Conival without stopping, down the usual tourist route. I was planning to head to Loch nan Cuaran and descend from there, but am out of time; Linda will already be waiting for me at Inchnadamph. As I drop to 650m or so I finally emerge from the cloud into the sunlit glen below, shortly reaching the beautiful path in Gleann Dubh; I will it to go on longer. The rain is forgotten, my clothes are rapidly drying off, this is as perfect as life gets.

I stop briefly at the River Traligill; I am about to re-enter that other, 'normal', world and my legs are covered in peat up to my thighs -- best not to frighten the tourists having a picnic in the carpark.

PS: I think it's high time we got rid of the term 'game birds'; it befits neither us nor them.

26km / 1,700m ascent / 5h

by tf at June 12, 2017 10:47 AM

June 07, 2017

Tomas Frydrych

Assynt Ashes

Assynt Ashes

Today I walked through one of my favourite Assynt places, off the path well trodden, just me, birds, deer ... and ash from a recent wild fire. I couldn't but think of MacCaig's frogs and toads, always abundant around here, yet today conspicuous by their absence.

A flashback to earlier this year: I am just the other side of this little rise, watching a pair of soaring eagles, beyond the reach of my telephoto lens. A brief conversation with a passing local. I mention the delight of walking in the young birch woodland, the pleasure of seeing it burst into life after winter. He worries about it being destroyed by wild fire, had seen a few around here. I think him somewhat paranoid, I can't imagine it happening, not here.

Now I am weeping among the ashes. Over a tree, of which this landscape could bear many more, up in a rock face, years of carving out life away from human intrusion brought to an abrupt end, for what? Over this invasive, all destroying, parasitic species that we call human, that has long outlived its usefulness. Different tears, of sadness, of frustration.

I pity such emotional poverty that needs fire to find fulfilment in the midst of the wonders of nature. I curse those who encourage it, those who feed, and feed on, this neediness. The neo-romantic evangelists preaching Salvation through Adventure to electronic pews of awestruck followers. I loathe what 'adventure' has come to represent in recent years: the endless, selfie-powered quest for publicity, for likes, the k-tching sound likes make, the distortion of reality they inflict, the mutual ego stroking.

Out of the ashes, from a distance at least, life is slowly being reborn. Yet this is not the rebirth of a landscape that has adapted to being regularly swept by fire. On closer inspection, the new greenery is just couch grass and bracken, the latter rapidly colonising the space where heather once was. This is a landscape yet again reshaped by man, and yet again for worse, not better. For what? For the delusion of primeval 'authenticity' (carefully to be documented by a smartphone for the 'benefit' of those less authentic)?

Can we please stop looking into the pond to see how adventurous we are, and maybe, just once, look for what is there instead?

by tf at June 07, 2017 08:45 PM

June 04, 2017

Damien Lespiau

Building and using coverage-instrumented programs with Go


tl;dr We can create coverage-instrumented binaries, run them and aggregate the coverage data from running both the program and the unit tests.

In the Go world, unit testing is tightly integrated with the go tool chain. Write some unit tests, run go test and tell anyone that will listen that you really hope to never have to deal with a build system for the rest of your life.

Since Go 1.2 (Dec. 2013), go test has supported test coverage analysis: with the ‑cover option it will tell you how much of the code is being exercised by the unit tests.

So far, so good.

I've been wanting to do something slightly different for some time though. Imagine you have a command line tool. I'd like to be able to run that tool with different options and inputs, check that everything is OK (using something like bats) and gather coverage data from those runs. Even better, wouldn't it be neat to merge the coverage from the unit tests with the one from those program runs and have an aggregated view of the code paths exercised by both kinds of testing?

A word about coverage in Go

Coverage instrumentation in Go is done by rewriting the source of an application. The cover tool inserts code to increment a counter at the start of each basic block, a different counter for each basic block of course. Some metadata is kept alongside each of the counters: the location of the basic block (source file, start/end line & columns) and the size of the basic block (number of statements).

This rewriting is done automatically by go test when coverage information has been requested by the user (go test -x to see what's happening under the hood). go test then generates an instrumented test binary and runs it.

A more detailed explanation of the cover story can be found on the Go blog.

Another interesting thing is that it's possible to ask go test to write out a file containing the coverage information with the ‑coverprofile option. This file starts with the coverage mode, which is how the coverage counters are incremented. This is one of set, count or atomic (see blog post for details). The rest of the file is the list of basic blocks of the program with their metadata, one block per line:

github.com/clearcontainers/runtime/oci.go:241.29,244.9 3 4

This describes one piece of code from oci.go, composed of 3 statements without branches, starting at line 241, column 29 and finishing at line 244, column 9. This block has been reached 4 times during the execution of the test binary.

Generating coverage instrumented programs

Now, what I really want to do is to compile my program with the coverage instrumentation, not just the test binary. I also want to get the coverage data written to disk when the program finishes.

And that's when we have to start being creative.

We're going to use go test to generate that instrumented program. It's possible to define a custom TestMain function, an entry point of a kind, for the test package. TestMain is often used to set up the test environment before running the list of unit tests. We can hack it a bit to call our main function and jump to running our normal program instead of the tests! I ended up with something like this:
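
(A minimal sketch of the idea; the flag handling is simplified and the names are illustrative rather than the exact code.)

// main_test.go -- a sketch of the TestMain hack.
package main

import (
    "flag"
    "os"
    "strings"
    "testing"
)

func TestMain(m *testing.M) {
    // Let the testing package parse its own -test.* flags first
    // (-test.coverprofile among them).
    flag.Parse()

    // No extra arguments: we are being run as a plain test binary,
    // so just run the unit tests as usual.
    if flag.NArg() == 0 {
        os.Exit(m.Run())
    }

    // Otherwise we are being run as the program itself: hide the
    // -test.* flags from main() and jump straight into it. Note that
    // nothing flushes the coverage profile yet -- more on that below.
    args := []string{os.Args[0]}
    for _, arg := range os.Args[1:] {
        if !strings.HasPrefix(arg, "-test.") {
            args = append(args, arg)
        }
    }
    os.Args = args

    main()
}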


The current project I'm working on is called cc-runtime, an OCI runtime spawning virtual machines. It definitely deserves its own blog post, but for now, knowing the binary name is enough. Generating a coverage instrumented cc-runtime binary is just a matter of invoking go test:

$ go test -o cc-runtime -covermode count

I haven't used atomic as this binary is really a thin wrapper around a library and doesn't use many goroutines. I'm also assuming that the cost of using atomic operations in every branch is "quite a bit" higher than the non-atomic addition. I don't care too much if the counter is off by a bit, as long as it's strictly positive.

We can run this binary just as if it were built with go build, except it's really a test binary and we have access to the same command line arguments as we would otherwise. In particular, we can ask to output the coverage profile.

$ ./cc-runtime -test.coverprofile=list.cov list
[ outputs the list of containers ]

And let's have a look at list.cov. Hang on... there's a problem, nothing was generated: we didn't get the usual "coverage: xx.x% of statements" at the end of a go test run and there's no list.cov in the current directory. What's going on?

The testing package flushes the various profiles to disk after running all the tests. The problem is that we don't run any test here, we just call main. Fortunately enough, the API to trigger a test run is semi-public: it's not covered by the go1 API guarantee and has "internal only" warnings. Not. Even. Scared. Hacking up a dummy test suite and running it is easy enough:
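
(A sketch of the idea. The semi-public API in question is presumably testing.MainStart, which isn't subject to the Go 1 compatibility promise and changes signature between releases; the sketch below leans on the older testing.Main instead, which is stable to call but prints its usual PASS line and exits the process when done, so it has to be the very last thing the program runs. Package and function names are illustrative.)

// cover.go -- a sketch of the profile-flushing helper.
package cover

import "testing"

// FlushProfiles runs an empty test suite purely for its side effect:
// once the (zero) tests have run, the testing package writes out the
// profile requested with -test.coverprofile. It also prints a harmless
// "no tests to run" warning, and then testing.Main calls os.Exit, so
// this must be the last thing the program does.
func FlushProfiles() {
    matchAll := func(pat, str string) (bool, error) { return true, nil }
    testing.Main(matchAll, nil, nil, nil)
}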


There is still one little detail left. We need to call this FlushProfiles function at the end of the program, and that program could very well be using os.Exit anywhere. I couldn't find anything better than having a tiny exit package implementing the equivalent of the libc atexit() function and forbidding direct use of os.Exit in favour of exit.Exit(). It's even testable.

Putting everything together

It's now time for a full example. I have a small calc program that can compute additions and subtractions.

$ calc add 4 8
12

The code isn't exactly challenging:
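
(A sketch of the example; the real thing lives in the covertool repository under examples/calc, so the details here are approximate.)

// calc.go -- an illustrative version of the example program.
package main

import (
    "fmt"
    "os"
    "strconv"
)

func add(a, b int) int { return a + b }
func sub(a, b int) int { return a - b }

func main() {
    if len(os.Args) != 4 {
        // The real example routes exits through the atexit-style exit
        // wrapper described above so the coverage profile still gets
        // flushed; plain os.Exit keeps this sketch self-contained.
        fmt.Fprintf(os.Stderr, "expected 3 arguments, got %d\n", len(os.Args)-1)
        os.Exit(1)
    }

    a, _ := strconv.Atoi(os.Args[2])
    b, _ := strconv.Atoi(os.Args[3])

    switch os.Args[1] {
    case "add":
        fmt.Println(add(a, b))
    case "sub":
        fmt.Println(sub(a, b))
    default:
        fmt.Fprintf(os.Stderr, "unknown operation: %s\n", os.Args[1])
        os.Exit(1)
    }
}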


I've written some unit-tests for the add function only. We're going to run calc itself to cover the remaining statements. But first, let's see the unit-test code with both TestAdd and our hacked up TestMain function. I've swept the hacky bits away in a cover package.
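
(TestAdd itself is as plain as unit tests get; a sketch is below. TestMain is the same main()-calling hack sketched earlier, with its messy parts tucked away in that cover helper package, so it is not repeated here.)

// calc_test.go -- a sketch of the unit-test side; TestMain omitted.
package main

import "testing"

func TestAdd(t *testing.T) {
    if got := add(4, 8); got != 12 {
        t.Fatalf("add(4, 8) = %d, want 12", got)
    }
}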


Let's run the unit-tests, asking to save a unit-tests.cov profile.

$ go test -covermode count -coverprofile unit-tests.cov
PASS
coverage: 7.1% of statements
ok github.com/dlespiau/covertool/examples/calc 0.003s

Huh. 7.1%. Well, we're only testing the 1 statement of the add function after all. It's time for the magic. Let's compile an instrumented calc:

$ go test -o calc -covermode count

And run calc a few times to exercise more code paths. For each run, we'll produce a coverage profile.

$ ./calc -test.coverprofile=sub.cov sub 1 2
-1
$ covertool report sub.cov
coverage: 57.1% of statements

$ ./calc -test.coverprofile=error1.cov foo
expected 3 arguments, got 1
$ covertool report error1.cov
coverage: 21.4% of statements

$ ./calc -test.coverprofile=error2.cov mul 3 4
unknown operation: mul
$ covertool report error2.cov
coverage: 50.0% of statements

We want to aggregate those profiles into one single super-profile. While there are some hints people are interested in merging profiles from several runs (that commit is in go 1.8), the cover tool doesn't seem to support this kind of thing easily, so I wrote a little utility to do it: covertool.
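
(Conceptually the merge is simple: blocks are keyed by their location metadata, counters are summed in count/atomic mode and OR-ed together in set mode. The toy program below illustrates the idea; covertool is the real implementation.)

// mergecov.go -- a toy coverage profile merger, for illustration only.
package main

import (
    "bufio"
    "fmt"
    "os"
    "strconv"
    "strings"
)

func main() {
    mode := ""
    counts := map[string]int{} // "file.go:a.b,c.d numStmt" -> merged count
    var order []string         // first-seen order, for stable output

    for _, path := range os.Args[1:] {
        f, err := os.Open(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            line := scanner.Text()
            if strings.HasPrefix(line, "mode:") {
                // All input profiles are assumed to use the same mode.
                mode = strings.TrimSpace(strings.TrimPrefix(line, "mode:"))
                continue
            }
            // "oci.go:241.29,244.9 3 4": everything up to the last space
            // identifies the block, the last field is the counter.
            i := strings.LastIndex(line, " ")
            if i < 0 {
                continue
            }
            block := line[:i]
            n, _ := strconv.Atoi(line[i+1:])
            if _, seen := counts[block]; !seen {
                order = append(order, block)
                counts[block] = 0
            }
            if mode == "set" {
                if n > 0 {
                    counts[block] = 1 // covered or not, nothing to add up
                }
            } else {
                counts[block] += n // count/atomic: counters accumulate
            }
        }
        f.Close()
    }

    fmt.Printf("mode: %s\n", mode)
    for _, block := range order {
        fmt.Printf("%s %d\n", block, counts[block])
    }
}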

$ covertool merge -o all.cov unit-tests.cov sub.cov error1.cov error2.cov

Unfortunately again, I discovered a bug in Go's cover and so we need covertool to tell us the coverage of the aggregated profile:

$ covertool report all.cov
coverage: 92.9% of statements

Not Bad!

Still not 100% though. Let's fire the HTML coverage viewer to see what we are missing:

$ go tool cover -html=all.cov


Oh, indeed, we're missing 1 statement. We never call add from the command line so that switch case is never covered. Good. Seems like everything is working as intended.

Here be dragons

As fun as this is, it definitely feels like very few people are doing this kind of instrumented binaries. Everything is a bit rough around the edges. I may have missed something obvious, of course, but I'm sure the Internet will tell me if that's the case!

It'd be awesome if we could have something nicely integrated in the future.

by Damien Lespiau (noreply@blogger.com) at June 04, 2017 04:36 PM

May 29, 2017

Damien Lespiau

Testing for pending migrations in Django

DB migration support was added in Django 1.7, superseding South. More specifically, it's possible to automatically generate migration steps when one or more changes in the application models are detected. Definitely a nice feature!

I've written a small generic unit-test that one should be able to drop into the tests directory of any Django project and that checks there are no pending migrations, i.e. that the models are correctly in sync with the migrations declared in the application. Handy to check nobody has forgotten to git add the migration file, or that an innocent looking change in models.py doesn't need a migration step generated. Enjoy!

See the code on djangosnippets or as a github gist!

by Damien Lespiau (noreply@blogger.com) at May 29, 2017 04:15 PM

May 28, 2017

Tomas Frydrych

Fraochaidh and Glen Creran Woods

Fraochaidh and Glen Creran Woods

The hills on the west side of Glen Creran will be particularly appreciated by those searching for some peace and quiet. None of them reach the magic 3,000ft mark, and so are of no interest to the Munroist, while the relatively small numbers of Corbettistas follow the advice of the SMC guidebook and approach their target from Ballachuilish. Yet, the lower part of Glen Creran, with its lovely deciduous woodland, deserves a visit, and the east ridge of Fraochaidh offers excellent running.

Start from the large carpark at the end of the public road (NN 0357 4886). From here, you have two options. The first is to follow the marked pine marten trail to its most westerly point (NN 0290 4867). From here a path leads off in a SW direction; take this to an old stone foot bridge over Eas an Diblidh (NN 0273 4846; marked on OS 25k map).

Alternatively, set off back along the road until it crosses Eas an Diblidh, then immediately pick up the path heading up the hill (see the 25k map) to the aforementioned bridge; this is my preferred option, the surrounding woodland is beautiful, and the Eas Diblidh stream rather dramatic -- more than adequate compensation for the brief time spent on the road.

Whichever way you get to the bridge, take the level path heading SW; after just a few meters a faint track heads directly up the hill following the stream. In the spring the floor in this upper part of the woods is covered in a mix of bluebells and wild garlic, providing an unusual sensory experience.

The path eventually peters out and the woodland comes to an end. Above the woodland is a typical Scottish overgrazed hillside, and as you emerge from the woods buzzing with life, it's impossible not to be struck by the apparent lack of it. Follow the direction of the stream up to the bealach below Beinn Mhic na Ceisich (391m point on 25k map).

From the bealach head up N to the 627m summit and from here follow the old fence line onto the summit of Fraochaidh (879m). As indicated on the 25k map, this section is damp underfoot, the fence line follows the best ground. The final push onto Fraochaidh is steep, but without difficulties.

Once you have taken in the views from Fraochaidh summit, follow the faint path along its east ridge. The running and scenery are first class, with Sgorr Deargh forming the main backdrop; the path worn out to its summit, so obvious even from this distance, perhaps a cause for reflection on our impact on the hills we love.

Fraochaidh exhibits some interesting geology. The upper part of the mountain is made of slate, which, as you approach Bealach Dearg, briefly changes (to my untrained eye at least) to gneiss, promptly followed by a band of quartzite forming the knoll on its other side. Then, as the ridge turns NE, it changes to orange coloured limestone, covered in alpine flora, with excellent view back at Fraochaidh.

Follow the ridge all the way to Mam Uchdaich bealach where it is crossed by the Ballachuilish path. This has been impacted by recent forestry operations on the Glen Creran side, and a new, broad hard surface path zigzags toward the forestry track. As of the time of writing, it is still possible to pick up the original path near the first sharp turn, descending through a grassy fire break in the woods -- this much to be preferred.

The forestry track initially has little to commend it, other than being gently downhill, but for the last couple of kilometres it re-enters the lovely deciduous woodland for a pleasant final jog to the finish.

20km / 1600m ascent / ~4h

by tf at May 28, 2017 08:08 AM

May 27, 2017

Chris Lord

Free Ideas for UI Frameworks, or How To Achieve Polished UI

Ever since the original iPhone came out, I’ve had several ideas about how they managed to achieve such fluidity with relatively mediocre hardware. I mean, it was good at the time, but Android still struggles on hardware that makes that look like a 486… It’s absolutely my fault that none of these have been implemented in any open-source framework I’m aware of, so instead of sitting on these ideas and trotting them out at the pub every few months as we reminisce over what could have been, I’m writing about them here. I’m hoping that either someone takes them and runs with them, or that they get thoroughly debunked and I’m made to look like an idiot. The third option is of course that they’re ignored, which I think would be a shame, but given I’ve not managed to get the opportunity to implement them over the last decade, that would hardly be surprising. I feel I should clarify that these aren’t all my ideas, but include a mix of observation of and conjecture about contemporary software. This somewhat follows on from the post I made 6 years ago(!) So let’s begin.

1. No main-thread UI

The UI should always be able to start drawing when necessary. As careful as you may be, it’s practically impossible to write software that will remain perfectly fluid when the UI can be blocked by arbitrary processing. This seems like an obvious one to me, but I suppose the problem is that legacy makes it very difficult to adopt this at a later date. That said, difficult but not impossible. All the major web browsers have adopted this policy, with caveats here and there. The trick is to switch from the idea of ‘painting’ to the idea of ‘assembling’ and then using a compositor to do the painting. Easier said than done of course, most frameworks include the ability to extend painting in a way that would make it impossible to switch to a different thread without breaking things. But as long as it’s possible to block UI, it will inevitably happen.

2. Contextually-aware compositor

This follows on from the first point; what’s the use of having non-blocking UI if it can’t respond? Input needs to be handled away from the main thread also, and the compositor (or whatever you want to call the thread that is handling painting) needs to have enough context available that the first response to user input doesn’t need to travel to the main thread. Things like hover states, active states, animations, pinch-to-zoom and scrolling all need to be initiated without interaction on the main thread. Of course, main thread interaction will likely eventually be required to update the view, but that initial response needs to be able to happen without it. This is another seemingly obvious one – how can you guarantee a response rate unless you have a thread dedicated to responding within that time? Most browsers are doing this, but not going far enough in my opinion. Scrolling and zooming are often catered for, but not hover/active states, or initialising animations (note; initialising animations. Once they’ve been initialised, they are indeed run on the compositor, usually).

3. Memory bandwidth budget

This is one of the less obvious ideas and something I’ve really wanted to have a go at implementing, but never had the opportunity. A problem I saw a lot while working on the platform for both Firefox for Android and FirefoxOS is that given the work-load of a web browser (which is not entirely dissimilar to the work-load of any information-heavy UI), it was very easy to saturate memory bandwidth. And once you saturate memory bandwidth, you end up having to block somewhere, and painting gets delayed. We’re assuming UI updates are asynchronous (because of course – otherwise we’re blocking on the main thread). I suggest that it’s worth tracking frame time, and only allowing large asynchronous transfers (e.g. texture upload, scaling, format transforms) to take a certain amount of time. After that time has expired, it should wait on the next frame to be composited before resuming (assuming there is a composite scheduled). If the composited frame was delayed to the point that it skipped a frame compared to the last unladen composite, the amount of time dedicated to transfers should be reduced, or the transfer should be delayed until some arbitrary time (i.e. it should only be considered ok to skip a frame every X ms).

It’s interesting that you can see something very similar to this happening in early versions of iOS (I don’t know if it still happens or not) – when scrolling long lists with images that load in dynamically, none of the images will load while the list is animating. The user response was paramount, to the point that it was considered more important to present consistent response than it was to present complete UI. This priority, I think, is a lot of the reason the iPhone feels ‘magic’ and Android phones felt like junk up until around 4.0 (where it’s better, but still not as good as iOS).

4. Level-of-detail

This is something that I did get to partially implement while working on Firefox for Android, though I didn’t do such a great job of it so its current implementation is heavily compromised from how I wanted it to work. This is another idea stolen from game development. There will be times, during certain interactions, where processing time will be necessarily limited. Quite often though, during these times, a user’s view of the UI will be compromised in some fashion. It’s important to understand that you don’t always need to present the full-detail view of a UI. In Firefox for Android, this took the form that when scrolling fast enough that rendering couldn’t keep up, we would render at half the resolution. This let us render more, and faster, giving the impression of a consistent UI even when the hardware wasn’t quite capable of it. I notice Microsoft doing similar things since Windows 8; notice how the quality of image scaling reduces markedly while scrolling or animations are in progress. This idea is very implementation-specific. What can be dropped and what you want to drop will differ between platforms, form-factors, hardware, etc. Generally though, some things you can consider dropping: Sub-pixel anti-aliasing, high-quality image scaling, render resolution, colour-depth, animations. You may also want to consider showing partial UI if you know that it will very quickly be updated. The Android web-browser during the Honeycomb years did this, and I attempted (with limited success, because it’s hard…) to do this with Firefox for Android many years ago.

Pitfalls

I think it’s easy to read ideas like this and think it boils down to “do everything asynchronously”. Unfortunately, if you take a naïve approach to that, you just end up with something that can be inexplicably slow sometimes and the only way to fix it is via profiling and micro-optimisations. It’s very hard to guarantee a consistent experience if you don’t manage when things happen. Yes, do everything asynchronously, but make sure you do your book-keeping and you manage when it’s done. It’s not only about splitting work up, it’s about making sure it’s done when it’s smart to do so.

You also need to be careful about how you measure these improvements, and to be aware that sometimes results in synthetic tests will even correlate to the opposite of the experience you want. A great example of this, in my opinion, is page-load speed on desktop browsers. All the major desktop browsers concentrate on prioritising the I/O and computation required to get the page to 100%. For heavy desktop sites, however, this means the browser is often very clunky to use while pages are loading (yes, even with out-of-process tabs – see the point about bandwidth above). I highlight this specifically on desktop, because you’re quite likely to not only be browsing much heavier sites that trigger this behaviour, but also to have multiple tabs open. So as soon as you load a couple of heavy sites, your entire browsing experience is compromised. I wouldn’t mind the site taking a little longer to load if it didn’t make the whole browser chug while doing so.

Don’t lose sight of your goals. Don’t compromise. Things might take longer to complete, deadlines might be missed… But polish can’t be overrated. Polish is what people feel and what they remember, and the lack of it can have a devastating effect on someone’s perception. It’s not always conscious or obvious either, even when you’re the developer. Ask yourself “Am I fully satisfied with this” before marking something as complete. You might still be able to ship if the answer is “No”, but make sure you don’t lose sight of that and make sure it gets the priority it deserves.

One last point I’ll make; I think to really execute on all of this, it requires buy-in from everyone. Not just engineers, not just engineers and managers, but visual designers, user experience, leadership… Everyone. It’s too easy to do a job that’s good enough and it’s too much responsibility to put it all on one person’s shoulders. You really need to be on the ball to produce the kind of software that Apple does almost routinely, but as much as they’d say otherwise, it isn’t magic.

by Chris Lord at May 27, 2017 12:00 PM

May 19, 2017

Emmanuele Bassi

Further experiments in Meson

Meson is definitely getting more traction in GNOME (and other projects), with many components adding support for it in parallel to autotools, or outright switching to it. There are still bugs, here and there, and we definitely need to improve build environments — like Continuous — to support Meson out of the box, but all in all I’m really happy about not having to deal with autotools any more, as well as being able to build the G* stack much more quickly when doing continuous integration.

Now that GTK+ has added Meson support, though, it’s time to go through the dependency chain in order to clean up and speed up the build in the lower bits of our stack. After an aborted attempt at porting GdkPixbuf, I decided to port Pango.

All in all, Pango proved to be an easy win; it took me about one day to port from Autotools to Meson, and most of it was mechanical translation from weird autoconf/automake incantations that should have been removed years ago [1]. Most of the remaining bits were:

  • ensuring that both Autotools and Meson would build the same DSOs, with the same symbols
  • generating the same introspection data and documentation
  • installing tests and data in the appropriate locations

Thanks to the ever vigilant eye of Nirbheek Chauhan, and thanks to the new Meson reference, I was also able to make the Meson build slightly more idiomatic than a straight, 1:1 port would have done.

The results are a full Meson build that takes about the same time as ./autogen.sh to run:

* autogen.sh:                         * meson
  real        0m11.149s                 real          0m2.525s
  user        0m8.153s                  user          0m1.609s
  sys         0m2.363s                  sys           0m1.206s

* make -j$(($(nproc) + 2))            * ninja
  real        0m9.186s                  real          0m3.387s
  user        0m16.295s                 user          0m6.887s
  sys         0m5.337s                  sys           0m1.318s

--------------------------------------------------------------

* autotools                           * meson + ninja
  real        0m27.669s                 real          0m5.772s
  user        0m45.622s                 user          0m8.465s
  sys         0m10.698s                 sys           0m2.357s

Not bad for a day’s worth of work.

My plan would be to merge this in the master branch pretty soon; I also have a branch that drops Autotools entirely but that can wait a cycle, as far as I’m concerned.

Now comes the hard part: porting libraries like GdkPixbuf, ATK, gobject-introspection, and GLib to Meson. There’s already a GLib port, courtesy of Centricular, but it needs further testing; GdkPixbuf is pretty terrible, since it’s a really old library; I don’t expect ATK and GObject introspection to be complicated, but the latter has a non-recursive Make layout that is full of bees.

It would be nice to get to GUADEC and have the whole G* stack build with Meson and Ninja. If you want to help out, reach out in #gtk+, on IRC or on Matrix.


  1. The Windows support still checks for GCC 2.x or 3.x flags, for instance. 

by ebassi at May 19, 2017 05:20 PM

February 23, 2017

Chris Lord

Machine Learning Speech Recognition

Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.

Project DeepSpeech

So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.

You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.

The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straight-forward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.

Getting Involved

We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need a powerful GPU(s) with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.

One of the fairly intractable problems about machine learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time to do training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area that you can very easily help with. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.

Let’s say you don’t have those resources (and very few do), what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that can be made to reduce the size or the processor requirement of training and using the model is very valuable. Even small optimisations can save lots of time when you start talking about days worth of training.

Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment. One written in Python that takes advantage of TensorFlow serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.

And Finally

Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.

by Chris Lord at February 23, 2017 04:55 PM

February 13, 2017

Emmanuele Bassi

On Vala

It seems I raised a bit of a stink on Twitter last week:

Of course, and with reason, I’ve been called out on this by various people. Luckily, it was on Twitter, so we haven’t seen articles on Slashdot and Phoronix and LWN with headlines like “GNOME developer says Vala is dead and will be removed from all servers for all eternity and you all suck”. At least, I’ve only seen a bunch of comments on Reddit about this, but nobody cares about that particular cesspool of humanity.

Sadly, 140 characters do not leave any room for nuance, so maybe I should probably clarify what I wrote on a venue with no character limit.

First of all, I’d like to apologise to people that felt I was attacking them or their technical choices: it was not my intention, but see above, re: character count. I may have only about 1000 followers on Twitter, but it seems that the network effect is still a bit greater than that, so I should be careful when wording opinions. I’d like to point out that it’s my private Twitter account, and you can only get to what it says if you follow me, or if you follow people who follow me and decide to retweet what I write.

My PSA was intended as a reflection on the state of Vala, and its impact on the GNOME ecosystem in terms of newcomers, from the perspective of a person that used Vala for his own personal projects; recommended Vala to newcomers; and has to deal with the various build issues that arise in GNOME because something broke in Vala or in projects using Vala. If you’re using Vala outside of GNOME, you have two options: either ignore all I’m saying, as it does not really apply to your case; or do a bit of soul searching, and see if what I wrote does indeed apply to you.

First of all, I’d like to qualify my assertion that Vala is a “dead language”. Of course people see activity in the Git repository, see the recent commits and think “the project is still alive”. Recent commits do not tell a complete story.

Let’s look at the project history for the past 10 cycles (roughly 2.5 years). These are the commits for every cycle, broken up in two values: one for the full repository, the other one for the whole repository except the vapi directory, which contains the VAPI files for language bindings:

Commits

Aside from the latest cycle, Vala has seen very little activity; the project itself, if we exclude binding updates, has seen less than 100 commits for every cycle — sometimes even far less. The latest cycle is a bit of an outlier, but we can notice a pattern of very little work for two/three cycles, followed by a spike. If we look at the cycle currently in progress, we can already see that the number of commits has decreased back to 55/42, as of this morning.

Commits

Number of commits is just a metric, though; more important is the number of contributors. After all, small, incremental changes may be a good thing in a language — though, spoiler alert: they are usually an indication of a series of larger issues, and we’ll come to that point later.

These are the number of developers over the same range of cycles, again split between committers to the full repository and to the full repository minus the vapi directory:

Developers

As you can see, the number of authors of changes is mostly stable, but still low. If we have few people that actively commit to the repository it means we have few people that can review a patch. It means patches linger longer and longer, while reviewers go through their queues; it means that contributors get discouraged; and, since nobody is paid to work full time on Vala, it means that any interruption caused by paid jobs will be a bottleneck on the project itself.

These concerns are not unique to a programming language: they exist for every volunteer-driven free and open source project. Programming languages, though, like core libraries, are problematic because any bottleneck causes ripple effects. You can take any stalled project you depend on, and vendor it into your own, but if that happens to the programming language you’re using, then you’re pretty much screwed.

For these reasons, we should also look at how well-distributed the workload in Vala is, i.e. which percentage of the work is done by the authors of those commits; the results are not encouraging. Over that range of cycles, only two developers routinely crossed 5% of the commits:

  • Rico Tzschichholz
  • Jürg Billeter

And Rico has been the only one to consistently author >50% of the commits. This means there’s only one person dealing with the project on a day to day basis.

As the maintainer of a project who basically had to do all the work, I cannot even begin to tell you how soul-crushing that can become. You get burned out, and you feel responsible for everyone using your code, and then you get burned out some more. I honestly don’t want Rico to burn out, and you shouldn’t, either.

So, let’s go into unfair territory. These are the commits for Rust — the compiler and standard library:

Rust

These are the commits for Go — the compiler and base library:

Go

These are the commits for Vala — both compiler and bindings:

Vala

These are the number of commits over the past year. Both languages are younger than Vala, have more tools than Vala, and are more used than Vala. Of course, it’s completely unfair to compare them, but those numbers should give you a sense of scale, of what is the current high bar for a successful programming language these days. Vala is a niche language, after all; it’s heavily piggy-backing on the GNOME community because it transpiles to C and needs a standard library and an ecosystem like the one GNOME provides. I never expected Vala to rise to the level of mindshare that Go and Rust currently occupy.

Nevertheless, we need to draw some conclusions about the current state of Vala — starting from this thread, perhaps, as it best encapsulates the issues the project is facing.

Vala, as a project, is limping along. There aren’t enough developers to actively effect change on the project; there aren’t enough developers to work on ancillary tooling — like build system integration, debugging and profiling tools, documentation. Saying that “Vala compiles to C so you can use tools meant for C” is comically missing the point, and it’s effectively like saying that “C compiles to binary code, so you can disassemble a program if you want to debug it”. Being able to inspect the language using tools native to the language is a powerful thing; if you have to do the name mangling in your head in order to set a breakpoint in GDB you are elevating the barrier of contributions way above the head of many newcomers.

Being able to effect change means also being able to introduce change effectively and without fear. This means things like continuous integration and a full test suite heavily geared towards regression testing. The test suite in Vala is made of 210 units, for a total of 5000 lines of code; the code base of Vala (vala AST, codegen, C code emitter, and the compiler) is nearly 75 thousand lines of code. There is no continuous integration, outside of the one that GNOME Continuous performs when building Vala, or the one GNOME developers perform when using jhbuild. Regressions are found after days or weeks, because developers of projects using Vala update their compiler and suddenly their projects cease to build.

I don’t want to minimise the enormous amount of work that every Vala contributor brought to the project; they are heroes, all of them, and they deserve as much credit and praise as we can give. The idea of a project-oriented, community-oriented programming language has been vindicated many times over, in the past 5 years.

If I scared you, or incensed you, then you can still blame me, and my lack of tact. You can still call me an asshole, and you can think that I’m completely uncool. What I do hope, though, is that this blog post pushes you into action. Either to contribute to Vala, or to re-new your commitment to it, so that we can look at my words in 5 years and say “boy, was Emmanuele wrong”; or to look at alternatives, and explore new venues in order to make GNOME (and the larger free software ecosystem) better.

by ebassi at February 13, 2017 01:12 PM

February 11, 2017

Emmanuele Bassi

Epoxy

Epoxy is a small library that GTK+, and other projects, use in order to access the OpenGL API in somewhat sane fashion, hiding all the awful bits of craziness that actually need to happen because apparently somebody dosed the water supply at SGI with large quantities of LSD in the mid-‘90s, or something.

As an added advantage, Epoxy is also portable on different platforms, which is a plus for GTK+.

Since I’ve started using Meson for my personal (and some work-related) projects as well, I’ve been on the lookout for adding Meson build rules to other free and open source software projects, in order to improve both their build time and portability, and to improve Meson itself.

As a small, portable project, Epoxy sounded like a good candidate for the port of its build system from autotools to Meson.

To the Bat Build Machine!

tl;dr

Since you may be interested just in the numbers, building Epoxy with Meson on my four-core Kaby Lake Core i7 with an NVMe SSD takes about 45% less time than building it with autotools.

A fairly good fraction of the autotools time is spent going through the autogen and configure phases, because neither of them is parallelised, and they create a ton of shell invocations.

Conversely, Meson’s configuration phase is incredibly fast; the whole Meson build of Epoxy fits in the same time the autogen.sh and configure scripts complete their run.

Administrivia

Epoxy is a simple library, which means it does not need a hugely complicated build system set up; it does have some interesting deviations, though, which made the porting an interesting challenge.

For instance, on Linux and similar operating systems Epoxy uses pkg-config to find things like the EGL availability and the X11 headers and libraries; on Windows, though, it relies on finding the opengl32 shared or static library object itself. This means that we get something straightforward in the former case, like:

# Optional dependencies
gl_dep = dependency('gl', required: false)
egl_dep = dependency('egl', required: false)

and something slightly less straightforward in the latter case:

if host_system == 'windows'
  # Required dependencies on Windows
  opengl32_dep = cc.find_library('opengl32', required: true)
  gdi32_dep = cc.find_library('gdi32', required: true)
endif

And, still, this is miles better than what you have to deal with when using autotools.

Let’s take a messy thing in autotools, like checking whether or not the compiler supports a set of arguments; usually, this involves some m4 macro that’s either part of autoconf-archive or some additional repository, like the xorg macros. Meson handles this in a much better way, out of the box:

# Use different flags depending on the compiler
if cc.get_id() == 'msvc'
  test_cflags = [
    '-W3',
    ...,
  ]
elif cc.get_id() == 'gcc'
  test_cflags = [
    '-Wpointer-arith',
    ...,
  ]
else
  test_cflags = [ ]
endif

common_cflags = []
foreach cflag: test_cflags
  if cc.has_argument(cflag)
    common_cflags += [ cflag ]
  endif
endforeach

In terms of speed, the configuration step could be made even faster by parallelising the compiler argument checks; right now, Meson has to do them all in a series, but nothing except some additional parsing effort would prevent Meson from running the whole set of checks in parallel, and gather the results at the end.

Generating code

In order to use the GL entry points without linking against libGL or libGLES* Epoxy takes the XML description of the API from the Khronos repository and generates the code that ends up being compiled by using a Python script to parse the XML and generating header and source files.

Additionally, and unlike most libraries in the G* stack, Epoxy stores its public headers inside a separate directory from its sources:

libepoxy
├── cross
├── doc
├── include
│   └── epoxy
├── registry
├── src
└── test

The autotools build has the src/gen_dispatch.py script create both the source and the header file for each XML at the same time using a rule processed when recursing inside the src directory, and proceeds to put the generated header under $(top_builddir)/include/epoxy, and the generated source under $(top_builddir)/src. Each code generation rule in the Makefile manually creates the include/epoxy directory under the build root to make up for parallel dispatch of each rule.

Meson makes it harder to do this kind of spooky-action-at-a-distance build, so we need to generate the headers in one pass, and the source in another. This is a bit of a let down, to be honest, and yet a build that invokes the generator script twice for each API description file is still faster under Ninja than a build with the single invocation under Make.

There are still issues in this step that are being addressed by the Meson developers; for instance, right now we have to use a custom target for each generated header and source separately instead of declaring a generator and calling it multiple times. Hopefully, this will be fixed fairly soon.

Documentation

Epoxy has a very small footprint, in terms of API, but it still benefits from having some documentation on its use. I decided to generate the API reference using Doxygen, as it's not a G* library and does not need the additional features of gtk-doc. Sadly, Doxygen's default style is absolutely terrible; it would be great if somebody could fix it to make it look half as good as what gtk-doc produces out of the box.

Cross-compilation and native builds

Now we get into “interesting” territory.

Epoxy is portable; it works on Linux and *BSD systems; on macOS; and on Windows. Epoxy also works on both Intel Architecture and on ARM.

Making it run on Unix-like systems is not at all complicated. When it comes to Windows, though, things get weird fast.

Meson uses cross files to determine the environment and toolchain of the host machine, i.e. the machine where the result of the build will eventually run. These are simple text files with key/value pairs that you can either keep in a separate repository, in case you want to share among projects; or you can keep them in your own project’s repository, especially if you want to easily set up continuous integration of cross-compilation builds.

Each toolchain has its own; for instance, this is the description of a cross compilation done on Fedora with MingW:

[binaries]
c = '/usr/bin/x86_64-w64-mingw32-gcc'
cpp = '/usr/bin/x86_64-w64-mingw32-cpp'
ar = '/usr/bin/x86_64-w64-mingw32-ar'
strip = '/usr/bin/x86_64-w64-mingw32-strip'
pkgconfig = '/usr/bin/x86_64-w64-mingw32-pkg-config'
exe_wrapper = 'wine'

This section tells Meson where the binaries of the MingW toolchain are; the exe_wrapper key is useful to run the tests under Wine, in this case.

The cross file also has an additional section for things like special compiler and linker flags:

[properties]
root = '/usr/x86_64-w64-mingw32/sys-root/mingw'
c_args = [ '-pipe', '-Wp,-D_FORTIFY_SOURCE=2', '-fexceptions', '--param=ssp-buffer-size=4', '-I/usr/x86_64-w64-mingw32/sys-root/mingw/include' ]
c_link_args = [ '-L/usr/x86_64-w64-mingw32/sys-root/mingw/lib' ]

These values are taken from the equivalent bits that Fedora provides in their MingW RPMs.

Luckily, the tool that generates the headers and source files is written in Python, so we don’t need an additional layer of complexity, with a tool built and run on a different platform and architecture in order to generate files to be built and run on a different platform.

Continuous Integration

Of course, any decent process of porting, these days, should deal with continuous integration. CI gives us confidence as to whether or not any change whatsoever we make actually works — and not just on our own computer, and our own environment.

Since Epoxy is hosted on GitHub, the quickest way to deal with continuous integration is to use TravisCI, for Linux and macOS; and Appveyor for Windows.

The requirements for Meson are just Python3 and Ninja; Epoxy also requires Python 2.7, for the dispatch generation script, and the shared libraries for GL and the native API needed to create a GL context (GLX, EGL, or WGL); it also optionally needs the X11 libraries and headers and Xvfb for running the test suite.

Since Travis offers an older version of Ubuntu LTS as its base system, we cannot build Epoxy with Meson; additionally, running the test suite is a crapshoot because the Mesa version is hopelessly out of date and will either cause most of the tests to be skipped or, worse, make them segfault. To sidestep this particular issue, I've prepared a Docker image with its own harness, and I use it as the containerised environment for Travis.

On Appveyor, thanks to the contribution of Thomas Marrinan we just need to download Python3, Python2, and Ninja, and build everything inside its own root; as an added bonus, Appveyor allows us to take the build artefacts when building from a tag, and shoving them into a zip file that gets deployed to the release page on GitHub.

Conclusion

Most of this work has been done off and on over a couple of months; the rough Meson build conversion was done last December, with the cross-compilation and native builds taking up the last bit of work.

Since Eric does not have any more spare time to devote to Epoxy, he was kind enough to give me access to the original repository, and I’ve tried to reduce the amount of open pull requests and issues there.

I’ve also released version 1.4.0 and I plan to do a 1.4.1 release soon-ish, now that I’m positive Epoxy works on Windows.

I’d like to thank:

  • Eric Anholt, for writing Epoxy and helping out when I needed a hand with it
  • Jussi Pakkanen and Nirbheek Chauhan, for writing Meson and for helping me out with my dumb questions on #mesonbuild
  • Thomas Marrinan, for working on the Appveyor integration and testing Epoxy builds on Windows
  • Yaron Cohen-Tal, for maintaining Epoxy in the interim

by ebassi at February 11, 2017 01:34 AM

January 11, 2017

Emmanuele Bassi

Constraints editing

Last year I talked about the newly added support for Apple’s Visual Format Language in Emeus, which lets you quickly describe layouts using a cross between ASCII art and predicates. For instance, I can use:

H:|-[icon(==256)]-[name_label]-|
H:[surname_label]-|
H:[email_label]-|
H:|-[button(<=icon)]
V:|-[icon(==256)]
V:|-[name_label]-[surname_label]-[email_label]-|
V:[button]-|

and obtain a layout like this one:

Boxes approximate widgets

Thanks to the contribution of my colleague Martin Abente Lahaye, Emeus now supports extensions to the VFL, namely:

  • arithmetic operators for constant and multiplication factors inside predicates, like [button1(button2 * 2 + 16)]
  • explicit attribute references, like [button1(button1.height / 2)]

This allows more expressive layout descriptions, like keeping aspect ratios between UI elements, without having to touch the code base.

Of course, editing VFL descriptions blindly is not what I consider a fun activity, so I took some time to write a simple, primitive editing tool that lets you visualize a layout expressed through VFL constraints:

I warned you that it was primitive and simple

Here’s a couple of videos showing it in action:

At some point, this could lead to a new UI tool to lay out widgets inside Builder and/or Glade.

As of now, I consider Emeus in a stable enough state for other people to experiment with it — I’ll probably make a release soon-ish. The Emeus website is up to date, as it is the API reference, and I’m happy to review pull requests and feature requests.

by ebassi at January 11, 2017 02:30 PM

December 25, 2016

Tomas Frydrych

A Year in the Hills

A Year in the Hills

TL;DR: ~440 hours of running, 3,000km travelled, 118km ascended, an FKT set on the Assynt Traverse. Yet, the numbers don't even begin to tell the story ...

It's been a good year, with some satisfying longer days in the hills: an enjoyable two day round of Glen Affric & Kintail in April (still in full-on winter conditions), a two day loop around Glen Lyon in May (taking in the Lawers and Carn Mairg ridges), a round of the seven Crianlarich Munros (East to West in May, West to East in June), a two day trot through the Mamores in September, a three day run through the Cairngorms in October (with some of the most amazing light I have ever seen, and shoes turning into solid blocks of ice overnight). There have also been many great shorter days; the Carn Eighe Horse Shoe and the Coigach Horse Shoe come to mind. But the highlight of my year, without any question, was the July Assynt Traverse: at 74km of largely off-track running and some 6,400m of vertical ascent, it is by far the most physically challenging thing I have ever attempted; setting a new FKT (23h 54min) was the icing on the cake.

The Assynt Traverse had been haunting me since the summer of 2013. During those three years I had gone through a random mixture of great enthusiasm, physical setbacks (from too much enthusiasm!), and self doubt (as the scale of the challenge had become clear). I came very close to not attempting it (again), thinking failure was inevitable. Fortunately, a brief, incidental conversation with a friend helped me to refocus -- at this scale a DNF is never a failure, just an attempt; the only real possibility of a failure is a DNS, a failure of the mind. From that point on the rest was just logistics, and some running in the most beautiful landscape I know!

The real significance of the Traverse for me, however, was neither in completing it, nor in setting the FKT. Rather, the Traverse turned out to be a condensed essence of the totality of my running experiences, neatly packaged into a single day. As such it brought much clarity into my understanding of why I run, and, in particular, what drives me into the hills.

Obviously, there are the views, at times but brief glimpses, at times sustained (and, far too often, none at all). There are the brief encounters with wildlife: the sense of awe over a golden eagle you nearly run into, the envy of a raven playing with the wind that to me, a supposedly superior species, is proving such a nuisance (and in Assynt, the ever present frogs and toads).

Then there is the whole mind over matter thing, like when merely three and a half hours into your twenty four hour venture the body declares it can’t go any further, but your mind knows it’s nothing but a load of BS, and you somehow manage to carry on. There is the simple enjoyment of running, six continuous hours of it negotiating the ridges of the Ben More Assynt massif, hopping from boulder to boulder under blue skies. There is that sense of complete physical and mental liberation as the dopamine high goes through the roof after fifteen hours of hard graft. There is the need to hold it together, sleep deprived in the wee hours on Quinag, simply because there is no other alternative; it’s just you and the hills.

All of the above are reasons why I run hills. But the reason that exceeds all of the above is the time to think it affords me, time to reflect in the peace and quiet, senses sharpened by physical exertion -- that is the real reason why I run, and why I unashamedly enjoy running in my own company.

In a place like Assynt, in the midst of the seemingly immutable, aeons-old landscape, it is impossible to escape the sense of one’s own transience and insignificance. The knowledge that these hills were here long before me, and will remain long after my brief intrusion, somehow puts everything into perspective. The hills ask not merely ‘what are you doing here?’ but also ‘what do you do when you are not here?’, and ‘why?’. They question our priorities, our commitments, or the lack thereof. They encourage us to look forward beyond the immediate horizon of tomorrow, of the next pay check.

There is much thinking to be done in twenty four hours, and on the back of that some decisions have been made in the weeks that followed, some plans laid; there are some changes on the horizon for the coming year. It’s early days, so I won’t say more for now; maybe in a couple of months.

As for my running, the Ramsay Round has been in my thoughts since the morning after Assynt -- I am toying with the unsupported solo option (I don’t think I have it in me to meet the 24h limit anyway, so I might just as well, and it simplifies the logistics), but I expect a realistic timetable for that is 2018. I am hoping for some more multiday runs; there is so much exploring still to be done in this wee country of ours, and so little time.

Happy 2017!

by tf at December 25, 2016 11:16 AM

December 17, 2016

Emmanuele Bassi

Laptop review

Dell XPS 13 (Developer Edition 2016)

After three and a half years with my trusty mid-2013 MacBook Air, I decided to get a new personal laptop. To be fair, my Air could have probably lasted another 12-18 months, even though its 8GB of RAM and Haswell Core i7 were starting to get pretty old for system development. The reason why I couldn’t keep using it reliably was that the SSD had already started showing SMART errors in January, and I already had to reset it and re-install from scratch once. Refurbishing the SSD out of warranty is still an option, if I decided to fork over a fair chunk of money and could live without a laptop for about a month1.

After getting recommendations for the previous XPS iterations by various other free software developers and Linux users, I waited until the new, Kaby Lake based model was available in the EU and ordered one. After struggling a bit with Dell’s website, I managed to get an XPS 13 with a US keyboard layout2 — which took about two weeks from order to delivery.

The hardware out of the box experience is pretty neat, with a nice, clean box; very Apple-like. The software’s first boot experience could be better, to say the least. Since I chose the Developer Edition, I got Ubuntu as the main OS instead of Windows, and I have been thoroughly underwhelmed by the effort spent by Dell and Canonical in polishing the software side of things. As soon as you boot the laptop, you’re greeted with an abstract video playing while the system does something. The video playback is not skippable, and does not have volume controls, so I got to “experience” it at full blast out of the speakers.

Ubuntu’s first boot experience UI to configure the machine is rudimentary, at best, and not really polished; it’s the installer UI without the actual installation bits, but it clearly hasn’t been refined for the HiDPI screen. The color scheme has progressively gotten worse over the years; while all other OSes are trying to convey a theme of lightness using soft tones, the dark grey, purple, and dark orange tones used by Ubuntu make the whole UI seem heavier and more oppressive.

After that, you get into Unity, and no matter how many times I try it, I still cannot enjoy using it. I also realized why various people coming from Ubuntu complain about the GNOME theme being too heavy on the whitespace: the Ubuntu default theme is super-compressed, with controls hugging together so closely that they almost seem to overlap. There is barely any affordance for the pointer, let alone for interacting through the touchscreen.

All in all, I lasted half a day on it, mostly to see what the state of stock Ubuntu was after many years of Fedora3. After that, I downloaded a Fedora 25 USB image and re-installed from scratch.

Sadly, I still have to report that Anaconda doesn’t shine at all. Luckily, I didn’t have to deal with dual booting, so I only needed to interact with the installer just enough to tell it to use the stock on disk layout and create the root user. Nevertheless, figuring out how to tell it to split my /home volume and encrypt it required me to go through the partitioning step three times because I couldn’t for the life of me understand how to commit to the layout I wanted.

After that, I was greeted by GNOME’s first boot experience — which is definitely more polished than Ubuntu’s, but it’s still a bit too “functional” and plain.

Fedora recognised the whole hardware platform out of the box: wifi, bluetooth, webcam, HiDPI screen. On the power management side, I was able to wring out about 8 hours of work (compilation, editing, web browsing, and a couple of Google hangouts) while on wifi, without having to plug in the AC.

Coming from years of Apple laptops, I was especially skeptical of the quality of the touchpad, but I have to say I was pleasantly surprised by its accuracy and feedback. It’s not MacBook-level, but it’s definitely the closest anyone has ever been to that slice of fried gold.

The only letdowns I can find are the position of the webcam, which sits at the bottom left of the panel, making for very dramatic angles when doing video calls and requiring you to never type if you don’t want your fingers to be in the way; and the power brick, which has its own proprietary connector. There’s a USB-C port, though, so there may be provisions for powering the laptop through it.

The good

  • Fully supported hardware (Fedora 25)
  • Excellent battery life
  • Nice keyboard
  • Very good touchpad

The bad

  • The position of the webcam
  • Yet another power brick with custom connector I have to lug around

Lenovo Yoga

Thanks to my employer I now have a work laptop as well, in the shape of a Lenovo Yoga 900. I honestly crossed off Lenovo as a vendor after the vast amounts of stupidity they imposed on their clients — and that was after I decided to stop buying ThinkPad-branded laptops, given their declining build quality and bad technical choices. Nevertheless, you don’t look a gift horse in the mouth.

The out of the box experience of the Yoga is very much on par with the one I had with the XPS, which is to say: fairly Apple-like.

The Yoga 900 is a fairly well-made machine. It’s an Intel Skylake platform, with a nice screen and good components. The screen can fold and turn the whole thing into a “tablet”, except that the keyboard faces downward, so it’s weird to handle in that mode. Plus, a 13” tablet is a pretty big thing to carry around. On the other hand, folding the laptop into a “tent” and using an external keyboard and pointer device is a nice twist on the whole “home office” approach. The webcam is, thankfully, centered and placed at the top of the panel — something that Lenovo has apparently changed in the 910 model, when they realised that folding the laptop would put the webcam at the bottom of the panel.

On the software side, the first boot experience into Windows 10 was definitely less than stellar. The Lenovo FBE software was not HiDPI-aware, which posed interesting challenges to the user interaction. This is something that a simple bit of QA would have found out, but apparently QA is too much to ask when dealing with a £1000 laptop. Luckily, I had to deal with that only inasmuch as I needed to get and install the latest firmware updates before installing Linux on the machine. Again, I went for Fedora.

As in the case of the Dell XPS, Fedora recognised all components of the hardware platform out of the box. Even screen rotation and folding work out of the box — though they can still get into inconsistent states when you move the laptop around, so I kind of recommend you keep the screen rotation locked until you actually need it.

On the power management side, I was impressed by how well the sleep states conserve battery power; I’m able to leave the Yoga suspended for a week and still have power on resume. The power brick has a weird USB-like connector to the laptop which makes me wonder what on earth were Lenovo engineers thinking; on the other hand, the adapter has a USB port which means you can charge it from a battery pack or from a USB adapter as well. There’s also a USB-C port, but I still haven’t tested if I can put power through it.

The keyboard is probably the biggest letdown; the travel distance and feel of the keys are definitely not up to par with the Dell XPS, or with the Apple keyboards. The 900 has an additional column of navigation keys on the right edge that invariably messes up my finger memory — though it seems that the 910 has moved them to Function key combinations.5 The power button is on the right side of the laptop, which makes for unintended suspend/resume cycles when trying to plug in the headphones, or when moving the laptop. The touchpad is, sadly, very much lacking, with ghost tap events that forced me to disable the middle-click emulation everywhere4.

The good

  • Fully supported hardware (Fedora 25)
  • Solid build
  • Nice flip action
  • Excellent power management

The bad

  • Keyboard is a toy
  • Touchpad is a pale imitation of a good pointing device

  1. Which may still happen, all things considered; I really like the Air as a travel laptop. 

  2. After almost a decade with US layouts I find the UK layout inferior to the point of inconvenience. 

  3. On my desktop machine/gaming rig I dual boot between Windows 10 and Ubuntu GNOME, mostly because of the nVidia GPU and Steam. 

  4. That also increased my hatred of the middle-click-to-paste-selection easter egg a thousandfold, and I already hated the damned thing so much that my rage burned with the intensity of a million suns. 

  5. Additionally, the keyboard layout is UK — see note 2 above. 

by ebassi at December 17, 2016 12:00 AM