Planet Closed Fist

August 15, 2019

Emmanuele Bassi

Another layer

Five years (and change) ago I was looking at the data types and API that were needed to write a 3D-capable scene graph API; I was also learning about SIMD instructions and compiler builtins on IA and ARM, as well as a bunch of math I didn’t really study in my brushes with formal higher education. The result was a small library called Graphene.

Over the years I added more API, moved the build system from Autotools over to Meson, and wrote a whole separate library for its test suite.

In the meantime, GStreamer started using Graphene in its GL element; GTK 3.9x is very much using Graphene internally and exposing it as public API; Mutter developers are working on reimplementing the various math types in their copies of Cogl and Clutter using Graphene; and Alex wrote an entire 3D engine using it.

Not bad for a side project.

Of course, now I’ll have to start maintaining Graphene like a proper grownup, which also means reporting its changes, bug fixes, and features when I’m at the end of a development cycle.

While the 1.8 development cycle consisted mostly of bug fixes with no new API, there have been a few major internal changes on the way towards 1.10:

  • I rewrote the Euler angles conversion to and from quaternions and matrices; the original implementation I cribbed from here and there was not really adequate, and broke pretty horribly when you tried to roundtrip from Euler angles to a transformation matrix and back. This also affected the conversion between Euler angles and quaternions. The new implementation is more correct, and as a side effect it now includes not just the Tait–Bryan angles, but also the classic Euler angles. All possible orders are available in both the intrinsic and extrinsic axes variants. A short round-trip sketch in C follows this list.
  • We’re dealing with floating point comparison and with infinities a bit better, now; this is usually necessary because the various vector implementations may have different behaviour, depending on the toolchain in use. A shout out goes to Intel, who bothered to add an instruction to check for infinities only in AVX 512, making it pointless for me, and causing a lot more grief than necessary.
  • The ARM NEON implementation of graphene_simd4f_t has been fixed and tested on actual ARM devices (an old Odroid I had lying around for ARMv7 and a Raspberry Pi3 for Aarch64); this means that the “this is experimental” compiler warning has been removed. I still need to run the CI on an ARM builder, but at least I can check if I’m doing something dumb, now.
  • As mentioned in the blog posts above, the whole test suite has been rewritten using µTest, which dropped a dependency on GLib; you still need GLib to get the integration with GObject, but if you’re not using that, Graphene should now be easier to build and test.
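
To make the Euler round-trip fix above a bit more concrete, here is a minimal sketch using graphene_euler_t; the entry points are part of the public Graphene C API, but treat the exact names and signatures as an assumption and check the API reference for the version you have installed:

#include <stdio.h>
#include <graphene.h>

int
main (void)
{
  graphene_euler_t angles, roundtrip;
  graphene_matrix_t m;

  /* 30 degrees of pitch, 45 of yaw, 60 of roll, in the default order */
  graphene_euler_init (&angles, 30.f, 45.f, 60.f);

  /* go to a transformation matrix and back to Euler angles */
  graphene_euler_to_matrix (&angles, &m);
  graphene_euler_init_from_matrix (&roundtrip, &m,
                                   graphene_euler_get_order (&angles));

  /* with the 1.10 implementation the round trip should be stable */
  printf ("round trip matches: %s\n",
          graphene_euler_equal (&angles, &roundtrip) ? "yes" : "no");

  return 0;
}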

On the API side:

  • there are a bunch of new functions for graphene_rect_t, courtesy of Georges Basile Stavracas Neto and Marco Trevisan
  • thanks to Marco, the graphene_rect_round() function has been deprecated in favour of the more reliable graphene_rect_round_extents()
  • graphene_quaternion_t gained new operators, like add(), multiply() and scale()
  • thanks to Alex Larsson, graphene_plane_t can now be transformed using a matrix
  • I added equality and near-equality operators for graphene_matrix_t, and a getter function to retrieve the translation components of a transformation matrix (a short example follows this list)
  • I added interpolation functions for the 2, 3, and 4-sized vectors
  • I’m working on exposing the matrix decomposition code for Gthree, but that requires some untangling of messy code, so it will land in the next development snapshot.
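
Here is a short, hedged example of some of this new API, as referenced in the list above; the function names are how the 1.10 additions are spelled as far as I can tell, but treat them and the exact signatures as assumptions and double-check the API reference before relying on them:

#include <stdio.h>
#include <graphene.h>

int
main (void)
{
  graphene_point3d_t t = GRAPHENE_POINT3D_INIT (10.f, 20.f, 0.f);
  graphene_matrix_t a, b;
  graphene_vec3_t start, end, mid;

  graphene_matrix_init_translate (&a, &t);
  graphene_matrix_init_translate (&b, &t);

  /* exact and epsilon-based matrix comparison */
  if (graphene_matrix_equal (&a, &b))
    printf ("matrices are equal\n");
  if (graphene_matrix_near (&a, &b, 0.001f))
    printf ("matrices are equal within 0.001\n");

  /* retrieve the translation components of a transformation matrix */
  printf ("translation: %g, %g, %g\n",
          graphene_matrix_get_x_translation (&a),
          graphene_matrix_get_y_translation (&a),
          graphene_matrix_get_z_translation (&a));

  /* linear interpolation between two 3-sized vectors */
  graphene_vec3_init (&start, 0.f, 0.f, 0.f);
  graphene_vec3_init (&end, 1.f, 2.f, 3.f);
  graphene_vec3_interpolate (&start, &end, 0.5, &mid);
  printf ("midpoint: %g, %g, %g\n",
          graphene_vec3_get_x (&mid),
          graphene_vec3_get_y (&mid),
          graphene_vec3_get_z (&mid));

  return 0;
}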

On the documentation side:

  • I’ve reworked the contribution guide, and added a code of conduct to the project; it doesn’t matter how many times you say “patches welcome” if you aren’t clear on how those patches should be written, submitted, and reviewed, or on what constitutes acceptable behaviour in the interactions between contributors and the maintainer
  • this landed at the tail end of 1.8, but I’ve hopefully documented the conventions of the matrix/matrix and matrix/vector operations clearly enough that people can use the Graphene API without having to read the code to understand it

This concludes the changes that will appear with the next 1.10 stable release, which will be available by the time GNOME 3.34 is out. For the time being, you can check out the latest development snapshot available on GitHub.

I don’t have many plans for the future, to be quite honest; I’ll keep an eye out for what GTK and Gthree need, and I expect that once Mutter starts using Graphene I’ll start receiving bug reports.

One thing I did try was moving to a “static” API reference using Markdeep, just like I did for µTest, and drop yet another dependency; sadly, since we need to use gtk-doc annotations for the GObject introspection data generation, we’re going to depend on gtk-doc for a little while longer.

Of course, if you are using Graphene and you find some missing functionality, feel free to open an issue, or a merge request.

by ebassi at August 15, 2019 09:16 AM

June 20, 2019

Tomas Frydrych

The Fitzroys

My shoulder hurts from the seatbelt, plumes of steam billowing from underneath the bonnet; I am sure I can smell petrol. Shaking fingers fumbling with the buckle. The driver side door won’t budge. Scrambling out over the gear stick. I put some distance between myself and the car, expecting it to burst into flames any moment.

Too many Hollywood movies.

There in the boot are some of my most treasured worldly possessions, not least a near new pair of Scarpa Fitzroy boots; even at the heavily discounted £99, a purchase I could ill afford. As the initial adrenaline spike wears off in the chill of the crisp winter morning, I pluck up the courage to retrieve them, then watch a glorious dawn over the distant hills. It will be a cracker, as Heather predicted.

In the years that followed the Fitzroys and I had trodden all over the Scottish hills, hundreds of miles of fine ridges, and even finer bogs; in the heat of the summer, and through the winters. Later on I added the Cumbraes for the winter climbing days, and Mescalitos for the summer, but the latter turned out to be terrible for walking, and so the Fitzroys remained my main summer boot. 20+ years later I still have them, still use them from time to time. One of the better £99 I have ever spent; they don’t make them like they used to.

Some twenty minutes later a car comes down the icy ungritted road, and the driver kindly lends me his mobile phone to call the AA. When there is no sign of the recovery truck an hour later I walk over to a nearby farm where the lights are now on, and ask to use their phone. As I step out back into the fine morning a couple of minutes later, the yellow truck rolls over the hill — ‘Wow, that was quick!’ says the farmer. I just laugh.

It’s a slow journey home, and I prattle, worried what Linda will say about me writing off our car; money is tight and the car is (or rather ‘was’) the key to maintaining our sanity. The AA man brings me back to reality: ‘You are lucky, son, I have seen less damage where the people did not make it.’

For a long time that is to be my first waking up thought; every day in the hills a bonus, none of them taken for granted.

by tf at June 20, 2019 11:03 AM

June 14, 2019

Tomas Frydrych

North Coast 500 — An Alternative View

The NC500 is a travesty. The idea of car touring holidays harkens back to the environmental ignorance of mid 20th century and is wholly unfit for these days of an unfolding environmental catastrophe. VisitScotland, the Scottish Government, and all those who lend their name to promoting this anachronism, should be ashamed of themselves.

The principal contribution of the NC500 to the northwest highlands is pollution — particulates from tyres and Diesel engines, chemical toilet waste dumped into roadside streams and on beaches, camping detritus and fire rings decorating any and every scenic spot reachable by car. Quiet villages along the north coast have been turned into noisy thoroughfares. During the main season the stream of traffic is incessant.

Campervans wherever you look. Invariably two up front (plus a dog), the back loaded with stacks of Heinz beans from supermarkets down south. The professional types in their shiny T5s with eye-wateringly expensive conversions, the retired couples in ever bigger mobile homes, at times towing a small car behind. Even now, still off season, you will find them hogging passing places along every minor Assynt road from early afternoon on.

Once upon a time the quiet roads of the northwest highlands provided some of the best cycle touring in the country. No more, thanks to the ever growing volume of traffic and vehicle size. Many of these vehicles are too wide for the single track roads to safely pass a pedestrian, never mind a cyclist. An opportunity missed.

This is not sustainable: the putative benefits to the local communities are failing to materialise (or so I hear from the locals); the real beneficiaries are the car makers, the campervan hire places, the oil companies. Politicians congratulating themselves on being seen doing something for the Highlands without spending any money. Meanwhile the roads crumble.

It will not (cannot) work. We only need to look at the Icelandic experience to see what’s coming.

Following the economic crash of 2008 the Icelanders threw themselves wholeheartedly into developing tourism, it was all that was left. Myriads of camper van hire companies sprung up chasing the tourist buck. It didn’t take long for the country to be overrun and completely overwhelmed by them. It dawned rapidly that they bring nothing to local communities while causing no end of environmental damage; legislation banning all forms of car-camping outwith officially designated campsites followed.

This is what needs to happen in Scotland, now.

Some will undoubtedly come out to blame the locals and the Highland Council for the associated problems: lack of adequate (read ‘free’) campervan facilities. Let’s think about this for a moment: people in purpose built luxury vehicles worth tens of thousands of pounds expecting the highlanders to, directly, or indirectly through their council tax, subsidise their holidays. Not sure what the right word for this is. Entitlement? Greed?

If the economic model for the revival of the highlands is to be tourism (which shouldn’t be automatically assumed to be the right answer, BTW), it needs to be built on bringing people in, rather than, as the NC500 does, taking them through. The Scottish Government and VisitScotland should be promoting places and communities, not roads; physical activities such as walking, cycling, paddling, not driving. This area has so much to offer that reducing it to iPhone snaps taken from a car window, as the NC500 ultimately does, is outright insulting.

I am sure my view that we should ban all roadside car camping as they did in Iceland, and heavily tax camper vans and mobile homes, will not be very popular. The campervan has become the ultimate middle class outdoor accessory, virtually all my friends have one. You will be hard pressed to find an outdoor influencer that doesn’t; vested interests creating blind spots big enough to park a Hymer in.

But I’ll voice it anyway, the Highlands deserve better.

by tf at June 14, 2019 03:36 PM

June 12, 2019

Damien Lespiau

jk - Configuration as code with TypeScript

“Of all the problems we have confronted, the ones over which the most brain power, ink, and code have been spilled are related to managing configurations.” (Borg, Omega, and Kubernetes - Lessons learned from three container-management systems over a decade)

This post is the first of a series introducing jk. We will start the series by showing a concrete example of what jk can do for you.

jk is a javascript runtime tailored for writing configuration files. The abstraction and expressive power of a programming language makes writing configuration easier and more maintainable by allowing developers to think at a higher level.

Let’s pretend we want to deploy a billing micro-service on a Kubernetes cluster. This micro-service could be defined as:

service:
  name: billing
  description: Provides the /api/billing endpoints for frontend.
  maintainer: damien@weave.works
  namespace: billing
  port: 80
  image: quay.io/acmecorp/billing:master-fd986f62
  ingress:
    path: /api/billing
  dashboards:
    - service.RPS.HTTP
  alerts:
    - service.RPS.HTTP.HighErrorRate

From this simple, reduced definition of what a micro-service is, we can generate:

  • Kubernetes Namespace, Deployment, Service and Ingress objects.
  • A ConfigMap with dashboard definitions that grafana can detect and load.
  • Alerts for Prometheus using the PrometheusRule custom resource defined by the Prometheus operator.

---
apiVersion: v1
kind: Namespace
metadata:
  name: billing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: billing
  name: billing
  namespace: billing
spec:
  revisionHistoryLimit: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: billing
    spec:
      containers:
      - image: quay.io/acmecorp/billing:master-fd986f62
        name: billing
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: billing
  name: billing
  namespace: billing
spec:
  ports:
  - port: 80
  selector:
    app: billing
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: billing
  namespace: billing
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: billing
          servicePort: 80
        path: /api/billing
---
apiVersion: v1
data:
  dashboard: '[{"annotations":{"list":[]},"editable":false,"gnetId":null,"graphTooltip":0,"hideControls":false,"id":null,"links":[],"panels":[{"aliasColors":{},"bars":false,"dashLength":10,"dashes":false,"datasource":null,"fill":1,"gridPos":{"h":7,"w":12,"x":0,"y":0},"id":2,"legend":{"alignAsTable":false,"avg":false,"current":false,"max":false,"min":false,"rightSide":false,"show":true,"total":false,"values":false},"lines":true,"linewidth":1,"links":[],"nullPointMode":"null","percentage":false,"pointradius":5,"points":false,"renderer":"flot","repeat":null,"seriesOverrides":[],"spaceLength":10,"stack":false,"steppedLine":false,"targets":[{"expr":"sum
    by (code)(sum(irate(http_request_total{job=billing}[2m])))","format":"time_series","intervalFactor":2,"legendFormat":"{{code}}","refId":"A"}],"thresholds":[],"timeFrom":null,"timeShift":null,"title":"billing
    RPS","tooltip":{"shared":true,"sort":0,"value_type":"individual"},"type":"graph","xaxis":{"buckets":null,"mode":"time","name":null,"show":true},"yaxes":[{"format":"short","label":null,"logBase":1,"max":null,"min":null,"show":true},{"format":"short","label":null,"logBase":1,"max":null,"min":null,"show":true}]},{"aliasColors":{},"bars":false,"dashLength":10,"dashes":false,"datasource":null,"fill":1,"gridPos":{"h":7,"w":12,"x":12,"y":0},"id":3,"legend":{"alignAsTable":false,"avg":false,"current":false,"max":false,"min":false,"rightSide":false,"show":true,"total":false,"values":false},"lines":true,"linewidth":1,"links":[],"nullPointMode":"null","percentage":false,"pointradius":5,"points":false,"renderer":"flot","repeat":null,"seriesOverrides":[],"spaceLength":10,"stack":false,"steppedLine":false,"targets":[{"expr":"histogram_quantile(0.99,
    sum(rate(http_request_duration_seconds_bucket{job=billing}[2m])) by (route) *
    1e3","format":"time_series","intervalFactor":2,"legendFormat":"{{route}} 99th
    percentile","refId":"A"},{"expr":"histogram_quantile(0.50, sum(rate(http_request_duration_seconds_bucket{job=billing}[2m]))
    by (route) * 1e3","format":"time_series","intervalFactor":2,"legendFormat":"{{route}}
    median","refId":"B"},{"expr":"sum(rate(http_request_total{job=billing}[2m])) /
    sum(rate(http_request_duration_seconds_count{job=billing}[2m])) * 1e3","format":"time_series","intervalFactor":2,"legendFormat":"mean","refId":"C"}],"thresholds":[],"timeFrom":null,"timeShift":null,"title":"billing
    Latency","tooltip":{"shared":true,"sort":0,"value_type":"individual"},"type":"graph","xaxis":{"buckets":null,"mode":"time","name":null,"show":true},"yaxes":[{"format":"ms","label":null,"logBase":1,"max":null,"min":null,"show":true},{"format":"short","label":null,"logBase":1,"max":null,"min":null,"show":true}]}],"refresh":"","schemaVersion":16,"style":"dark","tags":[],"time":{"from":"now-6h","to":"now"},"timepicker":{"refresh_intervals":["5s","10s","30s","1m","5m","15m","30m","1h","2h","1d"],"time_options":["5m","15m","1h","6h","12h","24h","2d","7d","30d"]},"timezone":"browser","title":"Service
    \u003e billing","uid":"","version":0}]'
kind: ConfigMap
metadata:
  labels:
    app: billing
    maintainer: damien@weave.works
  name: billing-dashboards
  namespace: billing
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: billing
    maintainer: damien@weave.works
    prometheus: global
    role: alert-rules
  name: billing
spec:
  groups:
  - name: billing-alerts.rules
    rules:
    - alert: HighErrorRate
      annotations:
        description: More than 10% of requests to the billing service are failing
          with 5xx errors
        details: '{{$value | printf "%.1f"}}% errors for more than 5m'
        service: billing
      expr: |-
        rate(http_request_total{job=billing,code=~"5.."}[2m])
            / rate(http_request_duration_seconds_count{job=billing}[2m]) * 100 > 10
      for: 5m
      labels:
        severity: critical

What’s interesting to me is that such an approach shifts writing configuration files from a big flat soup of properties to a familiar API problem: developers in charge of the platform get to define the high-level objects they want to present to their users, encode best practices, and hide details in library code.

For the curious minds, the jk script used to generate these Kubernetes objects can be found in the jk repository.

Built for configuration

We’re building jk in an attempt to advance the configuration management discussion. It offers a different take on existing solutions:

  • jk is a generation tool. We believe in a strict separation of configuration data and how that data is being used. For instance, we do not take an opinionated view on how you should deploy applications to a cluster and leave that design choice in your hands. In a sense, jk is a pure function transforming a set of inputs into configuration files.

  • jk is cross domain. jk generates JSON, YAML, HCL as well as plain text files. It allows the generation of cross-domain configuration. In the micro-service example above, grafana dashboards and Kubernetes objects are part of two different domains that are usually treated differently. We could augment the example further by defining a list of AWS resources needed for that service to operate (eg. an RDS instance) as Terraform HCL.

  • jk uses a general purpose language: javascript. The configuration domain attracts a lot of people interested in languages and the result is many new Domain Specific Languages (DSLs). We do not believe those new languages offer more expressive power than javascript and their tooling is generally lagging behind. With a widely used general purpose language, we get many things for free: unit test frameworks, linters, api documentation, refactoring tools, IDE support, static typing, ecosystem of libraries, …

  • jk is hermetic. Hermeticity is the property of producing the same output given the same input, no matter which machine the program runs on. This seems like a great property for a tool generating configuration files. We achieve this with a custom v8-based runtime exposing as little as possible from the underlying OS. For instance, you cannot access the process environment variables nor read files anywhere on the filesystem with jk.

  • jk is fast! By being an embedded DSL and using v8 under the hood, we’re significantly faster than the usual interpreters powering DSLs.

Hello, World!

The jk “Hello, World!” example generates a YAML file from a js object:

// Alice is a developer.
const alice = {
  name: 'Alice',
  beverage: 'Club-Mate',
  monitors: 2,
  languages: [
    'python',
    'haskell',
    'c++',
    '68k assembly', // Alice is cool like that!
  ],
};

// Instruct to write the alice object as a YAML file.
export default [
  { value: alice, file: `developers/${alice.name.toLowerCase()}.yaml` },
];

Run this example with:

$ jk generate -v alice.js
wrote developers/alice.yaml

This results in the developers/alice.yaml file:

beverage: Club-Mate
languages:
- python
- haskell
- c++
- 68k assembly
monitors: 2
name: Alice

Typing with TypeScript

The main reason to use a general purpose language is to benefit from its ecosystem. With javascript we can leverage type systems such as TypeScript or Flow to help with authoring configuration.

Types help in a number of ways, including when refactoring large amounts of code or defining and documenting APIs. I’d also like to show how they help at authoring time by providing context-aware auto-completion:

[Screenshot: type-driven auto-completion while filling in a Deployment container]

In the screenshot above we’re defining a container in a Deployment and the IDE only offers the fields that are valid at the cursor position along with the accompanying type information and documentation.

Similarly, typing can provide some level of validation:

[Screenshot: the IDE flagging an incomplete Deployment object via type checking]

The IDE is telling us we haven’t quite defined a valid apps/v1 Deployment. We are missing the mandatory selector field.

Status and Future work

Although it is still young, we believe jk is already useful enough to be a contender in the space. There’s definitely a lot of room for improvement though:

  • Helm integration: we’d like jk to be able to render Helm charts client side and expose the result as js objects for further manipulation.
  • Jsonnet integration: similarly, it should be possible to consume existing jsonnet programs.
  • Native TypeScript support: currently developers need to run the tsc transpiler by hand. We should be able to make jk consume TypeScript files natively a la deno.
  • Kubernetes strategic merging: the object merging primitives are currently quite basic and we’d like to extend the object merging capabilities of the standard library to implement Kubernetes strategic merging.
  • Expose type generation for Kubernetes custom resources.
  • More helper libraries to generate Grafana dashboards, custom resources for the Prometheus operator, …
  • Produce more examples: it’s easy to feel a bit overwhelmed when facing a new language and paradigm. More examples would make jk more approachable.

Try it yourself!

It’s easy to download jk from the GitHub release page and try it yourself. You can also peruse the (currently small) set of examples.

June 12, 2019 10:21 AM

June 01, 2019

Emmanuele Bassi

More little testing

Back in March, I wrote about µTest, a Behavior-Driven Development testing API for C libraries, and that I was planning to use it to replace the GLib testing API in Graphene.

As I was busy with other things in GTK, it took me a while to get back to µTest—especially because I needed some time to set up a development environment on Windows in order to port µTest there. I managed to find some time over various weekends and evenings, and ended up fixing a couple of small issues here and there, to the point that I could run µTest’s own test suite on my Windows 10 box, and then get the CI build job I have on Appveyor to succeed as well.

Setting up MSYS2 was the most time consuming bit, really

While at it, I also cleaned up the API and properly documented it.

Since depending on gtk-doc would defeat the purpose, and since I honestly dislike Doxygen, I was looking for a way to write the API reference and publish it as HTML. As luck would have it, I remembered a mention on Twitter about Markdeep, a self-contained bit of JavaScript capable of turning a Markdown document into a half decent HTML page client side. Coupled with GitHub pages, I ended up with a fairly decent online API reference that also works offline, falls back to a Markdown document when not running through JavaScript, and can get fixed via pull requests.

Now that µTest is in a decent state, I ported the Graphene test suite over to it, and now I can run it on Windows using MSVC—and MSYS2, as soon as the issue with GCC gets fixed upstream. This means that, hopefully, we won’t have regressions on Windows in the future.

The µTest API is small enough, now, that I don’t plan major changes; I don’t want to commit to full API stability just yet, but I think we’re getting close to a first stable release—definitely before Graphene 1.10 is out.

In case you think this could be useful for you: feedback, in the form of issues and pull requests, is welcome.

by ebassi at June 01, 2019 11:15 AM

April 19, 2019

Tomas Frydrych

Let It Burn

A flame stretching up to heaven. The newsrooms can’t get enough, a journalist’s dream come true. You, me, everyone, glued to our screens, riveting stuff. (Honey, make us some popcorn, will you?)

“We shall rebuild it!”, echoes through the corridors of power, “Money is no object!”.

I have no doubt.

A flame stretching up to heaven. The newsroom’s embarrassed. A 15 year old making noise, a traffic jam, a pensioner chained to railings. (Nutters. Collective shrugging of shoulders.)

“Full force of the law!”, echoes through the corridors of power (money is the object).

I have no doubt.

A skeleton thin polar bear 400 miles from home.

by tf at April 19, 2019 07:25 AM

April 14, 2019

Emmanuele Bassi

(New) Adventures in CI

One of the great advantages of moving the code hosting in GNOME to GitLab is the ability to run per-project, per-branch, and per-merge request continuous integration pipelines. While we’ve had a CI pipeline for the whole of GNOME since 2012, it is limited to the master branch of everything, so it only helps catch build issues post-merge. Additionally, we haven’t been able to run test suites on Continuous since early 2016.

Being able to run your test suite is, of course, great—assuming you do have a test suite, and you’re good at keeping it working; gating all merge requests on whether your CI pipeline passes or fails is incredibly powerful, as it not only keeps you from unknowingly merging broken code, but it also nudges you in the direction of never pushing commits directly to the master branch. The downside is that it lacks nuance; if your test suite is composed of hundreds of tests you need a way to know at a glance which ones failed. Going through the job log is kind of crude, and it’s easy to miss things.

Luckily for us, GitLab has the ability to create a cover report for your test suite results, and present it on the merge request summary, if you generate an XML report and tell the CI machinery where you put it:

  artifacts:
    reports:
      junit:
        - "${CI_PROJECT_DIR}/_build/report.xml"

Sadly, the XML format chosen by GitLab is the one generated by JUnit, and we aren’t really writing Java classes. The JUnit XML format is woefully underdocumented, with only an unofficial breakdown of the entities and structure available. On top of that, since JUnit’s XML format is undocumented, GitLab has its own quirks in how it parses it.

Okay, assuming we have nailed down the output, how about the input? Since we’re using Meson on various projects, we can rely on a machine-parseable log for the test suite. Unfortunately, Meson currently outputs something that is not really valid JSON—you have to break the log into separate lines, and parse each line into a JSON object—which is somewhat less than optimal. Hopefully future versions of Meson will generate an actual JSON file, and reduce the overhead in the tooling consuming these logs.

Nevertheless, after an afternoon of figuring out Meson’s output, and reverse engineering the JUnit XML format and the GitLab JUnit parser, I managed to write a simple script that translates Meson’s testlog.json file into a JUnit XML report that you can use with GitLab after you ran the test suite in your CI pipeline. For instance, this is what GTK does:

set +e

xvfb-run -a -s "-screen 0 1024x768x24" \
    meson test \
        -C _build \
        --timeout-multiplier 2 \
        --print-errorlogs \
        --suite=gtk \
        --no-suite=gtk:gsk \
        --no-suite=gtk:a11y

# Save the exit code, so we can reuse it
# later to pass/fail the job
exit_code=$?

# We always run the report generator, even
# if the tests failed
$srcdir/.gitlab-ci/meson-junit-report.py \
        --project-name=gtk \
        --job-id="${CI_JOB_NAME}" \
        --output=_build/${CI_JOB_NAME}-report.xml \
        _build/meson-logs/testlog.json

exit $exit_code

Which results in this:

Some assembly required; those are XFAIL reftests, but JUnit doesn’t understand the concept

The JUnit cover report in GitLab is only shown inside the merge request summary, so it’s not entirely useful if you’re developing in a branch without opening an MR immediately after you push to the repository. I prefer working on feature branches and getting the CI to run on my changes without necessarily having to care about opening the MR until my work is ready for review—especially since GitLab is not a speed demon when it comes to MRs with lots of rebases/fixup commits in them. Having a summary of the test suite results in that case is still useful, so I wrote a small conversion script that takes the testlog.json and turns it into an HTML page, with a bit of Jinja templating thrown into it to avoid hardcoding the whole thing into string chunks. Like the JUnit generator above, we can call the HTML generator right after running the test suite:

$srcdir/.gitlab-ci/meson-html-report.py \
        --project-name=GTK \
        --job-id="${CI_JOB_NAME}" \
        --output=_build/${CI_JOB_NAME}-report.html \
        _build/meson-logs/testlog.json

Then, we take the HTML file and store it as an artifact:

  artifacts:
    when: always
    paths:
      - "${CI_PROJECT_DIR}/_build/${CI_JOB_NAME}-report.html"

And GitLab will store it for us, so that we can download it or view it in the web UI.

There are additional improvements that can be made. For instance, the reftests test suite in GTK generates images, and we’re already uploading them as artifacts; since the image names are stable and determined by the test name, we can create a link to them in the HTML report itself, so we can show the result of the failed tests. With some more fancy HTML, CSS, and JavaScript, we could have a nicer output, with collapsible sections hiding the full console log. If we had a place to upload test results from multiple pipelines, we could even graph the trends in the test suite on a particular branch, and track our improvements.

All of this is, of course, not incredibly novel; nevertheless, the network effect of having a build system in Meson that lends itself to integration with additional tooling, and a code hosting infrastructure with native CI capabilities in GitLab, allows us to achieve really cool results with minimal glue code.

by ebassi at April 14, 2019 04:02 PM

April 12, 2019

Tomas Frydrych

The ‘Truly Clean Green Energy’ Fallacy

A recent UKH Opinion piece dealing with the ecological cost of the forthcoming Glen Etive micro hydro, and micro hydro in general, includes this statement: ‘[we can] produce large amounts of truly “clean and green” energy ... through solar, offshore ... and tidal energy solutions’. I have come across permutations of this argument before, and it strikes me that our assessment of the environmental cost of renewables, and our understanding of renewables in general, is somewhat simplistic, glossing over what it is renewables actually do.

Power plants, like the rest of our world, are subject to the law of conservation of energy. They don’t make energy, they convert energy from one form into another, so that it can be transported and released elsewhere. Renewables differ in one important respect: their source energy is being extracted directly from the ecosystems of their installation. In other words, renewables are principally mining operations of a resource that is an intrinsic part of active ecological processes; the fact that we cannot see the thing being mined with the naked eye doesn’t make it any less so.

Thus, renewables have a built-in ecological cost, for it is not possible to extract a significant amount of energy from an ecosystem without effecting a material change in it -- there are no ‘truly clean and green’ renewables. This is something that is not talked about, I suspect not least because the renewables industry is more about the source energy being ‘free’ than ‘clean’. Yet, we cannot afford to assume this to be inconsequential.

The scale of the problem is easier to grasp when viewed from the other end of the transaction, so let’s talk wind. A typical wind turbine today is rated a bit over 2MW, so a large farm of 150 turbines, such as the Clyde Wind Farm alongside the M74, generates around 350MW. We know that when the 300,000 homes this represents consume this energy, it radically alters the ecosystem around them, changing ambient temperature, humidity, light and noise levels, etc. It is unreasonable to expect that the ecosystems from which this energy was extracted to start with will not experience a transformation of a comparable magnitude.

Sticking with wind, it’s been known for some time that wind farms create their own micro-climates, with one of the easily measurable effects being an increase in temperature at ground level. A study by Harvard researchers Miller and Keith published last year concluded that if all US power generation was switched to wind, this would lead to an overall continental US temperature increase of 0.24C.

The above study has been misrepresented by the popular press as ‘wind turbines cause global warming’, which is, obviously, not what it says; what it does show is that renewables have their own, significant, ecological impact. As Keith summarised it,

If your perspective is the next 10 years, wind power actually has—in some respects—more climate impact than coal or gas, if your perspective is the next thousand years, then wind power is enormously cleaner than coal or gas.

The problem is that the 1,000 year perspective currently focuses strictly on combating (the easily commercialised) effects of warming, while glossing over more subtle questions of ecological integrity. This needs to change.

There is an additional issue with renewables. Since no engineering process is, or can be, 100% efficient, not all of the source energy extracted is converted to electricity. Some of it leaks back into the ecosystem in other forms. The classic example of this is noise. We know that people living in proximity to wind turbine installations report health issues, but the effect is far wider. We are aware of major effects on birds, bats, and, moving off shore, marine mammals; and, of course, these ecosystems are home to a myriad of other tightly interconnected species.

This is not meant to be a diatribe against renewables. Given that public opinion is firmly against nuclear energy, renewables are all we have in combating Climate Change, for Climate Change does change everything: if we don’t get it under control, nothing else is of consequence. Nor is this a diatribe against wind power; I have used wind as a convenient example, but these problems are, I believe, intrinsic to the whole class of renewable energy, and I expect we will be hearing increasingly more on this subject in the future.

What I want to do here is simply to draw an attention to the fact that all renewables come with intrinsic ecological costs that go beyond the obvious and visible things like loss of habitat. I draw two conclusions from these observations:

  1. The single most important thing in addressing Climate Change is not switching to renewables but a radical cut to our energy consumption. Renewables might allow us to keep the planet cooler, but they do not necessarily preserve its ecosystems.

  2. I find the ‘renewable X bad, renewable Y good’ line of reasoning a form of NIMBY-ism, missing the big picture. We can’t afford to exclude any form of renewables en masse, because when it comes to renewables too much of any one thing is going to be bad news for something somewhere. And in Scotland our renewables portfolio is too heavily weighted toward wind, it needs to be more balanced.

by tf at April 12, 2019 09:29 AM

April 06, 2019

Tomas Frydrych

On Delayed Gratification

Some of the photographers of old get rather upset when folk say ‘film slows you down’, so I won’t say that, but I’ll say it slows me down for sure. It’s not just the ‘on location’ pace, but also the time it takes before I get to see what I tried to visualise.

It starts with the negative development. While the process itself is not particularly time consuming, for reasons both environmental and economic, I tend to develop my B&W negatives in batches of two, and I rarely shoot more than a roll a week, and sometimes life permits a lot less than that.

The delay between imagining and seeing becomes even more pronounced with colour. The chemicals are designed to be mixed in batches for six rolls each, and once mixed don’t have a very long shelf life. And so in the fridge the exposed films go, until I have at least four of them, which, considering I am mostly focusing on B&W these days, can take a while (this weekend I am developing slides I took in October and November of last year; quite exciting, a couple of frames there that I thought at the time had some promise).

And then there is the printing (for me photography is mainly about the print; I have nothing against digital display of photos, but it doesn’t do it for me personally). Sometimes the print takes four hours in the darkroom, sometimes fifteen, and I can maybe find a day or two a month for this. All in all, in the last six months I have made eight photographs, and have another four or so waiting to be printed.

I expect, you, the reader, might be asking why on earth would anyone do this in this day and age? I could give here a whole list of reasons but the simple and most honest answer is ‘because I enjoy it’; the tactile nature of it, the very fact it doesn’t involve a computer (which I spend far too much time with as is).

Of course, this delayed gratification can (and does) at times turn into a delayed disappointment; the cover image is my witness. But all in all I am finding that in this instant world of ours the lack of immediate feedback is more of a benefit than a hindrance; the inability to see the result right here and now makes photography sort of an exercise in patience, even faith, for as it was said a long time ago

Faith is confidence in what we hope for and assurance about what we do not see, this is what the ancients were commended for.

And faith, in turn, inspires dreams, and dreams are what the best photographs are made of.

by tf at April 06, 2019 03:08 PM

March 14, 2019

Emmanuele Bassi

A little testing

Years ago I started writing Graphene as a small library of 3D transformation-related math types to be used by GTK (and possibly Clutter, even if that didn’t pan out until Georges started working on the Clutter fork inside Mutter).

Graphene’s only requirement is a C99 compiler and a decent toolchain capable of either taking SSE builtins or supporting vectorization on appropriately aligned types. This means that, unless you decide to enable the GObject types for each Graphene type, Graphene doesn’t really need GLib types or API—except that’s a bit of a lie.

As I wanted to test what I was doing, Graphene has an optional build time dependency on GLib for its test suite; the library itself may not use anything from GLib, but if you want to build and run the test suite then you need to have GLib installed.

This build time dependency makes testing Graphene on Windows a lot more complicated than it ought to be. For instance, I need to install a ton of packages when using the MSYS2 toolchain on the CI instance on AppVeyor, which takes roughly 6 minutes each for the 32bit and the 64bit builds; and I can’t build the test suite at all when using MSVC, because then I’d have to download and build GLib as well—and just to access the GTest API, which I don’t even like.


What’s wrong with GTest

GTest is kind of problematic—outside of Google hijacking the name of the API for their own testing framework, which makes looking for it a pain. GTest is a lot more complicated than a small unit testing API needs to be, for starters; it was originally written to be used with a specific harness, gtester, in order to generate a very brief HTML report using gtester-report, including some timing information on each unit—except that gtester is now deprecated because the build system gunk to make it work was terrible to deal with. So, we pretty much told everyone to stop bothering, add a --tap argument when calling every test binary, and use the TAP harness in Autotools.

Of course, this means that the testing framework now has a completely useless output format, and with it, a bunch of default behaviours driven by said useless output format, and we’re still deciding if we should break backward compatibility to ensure that the supported output format has a sane default behaviour.

On top of that, GTest piggybacks on GLib’s own assertion mechanism, which has two major downsides:

  • it can be disabled at compile time by defining G_DISABLE_ASSERT before including glib.h, which, surprise, people tend to use when releasing; thus, you can’t run tests on builds that would most benefit from a test suite
  • it literally abort()s the test unit, which breaks any test harness in existence that does not expect things to SIGABRT midway through a test suite—which includes GLib’s own deprecated gtester harness

To solve the first problem we added a lot of wrappers around g_assert(), like g_assert_true() and g_assert_no_error(), that won’t be disabled depending on your build options and thus won’t break your test suite—and if your test suite is still using g_assert(), you’re strongly encouraged to port to the newer API. The second issue is still standing, and makes running GTest-based test suites under any harness a pain, but especially under a TAP harness, which requires listing the number of tests you have run, or that you’re planning to run.
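
To make the porting a bit more concrete, here is a minimal sketch of a test unit moving from g_assert() to the wrappers; the g_assert_*() and g_test_*() calls are existing GLib testing API, while the test itself is obviously made up for illustration:

#include <glib.h>

static void
test_format (void)
{
  GError *error = NULL;
  char *res = g_strdup_printf ("%d", 42);

  /* before: g_assert (res != NULL && error == NULL);
   * compiled out entirely when G_DISABLE_ASSERT is defined
   */

  /* after: never compiled out, and with clearer failure messages */
  g_assert_nonnull (res);
  g_assert_no_error (error);
  g_assert_cmpstr (res, ==, "42");

  g_free (res);
}

int
main (int argc, char *argv[])
{
  g_test_init (&argc, &argv, NULL);
  g_test_add_func ("/example/format", test_format);
  return g_test_run ();
}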

The remaining issues of GTest are the convoluted way to add tests using a unique path; the bizarre pattern matching API for warnings and errors; the whole sub-process API that relaunches the test binary and calls a single test unit in order to allow it to assert safely and capture its output. It’s very much the GLib test suite, except when it tries to use non-GLib API internally, like the command line option parser, or its own logging primitives; it’s also sorely lacking in the GObject/GIO side of things, so you can’t use standard API to create a mock GObject type, or a mock GFile.

If you want to contribute to GLib, then working on improving the GTest API would be a good investment of your time; since my project does not depend on GLib, though, I had the chance of starting with a clean slate.


A clean slate

For the last couple of years I’ve been playing off and on with a small test framework API, mostly inspired by BDD frameworks like Mocha and Jasmine. Behaviour Driven Development is kind of a buzzword, like test driven development, but I particularly like the idea of describing a test suite in terms of specifications and expectations: you specify what a piece of code does, and you match results to your expectations.

The API for describing the test suites is modelled on natural language (assuming your language is English, sadly):

  describe("your data type", function() {
    it("does something", () => {
      expect(doSomething()).toBe(true);
    });
    it("can greet you", () => {
      let greeting = getHelloWorld();
      expect(greeting).not.toBe("Goodbye World");
    });
  });

Of course, C is more verbose than JavaScript, but we can adopt a similar mechanism:

static void
something (void)
{
  expect ("doSomething",
    bool_value (do_something ()),
    to_be, true,
    NULL);
}

static void
greet (void)
{
  const char *greeting = get_hello_world ();

  expect ("getHelloWorld",
    string_value (greeting),
    not, to_be, "Goodbye World",
    NULL);
}

static void
type_suite (void)
{
  it ("does something", something);
  it ("can greet you", greet);
}


  describe ("your data type", type_suite);

If only C11 got blocks from Clang, this would look a lot less clunky.

The value wrappers are also necessary, because C is only type safe as long as every type you have is an integer.

Since we’re good C citizens, we should namespace the API, which requires naming this library—let’s call it µTest, in a fit of unoriginality.

One of the nice bits of Mocha and Jasmine is the output of running a test suite:

$ ./tests/general 

  General
    contains at least a spec with an expectation
      ✓ a is true
      ✓ a is not false

      2 passing (219.00 µs)

    can contain multiple specs
      ✓ str contains 'hello'
      ✓ str contains 'world'
      ✓ contains all fragments

      3 passing (145.00 µs)

    should be skipped
      - skip this test

      0 passing (31.00 µs)
      1 skipped


Total
5 passing (810.00 µs)
1 skipped

Or, with colors:

Using colors means immediately taking this more seriously

The colours go automatically away if you redirect the output to something that is not a TTY, so your logs won’t be messed up by escape sequences.

If you have a test harness, then you can use the MUTEST_OUTPUT environment variable to control the output; for instance, if you’re using TAP you’ll get:

$ MUTEST_OUTPUT=tap ./tests/general
# General
# contains at least a spec with an expectation
ok 1 a is true
ok 2 a is not false
# can contain multiple specs
ok 3 str contains 'hello'
ok 4 str contains 'world'
ok 5 contains all fragments
# should be skipped
ok 6 # skip: skip this test
1..6

Which can be passed through to prove to get:

$ MUTEST_OUTPUT=tap prove ./tests/general
./tests/general .. ok
All tests successful.
Files=1, Tests=6,  0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
Result: PASS

I’m planning to add some additional output formatters, like JSON and XML.


Using µTest

Ideally, µTest should be used as a sub-module or a Meson sub-project of your own project; if you’re using it as a sub-project, you can tell Meson to build a static library that won’t get installed on your system, e.g.:

mutest_dep = dependency('mutest-1',
  fallback: [ 'mutest', 'mutest_dep' ],
  default_options: ['static=true'],
  required: false,
  disabler: true,
)

# Or, if you're using Meson < 0.49.0
mutest_dep = dependency('mutest-1', required: false)
if not mutest_dep.found()
  mutest = subproject('mutest',
    default_options: [ 'static=true', ],
    required: false,
  )

  if mutest.found()
    mutest_dep = mutest.get_variable('mutest_dep')
  else
    mutest_dep = disabler()
  endif
endif

Then you can make the tests conditional on mutest_dep.found().

µTest is kind of experimental, and I’m still breaking its API in places, as a result of documenting it and trying it out, by porting the Graphene test suite to it. There’s still a bunch of API that I’d like to land, like custom matchers/formatters for complex data types, and a decent way to skip a specification or a whole suite; plus, as I said above, some additional output formatters.

If you have feedback, feel free to open an issue—or a pull request wink wink nudge nudge.

by ebassi at March 14, 2019 03:01 PM

February 24, 2019

Emmanuele Bassi

Episode 2.a: Building GNOME

In the GNOME project community, the people building the code are represented by two separate but equally important groups: the maintainers, who release the code, and the release team, who release GNOME. These are their stories.


Developing a software project can be hard, but building one ought to be simpler, right? After all, it’s kind of a binary state for software projects after any change: either the change broke the build, or it didn’t—and broken builds do not get released, right?

Oh, you sweet summer child. Of course it’s not that simple.

Building software gets even more complicated when it comes to complex, interdependent projects composed by multiple modules like GNOME; each module has dependencies lower in the stack, and reverse dependencies—that is, modules that depend on the interfaces it provides—higher in the stack.

If you’re working on a component low in the stack, especially in 2002, chances are you only have system dependencies, and those system dependencies are generally shipped by your Linux distribution. Those dependencies do not typically change very often, or at the very least they assume you’re okay with not targeting the latest, bleeding edge version. If push comes to shove, you can always write a bunch of fallback code that gets only ever tested by people running on old operating systems—if they decide on a whim to try and compile the latest and greatest application, instead of just waiting until their whole platform gets an upgrade.

Moving upwards in the GNOME stack, you start having multiple dependencies, but generally speaking those dependencies move at the same speed as your project, so you can keep track of them. Unlike other projects, the GNOME platform never had issues with the multiplication of dependencies—something that will come back to bite the maintainers later in the 2.x development cycle—but even so, with GNOME offering code hosting to like-minded developers, it didn’t use to be hard to keep track of things; if everything happens on the same source code repository infrastructure you can subscribe to the changes for your dependencies, and see what happens almost in real time.

Applications, finally, are another beast entirely; here, dependencies can be many, and span across different system services, different build systems, different code hosting services, and different languages. To minimise the causes of headaches, you can decide to target the versions packaged by the Linux distribution you’re using, unless you’re in the middle of a major API version shift in the platform—like, say, the one that happened between GNOME 1 and 2. No Linux distribution packager in their right mind would ship releases for highly unstable core platform libraries, especially when the development cadence of those libraries outpaces the cadence of the distribution packaging and update process. For those cases, using snapshots of the source under revision control is the only option left for you as an upstream maintainer.

So, at this point, your options are limited. You want to install the latest and greatest version of your dependencies—unstable as they might be—on your local system, but you don’t want to mess up the rest of the system in case the changes introduce a bug. In other words: if you’re running GNOME as your desktop, and you upgrade a dependency for your application, you might end up breaking your own desktop, and then spend time undoing the unholy mess you made, instead of hacking on your own project.

This is usually the point where programmers start writing scripts to modify the environment, and build dependencies and applications into their own separate prefix, using small utilities that describe where libraries put their header files and shared objects in order to construct the necessary compiler and linker arguments. GNOME libraries standardised to one of these tools, called pkg-config, before the 2.0 release, replacing the per-project tools like glib-config or gtk-config that were common during the GNOME 1.x era.

Little by little, the scripts each developer used started to get shared, and improved; for instance, instead of hard coding the list of things to build, and the order in which they should be built, you may want to be able to describe a whole project in terms of a list of modules to be built—each module pointing to its source code repository, its configuration and build time options, and its dependencies. Another useful feature is the ability to set up an environment capable of building and running applications against the components you just built.

The most successful script for building and running GNOME components, jhbuild, emerged in 2002. Written by James Henstridge, the maintainer of the GTK bindings for Python, jhbuild took an XML description of a set of modules, built the dependency tree for each component, and went through the set in the right order, building everything it could, assuming it knew how to handle the module’s build system—which, at the time, was mostly a choice between Autotools and Autotools. Additionally, it could spawn a shell and let you run what you built, or compile additional code as if you installed every component in a system location. If you wanted to experience GNOME at its most bleeding edge, you could build a whole module set into a system prefix like /opt, and point your session manager to that location. Running a whole desktop environment out of a CVS snapshot: what could possibly go wrong, right? Well, at least you had the option of going back to the safe harbours of your Linux distribution’s packaged version of GNOME.

Little by little, over the years, jhbuild acquired new features, like the ability to build projects that were not using Autotools. Multiple module sets, one for each branch of GNOME, and one for tracking the latest and greatest, appeared over the years, as well as additional module sets for building applications, both hosted on gnome.org repositories and outside the GNOME infrastructure. Jhbuild was even used on non-Linux platforms, like macOS, to build the core GNOME platform stack, and let application developers port their work there. Additionally, other projects, like X11 and the freedesktop.org stack that we’ll see in a future episode, would publish their own module sets, as many of the developers in GNOME moved through the stack and brought their tools with them.

With jhbuild consuming sets of modules in order to build the GNOME stack, the question becomes: who maintained those sets? Ideally, the release team would be responsible for keeping the modules up to date whenever a new dependency was added, or an old dependency removed. As the release team was responsible for deciding which modules belonged in the GNOME release, they would be the ones best positioned to update the jhbuild sets. There was a small snag in this plan, though: the release team already had its own tool for building GNOME from release archives produced and published by the module maintainers, in the correct order, to verify that the whole GNOME release would build and produce something that could be packaged by Linux (and non-Linux) distributors.

The release team’s tool was called GARNOME, and was based on the GAR architecture designed by Nick Moffitt, which itself was largely based on the BSD port system. GARNOME was developed as a way for the release team to build and test alpha releases of GNOME during the 2.0 development cycle. The main difference between jhbuild and GARNOME was the latter’s focus on release archives, compared to the former’s focus on source checkouts. The main goal of GARNOME was really to replicate the process of distributors taking various releases and packaging them up in their preferred format. While editing jhbuild’s module sets was a simple matter of changing some XML, the GARNOME recipes were a fairly complicated set of Make files, with magic variables and magic include directives that would lead to building and installing each module, using Make rules to determine the dependencies. All of this meant that there was not only no overlap between who used jhbuild and who used GARNOME, but also no overlap between who contributed to which project.

Both jhbuild and GARNOME assumed you had a working system and development toolchain for all the programming languages needed to build GNOME components; they also relied on a whole host of system dependencies, especially when it came to talking to system services and hardware devices. While this was relatively less important for GARNOME, whose role was simply to build the whole of GNOME, jhbuild started to suffer from these limitations as soon as GNOME projects began interacting much more with the underlying services offered by the operating system.

It’s important to note that none of this stuff was automated; it all relied on human intervention for testing that things would not blow up in interesting ways. We were far, far away from any concept of a continuous integration pipeline. Individual developers had to hunt down breakage in library releases that would have repercussions down the line when building other libraries, system components, or applications. The net result was that building GNOME was only possible if everything was built out of release archives; anything else was deeply unstable, and proved to be hard to handle for seasoned developers and new contributors alike, as more and more complexity was piled onto the project.

GARNOME was pretty successful, and ended up being used for a majority of the GNOME 2 releases, until it was finally retired in favour of jhbuild itself, using a special module set that pointed to release archives instead of source code repositories. The module set was maintained by the release team, and published for every GNOME release, to let other developers and packagers reproduce and validate the process.

Jhbuild is still used to this day, mostly for the development of system components like GTK, or the GNOME Shell; application building has largely shifted towards containerised systems, like Flatpak, which have the advantage of being easily automated in a CI environment. These systems are also much easier to use from a newcomer’s perspective, and are far more reliable when it comes to the stability of the underlying middleware.

The release team switched away from jhbuild for validating and publishing GNOME releases in 2018, long into the GNOME 3 release cycle, using a new tool called BuildStream, which not only builds the GNOME components, but also the lower layers of the stack, including the compiler toolchain, to ensure a level of build reproducibility that jhbuild and GARNOME never had.

by ebassi at February 24, 2019 05:00 PM

February 14, 2019

Emmanuele Bassi

Episode 2.2: Release Day

For all intents and purposes, the 2.0 release process of GNOME was a reboot of the project; as such, it was a highly introspective event that just so happened to result in a public release of a software platform used by a large number of people. The real result of this process was not in the bits and bytes that were compiled, or interpreted, into desktop components, panel applets, or applications; it was, instead, a set of design tenets, a project ethos, and a powerful marketing brand that exist to this day.

The GNOME community of developers, documenters, translators, and designers, was faced with the result of Sun’s user testing, and with the feedback on documentation, accessibility, and design, and had two choices: double down on what everyone was doing, and maintain the existing contributor and user bases; or adapt, take a gamble, and change—possibly even lose contributors, in the hope of gaining more users, and more contributors, down the road.

The community opted for the latter gamble.

The decision was not without internal strife.

This kind of decision is fairly common in all projects that reach a certain critical mass, especially if they don’t have a central figure keeping everything together, and being the ultimate arbiter of taste and direction. Miguel de Icaza was already really busy with Ximian, and even if the Foundation created the release team in order to get a “steering committee” to make informed, executive decisions in case of conflicts, or decide who gets to release what and when, in order to minimise disruption over the chain of dependencies, this team was hardly an entity capable of deciding the direction of the project.

If effectively nobody is in charge, the direction of the project becomes an emergent condition. People have a more or less vague sense of direction, and a more or less defined end goal, so they will move towards it. Maybe not all at the same time, and maybe not all on the same path; but the sum of all vectors is not zero. Or, at least, it’s not zero all the time.

Of course, there are people who put in more effort to balance the equation, and those are the ones we tend to recognise as “the leaders”, or, in modern tech vernacular, “the rock stars”.

Seth Nickell and Calum Benson clearly fit that description. Both worked on usability and design, and both worked on user testing—with Calum specifically working on the professional workstation user given his position at Sun. Seth was the GNOME Usability project lead, and alongside the rest of the usability team, co-authored the GNOME Human Interface Guidelines, or HIG. The HIG was both a statement of intent on the design direction of GNOME, as well as a checklist for developers to go through and ensure that all the GUI applications would fit into the desktop environment, by looking and behaving consistently. At the very basic core of the HIG sat a few principles:

  1. Design your application to let people achieve their goals
  2. Make your application accessible to everyone
  3. Keep the design simple, pretty, and consistent
  4. Keep the user in control, informed on what happens, and forgive them for any mistake

These four tenets tried to move the needle of the design for GNOME applications from the core audience of standard nerds to the world at large. In order to design your application, you need to understand the audience that you wish to target, and how to ensure you won’t get in their way; this also means you should never limit your audience to people who are as able-bodied as you are, or who come from the same country or socio-economic background; your work should be simple and reliable, in the sense that it doesn’t do two similar things in two different ways; and users should be treated with empathy, and never with contempt. Even if you didn’t have access to the rest of the document, which detailed how to deal with whitespace, or alignment, or the right widget for the right action, you could already start working on making an application capable of fitting in with the rest of GNOME.

Consistency and forgiveness in user interfaces also reflected changes in how those interfaces should be configured—or if they should be configured at all.

In 2002 Havoc Pennington wrote what is probably the most influential essay on how free and open source software user interfaces ought to be designed, called “Free software UI”. The essay was the response to the evergreen question: “can free and open source software methodology lead to the creation of a good user interface?”, posed by Matthew Paul Thomas, a designer and volunteer contributor at Mozilla.

Free and open source software design and usability suffer from various ailments, most notably:

  • there are too many developers and not enough designers
  • designers can’t really submit patches

In an attempt at fixing these two issues, the GNOME project worked on establishing a design and usability team, with the help of companies to jump-start the effort; the presence of respected designers in leadership positions also helped create and foster a culture where project maintainers would ask for design review and testing. We’re still a bit far from asking for design input instead of a design review, which usually comes with pushback now that the code is in place. Small steps, I guess.

The important part of the essay, though, is on the cost of preferences—namely that there’s no such thing as “just adding an option”.

Adding an option, on a technical level, requires adding code to handle all the potential states of the option; it requires handling failure cases; it requires a user interface for setting and retrieving the option; and it requires testing, and QA, for all of those states. Each option can interact with other options, which means a combinatorial explosion of potential states, each with its own failure mode. Options are optimal for a certain class of users, because they provide the illusion of control; they are also optimal for a certain class of developers, because they tickle the instinct of making general solutions to solve classes of problems, instead of fixing the actual problem.
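
To put rough numbers on that: ten independent boolean preferences already give 2^10 = 1,024 distinct configurations, each of which is, at least in principle, a state somebody has to reason about, test, and support; add a handful of multi-valued options and the number of combinations quickly outgrows anything a QA effort, let alone a volunteer project, can realistically cover.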

In the context of software released “as is”, without even the implied warranty of being fit for purpose, like most free and open source software is, it removes stress because it allows the maintainer to abdicate responsibility. It’s not my fault that you frobnicated the foobariser and all your family photos have become encoded versions of the GNU C library; you should have been holding a lit black candle in your left hand and a curved blade knife in your right hand if you didn’t want that to happen.

More than that, though: what you definitely don’t want is having preferences to “fix” your application. If you have a bug, don’t add an option to work around it. If somebody is relying on a bug for their workflow, then: too bad. Adding a preference to work around a bug introduces another bug because now you encoded a failure state directly into the behaviour of your code, and you cannot ever change it.

Finally, settings should never leave you in an error state. You should never have a user break their system just because they put invalid data in a text field; or toggled the wrong checkbox; or clicked the wrong button. Recovering is good, but never letting the user put bad values into the system in the first place is the better approach, because it is more resilient.

From a process standpoint, the development cycle of GNOME 1, and the release process of GNOME 2.0, led to a complete overhaul of how the project should be shepherded into releasing new versions. Releasing GNOME 2.0 led to many compromises: features were not complete, known bugs ended up in the release notes, and some of the underlying APIs provided by the development platform were not really tested enough before freezing them for all time. It is hard to decide when to shout “pencils down” when everyone is doing their own thing in their own corner of the world. It’s even harder when you have a feedback loop between a development platform that provides API for the rest of the platform, and needs validation for the changes while they can still be made; and applications that need the development platform to sit still for five minutes so that they can be ported to the new goodness.

GNOME was the first major free software project to switch away from a feature-based development cycle and towards a time-based one, thanks to the efforts of Jeff Waugh on behalf of the release team; the whole 2.0 development cycle was schedule driven, with constant reminders of the various milestones and freeze dates. Before the final 2.0 release, a full plan for the development cycle was proposed to the community of maintainers; the short version of the plan was:

The key elements of the long-term plan: a stable branch that is extremely locked-down, an unstable branch that is always compilable and dogfood-quality, and time-based releases at 6 month intervals.

Given that the 2.0 release happened at the end of June, that would put the 2.2 release in December, which would have been problematic. Instead, it was decided to have a slightly longer development cycle, to catch all the stragglers that couldn’t make the cut for 2.0, and release 2.2 in February, followed by the 2.4 release in September. This period of adjustment led to the now familiar release cadence of March and September releases every year. Time-based releases, and freezing features, API, and translatable strings—and code, around the point-zero release date—ensured that only features working to the satisfaction of the maintainers, designers, and translators would end up in the hands of the users. Or, at least, that was the general plan.

It’s important to note that all of these changes in the way GNOME as a community saw itself, and the project they were contributing to, amount to probably the biggest event in the history of the project itself—and, possibly, in the history of free and open source software. The decision to focus on usability, accessibility, and design shaped the way people contributing to and using GNOME think about GNOME; it even changed the perception of GNOME for people not using it, for good or ill. GNOME’s brand was solidified into one of care about design principles, and that perception continues to this day. If something user visible changes in GNOME it is assumed that design, usability, or accessibility had something to do with it—even when it really didn’t; it is assumed that designers sat together, did user studies, finalized a design, and then, in one of the less charitable versions, lobbed it over the wall to module maintainers for its implementation with no regard for objections.

That version of reality is so far removed from ours it might as well have superheroes flying around battling monsters from other dimensions; and, yet, the GNOME brand is so established that people will take it as an article of faith.

For all that, though, GNOME 2 did have usability studies conducted on it prior to release; the Human Interface Guidelines were written in response to those studies, and to established knowledge in the interaction design literature and community; the changes in the system settings and the menu structures were done after witnessing users struggle with the equivalent bits of GNOME 1. The unnecessary settings that littered the desktop, as a way to escape making decisions, or as a way to provide some sort of intellectual challenge to the developers, were removed because, in the end, settings are not the goal for a desktop environment that’s just supposed to launch applications and provide an environment for those applications to exist.

This was peak GNOME brand.

On June 27, 2002, GNOME 2.0 was released. The GNOME community worked days and nights for more than a year after releasing 1.4, and for more than two years after releasing 1.2. New components were created, projects were ported, documentation was written, screenshots were taken, text was translated.

Finally, the larger community of Linux users and enthusiasts would be able to witness the result of all this amazing work, and their reaction was: thanks, I hate it.

Well, no, that’s not really true.

Yes, a lot of people hated it—and they made damn well sure you knew that they hated it. Mailing lists, bug trackers, articles and comments on news websites were full of people angrily demanding their five clocks back; or their heavily nested menu structure; or their millisecond-precision animation settings; or their “Miscellaneous” group of settings in a “Miscellaneous” tab of the control centre.

A lot of people simply sat in front of GNOME 2, and managed to get their work done, before turning off the machine and going home.

A few people, though, saw something new; the potential of the changes, and of the focus of the project. They saw beyond the removed configuration options; the missing features left for a future cycle; the bugs caused by the massive changes in the underlying development platform.

Those few people were the next generation of contributors to GNOME; new developers, sure, but also new designers; new documentation writers; new translators; new artists; new maintainers. They were inspired by the newly refocused direction of the project, by its ethos, to the point of deciding to contribute to it. GNOME needed to be ready for them.


Next week the magic and shine of the release starts wearing off, and we’re back to flames and long discussions on features, media stacks, inclusion of applications in the release, and what happens when Novell decides to go on a shopping spree, in the episode titled “Honeymoon phase”.

by ebassi at February 14, 2019 04:00 PM

January 24, 2019

Emmanuele Bassi

Episode 2.1: On Brand

In the beginning of the year 2000, with the 1.2 release nearly out of the door, the GNOME project was beginning to lay down the groundwork for the design and development of the 2.0 release cycle. The platform had approached a good level of stability for the basic, day to day use; whatever rough edges were present, they could be polished in minor releases, while the core components of the platform were updated and transitioned incrementally towards the next major version. Before mid-2000, we already started seeing a 1.3 branch for GTK, and new libraries like Pango and GdkPixbuf were in development to provide advanced text rendering and replace the old and limited imlib image loading library, respectively. Of course, once you get the ball rolling it’s easy to start piling features, refactorings, and world breaking fixes on top of each other.

If X11 finally got support for loading True Type fonts, then we would need a better API to render text. If we got a better API to render text, we would need a better API to compose Unicode glyphs coming from different writing systems, outside of the plain Latin set.

If we had a better image loading library, we could use better icons. We could even use SVG icons all over the platform, instead of using the aging XPM format.

By May 2000, what was supposed to be GTK 1.4 turned into the development cycle for the first major API break since the release of GTK 1.0, which happened just the year before. Of course, the 1.2 cycle would continue, and would now cover not just GNOME 1.2, but the 1.4 release as well.

It wasn’t GTK itself that led the charge, though. In a way, breaking GTK’s API was the side effect of a much deeper change.

As we’ve seen in the first chapter’s side episode on GTK, GLib was a simple C utility library for the benefit of GTK itself; the type system for writing widgets and other toolkit-related data using object orientation was part of GTK, and required depending on GTK for writing any type of object-oriented C. It turns out that various C projects in the GNOME ecosystem—projects like GdkPixbuf and Pango—liked the idea of having a type system written in C and, more importantly, the ability to easily build language bindings for any API based on that type system. Those projects didn’t really need, or want, a dependency on a GUI toolkit, with its own dependency on image loading libraries, or windowing systems. Moving the type system to a lower-level library, and then reusing it in GTK, would have neatly solved the problem, and made it possible to create a semi-standard C library shared across various projects.

Of course, since no solution survives contact with software developers, people depending on GLib for basic data structures, like a hash table, didn’t want to suddenly have a type system as well. For that reason, GLib acquired a second shared library containing the GTK type system, upgraded and on steroids; the “signal” mechanism to invoke a named list of functions, used by GTK to deliver windowing system events; and a base object class, called GObject, to replace GTK’s own GtkObject. On top of that, GObject provided properties associated with object instances; interface types; dynamic types for loadable modules; type wrappers for plain old data types; generic function wrappers for language bindings; and a richer set of memory management semantics.
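
To give a sense of what that shared object system looks like in practice, here is a minimal sketch of a GObject class written in C; it uses today’s GLib convenience macros rather than the hand-written boilerplate of the GObject 2.0 days, and the “DemoPerson” type is a made-up name used purely for illustration:

    #include <glib-object.h>

    /* Declare and define a final GObject type; DemoPerson is hypothetical. */
    #define DEMO_TYPE_PERSON (demo_person_get_type ())
    G_DECLARE_FINAL_TYPE (DemoPerson, demo_person, DEMO, PERSON, GObject)

    struct _DemoPerson
    {
      GObject parent_instance;
      char *name;
    };

    G_DEFINE_TYPE (DemoPerson, demo_person, G_TYPE_OBJECT)

    static void
    demo_person_finalize (GObject *gobject)
    {
      DemoPerson *self = DEMO_PERSON (gobject);

      /* Release instance data, then chain up to the parent class */
      g_free (self->name);
      G_OBJECT_CLASS (demo_person_parent_class)->finalize (gobject);
    }

    static void
    demo_person_class_init (DemoPersonClass *klass)
    {
      G_OBJECT_CLASS (klass)->finalize = demo_person_finalize;
    }

    static void
    demo_person_init (DemoPerson *self)
    {
      self->name = g_strdup ("nobody");
    }

    int
    main (void)
    {
      DemoPerson *person = g_object_new (DEMO_TYPE_PERSON, NULL);
      g_object_unref (person);
      return 0;
    }

Everything in the sketch, from type registration and inheritance to the memory management hooks, lives entirely in GLib’s object library, with no GTK or windowing system involved, which is exactly what made it attractive to projects like GdkPixbuf and Pango.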

Another important set of changes finding its way down to GLib was the portability layer needed to make sure that GTK applications could run on both Unix-like and non-Unix-like operating systems. Namely, Windows, for which GTK was getting a backend, alongside additional backends for BeOS, macOS, and direct framebuffer rendering. The Windows backend for GTK was introduced to make GIMP build and work on that platform, and increase its visibility, and possibly the amount of contributions—something that free and open source software communities always strive towards, even if it does increase the amount of feature requests, bug reports, and general work for the maintainers. It’s important to note that GTK wasn’t suddenly becoming a cross-platform toolkit, meant to be used to write applications targeting multiple platforms; the main goal was always to allow extant Linux applications to be easily ported to other operating systems first, and write native, non-Linux applications as a distant second.

With a common, low-level API provided by GLib and GObject, we start to see the beginning of a more comprehensive software development platform for GNOME; if all the parts of the platform, even the ones that are not directly tied to the windowing system, follow the same semantics when it comes to memory management, type inheritance, properties, signals, and coding practices, then writing documentation becomes easier; developing bindings for various programming languages becomes a much more tractable problem; creating new, low-level libraries for, say, sending and receiving data from a web server, and leveraging the existing community becomes possible. With the release of GLib and GTK 2.0, the GNOME software development platform moves from a collection of libraries with a bunch of utilities built on top of GTK to a comprehensive set of functionality centered on GLib and GObject, with GTK as the GUI toolkit, and a collection of libraries fulfilling specific tasks to complement it.

Of course, that still means having libraries like libgnome and libgnomeui lying around, as a way to create GNOME applications that integrate with the GNOME ecosystem, instead of just GTK applications. GTK gaining more GNOME-related features, or more GNOME-related integration points, was a source of contention inside the community. GTK was considered by some GNOME contributors to be a “second party” project; it had its own release schedule, and its own release manager; it incorporated feedback from GNOME application and library developers, but it was also trying to serve non-GNOME communities, by providing a useful generic GUI toolkit out of the box. On the other side of the spectrum, some GNOME developers wanted to keep the core as lean as possible, and accrue functionality inside the GNOME libraries, like libgnome and libgnomeui, even if those libraries were messier and didn’t receive as much scrutiny as GLib and GTK.

Most of 2001 was spent developing GObject and Pango, with the latter proving to be one of the lynchpins of the whole platform release. As we’ve seen in episode 1.5, Pango provided support for complex, non-Latin text, a basic requirement for creating GUI applications that could be used outside of the US and Europe; in order to get there, though, various pieces of the puzzle had to come together first.

The first, big piece was adding the ability for applications to render TrueType fonts on X11, using fontconfig to configure, enumerate, and load the fonts installed in a system, and the Xft library, for rendering glyphs. Before Xft, X applications only had access to core bitmap fonts, which may look impressive if all you have is a thin terminal in 1987, but compared to the font rendering available on Windows and macOS they were already painfully out of date by about 10 years. During the GNOME 1.x cycle some components with custom rendering code already started using Xft directly, instead of going through GTK’s text rendering wrappers around X11’s core API; this led to the interesting result of, for instance, the text rendering in Nautilus pre-1.0 looking miles better than every other GTK 1 application, including the rest of the desktop components.

The other big piece of the puzzle was Unicode. Up until GTK 2.0, all text inside GTK applications was pretty much passed as it was to the underlying windowing system text rendering primitives; that mostly meant either ASCII, or one of the then common ISO encodings; this not only imposed restrictions on what kind of text could be rendered, but it also introduced additional hilarity when it came to running applications localized by somebody in Europe on a computer in the US, or in Russia, or in Japan.

Taking advantage of Unicode to present text meant adding various pieces of API to GLib, mostly around the Unicode tables, text measurement, and iteration. More importantly, though, it meant changing all text used inside GTK and GNOME applications to UTF-8. It meant that file system paths, translations, and data stored on non-Unicode systems had to be converted—if the original encoding was available—or entirely rewritten, if you didn’t want your GUI to be a tragedy of unintelligible text.
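
Here is a minimal sketch of what that conversion and iteration API looks like in today’s GLib; the Latin-1 input string is just an example:

    #include <glib.h>

    int
    main (void)
    {
      /* "naïve" encoded as ISO-8859-1, i.e. not valid UTF-8 */
      const char latin1[] = "na\xefve";
      GError *error = NULL;

      /* Convert the legacy encoding to UTF-8 */
      char *utf8 = g_convert (latin1, -1, "UTF-8", "ISO-8859-1",
                              NULL, NULL, &error);
      if (utf8 == NULL)
        {
          g_printerr ("Conversion failed: %s\n", error->message);
          g_error_free (error);
          return 1;
        }

      /* Iterate over Unicode characters, not bytes */
      g_print ("%s has %d characters\n", utf8,
               (int) g_utf8_strlen (utf8, -1));

      for (const char *p = utf8; *p != '\0'; p = g_utf8_next_char (p))
        g_print ("U+%04X\n", (unsigned int) g_utf8_get_char (p));

      g_free (utf8);

      return 0;
    }

Functions in this family (g_convert(), g_utf8_validate(), and the iteration helpers) are the kind of tools applications reached for when migrating file names, translations, and stored data over to UTF-8.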

If the written word was in the process of being transformed to another format, pictures were not in a better position. The state of the art for raster images in a GTK application was still XPM, a bitmap format that was optimised for storage inside C sources, and compiled with the rest of the application. Sadly, the results were less than stellar, compared to more common formats like JPEG and PNG; additionally, the main library used to read image assets, imlib, was very much outdated, and mostly geared towards loading image data into X11 pixmaps. Imlib provided entry points for integrating with the GTK drawing API, but it was a separate project, part of the Enlightenment window manager—which was already moving to its replacement, imlib2. The work to replace imlib with a new GNOME-driven library, called GdkPixbuf, began in 1999, and was merged into the GTK repository as a way to provide API to load images in various formats like PNG, GIF, JPEG, and Windows BMP and ICO files directly into GTK widgets. As an additional feature, GdkPixbuf had an extensible plugin system which allowed writing out-of-tree modules for loading image formats without necessarily adding new dependencies to GTK. Another feature of GdkPixbuf was a transformation and compositing API, something that imlib did not provide.
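
For a sense of what the GdkPixbuf API looks like from the application side, here is a minimal sketch using the loading, scaling, and saving entry points; the file names are placeholders:

    #include <gdk-pixbuf/gdk-pixbuf.h>

    int
    main (void)
    {
      GError *error = NULL;

      /* Load an image; the format is detected by the loader modules */
      GdkPixbuf *pixbuf = gdk_pixbuf_new_from_file ("photo.png", &error);
      if (pixbuf == NULL)
        {
          g_printerr ("Unable to load image: %s\n", error->message);
          g_error_free (error);
          return 1;
        }

      /* Scale down to a 128 pixel wide thumbnail, keeping the aspect ratio */
      int width = gdk_pixbuf_get_width (pixbuf);
      int height = gdk_pixbuf_get_height (pixbuf);
      GdkPixbuf *thumbnail =
        gdk_pixbuf_scale_simple (pixbuf, 128, (128 * height) / width,
                                 GDK_INTERP_BILINEAR);

      /* Write the result out in a different format */
      if (!gdk_pixbuf_save (thumbnail, "thumbnail.jpg", "jpeg", &error,
                            "quality", "90", NULL))
        g_printerr ("Unable to save image: %s\n", error->message);

      g_object_unref (thumbnail);
      g_object_unref (pixbuf);

      return 0;
    }

The format-specific work happens in the loader modules described above, so the same handful of calls works for PNG, JPEG, or any other format for which a loader is installed.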

All in all, it took almost 2 years of development for GTK 1.3 to turn into GTK 2.0, with the first stable releases of GLib, Pango, and GTK cut in March 2002, after a few months of feature freeze needed to let GNOME and application developers catch up with the changes, and port their projects to the new API. The process of porting went without a hitch, with everyone agreeing that the new functionality was incredibly easy to use, and much better than what came before. Developers were so eager to move to the new libraries that they started doing so during the development cycle, and constantly kept up with the changes so that every source code change was just a few lines of code.

Yes, of course I’m lying.

Porting proceeded in fits and starts, and it was either done at the very beginning, when the differences between major versions of the libraries were minimal and gave a false impression of what the job entailed; or it happened at the very end of the cycle, with constant renegotiation of every single change in the platform, a constant barrage of questions about why platform developers were trying to impose misery on the poor application developers, why won’t you think of the application developers…

Essentially, like every single major version change in every project, ever.

On top of the usual woes of porting, we have to remember that GNOME 1.4 started introducing technology previews for GNOME 2.0, like GConf, the new configuration storage. While Havoc Pennington had written GConf between 2000 and 2001, and concentrated mostly on the development of GTK 2 after that, it was now time to start using GConf as part of the desktop itself, as a replacement for the configuration storage used by components like the Panel, and by applications using libgnome, which is when things got heated.

Ximian developer Dieter Maurer thought that some of the trade-offs of the GConf implementation, mostly related to its use of CORBA’s type system, were enough of a road block that they decided to write their own configuration client API, called bonobo-config, which re-implemented the GConf design with a more pervasive use of CORBA types and interfaces, thanks to the libbonobo work that was pursued as part of the GNOME componentisation effort. Bonobo-config had multiple backends, one of which would wrap GConf, as, you might have guessed already, a way to route around a strongly opinionated maintainer, working for a different company.

The last bit is probably the first real instance of a massive flame war caused by strife between commercial entities vying for the direction of the project.

The flames started off as a disagreement in design and technical direction between GConf and bonobo-config maintainers, confined to the GConf development mailing list, but soon spiralled out of control once libgnome was changed to use bonobo-config, engulfing multiple mailing lists in fires that burned brighter than a thousand suns. Accusations of attempting to destroy the GNOME project flew through the air, followed by rehashing of every single small roadblock ever put in the way of each interested party. The newly appointed release manager for GNOME 2.0 and committer of the dependency on bonobo-config inside libgnome, Martin Baulig, dramatically quit his role and left the community (only to come back a bit later), leaving Anders Carlsson to revert the controversial change—followed by a new round of accusations.

Totally normal community interactions in a free software project.

In the end, concessions were made, hatred simmered, and bonobo-config was left behind, to be used only by applications that decided to opt into its usage, and even then it was mostly used as a wrapper around GConf.


“Fonts, Unicode, and icons” seems like a small set of user-visible features for a whole new major version of a toolkit, and desktop environment. Of course, that byline kind of ignores all the work laid down behind the scenes, but end users don’t care about that, right? We’ll see what happened when a major refactoring paid down the technical debt accrued over the first 5 years of GNOME, and cleared the rubble to build something that didn’t make designers, documentation writers, and QA testers cry tears of blood, in the next episode, “Release Day”, which will be out in two weeks’ time, as next week I’m going to be at the GTK hackfest, trying to pay down the technical debt accrued over the 3.0 development cycle of the toolkit. I’m sure it’ll be fine, this time. It’s fine. We’re going to be fine. It’s fine. We’re fine.

by ebassi at January 24, 2019 04:00 PM

January 17, 2019

Emmanuele Bassi

Episode 2.0: Retrospective

Hello, everyone, and welcome back to the History of GNOME! If you’re listening to this in real time, I hope you had a nice break over the end of 2018, and are now ready to begin the second chapter of our main narrative. I had a lovely time over the holidays, and I’m now back working on both the GNOME platform and on the History of GNOME, which means reading lots of code by day, and reading lots of rants and mailing list archives in the evening—plus, the occasional hour or so spent playing videogames in order to decompress.

Before we plunge right back into the history of GNOME 2, though, I wanted to take a bit of your time to recap the first chapter, and to prepare the stage for the second one. This is a slightly opinionated episode… Well, slightly more opinionated than usual, at least… As I’m trying to establish the theme of the first chapter as a starting point for the main narrative for the future. Yes, this is an historical perspective on the GNOME project, but history doesn’t serve any practical purposes if we don’t glean trends, conflicts, and resolutions of past issues that can apply to the current period. If we don’t learn anything from history then we might as well not have any history at all.

The first chapter of this history of the GNOME project covered roughly the four years that go from Miguel de Icaza’s announcement in August 1997 to the release of GNOME 1.4 in April 2001—and we did so in about 2 hours, if you count the three side forays on GTK, language bindings, and applications that closed the first block of episodes.

In comparison, the second chapter of the main narrative will cover the 9 years of the 2.x release cycles that went from 2001 to 2010. The duration of the second major cycle of GNOME is not just twice as long as the first, it’s also more complicated, as a result of the increased complexity for any project that deals with creating a modern user experience—one not just for the desktop but also for the mobile platforms that were suddenly becoming more important in the consumer products industry, as we’re going to see during this chapter. The current rough episode count is, at the moment I’m reading this, about 12, but as I’m striving to keep the length of each episode in the 15 to 20 minutes range, I’m not entirely sure how many actual episodes will make up the second chapter.

Looking back at the beginning of the project we can say with relative certainty that GNOME started as a desktop environment in a time when desktops were simpler than they are now; at the time of its inception, the bar to clear was represented by Windows 95, and while it was ostensibly a fairly high bar to clear for any volunteer-driven effort, by the time GNOME 1.4 was released to the general public of Linux enthusiasts and Unix professionals, it was increasingly clear that a new point of comparison was needed, mostly courtesy of Apple’s OS X and Microsoft’s Windows XP. Similarly, the hardware platforms started off as simpler iterations over the PC compatible space, but vendors quickly moved the complexity further and further into the software stack—like anybody with a WinModem in the late ‘90s could tell you. Since Linux was a blip on the radars of even the most widespread hardware platforms, new hardware targeted Windows first and foremost, and support for Linux appeared only whenever some enterprising volunteer would manage to reverse engineer the chipset du jour, if it appeared at all.

As we’ve seen in the first episode of the first chapter, the precursors to what would become a “desktop environment” in the modern sense of the term were made of smaller components, bolted on top of each other according to the needs, and whims, of each user. A collection of LEGO bricks, if you will, if only the bricks were made by a bunch of different vendors and you had to glue them together to build something. KDE was the very first environment for Linux that tried to mandate a stricter integration between its parts, by developing and releasing all of its building blocks as comprehensive archives. GNOME initially followed the same approach, with libraries, utilities, and core components sharing the same CVS repositories, and released inside shared distribution archives. Then, something changed inside GNOME; and figuring out what changed is central to understanding the various tensions inside a growing free and open source software project.

If desktop environments are the result of a push towards centralisation, and comprehensive, integrated functionality exposed to the people using, but not necessarily contributing to them, splitting off modules into their own repositories, using their own release schedules, their own idiosyncrasies in build systems, options, coding styles, and contribution policies, ought to run counter to that centralising effort. The decentralisation creates strife between projects, and between maintainers; it creates modularisation and API barriers; it generates dependencies, which in turn engender the possibility of conflict, and barriers not just to contribution, but to distribution and upgrade.

Why, then, does this happen?

The mainstream analytical framework of free and open source software tells us that communities consciously end up splitting off components, instead of centralising functionality, once it reaches critical mass; community members prefer delegation and composition of components with well-defined edges and interactions between them, instead of piling functionality and API on top of a hierarchy of poorly defined abstractions. They like small components because maintainers value the design philosophy that allows them to provide choice to people using their software, and gives discerning users the ability to compose an operating system tailored to their needs, via loosely connected interfaces.

Of course, all I said above is a complete and utter fabrication.

You have no idea of the number of takes I needed to get through all of that without laughing.

The actual answer would be Conway’s Law:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations

We have multiple contributors, typically highly opinionated, typically young or, at least, without lots of real-world experience. Worst case, the only experience available comes from years of computer science lessons, where object orientation reigns supreme, and it’s still considered a good idea despite all the evidence to the contrary.

These multiple contributors end up carving their own spaces, because the required functionality is large, and the number of people working on it always falls short of full coverage. New functionality is added; older modules are dropped because “broken”, or “badly designed”; new dependencies are created to provide shared functionality, or introduced as abstraction layers to paper over multiple modules offering slightly different takes on how some functionality ought to be implemented, or what kind of dependencies they require, or what kind of language or licensing terms ought to be used.

Complex free software projects with multiple contributors working on multiple components, favour smaller modules because it makes it easier for each maintainer to keep stuff in their head without going stark raving mad. Smaller modules make it easier to insulate a project against strongly opinionated maintainers, and let other, strongly opinionated maintainers, route around the things they don’t like. Self-contained modules make niche problems tractable, or at least they contain the damage.

Of course, if we declared this upfront, it would make everybody’s life easier as it would communicate a clear set of expectations; it would, on the other hand, have the side effect of revealing the wardrobe malfunction of the emperor, which means we have to dress up this unintended side effect of Conway’s Law as “being about choice”, or “mechanism, not policy”, or “network object model”.

The first chapter in the history of the GNOME project can be at least partially interpreted within this framework; the idea that you can take a complex problem space and partition it until each issue becomes tractable individually, and then build up the solution out of the various bits and pieces you managed to solve, letting it combine and recombine as best as it can to suit the requirements of the moment, platform, or use case. Throw in CORBA as an object model for good measure, and you end up with a big box of components that solve arbitrarily small issues on their own, and that can theoretically scale upwards in complexity. This, of course, ignores the fact that combinatorial explosions of interactions make things very interesting for anybody developing, testing, and using these components—and I use “interesting” in the “oh god oh god we’re all going to die” sense of the word.

More importantly, and on a social level, this framework allows project maintainers to avoid having to make a decision on what should work and what shouldn’t; what is supported and what isn’t; and even what is part of the project and what falls outside of it. If there is some part of the stack that is misbehaving, wrap it up; even better, if there are multiple competing implementations, you can always paper over them with an abstraction layer. As long as the API surface is well defined, functionality is somebody else’s problem; and if something breaks, or mysteriously doesn’t work, then I’m sure the people using it are going to be able to fix it.

Well, it turns out that all the free software geeks capable of working on a desktop environment are already working on one, which by definition means that they are the only ones that can fix the issues they introduced.

Additionally, and this is a very important bit that many users of free and open source software fail to grapple with: volunteer work is not fungible—that is, you cannot tell people doing things on their spare time, and out of the goodness of their hearts, to stop doing what they are doing, and volunteer on something else. People just don’t work that way.

So, if “being about choice” is on the one end of the spectrum, what’s at the other? Maybe a corporate-like structure, with a project driven by the vision of a handful of individuals, and implemented by everyone else who subscribes to that vision—or, at least, that gets paid to implement it.

Of course, the moment somebody decides to propose their vision, or work to implement it, or convince people to follow it, is the moment when they open themselves up to criticism. If you don’t have a foundational framework for your project, nobody can accuse you of doing something wrong; if you do have it, though, then the possibilities fade away, and what’s left is something tangible for people to grapple with—for good or ill.

At the beginning of the GNOME project we had very few individuals, with a vision for the desktop; while it was a vision made of components interoperating to create something flexible and adaptable to various needs, it still adhered to specific design goals, instead of just putting things together from disparate sources, regardless of how well the interaction went. This led to a foundational period, where protocols and interfaces were written to ensure that the components could actually interoperate, which led to a somewhat lacklustre output; out of three 1.x minor releases all we got was a panel, a bunch of clock applets, and a control centre. All the action happened on the lower layers of the stack. GTK became a reasonably usable free software GUI toolkit for Linux and other Unix-like operating systems; the X11 world got a new set of properties and protocols to deal with modern workflows, in the form of the EWMH; applications and desktop modules got shared UI components using CORBA to communicate between them.

On a meta level, the GNOME project established a formal structure on itself, with the formation of a release team and a non-profit foundation that would work as a common place to settle the internal friction between maintainers, and the external contributions from companies and the larger free and open source software world.

Going back to our frame of reference to interpret the development of GNOME as a community of contributors, we can see this as an attempt to rein in the splintering and partition of the various components of the project, and as a push towards its new chapter. This tension between the two efforts—one to create an environment with a singular vision, even if driven by multiple people; and the other, to create a flexible environment that respected the various domains of each individual maintainer, if not each individual user—defined the first major cycle, as it would (spoiler alert) every other major cycle.

Now that the foundational period was over, though, and the challenges provided by commercial platforms like Windows and OS X had been renewed, the effort to make GNOME evolve further was not limited to releasing version 2.0, but to establish a roadmap for the future beyond it.


Next week we’re going to dive right back into the development of GNOME, starting with the interregnum period between 1.4 and 2.0, in which our plucky underdogs had finally become mainstream enough to get on Sun and IBM radars, and had to deal with the fact that GNOME was not just a hobby any more, in the episode titled: “On Brand”.

by ebassi at January 17, 2019 02:45 PM

December 31, 2018

Tomas Frydrych

Why I like Film Photography

Why I like Film Photography

Film is experiencing something of a renaissance these days, as attested on social media (#ishootfilm #filmisnotdead), and, perhaps more importantly, by the reappearance of numerous previously discontinued film emulsions—there is, again, money to be made from film.

The contemporary Church of Analogue Photography is a broad one. On the one end sit the old hands, the ‘patriarchs’ who never gave up on film in the first place. Theirs is the voice of experience, of continuity in the old quest for handmade excellence. They worry about tonality and longevity, the perfect negative, the perfect print. They are, for the most part, whether consciously or not, practitioners of fine art, understanding the potential of film, but also that analogue on its own doesn’t mean it’s any good.

Then there are the ‘youngsters’, the new generation of photographers who grew up with both feet firmly planted in the digital world. They seem drawn to film because of the mystique of the past, because it is different, less sterile, less predictable, less mainstream. They have stumbled through the wardrobe into this weird magical world, and they are in love.

At times their passion gets the better of them, and the patriarchs sometimes look down on them—they are yet to discover that film alone doesn’t a worthwhile picture make. But by the same token they are less constrained by the received wisdom, reverently passed down from the old masters, as to what makes a good picture, and that’s not such a bad thing. And let’s face it, they are the reason companies like Kodak are resurrecting emulsions.

And of course, there are the born again film photographers, those of us who didn’t keep faith and abandoned film for the convenience of digital some years back, but somehow, for a variety of reasons, found our way back into the fold—perhaps because in some ways it is an opportunity to wind back the clock a bit, perhaps because we have run into insurmountable limitations of digital in our own creative quest. Or perhaps, surprisingly, it’s just pure old economics. (‘What?!’, I hear you gasp.)

When I started using film again this year after a break of some 17 years, it was definitely the latter. I had for some time pined after a medium format camera, and had my eye on the Fujifilm GFX50S. Alas, there was a small snag. Even though the Fuji is at the ‘entry level’ end of the medium format spectrum, I was looking at the better part of £15k to ‘enter’; in the end not even a looming significant birthday was enough to justify such an expense.

Film to the rescue! A very decent medium format film setup, plus a scanner, can be had for 1/10th of the cost. ‘But, surely, the film is expensive!’

Is it? The £13.5k I have saved will buy so much film that by the time I break even with the upfront cost of an MF digital setup, the digital camera will be obsolete several times over; film cameras don’t suffer from such a problem: an excellent camera from the mid 20th century is still an excellent camera today.

Yet, while my reasons for returning to film were entirely pragmatic, and my intention was merely the hybrid workflow (scan, then post process as ‘normal’), film rather quickly got to me as a medium in its own right; there is just something about it.

The old hands will tell you the something is tonality: film can represent a considerably greater number of shades of colour, or grey scale, than a 12- or 14-bit digital sensor. This is, of course, impossible to show online, but is obvious when comparing a darkroom print with a digital one. But to be honest, for me that’s not it. For me that ‘something’ is the process that film brings with it, the enforced learning.

The modern digital camera is a wonderful tool. Things like multipoint exposure measuring mean that in 90% of cases it will take an acceptable picture, which then usually can be muscled into a reasonable shape later. The cheapness of the digital image means that to photograph a sunrise I can simply take a few hundred pictures over a couple of hours, then pick one later. With a mirrorless camera, I can do exposure compensation by eye. Autofocus and manual focus assist mean my images are rarely out of focus.

But (all) such automation brings with it the dumbing down of operator skills that has come to be known as the Automation Paradox. It means I need not understand how light gets turned into an image, what ‘exposure’ represents, how the morning gets born. Film lays that bare. You proudly work that hand-held light meter, expecting a perfect image, and bang, it’s too dark, or too light, or, in the B&W case, just an uninteresting monolithic grey. You don’t know why, because you are not used to taking detailed notes, yet.

And so I find that with film everything becomes more deliberate: I think more about what I see, what I want to convey, about light, structure, texture. About how that sunrise is going to unfold. The whole process becomes less about ‘taking’ and more about ‘seeing’; I stop being on the outside of the picture, and get sucked right in.

It’s not that I can’t do all of that with a digital camera, but I don’t have to, and so, being a lazy git at heart, I don’t; film forces me to, and that makes it, if nothing else, a wonderful learning medium.

But the process doesn’t end there. I enjoy developing films, learning how the choice of developer and time impacts the result (not to mention home development significantly reducing the costs). And the real fun comes in the darkroom. The tactile nature of it all, working out how to unlock the potential glimpsed in the negative: deciding what to burn and what to dodge, crafting the tools to do that, choosing what multigrade combination will best represent the mental image I have. The whole trial and error on the way to The Print.

I don’t necessarily believe that film is better, merely that film and digital are significantly different media. The true strength of film comes out when printed, while digital has a natural impedance match to digital presentation in an online world—I think there is little to be gained by film if the intended end is the screen. There are also subjects at which digital excels—wildlife and sports photography come immediately to mind. It would be silly to limit oneself to one or the other. But all in all, I find film to be more fun, and in particular B&W film.

In the digital world B&W is more often than not just a last-ditch effort to rescue a bad image, and it shows; with film it’s always a fully independent form, one that requires a different way of looking at the world, an ability to see in monochrome. Without the safety net of arbitrary colour remixing in post, irreversible decisions need to be made at the point of shooting: light assessed, filters chosen. But the upside of that is that it’s precisely the indiscriminate remixing of colour that is the reason why many a digital B&W image jars, feels artificial, unreal; IR aside, film B&W images tend to always look somehow ‘natural’.

Perhaps the biggest thing I have learnt this year taking pictures is that B&W landscape is very hard to do well, very unforgiving. Hence most of my better B&W images this year have been of trees, and even people, with only two or three true landscapes that I am happy with—here is my challenge for 2019.

Have a good one!

by tf at December 31, 2018 08:35 PM

December 25, 2018

Tomas Frydrych

'18 through the Lens

I was going to write the usual annual retrospective, but sometimes life just gets in the way. They say a picture is worth a thousand words; perhaps it is, so anyway, here are a few.

A bit of solitude in the hills is good for the soul. An early morning at Shenavall, waiting for John during his epic winter round.

Discovering the joys of snowshoeing in the Trossachs.

I spent a lot of time this year exploring places closer to home, discovering little overlooked gems of nature here and there.

And thinking about trees. One of my favourite, almost local, spots.

And farther from home.

After years of talking about it, Linda and I visited Iceland this year; of course, while Scotland was basking in sun, we picked the worst Icelandic summer in 100+ years, but we did glimpse the sun once or twice.

And while we liked Iceland a lot, this still remains our favourite place in the world.

For reasons, I spent a lot more time taking pictures this year, mostly landscapes, treescapes, and birds. This unremarkable picture of a greenshank is one that I had to work the hardest for -- bird photography in the wild is very hard.

And after a 17 year break, I started using film again. Not out of nostalgia, but as a means into the world of medium format and bigger cameras, but it really got to me as a medium in its own right. There just is something about film.

I have been particularly drawn into the wonderful world of Black and White film photography. Had it not been for the Iceland trip, most of my pictures this year would have been B&W.

That's it really. Have good holidays everyone!

by tf at December 25, 2018 10:32 AM

December 14, 2018

Emmanuele Bassi

And I’m home

It’s almost the end of the year, so it’s time for a recap of the previous episodes, I guess.

The tl;dr of 2018: started pretty much the same; massive dip in the middle; and, finally, got better at the very end.

The first couple of months of the year were pretty good; had a good time at the GTK hackfest and FOSDEM, and went to the Recipes hackfest in Yogyakarta in February.

In March, my wife Marta was diagnosed with breast cancer; Marta already had (different types of) cancer twice in her life, and had been in full remission for a couple of years, which meant she was able to cope with the mechanics of the process, but it was still a solid blow. Since she had already gone through a round of radiotherapy 20 years ago—which likely had a hand in the cancer appearing now—her only option was surgery to remove the whole breast tissue and the associated lymph nodes. Not fun, but surgery went well, and she didn’t even need chemotherapy, so all in all it could have been way, way worse.

While Marta and I were dealing with that, I suddenly found myself out of a job, after working five years at Endless.

To be fair, this left me with enough time to help out Marta while she was recovering—which is why I didn’t come to GUADEC. After Marta was back on her feet, and was able to raise her right arm above her head, I took the first vacation in, I think, about four years. I relaxed, read a bunch of books, played some video games, built many, many, many Gundam plastic models, recharged my batteries—and ended up finally having time to spend on a project that I had pushed back for a while, because it meant writing and producing 15 to 20 minutes of audio every week, after perusing thousands of email archives and old web pages on the Wayback Machine. Side note: donate to the Wayback Machine, if you can. They provide a fundamental service for everybody using the Web, and especially for people like me who want to trace the history of things that happen on the Web.

Of course I couldn’t stay home playing video games, recording podcasts, and building gunplas forever, and so I had to figure out where to go to work next, as I do enjoy being able to have a roof above my head, as well as buying food and stuff. By a crazy random happenstance, the GNOME Foundation announced that, thanks to a generous anonymous donation, it would start hiring staff, and that one of the open positions was for a GTK developer. I decided to apply, as, let’s be honest, it’s basically the dream job for me. I’ve been contributing to GNOME components for about 15 years, and to GTK for 12; and while I’ve been paid to contribute to some GNOME-related projects over the years, it was always as part of non-GNOME related work.

The hiring process was really thorough, but in the end I managed to land the most amazing job I could possibly hope for.

If you’re wondering what I’ll be working on, here’s a rough list:

  • improve performance, especially on less powerful devices
  • identify and land new features
  • identify and fix pain points for current consumers of GTK

On top of that, I’ll try to do my best to increase the awareness of the work being done on both the GTK 3.x stable branch, and the 4.x development branch, so expect more content appearing on the development blog.

The overall idea is to ensure that GTK gets more exposure and mindshare in the next 5 years as the main toolkit for Linux and Unix-like operating systems, as well as better functionality for application developers who want to make sure their projects work on other platforms.

Finally, we want to make sure that more people feel confident enough to contribute to the core application development platform; if you have your pet feature or your pet bug inside GTK, and you want guidance, feel free to reach out to me.


Hopefully, the next year will not look like this one, and will be a bit better. Of course, if we in the UK don’t all die in the fiery chaos that is the Brexit circus…

by ebassi at December 14, 2018 07:00 PM

Episode 1.c: Applications

First of all, I’d like to apologise for the lateness of this episode. As you may know, if you follow me on social media, I’ve started a new job as GTK core developer for the GNOME Foundation—yes, I’m actually working at my dream job, thank you very much. Of course this has changed some of the things around my daily schedule, and since I can only record the podcast when the ambient noise around my house is not terrible, something had to give. Again, I apologise, and hopefully it won’t happen again.


Over the course of the first chapter of the main narrative of the history of the GNOME project, we focused on the desktop and core development platform produced by GNOME developers, but we did not really spend much time on the applications—except when they were part of the more “commercial” side of things, like Evolution and Nautilus.

Looking back at Miguel’s announcement, though, we can see “a complete set of user friendly applications” in the list of things that the GNOME project would be focusing on. What good are a software development platform and environment, if you can’t use them to create and run applications?

While GIMP and GNOME share a great many things, it’s hard to make the case for the image manipulation program being part of GNOME; yes, it’s hosted on GNOME infrastructure, and yes, many developers contributed to both projects. Nevertheless, GIMP remains fairly independent, and while it consumes the GNOME platform, it tends to do so in its own way, and under its own direction.

There’s another issue to be considered, when it comes to “GNOME applications”, especially at the very beginning of the project: GNOME was not, and is not, a monolithic entity. There’s no such thing as “GNOME developers”, unless you mean “people writing code under the GNOME umbrella”. Anyone could come along, write an application, and call it “a GNOME application”, assuming they used a copyleft license, GTK for the user interface, and the few other GNOME platform libraries for integrating with things like settings. At the time, code hosting and issue trackers weren’t the commodity they are nowadays—even SourceForge, which is usually thought to have always been available, would only become public in 1999, two years after GNOME started. GNOME providing CVS for hosting your code, infrastructure to upload and mirror release archives, and a bug tracker was a large value proposition for application developers who were already philosophically and technologically aligned with the project. Additionally, if you wanted to write an application there was a strong chance that you had contributed, or were at least willing to contribute, to the platform itself, given its relative infancy. As we’ve seen in episode 1.4, having commit access to the source code repository also meant having access to all the GNOME modules; the intent was clear: if you wrote code for your application that was good enough to be shared across the platform, you should drive its inclusion in the platform.

As we’ve seen all the way back in episode 1.1, GNOME started off with a few “core” applications, typically utilities for the common use of a workstation desktop. In the 1.0 release, we had the GNOME user interface around Miguel de Icaza’s Midnight Commander file manager; the Electric Eyes image viewer, courtesy of Carsten Haitzler; a set of small utilities, in the “gnome-utils” grab bag; and three text editors: GXedit, gedit, and gnotepad+. I guess the decision to ship all of them was made in case a GNOME user ended up on a desert island, and once saved by a passing ship after 10 years, they would be able to say: “this is the text editor I use daily, this is the text editor I use in the holidays, and that’s the text editor I will never use”.

Alongside this veritable text editing bonanza, we could also find a small PIM suite, with GnomeCal, a calendar application, and GnomeCard, a contacts application; and a spreadsheet, called Gnumeric.

The calendar application was written by Federico Mena in 1998, on a dare from Miguel, in about ten days, and it attempted to replicate the offerings of commercial Unix operating systems, like Solaris. The contacts application was written by Arturo Espinosa pretty much at the same time. GnomeCal and GnomeCard could read and export the standard vCal and vCard formats, respectively, and that allowed integration with existing software on other platforms, as an attempt to “lure away” users from those platforms and towards GNOME.

Gnumeric was the brainchild of Miguel, and the first real attempt at pushing the software platform forward; the original GNOME canvas implementation, based on the Tk canvas, was modified not only to improve performance, but also to allow writing custom canvas elements in order to have things like graphs and charts. The design of Gnumeric was mostly borrowed from Excel, but right from the start the idea was to ensure that the end result would surpass Excel and its limitations—which was a somewhat tall order for an application developed by volunteers; it clearly demonstrated the will to not just copy commercial products, but to improve on them, and deliver a better experience to users. Gnumeric, additionally, came with a plugin infrastructure that exposed the whole workbook, sheets, and cells to each plugin.

Both the PIM applications and the spreadsheet application integrated with the object model effort, and provided components to let other applications embed or manipulate their contents and data.

While Gnumeric is still active today, 20 years and three major versions later, both GnomeCal and GnomeCard were subsumed into what would become one of the centerpieces of Ximian: Evolution.

GNOME 1.2 remained pretty similar to 1.0, from an application perspective. Various text editors were moved out of the release, and went on at their own pace, with gedit being the main survivor; Electric Eyes fell into disrepair, and was replaced by the Eye of GNOME as the image viewer. The newly introduced ggv, a GTK-based GUI layer around ghostscript, was the PostScript document viewer. Finally, for application developers, a tool called “Glade” was introduced as a companion to every programmer’s favourite text editor. Glade allowed creating a user interface using drag and drop from a palette of components; once you were done with it, it would generate the C code for you—alongside the needed Autotools gunk to build it, if it couldn’t find any. The generated code was limited to specific files, so as long as you didn’t have the unfortunate idea of hand-editing the generated user interface code, you could change the interface through Glade, regenerate the code, and then hook your own application logic into it.

Many projects of that era started off with generated code, and if you’re especially lucky, you will never have to deal with it, unless, of course, you’re trying to write something like the history of the GNOME project.

For GNOME 1.4, Nautilus was the only big change in the release, in terms of applications. Even with an effort to ensure that applications, as well as libraries, exposed components for other applications to reuse, most of the development effort was spent on laying down the groundwork of the desktop itself; applications came and went, but they were leaf nodes in the graph of dependencies, and as such required less coordination in their development, and fewer formalities when it came to releasing them to the users.

It would be a long time before somebody actually sat down, and decided what kind of applications ought to be part of the GNOME release.


With this episode, we’ve now reached the end of the first chapter of the history of the GNOME project. The second chapter, as I said a few weeks ago, will be on January 17th, as for the next four weeks I’m going to be busy with the end of the year holidays here in London.

Once we’re back, we’re going to have a little bit of a retrospective on the first chapter of the history of GNOME, before plunging directly into the efforts to release GNOME 2.0, and what those entailed.

So, see you next year for Chapter 2 of the History of GNOME.


by ebassi at December 14, 2018 06:00 PM

December 06, 2018

Emmanuele Bassi

Episode 1.b: Bindings

Back in episodes 1.1 and 1.2 we talked a bit about “language bindings” as a resource available to GNOME application developers who did not want to deal with C as the programming language for their projects.

Language bindings for GTK appeared pretty much as soon as the GIMP adopted it, in order to write plugins for custom image processing effects; adding new filters just by dropping a file in a well-known location made GIMP extensible without having to recompile the whole application. So it’s no surprise that the original announcement for the GNOME project mentions applications and desktop components written in Guile: since the very beginning, the intent was to only use C for the core libraries of the GNOME platform, in order to have an application development stack consumable through programming languages other than C.

Of course, best laid plans, and all that.

If we look at the history of the bindings in GNOME we can divide it into two major eras: before introspection, and after introspection. We’re going to concentrate on the former, and leave the latter for when we move on to the second chapter of the main narrative.

The first few bindings for GTK appearing on the scene were Objective C, C++, and Guile. The first two were definitely easier to achieve, as both Objective C and C++ could interoperate with the C of the time; it was easy to build shallow abstractions over the C API, and if you needed something more complicated, you could always drop into C. Guile, on the other hand, used a LISP grammar; and even though it managed to call into C all the same, you could not really expose a pointer to a C data structure and tell developers to go to town on it, so you had to be careful about both what you exposed, and what you abstracted away.

After Objective C, C++, and Guile, Perl and Python bindings started to appear, as the market share for both those languages increased.

The shared problem among all of them was that GTK was growing as a toolkit, and its API surface started to exceed the capacity of a human being to hand-code the various entry points by reading a C header file and translating it into the code each programming language needed to trampoline into the underlying library.

It is a well established fact that programmers, left to their own devices, will try to program their way out of every problem. In this particular case, each binding started growing its own set of tools to generate code from C headers; then the scripts were modified to read C headers and ancillary metadata, needed to handle cases where C was sadly not enough to describe the API and the relations between widgets, like inheritance; or the idiosyncrasies of the GTK code base, like constructor arguments (the precursor to GObject’s properties) and signals.

The C++ bindings grew a veritable cornucopia of exceedingly 1997 decisions, like:

  • a set of hand coded header files that included parsing directives and metadata, alongside real C++ code
  • a set of m4 macros to do text processing over the generated files to replace some of the directives
  • a lexer and a parser, generated via flex and bison, to generate C++ code out of the generated headers

I looked at the early C++ bindings and I’m still in awe of the Rube Goldberg machine that was used to build them; the only things missing from the project build were a small marble, a hamster wheel, two pool cues, and a bowling ball.

Every binding, though, had its own way of generating code, and thus its own way of storing metadata to describe the GTK API. To avoid the proliferation of ad hoc formats, and to provide a relatively official description of the API, GTK developers decided to settle on a common definition file, borrowed from the Guile bindings.

The definition file used an S-expression syntax, and it described:

  • enumeration types, which contained not only the C symbol, but also a “nickname”, that is a string that could be used to map it to the numeric value
  • boxed types, that is plain old data structures
  • object types, that is classes in the type system
  • functions, each with its return type and arguments

The definition files were originally a mix of hand written changes on top of the output generated by a script that would parse the C header files; this meant that they would go out of sync pretty quickly. Additionally, they were lacking a lot of ancillary information because the script did not know anything about the GTK type system. Various language bindings took the definition files and copied them into their own source repositories, to tweak them to fit their code generation steps; this introduced an additional drift into the already cumbersome process of keeping language bindings up to date with a fast moving library like GTK.

An attempt at improving and standardising the definition file format came in late 1999 and early 2000, courtesy of Elliot Lee and Havoc Pennington. The S-expression syntax was maintained, but the data set was extended to allow matching objects with namespaces; methods with classes; and parameters with their types, names, and default values.

Of course, this still required generating C code out of a formally agnostic S-expression, itself generated from C code; this meant that bindings for dynamic languages were anything but dynamic, and that additions to the GTK API would still require an additional step in order to be consumed by anything that wasn’t C, or C adjacent. For a while, though, this was good enough for application developers, and the ability to quickly write small tools or large applications in Python, or Perl, or PHP, or Ruby, made the GNOME platform quite attractive.

Still, many applications in the GNOME project were written in C, because that’s what C developers will do when given the chance — and that’s what we’re going to see in next week’s side episode.


by ebassi at December 06, 2018 04:00 PM

November 29, 2018

Emmanuele Bassi

Episode 1.a: The GIMP Toolkit

The history of the GNOME project is also the history of its core development platform. After all, you can’t have a whole graphical environment without a way to write not only applications for it, but its own components. Linux, for better or worse, does not come with its own default GUI toolkit; and if we’re using GNU as the userspace environment we’re still left without something that can do the job of putting things on the screen for people to use.

While we can’t tell the history of GNOME without GTK, we also cannot tell the history of GTK without GNOME; the two projects are fundamentally entwined, both in terms of origins and in terms of direction. Sure, GTK can be used in different environments, and on different platforms, now; and having a free-as-in-free-software GUI toolkit was definitely the intent of the separation from GIMP’s own code base; nevertheless, GTK as we know it today would not exist without GNOME, just like GNOME would not have been possible without GTK.

We’ve talked about GTK’s origin as the replacement toolkit for Motif created by the GIMP developers, but we haven’t spent much time on it during the first chapter of the history of the GNOME project.

If you managed to fall into a time hole, and ended up in 1996 trying to write a GUI application on Linux or any commercial Unix, you’d have different choices depending on:

  • the programming language you wish to use in order to write your application
  • the license you wish to use once you release your application

The academic and professional space on Unix was dominated by the Open Group’s Motif toolkit, which was mostly a collection of widgets and utilities on top of the X11 toolkit, or Xt. Xt’s API, like the APIs of all pre-Xorg X11 libraries, is very 1987: pointers hidden inside type names; global locks; an explicit event loop. Xt is also notable because it lacks basically any feature outside of “initialise an application singleton”; “create this top level window”, either managed by a window manager compatible with the Inter-Client Communication Conventions Manual, or an unmanaged “pop up” window that is left to the application developer to handle; and an event dispatch API, in order to write your own event loop. Motif integrated with Xt to provide everything else: from buttons to text entries; from menus to scroll bars.

Motif was released under a proprietary license, which required paying royalties to the Open Group.

If you wanted to release your application under a copyleft license, you’d probably end up writing something using another widget collection, released under the same terms as X11 itself and, like Motif, based on the X toolkit, called the “X Athena widgets”—which is something I would not wish on my worst enemy; you could also use the GNUstep project’s toolkit, a reimplementation of the NeXT frameworks; but, like on NeXT, you’d have to use the (then) relatively niche Objective C language.

If you didn’t want to suffer pain and misery with the Athena widgets, or you wanted to use anything except Objective C, you’d have to write your own layout, rendering, and input handling layer on top of raw Xlib calls — an effort commonly known amongst developers as: “writing a GUI toolkit”.

GIMP developers opted for the latter.

Since GTK was a replacement for an extant toolkit, some of the decisions that shaped its API were clearly made with Motif terminology in mind: managed windows (“top levels”), and unmanaged windows (“pop ups”); buttons and toggle buttons; menus and menu shells; scale and spin widgets; panes and frames. Originally, GTK objects were represented by opaque integer identifiers, like objects in OpenGL; that was, fortunately, quickly abandoned in favour of pointers to structures. The widget hierarchy was flat, and you could not derive new widgets from the existing ones.

GTK as a project provided, and was divided into, three basic and separate libraries:

  • GLib, a C utility library, meant to provide the fundamental data structures that the C standard library does not have: linked lists, dynamic arrays, hash tables, trees (see the short sketch after this list)
  • GDK, or the GIMP Drawing Kit, which wrapped Xlib function calls and data types, like graphic contexts, visuals, colormaps, and display connections
  • GTK, or the GIMP toolkit, which contained the various UI elements, from windows, to buttons, to labels, to menus
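
To give an idea of what that utility layer looks like, here is a minimal sketch using the GLib API as it exists today; the keys and values are invented purely for illustration:

    /* A minimal sketch of the kind of helpers GLib provides, using
     * today's GLib API; the keys and values are made up for the example. */
    #include <glib.h>

    int
    main (void)
    {
      /* A hash table mapping strings to strings */
      GHashTable *origins = g_hash_table_new (g_str_hash, g_str_equal);
      g_hash_table_insert (origins, "gtk", "the GIMP developers");
      g_hash_table_insert (origins, "gnomecal", "Federico Mena");

      g_print ("gtk was started by %s\n",
               (char *) g_hash_table_lookup (origins, "gtk"));

      /* A singly linked list of strings */
      GSList *editors = NULL;
      editors = g_slist_append (editors, "GXedit");
      editors = g_slist_append (editors, "gedit");
      editors = g_slist_append (editors, "gnotepad+");

      for (GSList *l = editors; l != NULL; l = l->next)
        g_print ("text editor: %s\n", (char *) l->data);

      g_slist_free (editors);
      g_hash_table_destroy (origins);

      return 0;
    }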

Once GTK was spun off into its own project in late 1996, GTK gained the new features necessary to write applications and libraries. The widget-specific callback mechanism to handle events was generalised into “signals”, which are just a fancy name for the ability to call a list of functions tied to a well-known name, like “clicked” for buttons, or “key-press-event” for key presses. Additionally, and more importantly, GTK introduced a run time type system that allowed deriving new widgets from existing ones, as well as creating new widgets outside of the GTK source tree, to allow applications to define their own specialised UI elements.
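
To make the idea a bit more concrete, here is a minimal sketch in the modern GTK 3 and GObject spelling of the API (the 1.x era used gtk_signal_connect() and GTK_SIGNAL_FUNC() instead, but the shape of the code was much the same); the window, button, and callback are invented for the example:

    /* A minimal sketch of the "signals" idea: callbacks attached to a
     * well-known name on a widget instance. Assumes GTK 3. */
    #include <gtk/gtk.h>

    static void
    on_clicked (GtkButton *button, gpointer user_data)
    {
      g_print ("Button '%s' was clicked\n", gtk_button_get_label (button));
    }

    int
    main (int argc, char *argv[])
    {
      gtk_init (&argc, &argv);

      GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      GtkWidget *button = gtk_button_new_with_label ("Click me");

      gtk_container_add (GTK_CONTAINER (window), button);

      /* "clicked" and "destroy" are the well-known signal names */
      g_signal_connect (button, "clicked", G_CALLBACK (on_clicked), NULL);
      g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);

      gtk_widget_show_all (window);
      gtk_main ();

      return 0;
    }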

The new GTK gained a “plus”, to distinguish it from the in-tree GTK of the early GIMP.

Between 1996 and 1997, GTK development was mostly driven by the needs of GIMP, which is why you could find specialised widgets such as a ruler or a color wheel in the standard API, whereas things like lists and trees of widgets were less developed, and amounted to simple containers that would not scale to large sets of items—as any user of the file selection widget at the time would be able to tell you.

Aside from the API being clearly inspired by Motif, the appearance of GTK in its 0.x and 1.0 days was very much Motif-like; blocky components, 1px shadows, aliased arrows and text. Underneath it all, X11 resources like windows, visuals, server-side allocated color maps, graphic contexts, and rendering primitives. Yes, back in the old days, GTK was fully network transparent, like every other X11 toolkit. It would take a few more years, and two major API cycles, for GTK to fully move away from that model, and towards its own custom, client-side rendering.

GTK’s central tenets about how the widget hierarchy worked were also borrowed in part from Motif; you had a tree of widgets, starting from the top level window, down to leaf widgets like text labels. Widgets capable of holding other widgets, or containers, were mostly meant to be used as layout managers, that is UI elements encoding a layout policy for their child widgets—like a table, or a horizontal box. The layout policies provided by GTK eschewed the more traditional “pixel perfect” positioning and sizing, as provided by the Windows toolkits; or the “struts and springs” model, provided by Apple’s frameworks. With GTK, you packed your widgets inside different containers, and those would be sized by their contents, as well as by packing options, such as a child filling all the available space provided by its parent, or allowing the parent widget to expand, and aligning itself within the available space.
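
As an illustration of that packing model, here is a small sketch, again in GTK 3 spelling (the 1.x functions had slightly different names, such as gtk_hbox_new(), but the expand and fill options worked in the same way); the row of widgets is invented for the example:

    /* A sketch of the container/packing model described above: widgets
     * are packed into a box, and expand/fill options decide who gets
     * the extra space when the parent grows. Assumes GTK 3. */
    #include <gtk/gtk.h>

    static GtkWidget *
    build_row (void)
    {
      GtkWidget *row = gtk_box_new (GTK_ORIENTATION_HORIZONTAL, 6);

      GtkWidget *label  = gtk_label_new ("Name:");
      GtkWidget *entry  = gtk_entry_new ();
      GtkWidget *button = gtk_button_new_with_label ("Apply");

      /* The label and the button only take their natural size... */
      gtk_box_pack_start (GTK_BOX (row), label, FALSE, FALSE, 0);
      gtk_box_pack_end (GTK_BOX (row), button, FALSE, FALSE, 0);

      /* ...while the entry expands to fill whatever space is left
       * when the parent window is resized. */
      gtk_box_pack_start (GTK_BOX (row), entry, TRUE, TRUE, 0);

      return row;
    }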

As we’ve seen in episode 1.2, one of the first things the Red Hat Advanced Development labs did was to get GTK 1.0 out the door in advance of the GNOME 1.0 release, in order to ensure that the GNOME platform could be consumed by desktop components and applications alike. While that happened, GTK started its 1.2 development cycle.

The 1.2 development effort went into stabilisation, performance, and the occasional new widget. GLib was split off from GTK’s repository at the start of the development cycle, as it was useful for other, non-GUI C projects.

As a result of “Project Bob”, Owen Taylor took the code initially written by Carsten Haitzler, and wrote a theming engine for GTK. GTK 1.0 gave you API inside GDK to draw the components of any widget: lines, polygons, ovals, text, and shadows; the GDK API would then call into the Xlib one, and send commands over the wire to the X server. In GTK 1.2, instead of using the API inside GDK, you’d go through a separate layer inside GTK; you had access to the same primitives, augmented by a state tracker for things like colors, background pixmaps, and fonts. That state was defined via an ancillary configuration file, called a “resource” file, with its own custom syntax. You could define the different colors for each different state for each widget class, and GTK would store that information inside the style state tracker, ready to be used when rendering.

Additionally, GTK allowed loading “engines” at run time: small pieces of compiled C code that would be injected into each GTK application, and that replaced the default drawing implementation provided by GTK. Engines would have access to the style information, and, critically, as we’re going to see in a moment, to the windowing system surface that the widget would be drawn onto.

It’s important to note that theme engines in GTK 1.2 could only influence colors and fonts; additionally, widgets would be drawn in isolation from one another, out of disjoint primitives. As a theme engine author you had no idea whether the text you were drawing was going to end up in a button or inside a label; or whether the background you were rendering was going to be a menu or a top level window. Which is why theme engine authors tried to be sneaky about this: due to how GTK and GDK were split, the GDK windowing system surface needed to have an opaque back pointer to the GTK widget that owned it. If you knew this detail, you could obtain the GTK widget currently being drawn, and determine not only the type of the widget, but also its current position within the scene graph. Of course, this meant that theme engines, typically developed in isolation, would need to be cognisant of the custom widgets inside applications, and that any bug inside an engine would either break new applications, which had no responsibility towards maintaining an internal state for theme engines to poke around with, or simply crash everything that loaded it. As we’re going to see in the future, this approach to theming—at the same time limited in what it could achieve out of the box, and terrifyingly brittle as soon as it was fully exploited—would be a problem that both GTK developers and GNOME theme developers would try to address; and to the surprise of absolutely nobody, the solution made a bunch of people deeply unhappy.

Given that GTK 1.2 had to maintain API and ABI compatibility with GTK 1.0, not much changed over the course of its development history. By the time GNOME 1.2 was released, the idea was to potentially release GTK 1.4 as an interim version. A new image loading library, called GdkPixbuf, was written to replace the aging Imlib, using the GTK type system. Additionally, as we saw in episode 1.5, Owen Taylor was putting the finishing touches on a text shaping library called Pango, capable of generating complex text layouts from Unicode strings. Finally, Tim Janik was hard at work on a new type system, to be added to GLib, capable of being used for more than just GUI development. All of these changes would require a clean break with the backward compatibility of the 1.x series, which meant that the 1.3 development cycle would result in GTK 2.0, and thus would be part of the GNOME 2.0 development effort—but this is all in the future. Or in the past, if you think fourth dimensionally.

Next week, we’re going to see what happens when people that really don’t want to use C to write their applications need to deal with the fact that the whole toolkit and core platform they are consuming is written in object oriented C, in the side episode about language bindings.


by ebassi at November 29, 2018 04:00 PM

November 22, 2018

Emmanuele Bassi

Episode 1.5: End of the road

With version 1.2 released, and the Foundation, well, founded, the GNOME community was free to spend its time on planning the next milestones for the project, namely: the last release of the 1.x cycle, and the first of the 2.x one.

GNOME 1.4 was supposed to be a technology preview for some of the changes in the platform that were going to be the centerpiece of the GNOME 2.0 core platform; additionally, applications such as Nautilus, Evolution, and Gnumeric were going to be added to the release as official GNOME components. Maciej Stachowiak volunteered to work as the release manager, and herd all the module maintainers in order to put all the cats in a row for long enough to get distribution-ready tarballs out of them.

From the technological side, one of the long term integration jobs was finally coming to fruition: a specification for interoperation between GNOME and the various window managers available on X11. The work, started back in the pre-1.0 beta days by Carsten Haitzler with the goal of making Enlightenment work with the GNOME components, attempted to specify various properties and protocols to be implemented by window managers and toolkits, in order to allow those toolkits to negotiate capabilities and perform operations that involved the window manager, such as making windows full screen, for media players and presentation tools; listing the number of virtual workspaces available, and moving windows across them; defining “struts”, or areas of the screen occupied by special parts of the desktop, such as panels, that would not be covered when resizing or maximising windows; specifying properties for icons, window titles, and other ancillary information to be displayed by session components such as window and task lists; and creating protocols for embedding windows from different processes, used to implement panel applets and shared components.

The Extended Window Manager Hints specification, built on top of the original Inter-Client Communication Convention Manual, or ICCCM, was the first attempt at real cross-desktop collaboration, with developers for toolkits, window managers, and desktop environments working together in order to let applications behave somewhat consistently, regardless of the environment in which they were being used. Alongside the EWMH, additional specifications like the XEMBED protocol, the clipboard specification, and the XSETTINGS specification, were created as foundations to the concept of a “free desktop” stack, and we’ll see where that will lead in the GNOME 2 days.

The creation of the EWMH gave a further push towards the establishment of a default window manager for GNOME. Enlightenment had steadily fallen out of favour, given its increase in complexity and its own push towards becoming a full environment in its own right. Additionally, its release planning had become somewhat erratic, with Linux distributions often forced to package development snapshots of dubious quality. While Enlightenment was never considered the default window manager for the GNOME desktop environment, it was the most integrated one for a long while, especially compared to the older, or more “free spirited” ones, meant mostly to be used by people building their own environments out of tinfoil, toothpicks, hopes, and prayers. Nevertheless, it became clear that having at least a sensible default, released alongside the rest of the project’s modules and maintained within the same community, and more importantly with the same goals, was necessary. The choice for a window manager fell on Sawmill, a window manager written in a Lisp-like language called “rep”, and originally released around the same time as GNOME 1.2. Sawmill, later renamed Sawfish to avoid a naming collision with a log analysis tool for Linux, was, like many Lisp-based projects, programmable by the user; it was possible to write rules to match windows created by specific applications, and apply policies, such as automatic maximisation, moving to specific workspaces at launch, or saving the geometry and state of windows when closing them. The window manager decorations were also themable, because of course they were.

Another change introduced for GNOME 1.4 as a technology preview in preparation for GNOME 2.0 was GConf, a configuration storage and notification subsystem written by Havoc Pennington. For the 0.x snapshots and 1.0 releases, GNOME provided a simple key/value configuration system called gnome-config. Each application would use the gnome-config API to store and retrieve its settings, which were saved inside flat, INI-like files in a hidden location under the user’s home directory. The system was fairly simple, but not very reliable, as it could theoretically leave files in an inconsistent state; it wasn’t at all performant, as it needed to traverse hierarchies on the file system to get to a key/value pair; and there was no database schema for the keys and values, which made it impossible to validate the contents of the configuration. Additionally, storing complex values required a fair amount of manual serialisation and deserialisation, so application developers typically ended up writing their own code for storing settings in flat configuration files under their own directory. It was in theory possible to use gnome-config to store and read settings for the desktop itself, but without a schema to define the contents of the settings, and without any type of access arbitration, there was no guarantee that different system components would not end up stepping on each other’s toes, and there was no way for interested components or applications to be notified of changes to the configuration. The limitations of the API led to limitations in the user interface: all settings and preferences had by necessity to be applied explicitly, to allow saving them to a file, and to ensure a consistent state of the underlying storage.

The goal of GConf was to provide consistent storage for preferences. Values would be addressed via a path, like on a file system; read and write operations would go through a session daemon, which was the only component responsible for reading and writing the data to disk, and which could subscribe listeners to changes on a specific path. This way, applications could listen to system configuration changes and update themselves in response to them; not only that, but utilities could read and write well-known configuration keys, decentralising the configuration of the environment and of applications away from the system settings control panel. Notification and daemon-mediated writes also made it possible to write instant-apply preferences dialogs, getting rid of the “Apply” button.
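
As a rough illustration of that model, here is a sketch using the GConfClient API as it later shipped in GConf 2 on a modern GLib; the key path, value, and callback are invented, and the very first GConf releases may have differed in the details:

    /* A sketch of the GConf model: path-addressed keys, reads and writes
     * mediated by the session daemon, and change notification. Build
     * flags would come from `pkg-config gconf-2.0`. */
    #include <gconf/gconf-client.h>

    static void
    on_font_changed (GConfClient *client,
                     guint        cnxn_id,
                     GConfEntry  *entry,
                     gpointer     user_data)
    {
      /* Invoked through the daemon whenever the watched key changes,
       * from any process in the session. */
      g_print ("Key %s changed\n", gconf_entry_get_key (entry));
    }

    int
    main (int argc, char *argv[])
    {
      GMainLoop *loop = g_main_loop_new (NULL, FALSE);
      GConfClient *client = gconf_client_get_default ();

      /* Keys are addressed by path, like a file system */
      gconf_client_add_dir (client, "/apps/example",
                            GCONF_CLIENT_PRELOAD_NONE, NULL);

      gchar *font = gconf_client_get_string (client, "/apps/example/font", NULL);
      g_print ("Current font: %s\n", font != NULL ? font : "(unset)");
      g_free (font);

      /* Writes go through the daemon, which notifies every listener */
      gconf_client_notify_add (client, "/apps/example/font",
                               on_font_changed, NULL, NULL, NULL);
      gconf_client_set_string (client, "/apps/example/font", "Sans 10", NULL);

      g_main_loop_run (loop);

      return 0;
    }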

All seemed to be going well for the 1.4 release, and for the 2.0 planning.

In January 2001, Helix Code renamed itself to Ximian, once it couldn’t secure a trademark on the former name; nothing outside of the name of the company really changed.

In March 2001 Eazel released, at long last, Nautilus 1.0, and simultaneously dropped a bomb on the community: the company was going to lay off most of its employees, after failing to raise enough capital to keep operations going. Eazel finally closed down in June 2001, after only two years of operations. The Dot-com bubble that had propelled the IT industry to unsustainable heights was finally bursting, and everything was crashing down, with the startups that did not manage to get acquired by bigger companies closing left and right, leaving engineers scrambling to get a new job in a bearish market, and leaving cheap hardware, as well as cheap Herman Miller chairs, in the rubbish pile behind their vacant offices.

Eazel published all of its remaining code, and its employees promised to remain involved in the maintenance of their projects while the community picked up what it could; which, in fairness, they did, until most of the engineering team moved to Apple’s web browser team.

Nautilus did languish after GNOME 1.4, but as a centerpiece for GNOME 2.0, and with the port to GTK 2.0, it got progressively whittled down to remove the more startuppy bits, while remaining a file manager with first class remote volume browsing; as a result of Eazel’s abrupt end, though, the GNOME community was left with a lot of technical debt, which remained unpaid for the following 15 years.

The second edition of GUADEC, held in Copenhagen in April 2001, right in the middle of the Eazel drama, nevertheless featured more attendees and more presentations, including one that would prove fundamental in setting the future direction of the project. Sun’s usability engineer Calum Benson presented the results of the first round of user testing on GNOME 1.4, and while the results were encouraging in some areas, they laid bare the limits of the current design approach of a mish-mash of components. If GNOME wanted to be usable by professionals, leaving the desktop’s offering uncurated was no longer a sustainable option.

Sun had shipped GNOME 1.4 as a technology preview for Solaris 8, and conducted the first extensive set of user tests on it by sitting professional users in front of GNOME and asking them to perform certain tasks, while recording their actions. The consistency, or lack thereof, of the environment was one of the issues that the testing immediately identified: identical functionality was labelled differently depending on the component; settings like the fonts to be used by the desktop were split across various places; duplication of functionality, mostly for the sake of duplication, was rampant. Case in point: the GNOME panel shipped with not one, not two, but five different clock applets—including, of course, the binary clock, because nothing says “please quickly tell me if it’s time for a very important meeting with my boss” like having to read the answer in binary, from fake LEDs on a panel 48 pixels tall. Additionally, all those clocks, like all the panel applets, could be added and removed by sheer accident, just by clicking around on the desktop. The panel itself could simply go away, and there was no way to revert to a working state without nuking the user’s settings. The speed of the few animated components, like automatically hiding the panel to increase the available on-screen space, could be controlled down to the millisecond, just in case you cared, even if the refresh rate of your display could never have the same resolution. Nautilus had enough settings that they had to be split into a class-based system, with Novice, Intermediate, and Expert levels, as if managing your files required finishing RPG campaigns, and acquiring enough experience points from your dungeon master.

The most important consequence of the usability study, though, was that in order for GNOME to succeed as a desktop environment for everyone, it had to stop being made just for the people who wrote it, or who could spend hours tweaking the options, or learning how the environment worked first. Not everyone using GNOME would be a white male geek in their 20s, writing free software. Web designers, hardware engineers, financial analysts, musicians, artists, writers, scientists, school teachers; GNOME 2.0 would need a set of guidelines for applications and for the desktop, to ensure a consistent experience for every user, similar to how the platforms provided by Apple and Microsoft defined a common experience for the operating system.

In 2001, though, the Linux desktop encountered an unexpected problem, in the same way a sentence encounters a full stop. Apple, fresh from the return of its co-founder Steve Jobs and the subsequent successful launches of new lines of personal computers, and after releasing what was nominally a beta in late 2000, unveiled its new operating system, OS X. Sure, it was slow, buggy, and definitely nothing more than an alpha release; but it was a shiny, nice looking desktop environment working on top of a Unix-like user space—precisely what the free software equivalents were trying to achieve. The free desktop community was so worried about Microsoft that it never really considered Apple to be a contender.

September 2001 saw a new point release of OS X, and in just six months from its initial unveiling, Apple had managed to fix most of the instability of the previous snapshot; on top of that, it announced that all new Apple computers sold from January 2002 onwards would ship with OS X as the default operating system. GNOME 2.0 was released in June 2002, and in August 2002 Apple released the second point release of OS X, which improved performance and hardware support, and added accelerated compositing to the stack. Apple rose from the ashes of the ’90s with a new, compelling software platform, and Linux found itself with a new, major competitor besides Windows. A competitor hungry for the same kind of professional and amateur markets that Linux was trying to target; and, unlike Linux, Apple had both a hardware platform and a single driving force, with enough resources to sustain the effort.

The fight for the survival of GNOME, and of the Linux desktop, was back on, and GNOME had to start from 2.0.


We have now reached the point in the story where the GNOME community of volunteers and the ecosystem of companies around them is working on a new major release of the core platform, the desktop, and the applications around it, so it’s a good place to take a break. Writing the History of GNOME is quite intensive: there’s a lot of material that needs to be researched in order to write the 2500 words of an episode, edit them, record them, and edit the recording.

For the next three weeks we’re going to have some short side episodes on technologies and applications relevant to GNOME; the first side episode is going to be about GTK; the second will be about how GNOME wrote language bindings in the early years; and the third episode will be about the first GNOME applications that entered the project.

After these side episodes, we’re going to pause for four weeks, while I gather my notes for the next chapter in our history of the GNOME project; we are going to come back on January 17th, with the story of how GNOME got its groove back with version 2.0, in the next chapter: “Perfection, improved”.


by ebassi at November 22, 2018 02:00 PM

November 17, 2018

Richard Purdie

Post exercise collapse

It’s no secret that I have some kind of energy problem. What is less known is how horrible the effects can be, or how plain weird the pattern is. People see me manage the motorcycling, the mountain biking or cycling, but they don’t see what happens afterwards.

My current working model is that exercise generates some kind of toxin. That toxin builds up in the body and its effects peak around 24-28 hours after the exercise that generated it. The symptoms are best described as “flu-like”: lethargy, feeling cold, brain fog, inability to concentrate, feeling washed out, muscle and joint pains, tinnitus, tooth nerve pain, and general mood changes. Those feelings can go on for around two weeks. There is no respiratory side to it and it never seems to be actual flu.

How serious is it? At its worst, after two days of motorcycling I collapsed semi-conscious and was out of it for 18 hours. My body had run so low on energy it lost the ability to regulate its temperature properly (thankfully I collapsed onto a duvet).

Whatever the toxin is, it appears to attack the nervous system, hence the various joint/muscle pains, tinnitus, tooth sensitivity and so on. If I kept going and didn’t give in to it (through sheer willpower or painkillers), I ended up with a permanent hand/limb tremor. Thankfully I also discovered that biotin (vitamin B7) appears to accelerate healing of that, and after around a year I managed to stop shaking, a major win.

The closest thing in medical knowledge that matches is paracetamol overdose, the key match being the timeframe: you don’t see symptoms of that for 24-48 hours after the overdose. This led to the realisation that its treatment also seems to help me. It’s treated with NAC (N-acetyl-cysteine), which is thankfully freely available and is a non-essential amino acid, so it’s comparatively safe, with a low risk of side effects.

Have I talked to a doctor about this? In short, yes, a lot. They have run a ton of tests over around a decade and there are few types of specialists I’ve not seen at some point. There are a ton of things we know it isn’t, and some abnormalities, but they don’t match anything they recognise. The two interesting data points are that my liver is unhappy about something (always raised GGT and sometimes raised ALT/AST/ALP) and that prolactin is elevated. No idea about the prolactin (a story in its own right) but the liver fits the toxin poisoning model.

It’s taken me that decade to figure out the pattern and to come up with the current coping strategy, which cuts the recovery from two weeks down to 2-3 days. At one point I felt like I was accumulating damage (the tremor in particular); I’m pleased to say that it feels much less so now.

There are swings and roundabouts, as the NAC appears to help a huge amount but may expose or cause other vitamin deficiencies (B2?).

I personally suspect there is some genetic glitch somewhere, not enough to threaten life but enough to mean backup pathways which are less efficient are being used. There is solid science behind NAC increasing levels of Glutathione which is a master anti-oxidant and the way the body cleans up toxins. B2 is needed for recycling Glutathione and biotin is being studied in nervous system disease.

I guess I’m putting this out there in case it helps anyone else, or in case someone with knowledge of biochemistry can give any further insight into this.

by Richard at November 17, 2018 11:06 AM

November 15, 2018

Emmanuele Bassi

Episode 1.4: Founding the Foundation

As is the case of many free and open source software projects, GNOME’s planning and discussions mostly happen online, on a variety of mediums, like IRC, mailing lists, and issue trackers. Some of those mediums fade out over time, replaced by other, newer ones; or simply because they become less efficient.

The first 10 to 15 years of GNOME can be clearly traced and reconstructed from mailing list archives, which is a very good thing; otherwise, this podcast would not be possible without spending hundreds of hours compiling an oral history of the project, with the chance of getting things wrong—or, at least, wronger than I already do—or simply omitting details that have long since been forgotten by the people involved.

Nevertheless, it’s clear that some of the planning and discussion cannot really happen over a written medium; the bandwidth is simply not there. Shared, physical presence is much more efficient for human beings, in order to quickly communicate or iterate over an idea; the details can be summarised later, once everyone is on the same page.

By the year 2000, the GNOME project was already more than two years old, had seen its first major release, and now encompassed various companies, as well as volunteers around the globe.

While some of those people may have ended up sharing an office while working for the same company, and of course everyone was on IRC pretty much 24/7, the major stakeholders and contributors had yet to meet in one place.

Through some aggressive fundraising, what was supposed to be a small conference organised by Mathieu Lacage in March for the benefit of the students of the École Nationale Supérieure des Télécommunications in Paris was scaled up to allow the attendance of 40 GNOME developers from around the world, for four days of presentations and discussions.

The main theme of the meeting was fairly ambitious: laying down the foundation for the next major release of GNOME, starting from the core platform libraries, with a new major version of GTK; the introduction of a comprehensive text rendering API called “Pango”; and a more pervasive use of Bonobo components in the stack.

Additionally, it allowed companies like Eazel, Ximian, and Red Hat, to present their work to the larger community.

Owen Taylor and Tim Janik presented their plans for GTK 1.4, including a new type system and improved integration with language bindings; Owen also presented Pango, a library for text rendering in non-Latin localisations, with Unicode support, and support for bidirectional and complex text. GTK was also going to be available on Windows and BeOS, thanks to the efforts of Tor Lillqvist and Shawn Amundson, respectively. Havoc Pennington was working on a new text editing widget, based on Tk’s multi-line text entry. Language bindings, for C++, Python, and Ada, were also presented, as well as applications targeting the GNOME platform.

Out of the four days of presentations, planning, discussions, and hacking came two major results:

  • the creation of a “steering committee”, with the goal of planning and directing the development efforts for GNOME 2.0
  • the push for the creation of a legal entity capable of collecting donations on behalf of the GNOME project, and of acting as a point of reference between the community and the commercial entities that wanted to contribute to GNOME

As a side note, Telsa Gwynne’s report of GUADEC’s first edition is also the first time I’ve seen an explicit mention of the “Old Farts Club”, as well as the rule of being over 30 in order to enter it; I think we can add that to the list of major achievements of the conference.

After GUADEC, in May 2000, GNOME 1.2 was released, as part of the stabilisation of the 1.x platform, and GNOME 1.4 was planned by the steering committee to be released in 2001, with the 2.0 development happening in parallel.

The process of creating the GNOME Foundation would take a few additional months of discussions, and in July 2000 the foundation mailing list was created for various stakeholders to outline their positions. The initial shape of the Foundation was modelled on the Apache Software Foundation, as both a forum for the technical direction of the project, and a place for corporations to get involved with the project itself. The goals for this new entity, as summarised by Bart Decrem, an Eazel co-founder, were:

  1. Providing a forum to determine the overall technical direction of GNOME
  2. Promoting GNOME
  3. Fostering collaboration and communication among GNOME developers
  4. Managing funds donated to the GNOME project

There was a strong objection to having corporations be able to dictate the direction of the project, so one of the stated non-goals was for the Foundation not to be an industry consortium, similar to The Open Group. The Foundation would also not hire developers directly.

In order to avoid corporate dominance, no company would be allowed to be a member of the Foundation: if a company wanted to have a voice in the direction of the project it could hire a Foundation member, and thus have a representative in the community. Additionally, there would be a limit on the number of directors on the board working for the same company.

As the Foundation was going to be incorporated in the US, one way to avoid both the under-representation of non-US members and the potential for fragmentation into separate entities in each large geographical region was to be more open about both the membership and the board election process. GNOME contributors would be able to join the Foundation as long as they were actively involved with the project, and each member would be eligible to be elected as a director. Companies would be part of an advisory organ, not directly involved in shaping the project.

The idea of having the Foundation be the structure for setting the technical direction of the project was dropped fairly quickly, replaced by its function to be the place for settling controversial decisions, leaving the maintainers of each module in charge of their project.

It is interesting to note that many of the discussions that were part of the Foundation’s initial push have yet to be given an answer, more than 15 years later. If the Foundation is meant to be a forum for module maintainers, how do we define which modules should be part of GNOME, and which ones shouldn’t? Is being hosted on GNOME infrastructure enough to establish membership of a module? And, if so, who gets to decide that a module should be hosted on GNOME infrastructure? Is GNOME the desktop, or is that just a project under the GNOME umbrella? Are applications part of GNOME? The GNOME project is, to this day, still re-evaluating those questions.

Alongside the push from Eazel, Red Hat, and Ximian to get the Foundation going came the announcement that Sun was going to support GNOME as the desktop for its Solaris operating system, in order to replace the aging CDE. To that end, Sun was going to share the resources of its engineering, design, and QA teams with the GNOME project. Additionally, IBM, HP, and Dell wanted to support GNOME through the newly created Foundation.

Surprisingly, the discussions over the Foundation proceeded quickly; the self-imposed deadline for the announcement was set for August 15, 2000, three years after the first announcement of the GNOME project, to be presented at the Linux World Expo, a trade fair with a fair amount of media exposure. The creation of the actual legal entity, an initial set of bylaws, and the election of a board of directors would follow.

Having a few of the hot startups in the Linux space, as well as well-established companies in the IT sector, come together and announce they were putting their weight behind the GNOME project would, of course, be spun in a way that was adversarial to Microsoft, and so it was. The press release at the LWE pushed the angle of a bunch of companies joining together to challenge Microsoft, using a bunch of free code written by hacker weirdos to do so.

The announcement of the GNOME Foundation did not impress the KDE project, which released a statement trying to both downplay the importance of GNOME and of the companies that pledged resources to the GNOME project.

In November 2000, after finalising the initial set of bylaws for the Foundation and opening the membership to the people contributing to the project, GNOME held the first ever elections for the position of director of the board. With 33 candidates, a pool of 370 possible voters, and 330 valid ballots in the box, the first eleven directors were:

  • Miguel de Icaza (Helix Code)
  • Havoc Pennington (Red Hat)
  • Owen Taylor (Red Hat)
  • Jim Gettys (Compaq)
  • Federico Mena Quintero (Helix Code)
  • Bart Decrem (Eazel)
  • Daniel Veillard (W3C)
  • Dan Mueth (Eazel)
  • Maciej Stachowiak (Eazel)
  • John Heard (Sun Microsystems)
  • Raph Levien (Eazel)

Additionally, the advisory board was thus composed:

  • Compaq
  • Eazel
  • Free Software Foundation
  • Gnumatic
  • Helix Code
  • Henzai
  • IBM
  • Object Management Group
  • Red Hat
  • Sun Microsystems
  • VA Linux

After the election, the new board started working in earnest on the process of incorporating the Foundation and registering it as a non-profit entity; this took until March 2001, after a couple of false starts. In the meantime, the main topics of discussion were:

  • the foundation bylaws, needed for the incorporation, the tax-exempt status, and for opening a bank account in order to receive membership fees from the advisory board
  • the GNOME 1.4 release management, handled by Maciej Stachowiak
  • the preparation for the 2nd edition of GUADEC, to be held in Denmark

Additionally, the GNOME Foundation was going to work on establishing a trademark for the project, both as the name GNOME and for the project’s logo.

Originally, the GNOME logo was not a logo at all. It was part of a repeating pattern for one of the desktop backgrounds, designed by Tuomas Kuosmanen, who also designed Wilber, the GIMP mascot. Tuomas reused the foot pattern as an icon for the panel, namely the button for the launcher menu, which contained a list of common applications, and let the user add their own.

In a typical free software spirit, and with a certain amount of bravery considering the typical results of such requests, Red Hat decided to host a competition for the logo, setting a graphics tablet as the prize for the winning submission; they also asked contestants to use GIMP to create the logo, which, sadly, precluded the ability to get vector versions of it. In the end, many good submissions notwithstanding, the decision fell on a modified version of the original foot, also done by Tuomas—only instead of a right foot, it was a left foot, shaped like a “G”.

Leaving aside the administrivia of the Foundation for a moment, let’s go back to the technical side of the GNOME project, and take a small detour to discuss the tools used by GNOME developers. Over the years these tools have changed, in many cases for the better, but it helps to understand why these changes were made in the first place, especially for newcomers who did not experience how life was way back when developers had to bang rocks together to store the code, use leaves and twigs to compile it, and send pigeons to file bug reports.

GNOME code repositories started off using CVS. If you know, even in passing, what Git is, you can immediately think of CVS as anything that Git isn’t.

CVS was slow; complicated; obscure; unforgiving; not extensively documented; with a terrible user experience; and would fail in ways that could leave both the local copy of the code and the remote one in a sorry state for everyone.

No, hold on.

Sorry, that’s precisely like Git.

Well, except “slow”.

Unlike Git, though, all the operations on the source revisions were done on the server, which meant that you didn’t have access to the history of the project unless you were online, and that you couldn’t commit intermediate states of your work without sending them to the server. Branching was terrible, so it was only done when strictly necessary. These limitations influenced many of the engineering practices of the time: you had huge change log files in place of commit logs; releases were only marked as such by virtue of having generated an archive, as tagging was atrocious; the project history was stored per-file, so you could not see a change in its entirety unless you manually extracted a patch between two revisions; and conflicts between developers working on the same tree were a daily occurrence, which made integrating different work a pain.

It was not odd to have messy history in the revision control, as well as having to ask the CVS administrators to roll back a change to a previously backed up version, to compensate for some bad commit or source tree surgery.

Due to how GNOME components were initially developed — high level modules with shared functionality which were then split up — having commit access to one module’s repository allowed access to every other repository. This allowed people to work on multiple modules, and encouraged contributions across the whole code base, especially from newcomers. As a downside, it would lead to unreviewed commits and flames on mailing lists.

All in all, though, the “open doors” policy for the repositories worked well enough, and has been maintained over the years, across different source revision control software, and has led to not only many “drive by” patches, but also to a fair number of bugs being fixed.

Talking about bugs, the end of 2000 was also the point when the GNOME project moved to Bugzilla as their bug tracking system.

Between the establishment of the project and October 2000, GNOME used the same software platform as Debian to track bugs in the various modules. The Debian Bug Tracking System was, and still is, email based. You’d write an email, fill out a couple of fields with the module name and version, add a description of the issue and the steps to reproduce it, and then send it to submit@bugs.gnome.org. The email would be sent to the owner of the module, who would then be able to reply via email, add more people to the email list, and in general control the status of the bug report through commands sent, you guessed it, by email. The web interface at bugs.gnome.org would show the email thread, and let other people read and subscribe to it by sending an email, if they were experiencing the same issue, or were simply interested in it.
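
To make the workflow a bit more concrete, a report submitted this way would have been a plain email along the lines of the sketch below; the Package and Version pseudo-headers follow the Debian convention, and the module name, version number, and bug described here are purely illustrative:

    To: submit@bugs.gnome.org
    Subject: gnome-panel: custom launcher disappears after logout

    Package: gnome-panel
    Version: 1.0.55

    Adding a custom launcher to the panel menu works, but the entry
    is gone after logging out and back in.

    Steps to reproduce:
      1. Right-click the panel and add a new launcher.
      2. Log out, then log back in.
      3. The launcher is no longer in the menu.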

By late 2000, the amount of traffic was enough to make the single machine dedicated to the BTS keel over and die; so a new solution was being sought, and it presented itself in the form of Bugzilla, a bug tracking system originally developed by Mozilla as a replacement for the original Netscape in-house bug tracker, once Netscape published their code in the open.

The web-friendly user interface; the database-backed storage for bug reports; and the query system made Bugzilla a very attractive proposition for the GNOME project. Additionally, Eazel and Ximian were already using Bugzilla for their own projects, which made the choice much easier. Bugzilla went on to be the bug tracking system for GNOME for the following 18 years.

By the end of the millennium, GNOME was in a good position, with a thriving community of developers, translators, and documentation writers; taking advantage of the licensing woes of the “Kompetition”, and with a major release under its belt, the project now had commercial backing and a legal entity capable of representing, and protecting, the community. The user base was growing on Linux, and with Sun’s commitment to move to GNOME for their next version of Solaris, GNOME was one step away from becoming a familiar environment for millions of users.

This is usually when the ground falls beneath your feet.


While GNOME developers, community members, and companies were off gallivanting in the magical world of foundations and transformative projects, the rest of the IT world was about to pay its dues. The bubble that had been propelling the unsustainable growth of the previous three years was about to burst spectacularly.

We’re going to see the effects of the end of the Dot com bubble on the GNOME project in next week’s episode, “End of the road”.

by ebassi at November 15, 2018 04:00 PM

November 08, 2018

Emmanuele Bassi

Episode 1.3: Land of the bonobos

With the GNOME 1.0 release, and the initial endorsement of the project by Linux distributors like Red Hat and Debian, it was only a matter of time before a commercial ecosystem would start to coalesce around the GNOME project. The first effort was led by Red Hat, with its Red Hat Advanced Development laboratories. Soon, others would follow.

Two companies, in particular, shaped the early landscape of GNOME: Ximian and Eazel.

Ximian was announced alongside GNOME 1.0, at the Linux Expo 1999, by none other than the project creator and lead, Miguel de Icaza, and his friend and GNOME contributor, Nat Friedman; it was meant to be a company that would work on GNOME, and provide support for it, in a model similar to the Red Hat one, but with a more focused approach than a whole Linux distribution. Initially, the name of the company was “International GNOME Support”; it was then renamed Helix Code, and finally Ximian in 2001, when it proved impossible to secure a trademark on Helix Code. For the benefit of clarity, I’ll use “Ximian” throughout this episode, even when covering events that transpired while the company was still called “Helix Code”.

Ximian’s initial effort was mostly spent on raising capital to hire developers to work on GNOME and its applications, using the extant community as a ready-made talent pool, like many other companies did, and still do to this day.

Ximian focused on:

  • providing GNOME as a commercially supported independent platform on top of existing Unix-like operating systems
  • jump-starting an application ecosystem that would target GNOME and focus on enterprise-oriented platforms

The results of these two areas of focus were Red Carpet and Ximian GNOME, for the former; and Evolution, for the latter.

Red Carpet was a way to distribute software developed at its own pace, on top of different Linux and Unix-like operating systems. Whether you were running Red Hat Linux, SuSE, Debian, Mandrake, or Solaris; whether your platform was Intel-based or PPC-based; whether you had a fully supported OS or you were simply an enthusiast and early adopter; you’d be able to download a utility that checked the version of a known list of components on your system; downloaded the latest version of those components and their dependencies from the Ximian server; installed the packages that would fit with your system; and kept them up to date every time a new version was released upstream. All of this was managed through a user-friendly interface, definitely nicer than the alternatives provided at the time by any other distribution, with clear indications of software sources; out of date packages; and suggested updates.

Through Red Carpet, Ximian created Ximian GNOME, a software distribution channel that delegated the maintenance of the desktop environment and selected applications to an entity outside of the one providing the core OS, and made it possible to keep GNOME updated at the pace of upstream development, minus the time for QA.

Of course, this whole approach relied on the desktop being fully separate from the core OS, something that was possible only back in the early days of Linux as a competitive workstation operating system; additionally, it wasn’t something devoid of risks. Linux installations have always leaned towards heavy customisation post-installation, with the combinatorial explosion of packages available for every user and system administrator to install on top of a base system, and without a clear separation of namespaces between the core OS, the graphical environment, and the user applications. Bolting a whole new desktop environment on top of a fluid base system could be problematic at best, and it made upgrading the base system, the desktop environment, and the applications an interesting challenge—with “interesting” defined as “we’re all going to die”. Red Carpet did allow, though, for keeping a stable base underneath a fast moving target like GNOME, assuming that you only cared about upgrading GNOME.

Another advantage of Red Carpet was the ability to provide a standard upgrade path for a fleet of systems, which is what made it attractive for system administrators.

Outside of Red Carpet, Ximian was working on Evolution, an email client and groupware platform for enterprise environments.

Email clients for Linux, like text editors and IRC clients, are a dime a dozen, but mostly terminal-based, and a poor fit in an enterprise environment, as they would not integrate with groupware services, like calendars, events, or shared address books. Sure, you could script your way out of most anything, if you were playing at being a BOFH, but Carol in Human Resources would not be able to get away with not receiving her email because Charlie made an error writing a procmail rule, and sent all the notifications halfway to Siberia.

Groupware suites are pretty much a requirement in corporate environments, and since contracts with those corporations can make or break a company, having a client capable of not only providing the same features, but also integrating with existing infrastructure is a fundamentally interesting proposition.

While Ximian tried to break into the commercial space through the tried and tested route of support contracts and enterprise software, Eazel chose a different, and somewhat riskier, route.

Eazel was a case of building a company around an idea 10 years too early; the idea in question being: providing remote storage for files and applications, complete with browsing remote volumes and folders from your file manager as if they were on local storage. All of this would happen on the nascent Linux desktop, which meant creating a fair amount of the necessary infrastructure and shaping the design and user interaction at the same time.

Eazel was founded by many former Apple employees, including Andy Hertzfeld and Darin Adler, who were the technical leads of the original Macintosh team; and Susan Kare, the designer of the Macintosh icons and typefaces.

The main entry point for users of Eazel services was a new file manager, called Nautilus, coupled to a virtual file system abstraction designed specifically for graphical user interfaces, to avoid blocking the UI while file operations on resources with large latency, like a network volume, were in progress. Given the fuzzy licensing situation around KDE and Qt, Eazel decided to work with the GNOME community, and took the remarkable step of developing Nautilus in the open.

Eazel started working their way upstream with the GNOME community around late 1999 and the early months of 2000, thanks to Maciej Stachowiak, an early hire and a GNOME developer who worked on Guile, as well as various GNOME applications and components. Of course, the first thing the core GNOME developers did when approached by Eazel was to ask them to port their code from its early C++ incarnation to C, to fit in with the rest of the platform. The interesting thing that happened was that Eazel developers complied with that request, and stuck with the project.

At the time, GNOME’s file manager was a GUI layer around Miguel de Icaza’s Midnight Commander, a Norton Commander clone; MC was mostly used as a terminal application, even if it could integrate with different GUI toolkits, like Tk and GTK, when running under X11. The accretion of various GUIs led to cruft accumulating as well, and some of the design requirements for a responsive UI in a desktop environment fit poorly with how a terminal UI would work. Additionally, maintenance was, as is often the case, mostly volunteer-based. Federico Mena spent time, while working at the RHAD labs, on the GNOME integration, and that was the closest thing to somebody being paid to work on the MC code base. As the work on Nautilus progressed, both from Eazel and the GNOME community, the scale slowly tipped in favour of the more integrated, desktop-oriented file manager with paid maintenance.

Of course, what we call Nautilus (or “Files”) today is a very different beast from what it was when Eazel introduced it to the GNOME community in February 2000. Folder views could have annotations; you could add emblems on files and folders, set custom icons, and even custom backgrounds. The file view was a canvas, with the ability to zoom in and out of the grid of icons, or even stretch the icons and change their sizes, as well as their position. You could have live previews of files directly inside Nautilus without opening a different application, and thanks to the work done at Red Hat by Christopher Blizzard on integrating the Mozilla web rendering engine with GTK, Nautilus could also browse the web, or more likely your company’s intranet, or WebDAV shares.

While working on Nautilus, Eazel also provided functionality to the core GNOME platform: mainly, the GNOME VFS library, which was made available outside of Nautilus as a way to perform file operations beyond the simple POSIX API, and which other applications quickly started using to access remote volumes, or to integrate functionality like copying and moving files. Alongside it, libeel, a library of custom widgets used in Nautilus, and librsvg, a library for rendering SVG files (mostly the graphical assets used by Nautilus for its icons), saw adoption in the GNOME stack.

Despite the differences in the originating companies, both Nautilus and Evolution shared a common design philosophy: componentisation. Not just plugins and extension modules, but whole components for integrating functionality into, and from, those applications.

The “object model environment” that begat the GNOME project’s acronym, and was simmering in the background, started getting into full swing between GNOME 1.0 and 1.2, with complex applications exposing components for other applications to reuse and re-arrange, as well as taking components themselves and adding new functionality without necessarily adding new dependencies.

Instead of raw CORBA, Eazel and Ximian worked on OAF, the object activation framework, a way to enumerate, activate, and watch components on a system; and Bonobo, a library that made it easy to write GUI components for GNOME applications to use, reducing the CORBA boilerplate.

The idea was that applications would mostly be “shells” that instantiated components, either provided by themselves or by other projects, and built their UI out of them. Things like menus, and the actions associated with those menus, could be described in XML, everyone’s favourite markup language, and exposed as part of the component. GNOME and its applications would be a collection of libraries and small processes, combined and recombined as the users needed them.

Of course, the first real application of this design was to embed the Minesweeper game into Gnumeric, the spreadsheet application, because why wouldn’t you?

The effort didn’t stop there, though; Evolution was, in fact, a shell with email client, calendar, and address book components. Nautilus used Bonobo to componentise functionality outside the file management view, like the web rendering, or the audio playback and music album view.

This was the heyday of the component era of GNOME, and while its promises of shiny new functionality were attractive to both platform and application developers, the end result was by and large one of untapped potential. Componentisation requires a large initial effort in designing the architecture of an application, and it’s really hard to introduce after the fact without laying waste to working code. As an initial roadblock, it fits poorly with the “scratch your own itch” approach of free and open source software. Additionally, it requires not just a higher level of discipline in component design and engineering, but also comprehensive and extensive documentation, something that has always been the Achilles’ heel of many an open source project.

In practice, the componentisation started and stopped around the same time as GNOME 2 was being developed, and for a long while in the history of the project it was the proverbial dead albatross stuck around the neck of the platform.

For GNOME developers in mid-2000, though, this was a worthy goal to pursue, so they worked hard on stabilising the platform while applications iterated over it.

The project’s efforts moved along two prongs: minor releases for the 1.x platform, and planning towards the 2.0 release.

GNOME 1.2 was released as a minor improvement over the existing 1.0 platform in May 2000, and a similar 1.4 minor release was planned for mid-2001; meanwhile, the effort to integrate the functionality spun off into libraries like Bonobo, OAF, gnome-vfs, and librsvg back into the core platform was well underway.


Ximian and Eazel helped shape GNOME’s future not just by creating products based on the GNOME desktop and platform, or by hiring GNOME developers; they also contributed to establishing two very important parts of the GNOME community that exist to this day: GUADEC, the GNOME conference; and the GNOME Foundation.

Next week we’re going to witness the birth of both GUADEC and the Foundation, and we’ll take a small detour to look at the tools used by the GNOME project to host their code and track the bugs contained in said code, in “Founding the Foundation”.

by ebassi at November 08, 2018 04:00 PM

November 01, 2018

Emmanuele Bassi

Episode 1.2: Desktop Wars

The year is 1998.

In an abandoned warehouse in San Francisco, in a lull between the end of the previous rave and the beginning of the next, the volume of the electronica has been turned all the way down; the strobes and lasers have been turned off, and somebody has cracked open one of the black tinted windows, to let some air in. On one of the computers, made from parts scavenged here and there, with a long since forgotten beer near its keyboard, a script is running to compile a release archive of GNOME 0.20. The script barely succeeds, and the results are uploaded to an FTP server, just in time for the rave to start. There’s no need to write an announcement; the Universe will provide.

At the same time, somewhere in Europe, in a room dominated by large glass windows, white walls with geometric art hanging off of them, and lots of chrome finish, the hum of 50 developers with headphones working in concert quiets down as the project leader, like an orchestra conductor, rises to his feet. He looks at every young developer, from the 16-year-old newcomer with a buzz haircut, to the 25-year-old grizzled old timer who will soon leave for his 5 years of mandatory military service; he then looks down, and presses a key that runs the build of a pre-release for KDE 1.0. The build will of course succeed without a hitch, and the announcement will be prepared later, in the common room, with a civilised discussion between all project members.

The stage is thus evenly divided.

The players are:

  • KDE, a commune of software developers using a commercially backed toolkit to write a free software desktop environment
  • GNOME, a rag tag band of misfits and university students, writing a free software toolkit and desktop environment

These are the desktop wars.

I jest, of course. The reality was wildly different from the memes. Even calling them “the desktop wars” is really a misnomer — after all, we don’t call the endless, tiresome arguments between Emacs and vi users “the text editor wars”; or the equally exhausting diatribe between spaces and tabs aficionados “the code indentation wars”. At most, you could call this a friendly competition between two volunteer projects in the same problem space, but that doesn’t make for good, click-bait-y, tribal division.

Far from being a clinical, cold, and corporate-like project, KDE was started by volunteers across the globe, even if it did have a strong European centre of mass; while it did use a version of Qt not released under the terms of a proper free software license, KDE had a strong ethos towards user freedom from the very beginning, being released under the terms of the GNU General Public License; and while it was heavily centralised, its code base was not a machine of perfect harmony that never failed to build.

GNOME, on the other hand, was not a Silicon Valley-by-way-of-Burning Man product of acid casualties and runaways; its centre of mass was nearer to the East coast of the US than to the West, even if GIMP was initially developed at Berkeley. GNOME was indeed perceived as the underdog, assembled out of a bunch of components developed at different paces, but its chaotic initial form was both the result of the first few months of alpha releases, and of the more conscious decision of supporting and integrating existing projects.

Nevertheless, the memes persisted, and the battle lines were drawn by the larger free and open source software community pretty much immediately, as had happened many times before, and would happen many times after.

The programming language was one of the factors of the division, certainly, bringing along the extant fights between C and C++ developers, with the latter claiming the higher technical ground by using a modern-ish language, compared to the portable assembly of the former. GNOME used the existence of bindings for other languages, like Perl, Python, Objective C, and Guile, as a way to counter-balance the argument, by including other communities and programming paradigms into the mix. Writing GNOME libraries surely was a game for C developers, but writing GNOME applications was supposed to be all about choice. Or, at least, it would have been, once the GNOME libraries were done; while the project was in its infancy, though, the same people writing the libraries ended up writing the applications, which meant a whole lot of C being unleashed unto the unsuspecting world.

From a modern perspective, relying on C as the main programming language was probably the most contentious choice, but in the context of 1997 it would be hard to call it surprising. Sure, C++ was already fairly well known as a system level language, but the language itself was pretty much stuck to the 2nd edition of “The C++ Programming Language” book, published in 1989; the first ISO C++ standardisation came in 1998, followed by another one in 2011, 13 years later. Additionally, programmers had been bitten by binary incompatibilities across compilers, as well as across different versions of the same compiler, while the language evolved; and the standard library was found lacking in both design and performance, to the point that any major C++ library, like Qt or Boost, ended up re-implementing large chunks of the same basic data types. In 1997, writing a complex, interdependent project like a desktop environment using C++ was the “edgy” effort, for lack of a better word, comparable to writing a desktop environment in, say, Rust in 2018.

Another consideration, tied into the support for multiple languages, was that basically all high level languages exposed the ability to interface their internals using the C binary interface, as it was needed to handle the OS-level integration, like file and network operations.

We could debate forever if C was the right choice — and there are people that still do that to this day, so we would be in good company — but in the end the choice was made, and it can’t be unmade.

By and large, though, the deciding factor that put somebody in either the KDE or the GNOME camp was social and political; fittingly, as the free and open source software movement is a social and political movement. The argument boiled down to a very simple fact: the toolkit chosen by the KDE project was, at the time, not distributed under a license that fit the definition of free software, and that made redistributing KDE a pain for everyone that was not the KDE project themselves.

The original Qt license, the Qt Free Edition License, required that your own project never depended on modifications of Qt itself, and that you licensed your own code under the terms of the GPL, the LGPL, or a BSD-like license. Writing libraries that depended on Qt also required jumping through additional hoops, like sending a description of the project to Trolltech.

Of course, that put the KDE project in the clear: they were consuming Qt mostly as a black box, and they were releasing their own code under the terms of the GPL. It did place the distributors of KDE binaries on less certain grounds, though, with Debian outright refusing to package KDE as it would put the terms of the GPL used by KDE in direct conflict with the terms of the Qt Free Edition License; the license itself was really not conforming to the Debian Free Software Guidelines, so distributing Qt itself (as well as any other project that decided to use its license) was out of the question. If you wanted to use pre-built packages for KDE 1.0 on Debian, you had to download and install them from a third party repository, maintained by KDE themselves.

Other distributions, such as SuSE and Mandrake, were less finicky about the details of the interaction between different licenses, and decided to ship binary builds of KDE as part of their main package repositories.

The last big name in the Linux distributions landscape at the time was Red Hat, and things were afoot there.

Just like Debian, Red Hat was less than enthused by the licensing issues of Qt and KDE, and saw a fully GPL and LGPL desktop environment as a solution for their commercial contracts. Of course, GNOME was mostly alpha quality software at the time, and had to catch up pretty quickly if it wanted to be a viable alternative to, well, shipping nothing and supporting roughly every possible combination of window managers and utilities.

Which is why, in 1998, Red Hat created the Red Hat Advanced Development Laboratories.

The RHAD labs were “an independent development group to work on problems of usability of the Linux operating system”: a few developers embedded in upstream communities, tasked with both polishing the many, many, many rough edges of GNOME, and taking over some of the unglamorous aspects of maintenance in a large project.

Under the watchful eye of Marc Ewing and Michael Fulbright, RHAD labs hired Elliot Lee, who wrote the CORBA implementation library ORBit, and worked on the componentisation of GNOME; Owen Taylor, who co-maintained GTK and shepherded the 1.0 release; Carsten “the rasterman” Haitzler, who wrote the Enlightenment window manager and worked on specifying how other window managers should integrate with GNOME; Jonathan Blandford, who worked on the GNOME control centre; and Federico Mena, who worked on GIMP, GTK, and on the GNOME GUI for the Midnight Commander file manager, as well as writing the first GNOME calendar application. In time, the RHAD labs would acquire the talents of other well-known GNOME contributors, like Havoc Pennington and Tim Janik, to work on GTK 2.0; Christopher Blizzard, to work on the then newly released Mozilla web browser; and David Mason, to work on the GNOME user documentation.

In September 1998, GNOME released version 0.30, to be shipped by Red Hat Linux 5.2 as a technology preview, thanks to the work of the people at the RHAD labs. Of course, it was not at all smooth sailing, and everyone there had to fight to keep GNOME from getting cut — mostly by convincing the Red Hat co-founder and CEO, Robert Young, that the project was in a much better state than it looked. The now infamous “Project Bob” was a mad dash of 36 hours for all the members of the RHAD labs to create and present a demo of GNOME, making sure it would work — at least, within the strict confines of the demo itself. Additionally, Carsten Haitzler would write a cool new theme using the newly added loadable module support in GTK, to show off the capabilities of the toolkit in its upcoming 1.2 release. Up until then, GTK looked fairly similar to Motif, but counting on his experience on the Enlightenment window manager, Haitzler added the ability to customise the appearance of each UI element in a variety of ways.

Of course, in the end, it was the theme that won over Red Hat management, and without it, “Project Bob” may have failed and spelled doom for the whole RHAD labs and thus the commercial viability of the GNOME project.

In December 1998, Trolltech announced that Qt would be released under the terms of a new license, called the “Q Public License”, or QPL. The QPL was meant to comply with the Debian Free Software Guidelines and lead to Debian being able to ship packages of Qt, but it did not really solve the issue of GPL compatibility, so, in essence, nothing changed: Debian and Red Hat would not ship KDE until its 2.0 release, 2 years later, when Trolltech relented, and decided to ship Qt 2.0 under the terms of both the QPL and of the GNU GPL.

By the beginning of 1999, KDE was about to release version 1.1, whereas GNOME locked down the features for a 1.0 release, which was announced in March 1999. In April 1999, Red Hat Linux 6.0 shipped with GNOME as its default desktop environment. The 1.0 release was presented at the Linux Expo, in May; the presentation was hectic, and plagued by the typical first release issues; GMC, the file manager, for instance, would crash when opening a new directory, but the session manager was modified to restart it immediately, so all you’d see was a window closing and re-opening in the desired location.

The splash of the first major release attracted attention; it was a signal that the GNOME developers were ready for a higher form of war, one that involved commercial products that could integrate with, and be based off of, GNOME.

The desktop wars were entering a new phase, one where attrition and fights by proxy would dominate the following years.


Next week we’re going to zoom into the nascent ecosystem of companies that were born around GNOME, and focus on two of them: Ximian, or Helix Code as it was called at the time; and Eazel. Both companies defined GNOME’s future in more ways than just by adopting GNOME as a platform; they worked within the GNOME community to create products for the Linux desktop, and they shaped the technical and social decisions of the GNOME project well after the first chapter of its history.

Alongside that, we’re also going to look at the effort to bring about the era of components that was initially outlined with the GNOME acronym: a desktop and an object model. We’re going to see what happened to that, once we step into “The land of the bonobos”.

by ebassi at November 01, 2018 05:00 PM

October 25, 2018

Emmanuele Bassi

The History of GNOME

I’ve done a thing which may be of interest if you’re following the GNOME community.

As I said on Twitter, I have spare time, and I like boring people to death by talking about things that matter to me a lot; one of the things that matter to me is GNOME and its community—and especially its history.

Of course, I had to go and make it about liminal spaces and magic rituals, because that’s what makes it fun. This, though, is a magic ritual. I’m holding a seance, and I’m calling forth the past of the GNOME project for the people that live down its light-cone.

GNOME has the luxury of having a lot of people that stuck around—some even from the early days when there was no GNOME; there are also other people, though, some of them born after Miguel’s announcement, that are now starting to contribute to GNOME. I guess that means that it’s time to look back a bit, and give some more context to the history of the project.

I hope I won’t bore you that much with this; I hope that people will learn something new, or re-discover something that was forgotten. In general, I do hope people will have fun with it.

by ebassi at October 25, 2018 06:00 PM

Episode 1.1: GNOME

It is a long-running joke in the GNOME community that the acronym (or, more accurately, the backronym) that serves as the name of the project does not apply any more.

The acronym, though, has been there since the very first announcement of the project: GNOME, the GNU Network Object Model Environment.

The history of GNOME begins on August 15th, 1997.

NASA landed the Pathfinder on Mars just the month before.

Diana, Princess of Wales, would tragically die at the end of the month in Paris.

The number one song in the US charts was “I’ll Be Missing You”, by Puff Daddy.

On the 15th of August, Miguel de Icaza, a Mexican university student, announced the creation of the GNOME project, an attempt at creating “a free and complete set of applications and desktop tools, similar to CDE and KDE but based entirely on free software”.

To understand each part of that sentence, I’m afraid we will have to go back to a time forgotten by the laws of gods and men: the ‘80s.

In 1984, Richard Stallman started the GNU project. Don’t bother trying to expand the acronym; it’s one of those nerdy things for which the explanation is just not as clever when said out loud as it feels in one’s head. Incidentally, GNU is the reason why the G in GNOME is not silent. The history of GNU is an interesting topic, but we’ll avoid covering it here; if you want to, you can read all about it on the Free Software Foundation’s website.

GNU was, and it still is, an attempt at creating a Unix-compatible system that is completely free as in “software freedom”; the freedom in question is actually four different freedoms:

  • the freedom to use this operating system on whatever machine you want
  • the freedom to study it, down to its source code
  • the freedom to share it with others, without going through a single vendor
  • the freedom to modify it, if it does not do something you want

These four freedoms are enshrined in the “Copyleft” movement, which uses distribution licenses such as the GNU General Public License and the Lesser General Public License to create software programs and libraries that respect those four freedoms.

GNU is a collection of tools, like the compiler suite and Unix-like command line utilities, in search of a kernel capable of running them — the attempt at creating a working, mainstream GNU kernel is, shall we say, still in progress to this day. In 1991, though, Linus Torvalds, a student from Finland, created the Linux kernel, which was quickly adopted as the major platform for GNU, and the rest, as they say, is history — though, like GNU’s history, also not the one you’re going to hear in this podcast.

While the development tools and console utilities were mostly getting taken care of, the GUI landscape on Linux was composed of a heterogeneous set of tools, typically starting with a window manager; some task management tool, like a list of running programs and a way to switch between them; and smaller utilities, like launchers for common applications or monitoring tools for local and network resources.

Each environment was typically built from the ground up and customised within an inch of its life, a system tailored to the levels of micro-management and pain tolerance of each Linux user, and in those days, the levels of both were considerably high.

Large, integrated desktops were the remit of commercial Unix systems, like SunOS, AIX, HP/UX. One of those environments was the Common Desktop Environment, or CDE.

Created in 1993 by the Open Group, an industry consortium founded by the likes of IBM, Sun, and HP, CDE was built around Motif, a commercial widget toolkit written in the late 80s by HP and DEC for the X display server, the mainstream graphics architecture on Unix and other Unix-compatible operating systems.

The Open Group quickly standardised CDE, and until the early 2000s, when Linux had started eating most of their lunches, it was considered the de facto standard desktop environment for commercial Unix platforms.

One of the things that Linux and the commercial Unix systems running on the Gibsons had in common was the X graphics system. This meant that you could write applications on your Unix system at university, or at work, and then run them on Linux at home, and vice versa.

As is often the case, Linux users wanted to bring some of the tools used on the Big Irons to their little platform, and in 1996 a group of C++ developers led by Matthias Ettrich created the KDE project, a desktop environment in the same vein as the CDE project, as the name implies. Since Motif was written in C and released under an expensive proprietary license, they used a different widget toolkit, written in C++, as the basis for the desktop: “Qt” (pronounced “cute”), made by a Norwegian company called Trolltech.

Qt was released under a license called “The Qt Free Edition License”, which limited the redistributability of the code: you could get the source of the toolkit, but you could not redistribute modified versions of it. While this was good enough if you wanted to download and build the source on your personal computer, it put some strain on the people distributing Linux and its software, and it went against the Copyleft ethos of the GNU operating system that was the basis of Linux distributions — to the point that the GNU project started an effort, called “Project Harmony”, to reimplement a Qt-compatible widget toolkit and release it under a Free Software license.

In the meantime, Motif was proving to be a hurdle for other free software projects as well.

In 1996, Spencer Kimball and Peter Mattis, two students of the University of California at Berkeley, wrote a raster graphics image editing tool using the C programming language, as part of a university project, and called it the “GNU image manipulation program”, or GIMP, as many have come to regret when searching the name on the Internet. As university students, they had access to Motif for free, so the initial implementation of the GIMP used that toolkit, but redistribution to the world outside university, as well as technical issues with Motif, led them to write an ad hoc GUI widget library for their application, called “The GNU Image Manipulation Program Toolkit”, or “GTK” for short.

A community of software developers, including people that would be influential to GNOME like Federico Mena and Owen Taylor, started to coalesce around GIMP; they saw the value of an independent, free software widget toolkit, so GTK was split off from the main GIMP code repository in early 1997, and began a new life as an independent project, released under the terms of the GNU Library General Public License. The licensing terms of GTK allowed it to follow the four software freedoms — use, study, share, and modify — but also allowed the creation of less-free software on top; as long as you distributed any eventual changes you made to GTK under the same license, your application’s code could be released under any other license you wanted. This major distinction from Motif and Qt pushed the newer, volunteer-driven GTK forward, while it closed the gap with the older, commercially supported toolkits.

GTK had a fairly lean API, and its use of C quickly allowed developers to write “bindings” that let other programming languages use the underlying C API with a minimal translation layer to pass values around. Soon after, programmers of Perl, Python, C++, and Guile, an implementation of a dialect of the LISP programming language called Scheme, could use GTK to write plugins for GIMP, or complete, stand alone applications. Compared to KDE’s choice of Qt, which only supported C++ and Python, it was a clear advantage, as it exposed GTK to different ecosystems.
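
To give a rough idea of what that lean C API looked like in practice, here is a minimal sketch in the style of the GTK 1.x era, showing a top-level window with a single button; it assumes a GTK 1.2 toolchain and is meant purely as an illustration of the kind of code application authors were writing, not as code taken from any actual GNOME module:

    /* A minimal GTK 1.x-style program: a top-level window with one button.
     * Build (assuming GTK 1.2 and its gtk-config script are installed):
     *   gcc hello.c -o hello `gtk-config --cflags --libs`
     */
    #include <gtk/gtk.h>

    /* Quit the main loop when the window is destroyed. */
    static void on_destroy (GtkWidget *widget, gpointer data)
    {
      gtk_main_quit ();
    }

    int main (int argc, char *argv[])
    {
      GtkWidget *window, *button;

      gtk_init (&argc, &argv);

      window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      gtk_signal_connect (GTK_OBJECT (window), "destroy",
                          GTK_SIGNAL_FUNC (on_destroy), NULL);

      button = gtk_button_new_with_label ("Hello, GNOME");
      gtk_container_add (GTK_CONTAINER (window), button);

      gtk_widget_show_all (window);
      gtk_main ();

      return 0;
    }

A Perl or Python binding of the era exposed essentially the same calls under different names, with the translation layer doing little more than marshalling arguments into this C API.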

What was GNOME like, back in late 1997/early 1998?

The answer to that question is: a heterogeneous collection of tools, mostly sharing dependencies, and developed together, that occasionally got released, and even more rarely built out of the box without resorting to random snapshots from the source revision control.

You had a panel, with launchers for common applications, and with a list of running programs. There were the beginnings of a set of core applications: a help browser; a file manager; a suite of small utilities, mostly GUI ports of command line tools; games; an image viewer; a web browser based on an embeddable HTML renderer; a text editor. Notably, and unlike KDE with its own KWM, not a window manager.

The question of what kind of window manager should be part of a GNOME environment was punted to the users — it’s actually the first instance of a controversial thread on the GNOME mailing list, with multiple calls for an “officially sanctioned” window manager, typically opposed by people happy to let everyone use whatever they wanted — the externally developed Enlightenment was the most common choice at the time, but you could literally run GNOME with WindowMaker; GNUStep; or XF Window Manager 2; and you could still call it “GNOME”.

From a code organization perspective, GNOME started off as a single blob, which got progressively spun into separate components:

  • gnome-libs, a series of core libraries based off of GTK
  • gnome-core, which contained the session manager and the panel
  • gnome-graphics, which contained the Electric Eyes image viewer
  • gnome-games, a mixed bag of simple games
  • gnome-utils, a mixed bag of GUI utilities, like gtop
  • gnome-admin, which contained an SNMP monitoring tool and Gulp, the line printer configuration utility

While GNOME at the time was still a loosely connected set of components, the overall direction of the design was far more grandiose. If you remember the GNOME acronym from the announcement, it mentioned a “network object model”.

But what is an “object model”?

One of the things that Microsoft and Apple did with much fanfare, back in the early ‘90s, was introducing the concept of embeddable components provided by both the OS and applications.

You could create a spreadsheet, then take a section of it, embed it into your word processor document, and edit the table from within the word processor itself, instead of flip flopping between the two applications on the limited screen real estate available. The idea was that documents and applications would just be built out of blocks of data and controls, shared across the whole operating system. Those components were available to third party developers in programming suites like Visual Basic, Visual C++, or Delphi, and you could quickly create a well integrated application out of them.

Sun tried to push Java as the foundation for this design; Apple called their short-lived implementation of this technology “OpenDoc”; Microsoft called its much more successful version “OLE”, or: Object Linking and Embedding; and, of course, any self-respecting desktop environment competing with Windows needed something similar to match features.

In order to implement an infrastructure of objects, and of remote procedure calls that could be invoked on those objects, you needed a communication system; OLE used COM, the Component Object Model; GNOME decided to use CORBA, or the Common Object Request Broker Architecture, which was not only meant to be used on local systems, but also worked across the network. As a CORBA implementation, GNOME started by using MICO, a C++ library, and then replaced it with ORBit, a C library written specifically for the project, addressing some of MICO’s shortcomings.

While ideally GNOME would provide components for everything, the initial beneficiary of this architecture was the only recognisable bit of the desktop environment that was visible to the user all the time: the panel.

The contents of the panel were small, self-contained applications that would use CORBA as a mechanism to negotiate being embedded into the panel’s own window to display their state, or to pop up things like menus and other windows on user request.

The core applications started getting CORBA-based components, but we’ll have to wait until after GNOME 1.0 to get to a widespread adoption of this architecture.

Between August 1997 and May 1998, GNOME released various 0.10 snapshots to mark the progress of the project. By June 1998, GNOME 0.20 was released as a “pseudo-beta”, followed, in September 1998, by 0.30, the first named release of GNOME, called “Bouncing Bonobo”.

Between 0.20 and 0.30, though, something had happened: Red Hat, a Linux distribution vendor, founded the Red Hat Advanced Development Labs, hired a bunch of software developers that happened to contribute to GNOME, amongst other things, and the benevolent corporate overlords started taking notice of this Linux desktop.

Nobody really knew it at the time, but the First Desktop Wars had begun.

by ebassi at October 25, 2018 03:27 PM

Prologue

History is one of my passions; more than offering a narrative of the past, it gives us context, and with context comes a better understanding of the people that made history, and of their choices.

This is true for every human endeavour, and so it’s also true of software projects.

Free and Open Source Software is a social and political movement before a technological one; as such, the history of a free software project should help us contextualise not just the technological decisions, but the personalities, the goals, and the constraints that influenced those decisions.

It’s often the case, especially in the field of software development, that newcomers to a specific project, or a subset of a project, end up facing some code that does not work; or that barely works, some of the time. The immediate reaction is typically trying to understand that code, that technology, that whole stack, in order to fix it, or replace it. Just learning what the code does in that specific instant, though, gives only a partial picture of how that code, that technology, that whole stack came into being.

It’s said that programmers think that bad code comes from bad programmers; in reality, bad code often comes from good people operating under different constraints than ours. Understanding those constraints, and how they changed over time, is the first step in understanding how to write better code.

This is especially true of projects that involve hundreds of developers, operating under wildly variable constraints, for decades.

Projects like GNOME.

GNOME is a free software project that aims to create an environment for computer users that is

  • easy to use
  • accessible for everyone
  • under a license that allows everyone to modify and contribute to it

As far as free and open source software projects go, it’s fairly old; it’s old enough to vote, and to buy a drink in the US. GNOME, as a community, attracts not only volunteer contributions, but commercial ones as well, and it provides the technological underpinnings for both volunteer and commercially driven products. It is a recognised brand, for better or worse, and it comes with opinionated decisions, as well as an ethos that not only drives the individuals that make it, but also the ones that use it.

More importantly, GNOME comes with a varied history, and a lot of baggage, deeply rooted in the choices made by thousands of people. It’s something that newcomers to the project often deal with by asking questions to whoever appears to be knowledgeable — but it’s not something that the project itself offers.

So, I wanted to work on recontextualising GNOME.

Why me, though?

My history with GNOME does not date as far back as the project’s origins. I did start using Linux in 1997, around the time GNOME was created, but for the first couple of years I mostly used the VT for emails, IRC, Usenet, and the occasional Napster download (kids, ask your parents); a GUI was something I used to launch Netscape Navigator. I experimented with KDE, but I ended up settling on WindowMaker, because I enjoyed the non-Windows-y look of the desktop.

As a user, I switched to GNOME full time by the tail end of the 1.x series, and went through the 2.0 transition with a stronger interest in the direction of the project — to the point that I started contributing to it.

GNOME is the reason I became a passionate believer not just in the freedom of releasing the source code of your work, but also in the necessity of a working environment for the people using a server, a workstation, a laptop, or even a phone. I learned principles of software development; architecture; security; privacy; hardware enablement; design and user experience; quality assurance and testing; accessibility. I got my first job through GNOME, and I met wonderful people that I consider friends, peers, and inspirations. I lived through a huge chunk of the history of the GNOME project, and in some cases I can kid myself into thinking I helped shape that history. This does not make me the most neutral or dispassionate person to lend his perspective on GNOME, but you’ll have to make do with me, I guess.

In the beginning this was to be a presentation, but Jonathan Blandford did an amazing keynote on the history of GNOME in Manchester, at GUADEC 2017, for the 20th anniversary of the project, where he went through most of the milestones and events of the previous two decades. I realised, at that point, that it would take somebody like me, who’s less talented than Jonathan, much more than an hour to do the same, let alone go through GNOME’s history in depth.

I started writing a series of blog posts, but halfway through I realised I was using the same kind of voice I use when writing presentation notes, which is when I realised I should probably just read them. Thus, a podcast.

So, let’s lay down some of the ground rules for this.

The first question is: who is this podcast for?

When I write, I have three broad categories of people that I want to reach; the first is composed of people that are new, or relatively new, to the GNOME project, and want to add more context to the history of the project and of the community they just joined. In a large, established project like GNOME, there’s a lot of insider information that you typically have to suss out in person, or that you end up gleaning if you hang out in the social channels long enough. Why is using flags in a UI such a problem? Why are settings expensive? What’s up with the clocks? There is a lot of established knowledge that is only evident by its final results, but context for those results is missing. It’s already hard to get started in a new community, so this podcast aims at lowering the curve for newcomers at least a bit.

The second audience is made up of people that are not familiar with the GNOME project outside of having heard about it; they are familiar enough with Linux, but lack the “inside baseball” aspect of both the community and the technology. They may know that GNOME is a desktop environment, but have no idea what a desktop environment is made of.

The third audience is the GNOME developers themselves; people that have been embedded in the community long enough to know some, or many, of the ins and outs of the decisions made in the past, but who may be unfamiliar with the whole history of the project. I hope they’ll forgive me for going on tangents, especially at the beginning.

The podcast is divided into chapters, roughly corresponding to each major version of the desktop. There will be one episode per week, with breaks between chapters. I’m not the fastest, nor the most adept at this medium, so I want to allow some leeway to maintain both quality and quantity.

Each episode will go through events in the history of the GNOME project, but I’ll try to take some time to expand on the context in the Free Software world, as well as the rest of the happenings around the same time.

In some cases, I may have to give some technological definitions, mostly to ensure that we have a common vocabulary, especially for people who are not heavily invested in the stack itself, or for people that learned English as a second language.

The primary sources for this podcast are public mailing list archives; additionally, I’ll refer to blogs from GNOME developers, as well as other public sources, like interviews and presentations given at conferences. As I said earlier, I do have direct experience of a chunk of the GNOME timeline, but I’ll try my best to stick to public sources for the events that involved me as well; this is not a “tell-all” kind of podcast. Using only public sources has the advantage that I can refer to specific information, but has the downside that I might miss some of the backstory; I don’t want to create an oral history of the project, as that would be its own endeavour. I’m sure people involved in some of the events will have their own version, and I’ll gladly accept corrections.

Each episode will have a companion article, which will contain the script and the sources I’ve used.

Hopefully, this podcast will be interesting for both newcomers to the GNOME project, and for old timers as well. Looking back to what happened helps us shape what’s to come; and having a better understanding of the past can give us confirmation of our choices, or a chance to revisit them.

Finally, I wanted to have an excuse to say out loud, with apologies to Mike Duncan:

Hello, and welcome to the history of GNOME.

by ebassi at October 25, 2018 03:26 PM

History of GNOME: Table of Contents

  1. Episode 0: Prologue
    • What
    • Why
    • Who
    • How

Chapter 1: Perfection, achieved

  1. Episode 1.1: The GNU Network Object Model Environment
    • Linux and Unix
    • X11, and the desktop landscape
    • KDE and the Qt licensing
    • GNU Image Manipulation Program
    • GTK
    • Guile and language bindings
    • Network Object Model
  2. Episode 1.2: Desktop Wars
    • KDE vs GNOME
    • C++ vs C vs Perl vs Scheme vs ObjC vs …
    • Qt vs GTK
    • Red Hat vs the World
    • Project Bob
    • GTK 1.2 and themes
    • GNOME 1.0
  3. Episode 1.3: Land of the bonobos
    • Helix Code, and Red Carpet
    • Eazel, and Nautilus
    • Components, components, components
    • GNOME 1.2
  4. Episode 1.4: Founding the Foundation
    • GUADEC
    • The GNOME Foundation
    • The GNOME logo
    • Working on GNOME: CVS
    • Working on GNOME: Bugzilla
  5. Episode 1.5: End of the road
    • The window manager question
    • Sawmill/Sawfish
    • GConf
    • GNOME 1.4
    • Sun, accessibility, and clocks
    • Dot bomb
    • Eazel’s last whisper
    • Outside-context problem: OSX
  6. Side episode 1.a: GTK 1
  7. Side episode 1.b: Language bindings
  8. Side episode 1.c: GNOME applications

Chapter 2: Perfection, Improved

  1. Episode 2.0: Retrospective
  2. Episode 2.1: On brand
    • GTK 2.0: GTK Harder
    • Fonts, icons, and Unicode
    • Configuration woes
  3. Episode 2.2: Release day
    • Design, usability, accessibility, and brand
    • Human Interface Guidelines
    • The cost of settings
    • Time versus features
    • GNOME 2.0 reactions
  4. Side episode 2.a: Building GNOME

by ebassi at October 25, 2018 11:00 AM

October 07, 2018

Tomas Frydrych

Dr Beeching’s Unicorns

Glen Ogle. Most of the time a place on the way to somewhere else, somewhere more exciting. Yet, for me also a special, magical place where years ago my inner eye first really glimpsed the beauty of this land.

We had just got our first car, a decade-old Astra Belmont, and were on our first road trip out of the Central Belt. Up to Loch Ness and Skye, if I recall correctly—truth be told, I no longer recall much of it, a vague memory of the Nessie exhibition, and a boat trip on the loch, zero memories of Skye. But I do remember Glen Ogle. The way it opened up in front of my eyes for the first time as we drove up from Lochearnhead. It cast a spell over me, one that has lasted throughout the years. Looking back, I think this was the moment this land became My Scotland.

We returned to explore the old railway line. Back then there was no path to speak of, the old embankment shrouded in trees. Nature doing its best to erase man’s intrusion. Years later I watched with dismay as so many of those trees were cleared during Route 7 construction—a valuable project for sure, but such chainsaw enthusiasm.

But before that it was like being transported into another, wild, mythical world, half expecting a ghost train to appear at any moment from around the corner underneath the dark canopy.

It didn’t. Instead we ran into a giant locked-up gate worthy of Alcatraz. A gate clearly meant to keep people, not animals, out. (Not such an uncommon sight in the days before the Land Reform Act; the ‘good old days’ weren’t really.) Undeterred, we climbed over for a wee bit more exploring, till an industrial-size manure heap finally stopped us.

‘That would be one of Dr Beeching’s, then’, said my mother-in-law as we discussed our trip over the dinner table.

Dr Beeching, 1913 - 1985. An overpaid political appointee to the chair of the British Railways Board. His lasting legacy the decimation of UK’s railway infrastructure. All in all some 6000 miles of railway track, over a third of the UK total, decommissioned.

The construction of the Callander to Oban railway involved 13,000 workers and took fifteen years; undone by a stroke of a pen. A story repeated over and over across the land. Political expediency masquerading as fiscal prudence. Lack of long term strategic vision, the consequences of which are felt acutely more than half a century later, and will be much longer, perhaps for ever.

Today Linda’s doing her recce for the Glen Ogle race, and I am out for a wander with a camera on the other side of the glen. A blustery autumnal day. The colours are beautiful, the showers frequent, the wind gusts strong enough to knock my tripod over and send my bag of lenses rolling down the steep hill. I spend most of my time ‘waiting for the light’, left to my thoughts.

Another heavy patch of rain has painted a bright rainbow across the top of the glen. Below its arch a steady stream of relentless traffic slowly making its way up the road, the frustration of the drivers almost palpable in the air. Dr Beeching’s unicorns.

My Glen Ogle. A place of internal inter-generational conflict. There on the other side, standing on the old viaduct, the younger me protesting the tree felling, the loss of (perceived) wildness. On this side the present me, wishing the working railway was back. Not a change of values as such, merely of perspective.

Later today when I develop the film I will find out that nothing worthwhile came out of the pictures. But it never is about the pictures.

by tf at October 07, 2018 09:59 AM

September 14, 2018

Tomas Frydrych

A Prince Holding Court

I saw him as soon as I came over the rise. Hard not to, perched on a rock some hundred yards ahead, completely out of scale in this landscape.

I heard of them being seen around here from time to time, but this is the first time I have laid my eyes on one, the first time I have come close to one at all. A young bird judging by the colouring, just sitting there, that unmistakable long neck and large beak, the pronounced shoulders -- the royal posture of a sea eagle.

Of course, I have neither binoculars nor a camera, not even a phone. I am three hours into a short run that got a bit out of hand when, on the spur of the moment, I decided to wade through the loch into the no-man's land. But,

Don't need no Real-to-Reel
    Recorder
to tell me I've been there,
I ken that fine.*

And so I forget about the midges (and the dozens of ticks crawling over me) and just stand here watching, in awe.

The thing that takes me by surprise the most is not the eagle, but the other birds. There are about a dozen, perhaps more, mostly crows (hooded and not) sitting in a circle around it. A tight circle, two, three yards away, quite unconcerned, preening their feathers. Like a scene from the Jungle Book: a Prince holding Court.

I am seen. The eagle flaps lazily and flies in my direction, passing maybe thirty yards away, checking me out. An about turn, back at my eye level, this time no farther than fifteen yards. You read about the two-plus meter wing span, but o' my, pictures and numbers didn't prepare me for the reality.

The eagle lands back at its perch, the other birds settling once more around it; the Court is back in session. Eventually the Prince gets bored and takes off once more in the direction of the sea, and the Court rises again.

One of the carrion crows follows it, trying to peck its back.

Keep your friends close but your enemies closer.

The difference in size makes it look rather comical! The eagle seems unperturbed and, with a nonchalance of an apex predator, keeps rising and rising until it's just a dot high up in the sky.

And me? I'll be back here an hour later, with two cameras and five lenses, and the immutable Time laughing behind my back. Never mind, I ken that fine. (And you? You get a picture of a sunset instead.)

--

[*] Andrew Greig, Men on Ice.

by tf at September 14, 2018 08:50 AM

September 10, 2018

Tomas Frydrych

Of Eagles and Men

There are three of them up there, and what a racket! Correction: the racket, that’s just the two of them. She is soaring silent, near motionless, regal; aloof. Her path seemingly unalterable. On a mission permitting no distractions.

--

I am transported years back to a different place, studying a black and white photograph above a fireplace. Old Mr Thornburn shuffling around a table behind me. ‘Such a camaraderie I had not known before, nor have since,’ is all he (ever) said about it.

Up on the mantelpiece a young Mr Thornburn. In a glass bubble on the tail end of a (seemingly immovable) Lancaster in a cloudless sky. To me, a two generations removed casual observer, a picture of peace and tranquility, of adventure.

See? The camera does lie! And what an illusion! For this is an image of nothing less than Life Itself suspended. For a few hours? For eternity? For a mission permitting no distractions.

--

The screeching grows louder. Lot of posturing, then just one brief clash. More posturing, but we (me, her, the two of them) know it’s all over. Age and experience triumphs over the virility of youth. The upstart, minus a few feathers, retreating; he will not be back.

Yet, we (me, her, the two of them) know it’s far from over. Merely the beginning of the end. The upstart will return, perhaps next year, perhaps the year after. Bigger, craftier, having the upper claw.

I watch the two of them soar higher and higher. And we (me, her, him) know that when that day comes it will be the other she takes back to her nest. Her mission allows no sentiment. But, for now at least, the inevitability of the future has been deferred.

[Though we (you and me, if not her and them) know chances are one or the other, if not all three, will get shot, trapped, poisoned, or just fly miles out to sea to drown—Scottish eagles seem to prefer such a fate to longevity, some (useless wastrels) opine.]

--

Mr Thornburn is gone now, and with him the memories he never spoke of. I didn’t then, but I understand now that some experiences are too profound to trivialise by telling, in turn making other experiences too trivial to merit it. My great grandfather’s journal, from yet an earlier war, contains such stories—stories that could never be shared with those who couldn’t understand, and didn’t need telling to those who did.

The younger me, too preoccupied with the (untold) tales of heroic deeds. The older me, too late, wanting to ask the (heavily pregnant) question that back then was hanging in the air.

Mine is a sheltered generation, collectively short on stories of substance. And so we have become compulsive tellers of trivialities, serial manufacturers of pseudo-heroic deeds, dressed up in multicoloured cloaks of fake profundity. Entertaining distractions from the imperative of the (ultimate) mission.

Tall tales of denial.

--

All that is left of the eagles are their distant calls, leaving me alone to my thoughts. ‘Generation goes, and generation comes, but the earth lasts forever.’

Perhaps.

The Earth is groaning underneath us. We can count on the going, but can we on the coming? How many more cycles are there left? Two, three?

If only the eagles could talk.

by tf at September 10, 2018 07:16 AM

September 04, 2018

Emmanuele Bassi

Reference counting in modern GLib

Reference counting is a fairly common garbage collection technique used in many projects; the core GNOME platform uses it pretty much all the time, from container data types to GObject.

Implementing reference counting in C is typically fairly easy. Add a field to your data type:

typedef struct {
  int ref_count;

  char *name;
  char *address;
  char *city;

  int age;
} Person;

Then initialise it when creating a new instance:

Person *
person_new (void)
{
  Person *res = g_new0 (Person, 1);

  res->ref_count = 1;

  return res;
}

Instead of a person_copy() and a person_free() pair of functions, you need to write a person_ref() and a person_unref() pair:

Person *
person_ref (Person *p)
{
  // Acquire a reference
  p->ref_count++;

  return p;
}

static void
person_free (Person *p)
{
  // Free the data
  g_free (p->name);
  g_free (p->address);
  g_free (p->city);

  // Free the instance
  g_free (p);
}

void
person_unref (Person *p)
{
  // Release a reference
  p->ref_count--;

  // If this was the last reference, free the data
  // associated to the instance and then the instance
  // itself
  if (p->ref_count == 0)
    {
      person_free (p);
    }
}

Of course, trivial doesn’t mean correct. For instance, the code above assumes that all reference acquisition and release operations will happen from the same thread; if that’s not the case, you’ll have to use atomic integer operations to increase and decrease the reference count. Additionally, the code above does not check for overflows in the reference counting, which means that the value could saturate and lead to leaks.
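
For example, a thread-safe variant of the two functions above might look like the following minimal sketch, which uses GLib’s atomic integer operations and still omits the overflow checks mentioned above:

Person *
person_ref (Person *p)
{
  // Atomically acquire a reference
  g_atomic_int_inc (&p->ref_count);

  return p;
}

void
person_unref (Person *p)
{
  // Atomically release a reference; g_atomic_int_dec_and_test()
  // returns TRUE when the counter drops to zero
  if (g_atomic_int_dec_and_test (&p->ref_count))
    person_free (p);
}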

Reimplementing all the checks and behaviours is not only boring, but it inevitably leads to mistakes along the way. For this reason, GLib 2.58 introduced two new types:

  • grefcount, to implement simple reference counting
  • gatomicrefcount, to implement atomic reference counting

Both come with their own API, which leads to this code:

typedef struct {
  grefcount ref_count;
  // or: gatomicrefcount ref_count;

  // same as above
} Person;

Person *
person_new (void)
{
  Person *res = g_new0 (Person, 1);

  g_ref_count_init (&res->ref_count);
  // or: g_atomic_ref_count_init (&res->ref_count);

  return res;
}

Person *
person_ref (Person *p)
{
  g_ref_count_inc (&p->ref_count);
  // or: g_atomic_ref_count_inc (&p->ref_count);

  return p;
}

void
person_unref (Person *p)
{
  if (g_ref_count_dec (&p->ref_count))
  // or: if (g_atomic_ref_count_dec (&p->ref_count))
    {
      person_free (p);
    }
}

The grefcount and gatomicrefcount types make it immediately obvious that the field is used to implement reference counting semantics; the API checks for saturation of the counters and will emit a warning, and the atomic operations are verified. Additionally, the API checks if you’re trying to pass an atomic reference count to the grefcount API, and vice versa, so you have a layer of protection there during eventual refactoring, even if the types are both integer aliases.

We did not stop there, though.

Adding a reference count field to a structure is not always possible; for instance, if you have ABI compatibility restrictions, or if the type definition is public, adding a field may just not be something you can do within the same API version. By way of an example: you may have a type meant to be typically placed on the stack, so it needs a public, complete declaration, in order to have a well-defined size at compile time. You may also want to pass around an instance of the type as the argument for a GObject signal, or as a property — but it may be expensive to copy the data around, so you really want to have an optional reference counting mechanism that is invisible to the vast majority of the use cases.

This is why we also added an allocator function that adds reference counting semantics to the memory areas it returns, called GRcBox:

typedef struct {
  char *name;
  char *address;
  char *city;

  int age;
} Person;

Person *
person_new (void)
{
  return g_rc_box_new0 (Person);
}

Person *
person_ref (Person *p)
{
  return g_rc_box_acquire (p);
}

void
person_unref (Person *p)
{
  // person_free() is copied from the code above; we use
  // g_rc_box_release_full() because we have data to free
  // as well, but there's a variant for structures that
  // do not have internal allocations
  g_rc_box_release_full (p, person_free);
}

As you can see, this cuts down the boilerplate and repetition considerably.

The data returned to you by g_rc_box_new() and friends is exactly the same as you’d get from g_new(), so you can pass it around to your API exactly the same — but it is transparently augmented with reference counting semantics. You acquire and release references without having an explicit reference counter. The only restriction is that you cannot reallocate the data, so calling realloc() is not allowed; and, of course, you cannot free the memory directly with free() — you need to release the last reference.

Similar to GRcBox, there’s a GAtomicRcBox, which provides atomic reference counting semantics, with a similar API.
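
As a rough sketch, the GRcBox example above can be switched over to atomic semantics just by changing the allocator calls; the Person structure and person_free() are assumed to be the same as before:

Person *
person_new (void)
{
  // Allocates a zero-initialised, atomically reference counted Person
  return g_atomic_rc_box_new0 (Person);
}

Person *
person_ref (Person *p)
{
  // Acquires a reference on the memory area
  return g_atomic_rc_box_acquire (p);
}

void
person_unref (Person *p)
{
  // Releases a reference, calling person_free() on the data
  // when the last one is dropped
  g_atomic_rc_box_release_full (p, (GDestroyNotify) person_free);
}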

Both GRcBox and GAtomicRcBox are Valgrind-safe, so you won’t get false positives or unreachable memory if you run your code under Valgrind.

You don’t even need to have a structure to allocate: you can use GRcBox to allocate any memory area with a non-zero size. Incidentally, this is how we implemented an oft-requested feature: reference counted strings, which are also available in GLib 2.58.

Reference counted strings work exactly like any other C string, but instead of copying them and freeing them, you acquire and release references to them:

char *s = g_ref_string_new ("hello, world!");

// "s" is just like any other C string
g_print ("s['%s'] length = %d\n", s, strlen (s));

g_ref_string_release (s);

Reference counted strings can also be interned, which means that calling g_ref_string_new_intern() with the same string twice will give you a new reference the second time around, instead of allocating a new string; once the last reference to the interned string is dropped, the string is un-interned, and the resources allocated for it are freed.
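
A quick sketch of what interning looks like in practice; the pointer comparison is there purely to illustrate that the second call hands back the same allocation with an extra reference:

char *a = g_ref_string_new_intern ("hello, world!");
char *b = g_ref_string_new_intern ("hello, world!");

// Both variables point to the same interned string,
// which now holds two references
g_assert (a == b);

// Each reference still has to be released
g_ref_string_release (a);
g_ref_string_release (b);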

Since you may be wary of passing around char * for reference counted strings, there’s a handy GRefString C type you can use to improve readability, like we have GStrv for char **:

GRefString *s = g_ref_string_new ("hello");

// ...

g_ref_string_release (s);

GRefString also has an autocleanup function, so you can do:

{
  g_autoptr(GRefString) s = g_ref_string_new ("hello!");

  // ...
}

and the string will automatically be released once it goes out of scope.

by ebassi at September 04, 2018 03:35 PM

August 18, 2018

Tomas Frydrych

M4/3: The Outdoor Camera System

It’s been 10 years since the birth of the M4/3 camera system. I got my first M4/3 (Lumix GF2) in 2010 and never looked back. Indeed, I am about to argue that during that decade M4/3 has become the best camera system for the landscape photographer on the move and the wildlife photographer alike, hitting the sensor size sweet spot. And yet, it’s completely overlooked by the outdoor movers and shakers!

There are reasons for that. Some historical (there is a large contingent of outdoor photographers who switched to digital when it was being grafted into legacy 35mm systems). Some to do with misunderstanding of what determines digital picture quality and where the technology is. And let’s not be shy, there is some pure ‘mine is bigger than yours’.

Compared to all the other interchangeable lens systems out there, M4/3 has a whole bunch of things in its favour; here are some of them off the top of my head.

  • M4/3 is light on the back. My ‘complete’ outdoor camera kit, which consists of a camera, 24-80mm eq. zoom, a fisheye, 200-800mm eq. ‘bird’ lens, a full-sized tripod, spare batteries and a handful of filters, weighs 5.5kg. Apart from the tripod it fits inside a BYOB 10 insert – my mother-in-law goes around with a (considerably) bigger handbag than that! Now try that with a full-frame system; the equivalent ‘bird’ lens alone weighs more than the whole lot and measures over 50cm.

  • M4/3 is light on the wallet. At £1,200 the aforementioned Leica ‘bird’ zoom is not cheap by any means, but it’s not out of the reach of us mere mortals. And that’s basically as expensive as it gets. Equivalent full-frame lenses start at ~£6,500 ... Similarly, top, professional grade, M4/3 cameras such as the Lumix G9 will set you back by ~£1,200 ... need I say more?

  • The M4/3 dual image stabilisation (5-axis in camera + in lens) is second to none. It means not only that I can comfortably hand-hold shots at 1/25s, but that, if required, I can hand-hold that 800mm eq. beast, and that is priceless.

  • If you want a camera that can take both excellent stills and video, the M4/3 is without competition, and most of the lenses have been designed with this in view (i.e., zooms with constant aperture when zooming).

  • Ask yourself why there is such an abundance of sensor cleaning kits for full-frame cameras, but nothing for M4/3. As it happens, M4/3 comes with built-in sensor cleaning as standard. Taking pictures outdoors? Invaluable!

‘But the image quality, the bokeh ...’, I hear you say. OK, let’s talk some geekery.

Size Matters

Megapixels sell cameras, for the belief that image quality is to do with pixel count is more deeply entrenched in the psyche of the digital era photographer than anything else. Yet the reality is a lot more complex, and in fact, more pixels are not necessarily better (back to that in a moment).

On its own, the MP value has a bearing only on how much the image can be enlarged. To get a good quality photographic print you need around 200 dpi (dots per inch); if you are looking for truly bespoke work, and have a capable enough printer (a person, not a gadget), then 300 dpi. But that means an old 6MP camera is enough for an excellent print at around 10"-15" wide, while 20MP is good enough up to 17"-25" – very few photos, even professional ones, will ever be printed at anything near that.
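
To make the arithmetic behind those figures explicit (the pixel dimensions are rough approximations for the stated megapixel counts):

  • a 6MP image is about 3000 px along its long edge: 3000 px / 300 dpi = 10" and 3000 px / 200 dpi = 15"
  • a 20MP image is a little over 5000 px along its long edge: ~5100 px / 300 dpi ≈ 17" and ~5100 px / 200 dpi ≈ 25"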

But what do I know? Here is what Colin Prior has to say on the subject of megapixels:

What do you need a 100MPs for? ... What do we need that resolution for? Nobody can answer that question! Anything more than 30MPs is total bonkers (an interview on The Togcast Podcast, episode #26)

So, if you do the sort of photography that requires you to regularly produce prints at A2 or bigger, then M4/3 might not be a good fit, but do you? Most of us don’t; the vast majority of today’s imagery is intended for digital consumption, which puts a much lesser demand on pixel density. A 27" iMac retina display has a resolution of about 200 dpi, a typical consumer monitor 120 dpi, and ‘4k’ means roughly 4000 pixels wide – a 20.3MP M4/3 image exceeds all of those.

‘But lot of pixels means you can crop a lot!’ I prefer to compose my pictures on location instead, don’t you?

In contrast, the physical size of the sensor matters a great deal. M4/3 cameras have a sensor that has (the clue is in the name) a 4:3 ratio, and a diagonal exactly half that of a 35mm film negative (which in the digital age, to the amusement of medium and large format photographers everywhere, has come to be known as ‘full frame’, FF for short). The relationship between the FF and M4/3 diagonals is referred to as a crop factor of 2.

The physical size of the sensor determines three things in particular: the total amount of light that falls on the sensor at any given light conditions, the focal length of a lens needed to achieve a given angle of view, and the perceived depth of field.

Equivalent Aperture

Because the M4/3 sensor area is a little bit over 1/4 of the FF sensor area, the total amount of light (the number of photons) falling on the sensor is ~1/4 of what would fall on a FF sensor if both lenses were set at the same f-stop, or the same amount of light that would fall on the FF sensor if its lens was closed down by 2 f-stops. This relationship is referred to as equivalent aperture.

The amount of light that falls on the sensor has a bearing on noise. There is a whole field of science dedicated to the subject (signal theory), but in essence, in any real-world system that transmits information, the useful bits (signal) are always mixed with bits that should not be there (noise; think of it as the crackling of an old vinyl). The closer the size (amplitude) of the signal is to that of the noise, the more intrusive the latter becomes (quiet music, loud crackling). This relationship is described by the signal-noise ratio (SNR).

In a digital sensor the amplitude of the noise is largely given by the quality of the circuitry, while the amplitude of the useful signal is given by the intensity of the light. When it’s dark, the signal is small and has to be amplified. This is where the ISO number comes in. With a digital camera the ISO number represents the amplification factor: the bigger, the more amplification. But the problem with amplification is that it increases not only the amplitude of the signal, but also of the noise.

But the important thing to understand, and I get the distinct impression that a lot of photographers don’t, is that when it comes to digital, noise is not determined by the amount of light that falls on the sensor as a whole (indicated by the equivalent aperture) but by the light falling on an individual pixel on the sensor. In other words, noise depends not only on the equivalent aperture, but also on the overall pixel count.

So, at the same f-stop a pixel on a 45MP FF sensor receives not 4x (as the equivalent aperture would suggest) but only ~1.7x the amount of light that a pixel on a 20MP M4/3 sensor does, i.e., all other things being equal, the M4/3 lens needs to be opened up by less than a stop to achieve the same performance – the fact that the current generation of M4/3 cameras has stuck with sensible sensor pixel counts, whereas FF sensors are chasing ‘bonkers’ pixel densities, works hugely in M4/3’s favour.
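
Spelling that sum out, roughly, with the ~3.8x sensor area ratio and the nominal pixel counts used above:

  • light per pixel ∝ sensor area / pixel count
  • FF vs M4/3: ~3.8 × (20.3MP / 45MP) ≈ 1.7x
  • one full f-stop is a factor of 2x, so closing that gap takes a little less than a stop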

In practical terms, the noise of the system determines two things: how quickly the image degrades when the ISO value is pushed up, and the dynamic range of the sensor (the difference between the lightest and darkest light level the sensor can capture). So what does it look like in the real world?

Regarding the high ISO usability it looks pretty good. The current crème de la crème of the M4/3 cameras, the Lumix G9, doesn’t show significant amounts of noise until above ISO 3200. In terms of outdoor photography this is very decent. And even my now somewhat ageing GX8 is in my experience perfectly fine to at least ISO 800.

At the same time, M4/3 lenses tend to be fast. A typical zoom lens will come at f2.8, good enough to shoot any daytime landscape at ISO 100, and it gets even better with primes. The Olympus M.Zuiko fisheye opens up to f1.8, which is good enough for night time photography at ISO 100. Personally, I rarely need to set my M4/3 cameras to anything other than ISO 100, the main exception being bird photography with the big telephoto at fast shutter speeds (1/2500s or so); but even for that ISO 3200 is usually enough.

Then, of course, there is the excellent performance of current noise reduction algorithms that can be used in post-processing. In other words, it’s just like the megapixels – in terms of low light noise the M4/3 cameras are well beyond what is necessary to take that perfect landscape image.

What about the dynamic range? The active range of the human eye is estimated to be ~10-14EV (the absolute dynamic range is about 24EV, but this is not actively usable since the eye takes the better part of an hour to fully adjust to a transition from light to darkness). Top end FF (as well as MF) sensors come in a bit under 15EV, while top end M4/3 cameras deliver a bit over 13EV.

‘Aha!’, you say, ‘the FFs are almost 1.5 stops better!’

My answer to that is twofold: on the one hand, 13 and a bit EV is more than enough in most landscape scenarios, and when it is not, it is easily remedied (either by an old fashioned graduated filter, or by that wonderful feature of modern digital cameras, exposure bracketing). On the other hand, 1.5 stops is not enough of a difference; I routinely use a 3EV GND filter, which means I’d still have to do something even with a top end FF (or even MF) camera.

So to sum up, I don’t believe the ‘better image quality’ argument for FF sensors holds. It’s not that the smaller sensor doesn’t come at some cost in quality; it does. But once the quality of the hardware gets beyond a certain point, it no longer matters, and the quality of the image comes down purely to what we do with the camera. And today’s M4/3 cameras are, in my personal view at least, well beyond that point.

Equivalent Focal Length

For a given angle of view, the focal length of the lens is proportional to the crop factor. So, if the ‘normal’ angle of view for FF sensor is created by a 50mm lens, for M4/3 the same is achieved with a 25mm lens, which would be said to be a ‘50mm equivalent’.

The big consequence of this, and the real strength of M4/3 as a system for the photographer on the move, as I already noted, is that the lenses are much smaller and lighter (not to mention cheaper). As Jim Frost once put it, ‘you don’t need a wheelbarrow to carry it around’. Nothing to add to that really.

Depth of Field

While equivalent focal length is useful to compare the angle of view, two lenses of such equivalent focal length are not equivalent in other important respects. The most notable difference is the depth of field. Lenses of equivalent focal length have the same depth of field at apertures proportionate to the crop factor, i.e., the depth of field of a M4/3 25mm lens at f2.8 is the same as that of a 50mm FF lens at f5.6. In other words, M4/3 has a noticeably greater depth of field than a FF camera for the same angle of view and aperture.

The increased depth of field is the single biggest, and I think the only practically significant, difference between modern M4/3 and FF cameras; whether the M4/3 is deemed to be a good fit will depend on how much value shallow depth of field adds to one’s images.

There are photographic disciplines where a shallow depth of field is definitely a must, but personally I don’t think outdoor photography is necessarily one of them. First of all, it’s not so much of an issue with closeups, for at short distances even the M.Zuiko f1.8 fisheye will produce a decent bokeh (plus closeups can, more often than not, be done using a longer focal length). At long focal lengths this is not an issue at all, e.g., the depth of field on the Leica 100-400mm zoom is perfect for wildlife photography.

So in practical terms the depth of field only makes a noticeable difference at mid foreground distances with normal to wide angle lenses. To be specific, with an M4/3 12mm lens (24mm eq.) at f2.8 with far focus limit stretching to infinity, the near focus limit will never be more than ~3.5m. A FF 24mm lens at 2.8 can push that near limit to around 6.5m. But how much difference does this really make to me? If nothing else, if I really want that extra 3m of unfocused foreground, and I can’t achieve the desired composition at a longer focal length, I can fix that in post-processing.

In Conclusion

As far as I am concerned the M4/3 does it all. It’s a great system for outdoor photography, whether it’s landscape or wildlife, and after eight years of using it, I can’t envisage ever wanting to ‘step up’ to a full-frame system.

It’s not that I don’t understand the benefits of a bigger medium, having shot thousands of pictures in years gone by on 35mm film, I do. But for the vast majority of my pictures the M4/3 is the optimal system. And when it’s not? Well that’s where medium format comes in, of course. Full frame? But a relic of disposable cameras and holiday snaps from a long gone era of film. 😉


A Few Notes on M4/3 Cameras

Just a few comments on the cameras and lenses I have owned, with a random selection of pictures for illustration. Probably worth saying, the pictures have had a varied degree of post processing applied in Lightroom, but no Photoshop involved (life is too short for that).

Lumix DMC-GF2

My first M4/3 camera. Introduced in 2010 as a somewhat dumbed down successor to the hugely popular GF1, the GF2 delivered 12.1MP resolution with a dynamic range of 10.3EV. By today’s standards not much, but nevertheless, in combination with a 14mm pancake lens it proved to be an excellent setup for my outdoor activities. The RAW output made it amenable to a decent amount of post processing, compensating somewhat for the limited dynamic range, and with a little care it could produce pretty decent pictures.

Loch Voil (Lumix DMC-GF2, Lumix G 20mm/f1.7)

Larch Tree (Lumix DMC-GF2, Lumix G 14mm/f2.5)

Lumix DMC-GM5

The main limitation of the GF2 for me was that it did not have a viewfinder, only a fixed LCD screen. This made it hard to use in bright light, and also to compose images properly in general. Eventually I had enough ‘if only this was a bit to the left’ pictures to look for a replacement. The GM1 ticked all the boxes, but by the time I made up my mind (the GM1 was quite dear), its successor, the GM5, came out in 2014.

To me the GM5 was a true marvel of technology. It was much smaller and much lighter than the GF2; at 260g (with the 14mm pancake lens) this was a camera I could fit into a shoulder strap pouch and run with. Yet it addressed what I thought were the main deficiencies of the GF2: it has a decent viewfinder and even some external mechanical controls, e.g., for exposure settings. And the 16MP sensor had a much improved dynamic range of 11.7EV – this was a compact-like camera packing some serious punch.

In spite of being four years old now, which in digital sensor technology is a lifetime, I still use the GM5, and indeed over the years it has produced some of the pictures that made me most happy. In many ways it remains the perfect outdoor camera for me. These days I usually refer to it as ‘the running camera’ on account of its size and weight, for it’s what I take when weight matters. Nevertheless, the dynamic range is enough for most outdoor photography scenarios, and with minimal post-processing the GM5 is capable of producing excellent quality prints of up to ~18" or so.

Sgurr an Fhidleir and Lochan Tuath (Lumix DMC-GM5, Lumix G 14/f2.5)

Bynack More (Lumix DMC-GM5, Lumix G 14/f2.5)

Lumix DMC-GX8

The Lumix GX series came as the spiritual successor of the GF1, a class of camera aimed at the serious amateur. The GX8, introduced in 2015, is arguably a pinnacle of that series (for reasons only known to Panasonic, the GX9 is a dumbed down version of the GX8, lacking some of the things that make the GX8 so great, not least the mechanical controls and weatherproofing).

The GX8 has well thought out external controls, including a dedicated EV compensation dial big enough to work with in winter gloves, a decent resolution tilting viewfinder and a tilting screen; it is weatherproof, and all in all a pleasure to use. The in-body 5-axis stabilisation works extremely well, and the 20.3MP sensor has an excellent dynamic range of 12.6EV – more often than not, the GX8 produces great looking images straight out of the camera.

The main limitation of the GX8 is the presence of an anti-aliasing filter on the sensor, which results in some loss of sharpness in fine detail. Nevertheless it produces excellent prints up to ~24".

Loch Leven (Lumix DMC-GX8, Olympus M.12-40mm / f2.8)

Bealach Bhearnais (Lumix DMC-GX8, Olympus M.12-40mm / f2.8; hand-held at ISO 800)

Lumix DC-G9

The Lumix G series is aimed at the professional user. The much improved 20.3MP sensor has a dynamic range of over 13EV, no AA filter, and produces very sharp images that hold their own even alongside today’s best FF cameras. Like the GX8, the G9 is weatherproof, but the body is somewhat bigger and heavier (mostly due to an enlarged hand grip).

Some of the nice features of the GX8 are missing (no dedicated EV compensation dial, the viewfinder doesn’t tilt), but the number of customisable buttons is greater, and the little on-camera LCD is a nice touch. The viewfinder resolution has gone up, its image very bright and clear. The 5-axis stabilisation has been further improved, there is a sensor shifting high resolution mode, delivering 80MP (cool, but awkward in real life), and 4k / 6k mode for shooting stills.

I haven’t had the G9 for long, but the step up from the GX8 is palpable; while the GX8 is a contender for a serious use camera, the G9 is that camera without a question.

Kolla, Iceland (Lumix DC-G9, Olympus M.12-40mm / f2.8)

Ísafjörður, Iceland (Lumix DC-G9, Olympus M.12-40mm / f2.8)

A Few Notes on Lenses

Over the years, I have accumulated a collection of M4/3 lenses, but the ones I most often use are the following.

Lumix H-H014 14mm Pancake

When weight really matters, this is the sole lens I’ll pack, together with the GM5. At f2.5 it is reasonably fast, light, and very low profile.

Olympus M.Zuiko Digital ED Pro 12-40mm

This is my default lens, combined with either the GX8 or G9. It is weatherproof, f2.8, constant aperture while zooming (great for videos), and, as with the whole M.Zuiko series, of excellent optical quality.

Shenavall (DMC-GX8, Olympus M.12-40mm / f2.8)

Beinn a'Chlaidheimh (DMC-GX8, Olympus M.12-40mm / f2.8)

Olympus M.Zuiko Digital ED Pro 8mm Fisheye

A true, 180 degree, fisheye. At f1.8, this is a great lens for both night time photography and daytime landscapes.

Lochain Coire an Lochain, Braeriach (DMC-GX8, Olympus M.8mm / f1.8; exposure bracketed)

Bealach Beinn an Eoin (Lumix DC-G9, Olympus M.8mm / f1.8)

Leica DG Vario Elmar 100-400mm Zoom

The aforementioned ‘bird’ lens, brilliant for wildlife photography. F4-6.3, in-lens image stabilisation, excellent optical quality. At 1kg, not the lightest but, more often than not, worth it.

Tawny Owl (DMC-GX8, Leica DG 100-400 / f4.0-6.3; shot at dusk from a monopod at ISO 6400, with noise cleaned up in post processing)

Greenshank (Lumix DC-G9, Leica DG 100-400 / f4.0-6.3; hand-held at ISO 400)

by tf at August 18, 2018 12:11 PM

August 14, 2018

Tomas Frydrych

More on the Triad & Decagon Stoves

I have mentioned the Vargo Triad and Decagon in an earlier post on Cooking with Alcohol. I have now had a chance to use both stoves for real, and the experience was, unfortunately, not so good.

The Triad

The Vargo Triad is a lovingly made pressurised titanium stove with a capacity of ~40ml. I bought it specifically with a view to a two week trip to Iceland this summer. When it arrived, it was love at first sight; it is a real thing of beauty. And as you can see from the earlier post, it performed well both in the initial tests in our kitchen and during experimental snow melting on a very nice, calm day in the Ochils -- I felt confident the Triad was going to be perfect for us.

However, before going to Iceland I happened to take it on an overnight outing in the hills. It was during the heatwave, and so, unusually for me, I decided to just bivvy rather than take a tent, and, somewhat unexpectedly, it turned out to be a fairly windy evening, hitting around 40mph during the night on the high ground.

Not a big deal. I chose relatively sheltered locations to cook in, and also used some large stones as an external windbreak, with the stove itself inside a home made metal windshield. This should have been fine (and would have been with my Speedster stove). But both in the evening and the morning a gust of wind lasting a few seconds caused the stove to flare up quite uncontrollably, with flames of about 30cm. In both instances the stove would not return to a normal burn until the fuel burned out, and the second time this happened it was so bad, I decided to have my porridge cold rather than risk setting the hill on fire.

While I suspect this behaviour might have, to some extent at least, been exacerbated by the high and tight-fitting windshield I was using, it is I think primarily caused by the low profile of the stove (more on this below).

Based on that experience I would never use this stove unless it was on a very large non-flammable surface (at least a meter in diameter), and certainly not inside a tent. We ended up taking a Trangia to Iceland instead (and it really grew on me during those two weeks, but that's for another time).

The Decagon

The Vargo Decagon is another beautifully made, but unfortunately very poorly designed stove. It seems that the overriding objective was to create a stove that would be indestructible, but in achieving this, Vargo created one that fails miserably at its primary function. The design suffers from three flaws, each one bad enough on its own.

Priming

The Decagon is impossible to prime. The packaging suggests that the stove will prime in 1-2 minutes, but that is just wishful thinking. Even in the controlled environment of my kitchen, it takes on average 4-5 minutes to bloom. Vargo suggest that a faster alternative priming method is to splash some alcohol around the base -- I find it hard to believe that they really suggest this, for while it indeed speeds things up, it results in a large uncontrollable flame, so it is only possible if your stove is sitting on a large, completely non-flammable area.

This is a pity, because this flaw can be easily remedied by adding a priming ring; the picture above shows the stove with a priming ring made of glass fibre cable insulation, with which it can be primed in under 1 min.

Excessive Burner Cooling by the Pot

To make the Decagon indestructible, Vargo did not include any pot supports. Instead the pot sits on three small bumps on the top of the burner. The result of this is that the pot conducts heat directly away from the burner itself, with a clear impact on the gasification of the fuel; the amount of heat generated by the burner visibly grows as the pot warms up (just about the very opposite of ideal burner behaviour, I think).

One of the consequences of this design is that the pot cannot be placed on until the burner has bloomed (as is noted in the instructions). But in fact the heat loss is so bad that if you put a pot of ice cold water on even once the stove is in full bloom, it will de-bloom in a matter of seconds, and go completely out shortly after. (So no, this is not a stove that could be used to melt snow, as I hoped.)

This flaw can, again, be worked around by using external pot supports that lift the pot slightly above the burner, but this can lead to the burner overheating and flaring up, so I would not recommend it. Which brings me to the biggest problem with the Decagon.

Safe Fill Level

The thing that really appealed to me about this stove is the large capacity of just under 60ml. Unfortunately, the stove cannot be safely filled to this level. If you do, liquid fuel will spew out of the jets during priming. This then creates a very strong positive feedback loop, causing the stove to get into an uncontrollable rocket-like flare; throw a tight windshield into the mix, and it will generate enough heat to not only pulverise silicone matting but also, literally, melt aluminium (but unlike the Triad, the Decagon doesn't need a windshield to get there).

When in this state the stove cannot be blown out, you either have to cover it with a large pot to deprive it of oxygen, or let it burn out. My experiments suggest that the Decagon safe fill is only about 40ml (but this makes it impossible to prime without a priming ring!).

In spite of all this, we took the Decagon with us to Iceland as a secondary stove, and used it a couple of times with a frying pan. Assuming it is not filled with more than 40ml, and precautions are taken so it can be safely left to burn out just in case, it is OK for that; I might use it again that way now that I have it, but as I expect you have guessed by now, I'd not buy it again.

Lessons Learnt

I think the basic problem with both of these stoves is that the jets are too close to the fluid level. It's worth noting that the Trangia instructions say not to fill their burner more than 3/4 of the depth of the bath, which means there is at least a 1cm gap between the fluid and the jets. In contrast, the instructions for the Vargo stoves say they must be filled to capacity to prime (and, indeed, will not prime otherwise), but that puts the jets very close to the fuel surface, and makes them susceptible to spewing out liquid fuel when the alcohol boils. Whichever way you look at it, and whatever the burner design, this can never be considered a good thing.

So my quest for a winter alcohol stove goes on -- I think I am going to give the Trangia a shot in spite of its bulk. But in any case, I am definitely going to avoid any pressurised alcohol stove where there is not a sufficient gap between the jets and the fluid level.

by tf at August 14, 2018 12:44 PM

July 28, 2018

Tomas Frydrych

The Wind

I saw the front in a distance. A solid wall of water, just obscuring where Kings House once stood (a view improved, I dare say). It was upon me before I had the tent up, a scramble to get inside, wait it out by candle light.

Half an hour or so and it’s over. A tiny light on Beinn Bheag, a kindred spirit I imagine. But there is no sign of the blood moon (unlikely as it was, the reason I rushed here this evening after work). Though the cloud has cleared somewhat, so there is still a tiny glimmer of hope, some time left.

Alas, it’s not to be. I notice a white cloud oozing from Lairig Gartain, then a rather menacing black one emerges from behind the Buchaille. But the real weather is behind my back. The wind has done a half circle turn and another wall of rain just passed Beinn Fhada and is advancing my way, fast. The torrents hit just as I am unzipping the tent. More time for idle thoughts by the candle light; much to be recommended.

The first lightning takes me by surprise. Sure, it was forecast, but out there just now it did not feel like a stormy sort of a day. One elephant, two elephants, three elephants, four elephants. One elephant, two elephants, ... eight elephants. Good. One elephant, ... three elephants. Less good; glad I didn’t camp any higher up, sparing a thought for the tiny light on Beinn Bheag.

It lasts another half hour. Time to put the boots on again, just in case the moon has appeared; there is still a little time left. But, naturally, there is no moon, just some sheet lightning far to the east. For some reason the road is very busy just now, so I decide to take some cliche long exposure pictures, but the rain returns before I am ready, so back into the tent. More lightning. This is it, out of time, turn in for the night.

When I wake up in the morning, the first thing I see are two unfamiliar blotches. As my eyes manage to focus, I make out two birch seeds stuck to the outside of the inner tent. They must have come on the wind during the night, possibly a long way – I can’t recall any birches around here.

They make me smile. Symbols of new life, of change; of the very possibility of change. But they make me sad at the same time, for, unwittingly, by my being here, their progress came to a sudden halt. Another hard to escape symbolism.

I know, it’s just a couple of birch seeds. But symbolism matters. As every religion ever understood, symbolism makes it possible to participate in what we do not see, in what is not (yet) here. Symbols turn the abstract and theoretical into the personal and tangible; they turn thoughts into surrogate experiences.

Take the plastic straw. It’s been pointed out (unhelpfully) that if you, and I, and everyone else, give up on plastic straws, nothing much will change, for most of that plastic pollution in our seas comes from fishing nets. Perhaps, but that is to miss the point.

The moment I decide ‘no more plastic straws’ is the moment I, personally, start owning the bigger problem, the moment I accept my complicity. The symbolic value of this act is immense, for without such an ownership, both individual and collective, real change cannot happen.

Perhaps that is also the reason we need the lynx back in Scotland.

I will be honest, I have read the NGO materials, the economic forecasts, the description of the methodologies used to derive these. They leave me cold – they do not read like economic forecasts from people rushing to make money out of the lynx. They read like statements from people who believe it is the right thing to do, but the only way to achieve that is to convince the world at large there is money to be made. I, for one, am not sold on this Lynx-the-tourist-attraction, not least because should the economic case fail, the whole project is doomed, and doomed for a long time after.

But the argument that, ecologically, and I think also morally, it’s the right, even necessary, thing to do, seems to me very strong. Lynx as a symbol of change, of accepting that our natural resources cannot be managed purely from the perspective of primitive market economics, that repairing damage done in the past rather than merely maintaining the status quo has to be part of a modern environmental ethos, that is, I think, very potent and could have ramifications beyond the few roe deer that will not need to be culled.

Perhaps. I am neither an ecologist, nor a sheep farmer, but, FWIW, I know a bit about religion and symbolism.

I forgot all about my two seeds until just now, drying the tent in the garden. They are no longer there, just two small stains on the fabric. And now I am not even sure they were there in the first place. Did my eyesight deceive me? It would not be the first time.

It doesn’t matter. Just now there are birch seeds everywhere I look. There are thousands of them in the air, in my hair, they fall behind my shirt, and land in my pockets, my shoes are full of them. And the wind? The wind is picking up again!

by tf at July 28, 2018 06:23 PM

April 30, 2018

Tomas Frydrych

Six Months with Cotton Analogy®

When the row over the National Trust for Scotland trademarking the name ‘Glencoe’ erupted last summer, I had never heard of a company called Hilltrek. But for a while then I had been on the look out for some clothes for pottering about the woods with binoculars and a camera during the winter months, and had not seen anything that would be well suited to the (sodden) Scottish conditions. And I liked what I saw at the Hilltrek website.

Hilltrek are a tiny Scottish company offering a range of clothing made from Ventile®. If you, like me, have not heard of Ventile® before, it’s a cotton fabric developed in the 1930s essentially for fire hoses. When subjected to water, its very dense weave swells so much that it prevents water penetration. The swelling is not instantaneous, so a single layer of the material is not enough to keep a wearer completely dry when subjected to a lot of water, but two layers, so called Double Ventile®, are.

Hilltrek make clothes in three fabric options: Single Ventile®, Double Ventile®, and Cotton Analogy®. The latter is a single layer of Ventile combined with the Nikwax Analogy® lining, also used by the Paramo® range of clothes. It was this that caught my attention, for the Nikwax Analogy® lining is well proven, and while I have never owned any Paramo® clothing (I am more of a Buffalo man myself), I know many who swear by it, and I have seen it perform excellently in some ‘real’ Scottish and Welsh weather. Unlike Paramo®, Cotton Analogy® offers the natural feel of cotton, and the lack of the irritating rustling of nylon — I was sold.

The Conival Trousers

While I was looking for something to wear about the woods, with the Conival Trousers I got more than I bargained for — without exaggeration, these are the best outdoor trousers I have ever owned. Over the last six months I have spent somewhere in the region of thirty five days wearing them, from sodden days in the woods, to numerous big full on days in the hills, including multi-day camping trips in the snow and temperatures dropping below -10C. In all of this they performed impeccably.

The Conivals have a no-nonsense cut, can be customised at the point of ordering, and if you have special requirements, all you need to do is to lift the phone (the great thing about dealing with small companies). There are two zipped pockets on the back, and two front hand pockets; cargo pockets can be ordered as an extra.

Unlike typical waterproof fabrics, the Analogy® lining is pleasant enough to wear next to skin, so these really are trousers rather than over-trousers, and they breathe very well. I tend to sweat fairly heavily, and so I normally avoid wearing waterproofs until it is really raining — these are the first waterproof trousers I have owned that don’t feel like being inside a banya and that I am happy to wear all the time.

The two layer construction is quite warm. I have found them good down to a few degrees C below zero on their own, and with a pair of thin merino long johns in temperatures down to -10C. On the upper end, I find them fine to about 12C, beyond that they are too warm for me (but then I don't usually wear waterproofs in those sort of temperatures anyway, and I am so impressed I am saving up for the Single Ventile® version Hilltrek make).

I have heard it said of Paramo® trousers that if you kneel on wet ground the water gets through. I have knelt in the Conivals in mud and snow on numerous occasions, pitching a tent or resting calves on long steep front pointing stints, and I have not found that to be the case, perhaps it is the benefit of the Ventile® itself being shower proof (or perhaps it was just an evil rumour about Paramo®).

The Ventile® fabric is quite heavy compared to ‘modern’ ‘technical’ kit, but I am really growing sick and tired of this current obsession with weight, which invariably translates into equipment that lasts a season or two. Indeed, the Conivals have shown themselves to be (I admit, surprisingly) hard wearing. I have done a fair bit of sliding about in them, sometimes on quite coarse icy ground, without noticeable surface wear. Some of the stitching around where the front pockets merge the side seam is starting to come undone, but that’s easily fixed.

The main wear-related issue with the Conivals is to do with the Ventile® dye, which does not seem to penetrate deep into the fibre, so where the fabric creases regularly, it starts reverting to the natural colour of cotton. This happens so easily that, somewhat disconcertingly, the trousers started showing these whitish marks from the very first short walk in them, and it gets progressively worse, though it does appear to be purely cosmetic.

The biggest drawback of Ventile® is that, according to the manufacturer’s recommendation, it is supposed to be dry cleaned. For a jacket this might be OK, but for outdoor trousers this is not practical. A closer look at the Ventile® site shows that the fabric can be washed with soap. I have been washing mine in 30C using the Nikwax® Tech wash, and can report no ill effects.

(It's worth noting that as with all waterproof fabrics the special care requirements have naught to do with the fabric per se, but the DWR coating that is applied to it, which has largely worn off before the first wash. I have tried Nikwax® Cotton Proof per the manufacturer recommendation; it does not produce the same sort of beading the original DWR did. It does seem to slow the water absorption a bit, but I am not entirely convinced it merits the expense.)

The Assynt Jacket

The Assynt jacket is billed as ‘ideal for field sports, nature watching and photography’. It has a corresponding cut with a waist level draw cord, two voluminous, low down, front pockets with stud closures, two chest level hand warming pockets, and a 5” high collar, with a stowaway hood.

In terms of size, based on the official size chart I am bang on for S, and indeed, have found the chest size to allow for adequate layering for winter use. But the sleeves are a different story. If anything the nominal size suggests these should be too long for me, but in fact they are well on the short side (1-2" shorter than on any other jacket of a comparable size I own), which becomes very noticeable with more layers underneath.

The snug fitting collar is the jacket’s best feature, keeping the dreich weather at bay. The stowing of the hood works better than is usual with such an arrangement, but unavoidably results in a hood of a low volume. This is the jacket’s main limitation. I have used it on a couple of fairly full on mountain days to see what it would be like, and the hood is not up to the task (this is not the intended use, and there are other jackets in the Hilltrek range that come with big volume, helmet-compatible, hoods).

Another minor drawback is that the hand warming pockets don’t have any closures, and, as they are not Ventile® lined, this makes them draughty in a moderately strong side-on wind. This feels like a bit of an oversight within the otherwise well thought out design.

All in all, I have found the jacket to be excellent within the parameters for which it was intended. I do wish the hood was bigger; I find I keep it out most of the time, simply because Scotland, and a bigger, non-stowable hood would make this a much more versatile garment.

None of the Hilltrek clothing is cheap, especially if you decide to do some customisation, but not incomparable to prices of some big brand mass produced outdoor kit. On the other hand, I expect it to last longer. I own a very nice Gore-Tex jacket from a big brand name that cost a similar amount as the Assynt jacket. It’s my ‘special occasions’ jacket, for on past experiences I know that in intensive use it wouldn’t last more than a season. I have no such quibbles with the Hilltrek clothing, there is a sturdy feel to it, and it is obvious that it was not only made in Scotland, but also for Scotland.

by tf at April 30, 2018 09:57 AM

April 16, 2018

Tomas Frydrych

Cooking with Alcohol

In the last couple of years I have become a great fan of alcohol stoves. For three reasons. On short trips they are very weight-efficient. Alcohol is a much more environmentally friendly fuel than gas. And alcohol stoves are cheap to run!

As I have mentioned before, through my childhood and teenage years outdoor cooking involved an open fire. My first real stove was an MSR WhisperLite™ purchased in the Mountain Equipment Co-op in Vancouver in ’96. I still have it, with the original seals and that, though I haven’t used it for some years. The truth is that petrol stoves really come into their own on long remote trips and I don’t do those. And they take a bit of getting used to, the priming can easily get out of hand!

(On one particularly memorable occasion in Glen Brittle in the late ’90s the WhisperLite™ got me invited to cook in the kitchen of a giant luxury mobile home by a kind German couple who thought my stove was broken when I misjudged the volume of the priming fuel, resulting in a flare worthy of Grangemouth. The trick, I learnt eventually, is to use a little cotton wool and meths, but by then I also realised that this excellent stove was a poor match to my needs.)

And so, like everyone else, I switched to gas.

Gas stoves, without a question, win on the convenience front. There is no risk of spilling stinky fuel, no priming. But they have their drawbacks, not least the fuel is expensive and environmentally unfriendly — the LPG gas brings with it the whole oil industry baggage, the cartridges are manufactured in the Far East then shipped around the globe, and, being non-refillable, they end up in landfill (or left in a bothy); these things increasingly bother me.

Gas stoves are also rather weight inefficient. I didn’t fully appreciate this until I started thinking of multi day running trips, and was forced to rationalise the weight of my kit. My first move was, of course, a lighter gas stove, the 25g BRS-3000T. It only took a couple of trips to realise this was a dangerous piece of crap (mine flares uncontrollably sideways at any attempt to reduce the flame; sometimes we really get what we pay for).

In any case, if the objective is to reduce weight, even the lightest of gas stoves doesn’t help much, for the fundamental problem lies with the canister: on the one hand, I have very little control over how much fuel I take, and on the other the canister is far too heavy. So if my requirement is, say, for 60g of gas, I have to take 110g, plus the 120g of the canister; if I need 120g, I have to take 220g of it, plus 180g of the canister, etc.

Just as petrol/kerosine stoves beat gas in the weight game for long trips, alcohol stoves do so for short trips. Alcohol has two big advantages: it is very easy to store and transport, with minimal weight overheads, and it is very easy to burn, making it possible to create simple, light stoves.

Of course, burning alcohol produces only about half the amount of energy per weight as gas. But for short trips this is more than offset by the weight of the canister: if you need 60g of gas you have to pack 230g of fuel + canister; for an alcohol stove the equivalent will come to ~140g. Broadly speaking the weight game works out in favour of alcohol, or at least level pegging, until you need enough fuel to take the big 460g gas canister. How long a trip that is will depend on your cooking style, but in my case that is 3+ solo nights when snow melting, and something like 10+ nights in the summer.

And alcohol is cheap, and the environmental footprint is much smaller. There are, of course, downsides; most notably, cooking with alcohol takes longer. How much longer depends on the actual stove, so let’s talk about the stoves.

Alcohol burners come in two basic types: pressurised and unpressurised. An unpressurised stove is really just a small bowl holding the fuel, burning the vapours as they rise from the surface. While this works perfectly fine, such an open bath stove is potentially quite dangerous because of the risk of spilling the burning fuel; this is easily remedied by filling the bowl with some kind of a fireproof porous material. The simplicity means unpressurised stoves are usually home, or cottage, made.

In the case of a pressurised stove, the fuel vapour is expelled under pressure from an enclosed fuel reservoir through a series of small holes, resulting in discrete jets of flame. Unlike with petrol/kerosene, this pressure is simply created by heating up the fuel in the reservoir, and is not very high. It does, however, mean that the stove has to have some way of priming. Most often this comes in the form of an open bath in the centre of the burner. The best known pressurised alcohol stove is undoubtedly the Trangia, but this type of burner can also be made fairly easily at home from a beer can, e.g., the famous Penny Stove — beer can stoves are neat and really fun to experiment with (but they are also quite large and fragile).

(Before going any further, it is worth saying that alcohol stoves always need a windshield; the flame is just too feeble to cope with even a slight breeze. The cheapest, and also the most lightweight, option is to make one from a double layer of kitchen foil. If you look after it, it will last quite a while, but it is too light for use in real wind, though perfectly fine for in-tent use. Of course, alternatives, commercial or otherwise, exist.)

Back to stoves. So, which one is better, pressurised or not? The clued-up reader, who undoubtedly now expects a detailed discussion on the fuel efficiency of the different designs, is going to be most disappointed in me. As exciting as carefully measuring the fuel burnt by different models to find the Ultimate Stove is, when it comes to alcohol such comparisons are of very limited value.

The fuel efficiency of any stove really comes down to a single thing: is the vapourised fuel mixed with enough oxygen to allow complete combustion? In the case of all alcohol stoves the mixing happens above the burner, and so is given more by the size of the pot, its distance from the burner, and the airflow provided by the windshield, than the design of the burner itself. Consequently any comparison is only valid for the one specific testing configuration, and you will almost certainly be able to come up with a different setup to produce quite different results.

OK, but, which is better? They both have advantages and disadvantages. The great thing about unpressurised burners is you can put in as little or as much fuel as you want, and if there is any left, you screw the lid on, and it will keep till the next time. Also the variety with the absorbent material is the safest alcohol stove there is (and one shouldn’t really underestimate the danger of spilling the burning fuel, as the flames are nigh invisible).

The main advantage of a pressurised stove is a higher rate of burn, i.e., it cooks faster. But it is quite difficult to make a really tiny one, because below a certain size the priming/gasification doesn’t work very well. Also, the usual method of priming from an open bath on the top of the burner is super inefficient, and for there to be enough fuel in the bath, the stove generally needs to be filled near to capacity. For the smaller stoves, this will be around 30ml — if, like me, you only use 50ml per day and less than 15ml at a time, this is a nuisance, as there is always significant, unavoidable loss due to continued evaporation while the stove cools down before you can pour the excess out of it, and there is always spillage when draining it.

If the burner doesn’t sit directly on the ground, it is possible to prime from below, using a small vessel, e.g., a bottle cap. This is much more efficient and needs just a few drops of fuel. But it is quite tricky to get right and requires practice — if you use too much fuel, you get a flare-up, not ideal in a tent!

An unpressurised stove is great for solo summer use. Mine is of the makeup case variety; if you look carefully, you can see it among my other cooking paraphernalia in the title picture (taken on an unexpectedly cold autumn morning in the Cairngorms; the -4C meant I had to boil an extra pot that morning to pour into the running shoes to soften them up).

The stove came from redspeedster on eBay (you could easily make your own, but it’s not worth sourcing the materials for just one). It has a 30ml capacity, and with the nice pot supports he also makes it comes to 24g. In my setup, using a 1/2l pot, it will bring 400ml from 8C to a rolling boil in about 12min, using 11g of fuel — yes, it’s slow, but then I rarely need a rolling boil, so my actual ‘boil’ time is ~8 min, and really, I have all the time in the world; after all, I am escaping the time-obsessed rat race.

But once you start looking at cooking for more than one person the unpressurised stove becomes impractical. I still want something small, i.e., not the family sized Trangia, but nevertheless something faster.

The Vargo Triad fits the bill. It’s a nicely made little gadget, and has about double the rate of burn of my makeup stove, bringing 0.8l of water from 8C to rolling boil in just under 13min, using 23g of fuel. This will do nicely for our next summer trip, I reckon. It’s a pity the burner does not have a cap, but I have cut a circle from a silicon baking sheet to cover it, which reduces fuel evaporation after the stove is extinguished and is cooling down.

[Updated 14/8/18 -- I have run into significant issues in real use of this stove, see this post]

My current quest is to find an alcohol stove I’d be happy with for winter use. During winter I tend to heat up about twice the amount of water than I do during the summer (~3l), while at the same time snow melting roughly doubles the energy requirements (I am sure I could cut this down by manning up, but TBH, the winter brings enough misery as it is). The Triad at near-full fill of 35g of fuel will just bring 1/2 litre of water from snow to boil in 20min — for winter solo use I think this is borderline, I’d prefer something that would do about 0.8l at a time and a bit faster.

The Vargo Decagon looks like a possible option. The 60ml capacity should be enough to melt 0.8l from snow, and it appears to have a considerably higher burn rate than the Triad. But by all accounts, the Decagon is very slow priming, and unlike the Triad the pot can’t go on until the priming is finished (the pot sits directly on the top of the burner, so it conducts heat away from the burner); it also doesn’t lend itself as well to bottom priming as the Triad, nor can it be so easily capped. Nevertheless, I am keen to give it a try, preferably while there is still some snow in the local hills.

[Updated 14/8/18 -- I have run into significant issues in real use of this stove, see this post]

by tf at April 16, 2018 12:46 PM

March 15, 2018

Emmanuele Bassi

pkg-config and paths

This is something of a frequently asked question, as it comes up every once in a while. The pkg-config documentation is fairly terse, and even pkgconf hasn’t improved on that.

The problem

Let’s assume you maintain a project that has a dependency using pkg-config.

Let’s also assume that the project you are depending on loads some files from a system path, and your project plans to install some files under that path.

The questions are:

  • how can the project you are depending on provide an appropriate way for you to discover where that path is
  • how can the project you maintain use that information

The answer to both questions is: by using variables in the pkg-config file. Sadly, there’s still some confusion as to how those variables work, so this is my attempt at clarifying the issue.

Defining variables in pkg-config files

The typical preamble stanza of a pkg-config file is something like this:

prefix=/some/prefix
libdir=${prefix}/lib
datadir=${prefix}/share
includedir=${prefix}/include

Each variable can reference other variables; for instance, in the example above, all the other directories are relative to the prefix variable.

Those variables can be extracted via pkg-config itself:

$ pkg-config --variable=includedir project-a
/some/prefix/include

As you can see, the --variable command line argument will automatically expand the ${prefix} token with the content of the prefix variable.

Of course, you can define any and all variables inside your own pkg-config file; for instance, this is the definition of the giomoduledir variable inside the gio-2.0 pkg-config file:

prefix=/usr
libdir=${prefix}/lib

…

giomoduledir=${libdir}/gio/modules

This way, the giomoduledir variable will be expanded to /usr/lib/gio/modules when asking for it.

If you are defining a path inside your project’s pkg-config file, always make sure you’re using a relative path!
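
For instance, a hypothetical project-b.pc would declare its own module path in terms of ${libdir}:

projectbmodulesdir=${libdir}/project-b/modules

and not hard code the expanded result, such as /usr/lib/project-b/modules.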

We’re going to see why this is important in the next section.

Using variables from pkg-config files

Now, this is where things get complicated.

As I said above, pkg-config will expand the variables using the definitions coming from the pkg-config file; so, in the example above, getting the giomoduledir will use the prefix provided by the gio-2.0 pkg-config file, which is the prefix into which GIO was installed. This is all well and good if you just want to know where GIO installed its own modules, in the same way you want to know where its headers are installed, or where the library is located.

What happens, though, if your own project needs to install GIO modules in a shared location? More importantly, what happens if you’re building your project in a separate prefix?

If you’re thinking: “I should install it into the same location as specified by the GIO pkg-config file”, think again. What happens if you are building against the system’s GIO library? The prefix into which it has been installed is only going to be accessible by the administrator user; or it could be on a read-only volume, managed by libostree, so sudo won’t save you.

Since you’re using a separate prefix, you really want to install the files provided by your project under the prefix used to configure your project. That does require knowing all the possible paths used by your dependencies, hard coding them into your own project, and ensuring that they never change.

This is clearly not great, and it places additional burdens on your role as a maintainer.

The correct solution is to tell pkg-config to expand variables using your own values:

$ pkg-config \
> --define-variable=prefix=/your/prefix \
> --variable=giomoduledir \
> gio-2.0
/your/prefix/lib/gio/modules

This lets you rely on the paths as defined by your dependencies, and does not attempt to install files in locations you don’t have access to.

Build systems

How does this work, in practice, when building your own software?

If you’re using Meson, you can use the get_pkgconfig_variable() method of the dependency object, making sure to replace variables:

gio_dep = dependency('gio-2.0')
giomoduledir = gio_dep.get_pkgconfig_variable(
  'giomoduledir',
  define_variable: [ 'libdir', get_option('libdir') ],
)

This is the equivalent of the --define-variable/--variable command line arguments.
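
The resulting value can then be fed straight back into the build. As a minimal sketch (the module name and source file are made up), installing a GIO module into that path could look like:

shared_module('my-gio-module', 'my-gio-module.c',
  dependencies: gio_dep,
  install: true,
  # giomoduledir comes from the snippet above; with the default, relative,
  # libdir option it resolves to a path under this project's own prefix
  install_dir: giomoduledir,
)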

If you are using Autotools, sadly, the PKG_CHECK_VAR m4 macro won’t be able to help you, because it does not allow you to expand variables. This means you’ll have to deal with it in the old fashioned way:

giomoduledir=`$PKG_CONFIG --define-variable=libdir=$libdir --variable=giomoduledir gio-2.0`

Which is annoying, and yet another reason why you should move off from Autotools and to Meson. 😃

Caveats

All of this, of course, works only if paths are expressed as locations relative to other variables. If that does not happen, you’re going to have a bad time. You’ll still get the variable as requested, but you won’t be able to make it relative to your prefix.

If you maintain a project with paths expressed as variables in your pkg-config file, check them now, and make them relative to existing variables, like prefix, libdir, or datadir.

If you’re using Meson to generate your pkg-config file, make sure that the paths are relative to other variables, and file bugs if they aren’t.
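
With Meson’s pkgconfig module, that boils down to something along these lines (a sketch; the library target and the variable name are invented):

pkg = import('pkgconfig')
pkg.generate(
  name: 'project-a',
  description: 'An example project',
  version: '1.0',
  libraries: libproject_a,
  # the custom path variable references ${libdir} rather than an absolute path
  variables: [ 'projectamodulesdir=${libdir}/project-a/modules' ],
)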

by ebassi at March 15, 2018 04:45 PM

March 06, 2018

Ross Burton

Rewriting Git Commit Messages

So this week I started submitting a seventy-odd-commit-long branch where every commit was machine generated (but hand reviewed) with the amazing commit message of "component: refresh patches". Whilst this was easy to automate, the message isn't acceptable to merge, and I was facing the prospect of copy/pasting the same commit message over and over during an interactive rebase. That did not sound like fun. I ended up writing a tiny tool to do this and thought I'd do my annual blog post about it, mainly so I can find it again when I need to do it again next year...

Wise readers will know that Git can rewrite all sorts of things in commits programmatically using git-filter-branch, and this has a --msg-filter argument which sounds like just what I need. But first a note: git-filter-branch can destroy your branches if you're not careful!

git filter-branch --msg-filter has a simple behaviour: give it a command to be executed by the shell, the old commit message is piped in via standard input, and whatever appears on standard output is the new commit message. Sounds simple but in a way it's too simple, as even the example in the documentation has a glaring problem.
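
To see that contract in action on a throwaway branch, a purely illustrative filter could be as small as this (run it only on commits you don't mind rewriting):

git filter-branch --msg-filter 'sed "1s/$/ [refreshed]/"' HEAD~3..HEAD

Here the old message arrives on sed's standard input, and whatever it prints becomes the new message.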

Anyway, this should work. I have a commit message in a predictable format (component: refresh patches) and a text editor containing a longer message suitable for submission. I could write a bundle of shell/sed/awk to munge from one to the other, but I decided to simply glue a few pieces of Python together instead:

#!/usr/bin/env python3
import sys, re

# argv[1]: file containing a regular expression to match the incoming message
# argv[2]: file containing the template for the new message
input_re = re.compile(open(sys.argv[1]).read())
template = open(sys.argv[2]).read()

# filter-branch pipes the original commit message in on standard input
original_message = sys.stdin.read()
match = input_re.match(original_message)
if match:
    # substitute any named groups from the match into the template
    print(template.format(**match.groupdict()))
else:
    # pass non-matching messages through untouched
    print(original_message)

Invoke this with two filenames: a regular expression to match on the input, and a template for the new commit message. If the regular expression matches then any named groups are extracted and passed to the template which is output using the new-style format() operation. If it doesn't match then the input is simply output to preserve commit messages.

This is my input regular expression:

^(?P<recipe>.+): refresh patches

And this is my output template:

{recipe}: refresh patches

The patch tool will apply patches by default with "fuzz", which is where if the
hunk context isn't present but what is there is close enough, it will force the
patch in.

Whilst this is useful when there's just whitespace changes, when applied to
source it is possible for a patch applied with fuzz to produce broken code which
still compiles (see #10450).  This is obviously bad.

We'd like to eventually have do_patch() rejecting any fuzz on these grounds. For
that to be realistic the existing patches with fuzz need to be rebased and
reviewed.

Signed-off-by: Ross Burton <ross.burton@intel.com>
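
Before wiring this into filter-branch it's worth a standalone sanity check; the recipe name here is made up:

$ echo "glibc: refresh patches" | python3 rewriter.py input output

which prints the template above with glibc substituted in, while anything that doesn't match the pattern passes through unchanged.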

A quick run through filter-branch and I'm ready to send:

git filter-branch --msg-filter 'rewriter.py input output' origin/master...HEAD

by Ross Burton at March 06, 2018 05:00 PM

March 02, 2018

Emmanuele Bassi

Recipes hackfest

The Recipes application started as a celebration of GNOME’s community and history, and it’s grown to be a great showcase for what GNOME is about:

  • design guidelines and attention to detail
  • a software development platform for modern applications
  • new technologies, strongly integrated with the OS
  • people-centered development

Additionally, Recipes has become a place to iterate on design and technology for the rest of the GNOME applications.

Nevertheless, while design patterns, toolkit features, Flatpak and portals, are part of the development experience, without content provided by the people using Recipes there would not be an application to begin with.

If we look at the work Endless has been doing on its own framework for content-driven applications, there’s a natural fit — which is why I was really happy to attend the Recipes hackfest in Yogyakarta, this week.

Fried Javanese noodles make a healthy breakfast

In the Endless framework we take structured data — like a web page, or a PDF document, or a mix of video and text — and we construct “shards”, which embed the content, its metadata, and a Xapian database that can be used for querying the data. We take the shards and distribute them through Flatpak as a runtime extension for our applications, which means we can take advantage of Flatpak for shipping updates efficiently.

During the hackfest we talked about how to take advantage of the data model Endless applications use, as well as its distribution model; instead of taking tarballs with the recipe text, the images, and the metadata attached to each, we can create shards that can be mapped to a custom data model. Additionally, we can generate those shards locally when exporting the recipes created by new chefs, and easily re-integrate them with the shared recipe shards — with the possibility, in the future, to have a whole web application that lets you submit new recipes, and the maintainers review them without necessarily going through Matthias’s email. 😉

The data model discussion segued into how to display that data. The Endless framework has the concept of cards, which are context-aware data views; depending on context, they can have more or less details exposed to the user — and all those details are populated from the data model itself. Recipes has custom widgets that do a very similar job, so we talked about how to create a shared layer that can be reused both by Endless applications and by GNOME applications.

Sadly, I don’t remember the name of this soup, only that it had chicken hearts in it, and that Cosimo loved it

At the end of the hackfest we were able to have a proof of concept of Recipes loading the data from a custom shard, and using the Endless framework to display it; translating that into shareable code and libraries that can be used by other projects is the next step of the roadmap.

All of this, of course, will benefit more than just the Recipes application. For instance, we could have a Dictionary application that worked offline, and used Wiktionary as a source, and allowed better queries than just substring matching; we could have applications like Photos and Documents reuse the same UI elements as Recipes for their collection views; Software and Recipes already share a similar “landing page” design (and widgets), which means that Software could also use the “card” UI elements.

There’s lots for everyone to do, but exciting times are ahead!

And after we’re done we can relax by the pool


I’d be remiss if I didn’t thank our hosts at the Amikom university.

Yogyakarta is a great city; I’ve never been in Indonesia before, and I’ve greatly enjoyed my time here. There’s lots to see, and I strongly recommend visiting. I’ve loved the food, and the people’s warmth.

I’d like to thank my employer, Endless, for letting me take some time to attend the hackfest; and the GNOME Foundation, for sponsoring my travel.

The travelling Wilber


Sponsored by the GNOME Foundation

by ebassi at March 02, 2018 12:50 AM

February 25, 2018

Tomas Frydrych

Coille Coire Chuilc

It’s been a long time since Linda and I climbed Beinn Dubhchraig. Just another couple of Munros bagged. Not a very memorable day of drizzle and nay views, leaving a lingering impression of a long trot through a bog punctuated by spindly pine trees, and no urge to return. One that persisted for a couple of decades. But today couldn’t be more different: the sky is blue, the air is crisp, the ground is frozen. And those spindly trees? They are no more.

Instead I find myself at the edge of a delightful Caledonian pine forest inviting me to step in. And so I do, walking along the east bank of Allt Gleann Auchreoch to the dilapidated bridge higher up the glen, then wandering about the woodland south of Allt Coire Dubhchraig, before following it up the hill. There are some magnificent pine specimens here, framing the views over to Ben Challuim and Beinn Dorrain. And higher up the pines are replaced by young birches that are rapidly continuing to self-seed, the purple hue of their twigs striking against the snow-covered ground.

Beinn Dubhchraig is in a magnificent winter condition, there is much more snow than I expected, and all perfect firm neve. I enjoy the views: Beinn Dorrain, Ben Challuim, the Crianlarich hills, Ben Oss, Ben Lui. As I nip up the rather windy Ben Oss, Ben Lui looks particularly majestic — I imagine it will be very busy on a day like this.

On the way down I sit under a large pine for a bite to eat, enjoying the afternoon sunshine. A perfect day. It is rare for my days out to bring together the two places where I feel most at home, the hills and the woods. I usually have to choose the one over the other. It needn’t be this way, nor should it. Here in the midst of Coille Coire Chuilc I am reminded that, given will, a real change is possible in less than a lifetime. And just now I can smell it coming on the breeze.

by tf at February 25, 2018 11:08 AM

February 15, 2018

Tomas Frydrych

Pine Seeds

Over the twenty something years since the National Trust for Scotland took over the Mar Lodge Estate, the upper Glen Lui (or, Gleann Laoigh Bheag, as it is properly called), has become a real gem of a place. But today is not exactly a gem of a day. There might be fluffy fresh snow on the ground, but it's breezy, and visibility is limited indeed. Some might think it outright miserable!

Or, a natural black and white scene, you might say. In any case, the sort of a day nobody goes out for The Views. I am on my way down to the Bob Scott bothy for a lunch before heading back to civilisation. An end to three days in the hills. Carefully planned in rough outlines, then (even more) carefully improvised, to match the reality of the winter Cairngorms.

A brief promise of sunshine blown away somewhere below the summit of Derry Cairngorm on Sunday morning, leaving just the wind and thick cloud. The map came out there and then, and pretty much stayed out since. Careful navigation over the summit and onto the 1053 point bealach, then down to Loch Etchachan, in hope the cliffs surrounding it will provide some shelter from the strong westerly for the night.

Down at the loch it is indeed much calmer, though you wouldn't know there is a loch down here under all that snow. Care is needed not to pitch inside a possible avalanche path, not just in view of what the conditions are like now, but what they could be in the morning. And so I dig myself a nice rectangular platform, about a foot or so deep, nearly on the loch shore. Not as sheltered as it might have been, but safe.

I am done just as the light starts fading. A coffee. While the snow is melting, a couple of messages exchanged with Linda using my InReach, then dinner. One of Ian Rankin's (audio)books for company by candlelight, followed by undisturbed sleep.

Monday morning starts with porridge, then digging myself out of the tent, glad to have kept the shovel inside. I am surprised by the amount of snow drift, my neat rectangular platform all but gone, and the kit I left in its corner buried under a good two feet of snow. A scarily compacted, fresh, foot-thick windslab capping it all.

Beinn Mheadhoin teases me with some lovely pink tones, but barely long enough to get the camera out -- time to get moving.

The (careful) plan was to camp here for two nights, but it is obvious that if I leave the tent here I will have hard time finding it later, and, more importantly, this place is too exposed for the 70mph southerly forecast for tonight. And so I pack my stuff, all 24kg of it (minus some food, plus some snow), put the snowshoes on, and set off into the clag for Ben Macdui, selecting my route carefully, mindful of the windslab I saw down there.

The wind picks up in no time, and while this is a familiar ground, I need a map and compass to keep me on track. I am comfortable with being here, the conditions are challenging, but, I dare say, within my comfort zone. And yet on a day like this, the plateau is one scary place (as it should be). Navigating here is hard, errors easy to make, opportunities to spot and correct them few and far apart. Escape routes limited even in the summer, for cliffs abound in all directions, and in the present winter conditions some of them, if not most, are unsafe.

The spindrift is heading along the surface directly against me, flowing around my boots like a fast river. It is making me feel dizzy, even seasick, yet my eyes are irresistibly drawn to it. A new experience. Keep looking forward, above it, rather than at it; that does the trick.

The ruin, then after a while the summit. Too windy to hang around. I take a bearing for the 'corner' of the Sron Riach ridge, pace and follow it religiously, using Allt Clach nan Taillear as a tick off point. A couple of jets flying repeatedly overhead, or perhaps just the wind swirling around in my hood; I can't tell. I reach the rocky corner bang on, pleased with myself.

As I am taking my next bearing from the map, there is a brief rupture in the cloud offering a glimpse of the cornices lining the ridge -- they are some of the biggest cornices I have ever seen, meters of overhanging snow. Back into the clag. I back off a good 30m from the edge before daring to follow my bearing, and even then nervously (the lack of photos is my witness). Visibility is 5-10m; I make a point of always keeping some visible rocks peeking out of the snow to my left.

I finally emerge from the whiteness at around the 1100m contour line with a sigh of relief and sight of the Devil's Point, the first real 'view' of the day. Even better, I can also see that my preferred option for today, descending down the line of Caochan na Cothaiche is viable, for its eastern side is fully scoured, and poses no avalanche risk. In contrast, lower down the western side of the narrow gully has a huge build up of snow on it, and cornices, with some fresh debris lower down.

The floor of the glen is not entirely calm, but it will do. I dig another platform, pitch the tent. It's early, but this spot is as good as it will get. From here there is a direct line of sight under the clouds down Glen Lui onto the Glen Shee Munros -- it's sunny over there, and I feast my eyes on the vista, nursing a cup of coffee. Dinner (not much gas left), message to Linda, then time for some John Le Carre.

The wind arrives at 9.30pm, as the forecast promised. The usual moment of anxiety -- will the pegs stay in? Should I go out and check? I don't. I dug right down to the frozen turf and double pegged all the lines, they are going nowhere, or rather, I can't do any better anyway (I double peg as a matter of course, 20cm Y paracord extensions permanently on all the guylines). I briefly toy with sticking the anemometer out of the tent, but can't be bothered looking for it, I guess somewhere around the 40+mph mark. It's over as suddenly as it started not long past midnight (again just as forecast), and I sleep soundly after that.

Tuesday morning. I give the tent a good shake. The porch is covered by an inch of the finest powder I have ever seen, and I curse myself for not tidying up more last night, rummaging through it looking for my spork. At least I covered the tops of my boots with bags. I drain the gas to the very last drop (thank God for upside down canister stoves!); there is, just, enough for my porridge and a litre of warm water. Outside it's windy and snowing.

As I pack, the snow is depositing on the tent faster than I am sweeping it away, and after a couple of minutes I give up and just roll it in. Snowshoes on and into the blizzard. Goggles would have been useful, but they are too wet inside to be any use, and no amount of wiping is helping. At least there is no navigating to be done, just follow the stream down the narrow glen.

And so here I am on the nice path in Gleann Laoigh Bheag. It stopped snowing a while ago, and there is but a breeze, four or five inches of fluffy snow covering everything. The pines are looking very Christmassy, in a It's a Wonderful Life sort of B&W way. Pristine scenery, no footprints, fresh or old.

My eye catches the sight of a small brown speck on the undisturbed snow, then another. I bend a bit to take a closer look. A pine seed. They are all around me; they have come from heaven down to earth gliding on their little wings. In the midst of this bleak, inhospitable day, life is being, not born, but hewn out by the gale from the cones; life against the odds. A promise of a brighter, greener future, one hearkening back to the days before the axe and saw laid this landscape barren.

The bothy is warm. A bit of food, a bit of banter. Then I step outside ... into a different world. The cloud has broken, the sky is blue, the sunlit landscape postcard perfect -- The Views. But the views, they come and go. The pine seeds, I expect some of them I will see again in the years to come. From now on, every time I see a seedling in Gleann Laoigh Bheag, I'll be wondering, is it you?

But 'nough idle musings. The most pressing existential question of today is this: will the Glen Shee snow gates be open? For I am back in the 'real' world.

by tf at February 15, 2018 09:48 AM

February 05, 2018

Tomas Frydrych

A Lesson from the Wee Hills

Days like these don’t come around that often. After a couple of brief snow flurries the sun banished the cloud, and now the early morning light glitters on the pristine slopes of Beinn Challuim. It is nearly exactly twenty years since I’ve been up here last, in very different conditions; a memorable day, though not for the best of reasons.

When I first arrived in Scotland I was by no means new to the outdoors or the hills. I am fortunate enough to have spent much of my free time in the open since early childhood, exploring the woods, hiking, wild camping, ski touring. From my mid teens treks in the Tatras, and farther afield, became a regular feature of the summer holidays — half a dozen friends of similar age, minimal equipment, high level camps mostly just under the stars.

Over those years there had been a few #microepics, including a couple of close shaves, and by the time I landed in Scotland as a postgraduate student in the mid ‘90s I had gained a healthy respect for the mountains, summer and winter alike. But compared to even the smaller continental ranges Scotland’s ‘wee hills’ — their summits barely reaching the altitudes of Alpine valleys — seemed innocuous and benign.

It didn’t take long to get disabused of that idea. Looking back, some of the incidents we now laugh at. Like when, having ignored Heather the Weather’s warning of 70mph winds, I left Linda a few hundred yards from the summit of Meall Ghaordaidh, weighed down by a large stone, while I crawled on all fours to touch the summit cairn (all I can say is, we were young and weekends were precious). But even after all that time, the Beinn Challuim day is still not that funny.

As a research student I discovered that clearing my head with a midweek day in the hills much improved my overall productivity, and so Wednesday outings became a regular part of my studies. Even nowadays the hills tend to be fairly quiet midweek, but back then I never ever met anyone. Indeed, tales were circulating of injured midweek hill walkers surviving a couple of days on biscuits until someone turned up at the weekend.

This might seem far fetched, but in those days mobile phones were almost a novelty, cellular signal virtually nonexistent outside of the Central Belt, and consumer GPS units still a few years away — those who got lost in the hills were on their own until someone reported them missing; self reliance was, necessarily, a part of essential hillcraft.

As I expect you have guessed, this particular Wednesday in late February I was heading up Beinn Challuim. I have never been much of a fan of there-and-back outings, and so decided to leave the car at the Auchtertyre farm, and do a horseshoe starting with Beinn Chaorach.

It was not a very nice day, with an unpleasant westerly, sleeting heavily. Having experienced similar conditions a few weeks earlier in the Drummochter hills, I invested in a pair of goggles (not a negligible expense), which on this day didn’t come off my face (sadly, the sleet was so saturated that the glue between the double lens failed in the course of the day).

Visibility gradually deteriorated and by the time I reached Beinn Challuim, I was in a complete whiteout. I wasn’t put out by any of it. I had an excellent Berghaus GoreTex jacket that kept me dry (which I was just about able to afford thanks to James Leckie of Falkirk) and carried two big flasks of hot drink and plenty of food — really, I was in my element, relishing the adversity. But by this point I was also beginning to feel quite tired, it was turning out to be a longer day than I planned.

Fortunately all that was left was the descent back to the car. This should have been quite straightforward, and such was my confidence in my ability to navigate that I didn’t feel the need to get the compass out. I was sure the map alone was going to be enough to follow the ‘obvious’ broad ridge. And indeed, the ridge was easy to follow, but somehow progress was slow.

Too slow. I emerged from the cloud eventually but alas, things were not as they should have been. I should have been near the Auchtertyre farm, or at worst near Kirkton, and certainly near a railway track, but I saw no houses and no track. I ended up somewhere in the Lochan a’Craoi area above Inverhaggernie — to this day I am unsure of the exact location — and I was in for a long walk back, with not much of the day left.

I was spared some of it by a couple of ghillies on a quad bike, two hinds on a small trailer behind it. They offered me a lift to Inverhaggernie, ‘if you don’t mind sitting on the deer’. I didn’t mind in the slightest.

That day was the end of the ‘wee hills’ mentality, for I knew I got lucky. The careless navigation mistake per se was not super serious, at least I ended up on the right side of the hill, but I understood that I could have easily made a similar one earlier in the day and ended up further north in the Forest of Mamlorn — that would have been a whole different proposition. I started taking the weather a lot more seriously from there on, and I also updated my personal Freeserve page about the Scottish hills with a dire warning to the foreign visitor about the deceptiveness of their size, and the nastiness of their inclement climate.

Today Beinn Challuim summit offers views for miles in any direction, and there is no wind, not even a breeze. There are three of us lingering up here, none feeling like leaving. Eventually I descend the W-NW spur toward Bealach Ghlas-Leathaid — that wasn’t my original plan, but twenty years on I still don’t like there-and-back days. It proves to be a good choice. The lower part of Gleann a’Chlachain is a kaleidoscope of colours, their tones striking in the low afternoon light. I stroll leisurely back to Kirkton basking in the sun. There is no hurry, and like I said, days like these don’t come around that often.

by tf at February 05, 2018 09:31 PM

February 03, 2018

Tomas Frydrych

Mountain Star

It was love at first sight. Those smooth curves, precision crafted from a solid block of stainless steel, the needle-sharp point, the smooth black, fully rubberised, shaft on which big red letters proudly declared:

Stubai — Made in Austria

She (for to my teenage mind that ice axe was definitely she) hung proudly in the window of the small climbing equipment shop. It wasn’t that I needed an ice axe, I wasn’t a mountaineer. But there was something deeply symbolic about it that made me pine for it.

It wasn’t just the smooth curve that drew the eye, there wasn’t that much else to look at in the window of the state-owned shop. A few steel carabiners, a Czech-made rope — in Czechoslovakia of my teenage years climbing gear wasn’t something you bought, it was something you made.

And so my first clumsy attempts to learn how to self-arrest were with a slater’s hammer, belonging to a friend and re-purposed by a blacksmith, an acquaintance of an acquaintance. Another friend, a machinist, who unlike me was a proper climber, made a pair of technical axes at his work, based on some pictures from a foreign magazine he managed to get hold of. He showed me with great pride, excited about the inverse picks.

(I learnt later that the pair failed on their first trip to the Tatras: my friend had over-tempered the steel and the tips snapped off; such is the nature of progress. But not in my wildest dreams would I have imagined that I would one day reminisce on this in Scotland, where the technical ice axe was born, and I expect Hamish MacInnes suffered some similar teething problems to those my friend ran into while trying to follow in his footsteps.)

But back to that Stubai. There was yet another reason why that axe stood out. The price tag of 700 Kčs — three weeks of moderately decent wages — put it well out of my reach. And not just mine; it hung there for years, an object of unrequited lust, while in real life tools were improvised and borrowed.

When a decade later I walked into The New Heights in Falkirk to get the first ice axe I could call my own and saw a Stubai Mountain Star hanging there, that was it, there was no other choice I could make. Not the same axe, not as refined, more mass produced. Not the cheapest option either. But the pedigree unmistakable, I wasn’t buying a tool, I was buying into a dream.

It’s a fine winter day today, sandwiched between two ugly fronts, and so I am making most of it. The views from An Caisteal are stunning, with just enough cloud to create an ambience on the ridges. As I lean on the ice axe, all these memories flood in.

Over the years there have been others. Some leaner, some meaner, some definitely prettier. Some long gone, some still around. The Mountain Star among the latter, after twenty something years a trusted companion. It does everything I have ever wanted from a walking axe, and does so perfectly. The chromoly requires no care except for the occasional sharpening, the length is perfect to offer support on easy ground, and I like the reassuring weight, the feeling a tool was made for life rather than a season.

I eat my lunch on the summit and contemplate how to return. The Stob Glas ridge is irresistible. It’s not very cold and as I approach Bealach na Ban Leacainn, my crampons are starting to ball up; time to take them off, once I reach a safe place. In the meantime, a practised, near effortless, flick of the wrist to tap them with my axe, and they are clear again. None of my other axes can do that, they are either too light, or too short, or both — the Mountain Star is going to stay for a while yet, I think. And tonight I shall raise a glass to whoever designed it all those years back. Prost!

by tf at February 03, 2018 11:55 AM

January 21, 2018

Tomas Frydrych

Discovering Snowshoes

I have thought about getting a pair of snowshoes a few times over the years, but never did. The copious quantities of snow at the tail end of last year finally gave me the needed nudge. Of course, as invariably happens, all that early snow summarily thawed away on the very day the snowshoes arrived, and I haven't had a chance to play with them until this week.

Having never snowshoed before, I thought an easy potter around the Ochils might provide just the right sort of an introduction, and it did (in fact I was having so much fun, I pottered around for over six hours till the last light). Perhaps it is the fact that I Telemark, and so am used to things dangling underfoot, but I found walking in snowshoes to be an entirely natural, zero learning curve, sort of a thing.

I was pleasantly surprised by the huge reduction in effort snowshoes provide. It does not come so much from not sinking so deep, as I imagined it would have, but rather from the way in which the snowshoes glide. Even when sinking half a meter or so, you don't need to lift your foot out, rather, as the foot starts moving forward, the snowshoe floats up to the surface. I'd go as far as to say that in deep snow this requires lot less effort than skinning would have, particularly with today's wide skis.

But where the snowshoes really come into their own is coming down hill. In nice deep soft snow I am able to move at a pace that is considerably faster than I would be walking down in the summer, indeed, not far off my running pace (though, admittedly, as a runner I am a slow descender). By the same token, I now understand why snowshoers feature so prominently in avalanche victim statistics -- it's really easy to get carried away (not unlike skiing, but skiers have had avalanche awareness drilled into them for decades, and it is paying off).

When I was shopping for the snowshoes, I had a set of fairly specific requirements:

  • Mountaineering-type, so they could cope with steeper terrain (means to an end),
  • Not too heavy, so as not to be too much pain to carry when things get more interesting,
  • Suitable for the Scottish conditions, with our variable-depth windblown snow cover, which means making contact with the rock beneath it from time to time (i.e., steel, rather than plastic / aluminium, and not a design where the membrane is attached by wrapping it around the frame).

In the end, in spite of the eye-watering cost, I settled on the MSR Lightning Ascent, which ticks all the boxes: the flat steel frame promises all-around traction, and the membrane is attached inside it, rather than wrapped. Also, they get good reviews.

I was so encouraged by my wee Ochils potter earlier in the week that yesterday I took my snowshoes for their first proper outing up and over Beinn Each onto Stuc a'Chroin and back. Ideal conditions, snow at places waist deep, and excellent fun. But also an opportunity to test the snowshoes in some more challenging terrain, including patches of steeper névé. All in all just over eight hours of true winter wonderland, of which I wore the snowshoes for at least seven (they only came off for the short steep descent from Beinn Each, and the final 50m of the Stuc).

That they work well in soft snow I already knew, but I was impressed with the traction provided by the frame and the crampon when going up firm névé. The main limiting factor here is that beyond a certain gradient the toe of the boot, rather than the rotating crampon, starts making contact with the ground, at which point the traction is compromised. The angle at which this happens is quite steep, steep enough to be stabbing the slope with the pick of an ice axe, rather than the spike, once the real crampons come on.

I got caught out this way on a short section of the Stuc. The main problem was not so much that I wasn't wearing real crampons, but that I was still using poles, while on a gradient that really called for an ice axe. Awkward shuffling off to a gentler slope to get the proper tools out followed (obviously, this is not a fault of the snowshoes, but a simple error of judgement).

Similarly, traction descending on firm névé is excellent, and broadly speaking, I found that I can descend comparable gradient to what I can sensibly ascend. In deep snow, however, the snowshoes become problematic on very steep ground, they have a tendency to run away more easily than just boots, and you can't really bum slide very well with them on. (And again, you will quite likely find yourself with poles rather than an axe.)

The main limitation of the MSR Lightning Ascent is poor lateral rigidity; this is a feature of the frame design (though the bindings don't help, on that below), and it makes traversing a firm slope very awkward. I have quickly realised that for short sections it is much more efficient to sidestep such ground, facing into the slope, but best of all is to pick a different line where possible, or to put the crampons on.

The bindings I am not hugely impressed with. They are designed to fit a variety of boots (I expect I could make them fit the Sorels I use to clear the drive), but really the best thing I can say about them is that they are easy to get out of fast. They are hard to tension right when putting them on, and two or three stops were needed each time to make adjustments. This does not improve the lateral stiffness either -- I am thinking for this sort of a technical snowshoe it would make sense to have crampon style step in bindings.

But all in all, would I buy them again? Definitely! I should have done so a long time ago.

by tf at January 21, 2018 03:52 PM

January 16, 2018

Tomas Frydrych

Pinto Bean Soup

My love of lentils and legumes of all sort goes as far back as I can remember. In recent years, the pinto has become my firm favourite among the beans, for it's a versatile legume of a gentle flavour that is easy to work with. The burrito use aside, the pinto is an excellent foundation for a bean salad, great in chili, and once you taste it baked with tomatoes, you will never want to eat Heinz again. And then there is the soup.

While I enjoy cooking, I don't always have the time for elaborate and time-consuming recipes. Fortunately good homemade food doesn't necessarily mean hours over the stove, and the pinto soup is an example of that -- it takes me under half an hour to make. The ingredients are simple, only the pinto beans, onion and chillies (fresh or crushed) are required, plus some stock; if you have carrots around, then they make a good addition, as does a bit of garlic, but you will get an excellent soup with just onion and chillies.

Being of the 'cooking is an art, not science' school of thought, I consider quantities mere minutia dictated by taste. But as a rough guideline, 400g of dry beans will make around three litres of the soup. For that I use two medium onions, and maybe a couple of larger carrots; chillies to personal taste.

Soak the beans over night (you can get away with less, but it impacts on the cooking time), then cook till soft -- using a pressure cooker hugely speeds this up. You will have to work out the exact timing for your pressure cooker yourself, but in ours, at 0.4 bar, pre-soaked pinto beans take 8min. Now, the secret to a good pinto bean soup is not to drain the cooking liquid, i.e., you should cook the beans in about as much water as you want in the final soup.

While the beans are cooking, chop the onion, not too fine, and fry it off with the chillies until nice and soft (I use rapeseed oil, I find the gentle flavour works well with the subtle flavour of the pinto). Add any garlic to the onion near the end.

Once the beans are ready mix in the onions, and any other ingredients, then add stock to taste (I quite like the Knorr stock pots, usually use one vegetable and one herb pot, but you might prefer something more wholesome and homemade instead). Bring to boil and cook (not pressure-cook!) for about 5min, or if using carrots, until they are soft.

That's it. As with many foods, the flavour will develop if it sits for a time rather than being served immediately. It will keep in the fridge for a couple of days, or it can be easily sterilised in a kilner-type jar if you want to keep it longer or there is not enough space in the fridge.

by tf at January 16, 2018 10:56 AM

January 13, 2018

Tomas Frydrych

And Time to Back Off

Forecast is not great -- high winds, increasing in the course of the day, temperature likely above zero regardless of altitude, and precipitation arriving by an early afternoon. The sort of a day when it's not worth carrying a tripod, or driving too far, yet at the same time not bad enough to just stay at home all weekend and brood (as I know I would).

So here I am at Inverlochlarig; before first light, in the hope of beating the worst of the weather. In the recent years this has become my preferred way into the 'Crianlarich' hills. I like tackling these seven Munros in a single continuous run -- just about the only enjoyable route I have been able to come up with near me that has climbing to distance ratio comparable to some of the bigger rounds. But that would be in the summer, and on a cracker of a day.

Today the plan is less ambitious: head up Beinn Tulaichean, and then, depending on conditions and time, onto Cruach Ardrain, and perhaps Stob Garbh, one way or another returning via Inverlochlarig Glen. I have not been up Beinn Tulaichean from this side for some two decades, and my memories from the last time are rather vague, so this outing has a degree of (welcome) novelty.

In view of the SE wind I decide to give the usual walkers' path a miss, and instead head up the western flank of the hill, in the shelter of the SW spur. This turns out to be a good choice, with only light wind. I eventually join the main ridgeline somewhere around the 600m contour line. Here my pocket anemometer registers just over 40mph (I carry one having realised I tend to overestimate wind speeds and hence underestimate forecasts). And spindrift. Time for some extra layers and the goggles.

Visibility is dropping rapidly with height, and by the time I reach the flatter area around the 750m contour line, it's down to ten yards. The compass comes out, from here on I am moving on a bearing as visibility continues to drop further. The terrain is quite complex here, lot of large boulders, with big gaps between them, now covered -- but not necessarily filled -- with snow. I narrowly avoid falling into a large hole that appears out of nowhere right in front of me just before the gradient steepens again.

There are two sets of fairly recent footprints here -- mine was the first car in the car park, so I am guessing from yesterday. I follow them cautiously, while keeping an eye on the needle; one can't be too careful. I lose them somewhere along the way.

I have reached a point where the ground starts descending again. I know I am near the summit now, but in the view of the complex terrain I need to get an accurate location fix. An altimeter would have been useful in these conditions, but I forgot to reset it earlier (a rare, and annoying, omission). I get the phone out; I prefer the map and compass, it sharpens the mind, but I am not a Luddite. I am 120m from the summit cairn, just a bit off the little col below it.

I get a bearing, reach the col. The light is so flat now that even in the goggles it is impossible to adequately judge the gradient under my feet. There is a step down, but I can't tell how big. I get on my knees, only with my face this close to the ground I can see it's not too steep, and it shouldn't be more than a couple of feet down. I descend gingerly.

On the other side the ground starts rising steeply -- the final 30m of ascent to the summit cairn. As I start climbing up I catch sight of what I think at first to be a small cornice above; in fact it's the fault line from an avalanche -- I am taken aback, the ground under my feet does not feel like avalanche debris, but for a short while I can see the fault clearly enough, including the poorly bonded layers within it. I realise that what I thought was an old line of obscured footprints a couple of meters to the left of me is a track made by some more recent debris coming from above.

I retreat back over the dip in the col to a safe place to assess the situation. The limited visibility is debilitating: though I am sure I am not more than twenty yards from it, the fault line is just a fuzzy shadow, if that, and I have no idea what the ground above it is like. The part of the slope I was on is unlikely to avalanche again, but on what I have seen so far, it is not unlikely that if I load the ground above the fault line it could release; I can't take the risk.

I don't mind not reaching the summit, but I hate giving up. I study the map. It seems it might be possible to contour hundred or so meters to the east and gain the summit from there. I take a bearing and start pacing the 100m, and voila, here are the two sets of footprints I saw earlier, heading the same way. But after only 50m or so they disappear under what this time is unmistakably avalanche debris, the whole eastern aspect of the summit is covered by it, as far down as I can see, while above me the same fault line continues beyond the limit of visibility.

I decide to pace the entire 100m. From here I can see that the avalanche is delimited by a rocky rib, but it seems too steep to climb it. I retrace my steps back to my safe spot. It's only now I notice that, inexplicably, there is almost no wind at this altitude. I must be in the lee of the Stob Binnein ridge, which also explains the heavy snow deposits on the ground above me.

On a windy day like today, one must not waste an opportunity like this. The flask comes out, I eat my piece; I am quite content now. Then a back bearing -- while I should be able to follow my footprints back the way I came, you never know.

I descend the usual tourist route, mostly following the two pairs of footprints I saw earlier. I can see the pair were conscious of the avalanche risk, taking a sensible line; indeed, a bit lower down they dug a snow pit on their way up.

I reach the snow line, with views of Loch Doine and Loch Voil. Time to take some pictures, and shed some layers; I am overheating. The jacket goes in the bag ... and the rain starts almost immediately. But who cares? I am glad of yet another good day in the hills.

by tf at January 13, 2018 09:08 PM

December 31, 2017

Tomas Frydrych

If Running were Everything ...

As a lad I used to spend Hogmanay with my friends at some remote and basic cabin, far away from the noise and clutter of the city. There were two customs we invariably welcomed the New Year in with. We chucked one of our mates into the nearest pond to mark his birthday (which meant cutting a hole though the ice the evening before). And then we sat down and each wrote a letter to themselves, reflecting on the year just gone by, hoping for the future, one of the more responsible lads charged with keeping the, gradually thickening, envelopes from Hogmanay to Hogmanay.

I still have that old envelope full of my teenage dreams somewhere, though it’s been many years since I’ve added a page; different times, different place. Yet, I was reminded of it yesterday reflecting on 2017, recalling with unexpected clarity that every year reading the previous year’s letter I was struck by how differently it panned out, indeed, how often those very aspirations were swept away by the flow of time.

If running was everything, and running stats something to worry about, with a mere 1,353km run and just 47,100m ascended, this would have been a decisively poor year. But running is not everything, and I couldn’t care less about stats.

It kicked off pretty well, with a late February trip to the remote Strathfarrar hills, providing minimal support to John Fleetwood on his Strathfarrar Watershed challenge. It was the first bigger outing I was able to do since October of the year before, and one which exceeded expectations—well worth an Achilles heel injury I picked up along the way, even if it kept me out of the hills for the next couple of months.

As always, our two week holiday in Assynt didn’t disappoint. The highlights included an extended variation on the Coigach Horseshoe, a run from Inchnadamph to Kylesku over the Stack of Glencoul (something I wanted to do for years, but never got to) and, what ultimately turned out to be my best, most memorable, day in the hills this year, a run taking in the south ridge of Ben More Assynt. That too came at a cost, another foot injury, one that, unfortunately, has plagued me for the rest of the year.

June brought the West Highland Way Race, on which Linda and I were crewing for our friend David, and while the whole trail running / ultra scene is not my kind of a thing, this was a truly special experience, and all in all possibly the most memorable weekend of our year.

In July Linda and I spent an excellent weekend fast-packing in the Cairngorms, and I also managed to squeeze in the Eastern Mamores and Grey Corries that eluded me last year, plus a couple of fun days on the south side of Glen Etive. But by the end of July I could no longer ignore the nasty plantar fasciitis I picked up in Assynt. By mid September the foot seemed back to normal, but that only lasted a couple of trail runs while on a visit to Portland, OR, and I haven’t run since.

I admit, over the last five months I have really missed running, not least because of the inescapable loss of fitness and the sniggering bathroom scale. There really is nothing like it, the simplicity, the lack of faffing, the fact I can run seven days a week from my front door if I want to.

Yet, at the same time, that gap created new opportunities. I have been spending more time in the woods, with no objectives, just binoculars and/or a camera. In many ways this has been very liberating, bringing back memories, and reminding me how much I miss proper forests in Scotland.

Then there have been numerous wee camping trips. I much prefer these to just single day hikes. I like the peace and quiet of a night in the hills, which you get even in the middle of a raging storm; the uninterrupted time to think, to listen to audio books (having reached an age where reading glasses have become a necessity, I avoid reading in the tent). The early mornings, the first light. (But also, during the single days out I always find myself wishing I was running, knowing that most of the time I could travel farther, along a more interesting route.)

And then, of course, after some nine months of planning, last May we launched runslessepic.scot, offering bespoke guiding services and navigation courses, as well as a rudimentary Hillcraft for Runners course. We are planning some guided hillrunning weekends in the summer; watch this space ...

So yes, that was my year. 2018? All I know is, it starts tomorrow, and I am going for a run first thing!

by tf at December 31, 2017 10:02 AM

December 23, 2017

Tomas Frydrych

The Crew that Slept in

The Crew that Slept in

The West Highland Way Race, with its 30+ year history, can only be described as an iconic classic. So when earlier this year our friend David got a place, Linda and I enthusiastically volunteered to join Gita (his partner) and McIver (their collie) to do the crewing. Little did we know what we were letting ourselves in for ...

For those who do not know, the West Highland Way is Scotland's premier long distance walking route that goes from Milngavie near Glasgow to Fort William. It is some 96 miles long, and involves nearly 15,000 feet of vertical ascent. Each year many thousands of people walk it, typically taking around a week to finish. The competitors in the Race, run since 1988, must complete the route in no more than 35 hours, and for that they receive a coveted commemorative crystal goblet.

What sets the Race apart from most other running events is that the prize giving ceremony only takes place after all the runners finish, so that all the runners, and crews, can be present; this makes for a very special occasion with a unique, hard to describe, atmosphere. But I am jumping ahead here.

Let's rewind to Friday evening, 23 June 2017. Linda and I arrive in Milngavie about an hour before the 1am start. We made no special arrangements to meet David and Gita here, which immediately shows our lack of grasp of the scale of the event -- there must be a thousand or so folk milling around the railway station! We wander about for a while, and make a couple of visits to the registration point, but there is no sign of our friends.

Having more or less reconciled ourselves to not finding David, we bump into him by sheer chance just before the briefing. He seems in good spirits. Gita has already left to get some sleep, and we wander off to High Street, leaving David to his own thoughts.

There is a visible Polis presence, for whom I expect tonight makes a change from the typical Friday night in Milngavie. I am hoping to get some pictures of David as they set off, but, of course, I fail to spot him.

The Crew that Slept in

Then off to Balmaha for a little sleep. It is only at this point, as we make steady progress in a column of hundreds of vehicles, that I begin to appreciate the importance of the 1am start. Our arrangement is to get together with Gita at 3am, so I get up about that time to go to the loo -- to my dismay the visitor centre and its toilets are closed, my already low opinion of the way the Loch Lomond and the Trossachs National Park is run sinking even more. In contrast, the Oak Inn has opened specially at 2am, but with all the good will in the world its toilet simply can't cope.

Linda calls Gita and we are told to look for the annoying orange flashing lights. It's a recovery van, with three laddies trying to fix Gita's headlights, which both blew on her drive here. This is not great news. The laddies are nice enough, but I am sceptical of success when one of them confides in me that the Kangoo uses 'strange giant bulbs' they've never seen before (referring to an H4!), and which, obviously, they don't have with them. At this point the most important thing is to shoo them away, because David should be arriving shortly, and there is nothing to be gained by him knowing about any of this.

The Crew that Slept in

He arrives bang on time, on good form, has some food and is off again. Gita stays behind waiting for daylight, while Linda and I set off in hope of finding H4 bulbs somewhere at 5am on Saturday morning; we succeed eventually at Dumbarton Euro Garages, after no luck in the, rather fortified, BP garage in Alexandria.

The next crew stop is Ben Glas farm. Here only one vehicle per crew is allowed, so we regroup first, make ourselves some cooked breakfast among the midges, change Gita's bulbs ... a lot of time to kill, so a visit to the Falls of Falloch, deserted at this early hour.

The Crew that Slept in

At Ben Glas we don't have long to wait as David arrives at the check point slightly ahead of time, but convinced he is going too slow (we are aiming for a sub 24h finish).

By the time we arrive at Tyndrum the lack of sleep is beginning to catch up with us. We are operating on our own time, where everything is measured from a zero at Milngavie to (hopefully) just under 24 in Fort William. We have completely lost any sense of how that might relate to 'normal' time. In this private timezone it is the middle of the afternoon, and it comes as a bit of a shock that we can't get three fish suppers from the Real Food Cafe, because they only put the fryers on after breakfast! Fortunately it's not a long wait till 11am, and, with our fish suppers in Gita's car, we are off to the Auchtertyre check point.

The Crew that Slept in

We don't have long to wait. David arrives on schedule, but the effort is beginning to show. Some food, change of clothes, and he is off. For some reason, I decide that since the stove is out I might just as well make a flask of coffee and soup for the next stop -- I don't know why; with hindsight this does not make that much sense, but by now none of us are operating at full mental capacity, so I am faffing about for a bit with the food before we head on.

Next stop Bridge of Orchy. By the time we get there we are all properly knackered. The girls decide to get some sleep, but I don't sleep well in daylight and tend to wake up with a nasty headache, so I go for a walk instead. It starts raining almost immediately -- the 'weather' we knew was to come for the second half of the race is nearly with us.

A 45min walk does my brain good, but also stirs my bowels, so a quick trip to the hotel is due. My conscience doesn't allow me to just use the facilities, so I sit at the bar for a bit nursing a pint of lemonade, before making good on why I really came here (I am fairly certain I fell asleep in the cubicle, for I do not think I was that long but by the time I step out there is a long queue, and everyone is giving me the evil eye).

Outside the sun is back out, which is good. As I am about to turn down the road toward the bridge, I catch a glimpse of a runner who moves a lot like David. Nah, the clothes are wrong; except then I vaguely remember him changing at Auchtertyre ... sh!t, it's David right enough, a long, very long, time ahead of our schedule.

He is glad to see me, thinking I have been waiting for him here on the corner! Should I tell him??? I excuse myself and sprint down the hill where both girls are still soundly asleep. There are some muffled words from inside the cars, which I can't hear clearly, but can venture a guess, then a lot of commotion. At the same time, there are car shenanigans taking place, parking is very tight here and with our tail gate open the other crew can't open theirs or something. A lot of our stuff falls out onto the road in the process. David does not stop long, and the only reason this pit-stop is not a disaster is that the coffee and soup are already made from Auchtertyre!

By the time we get to Glencoe ski centre the weather has arrived in earnest: it's cold, windy and pissing down. My head feels like one giant hangover; I try to sleep for a bit, but it's not helping, and neither coffee nor sugar are making any difference. Time to stop feeling sorry for myself: the way the weather is just now, I think it likely the organisers will insist that the runners are accompanied from here on, so I go to get changed.

But there is no sign of David, and we are all getting rather nervous. He arrives some twenty minutes later than we expected him to, visibly exhausted, soaked to the skin and very cold. He is a sorry sight, and all three of us are thinking this is it, but nobody wants to broach the subject.

Eventually, in a roundabout way, I ask 'do you want me to come along?', fully expecting him to say he is calling it, but instead he simply says 'yes'. There are lots of guts in those three letters, and this, ultimately, will become the moment that in the following weeks and months we will keep returning to.

And so we are off, walking, rather slowly, down the road. By the time we get to Kings House my headache is gone, and I am operating quite normally again (nothing like a bit of exercise!), keeping an eye on the pace, doing the math. I am aware I am talking too much, but conditions are so crap I feel I need to, so neither of us has time to think about that.

Up on the high ground above Kinlochleven it's very windy and our feet are in an inch or two of freezing water more or less constantly. We are moving slower than we need to be, and I am dreading the prospect of getting changed in this weather in a car park. But we pick up the pace a bit on the descent, even overtaking a few people who overtook us earlier on.

Just as we reach the village the sun comes out briefly, blowing some of the bleakness away. And to my great relief Linda and Gita managed to find some space inside the sports centre where the check point is. We don't have time to hang about here: the last two legs were both slower than the 24h pace, claiming back the buffer David built up to Bridge of Orchy. So just getting changed, a bite to eat, hot tea, an official kit check (from here on a support runner is mandatory).

We manage a good pace on the climb out, but less so once the route starts descending the other side; I am reminded of the old fellrunner's wisdom, it's not the climbs that get you. The ground here is rough, and after 80 miles David's feet are hurting.

I am not much company; it takes all my effort to concentrate on setting the pace. At times I feel quite bad about pushing him, but I am determined not to let him finish in 24:02; we are either going to make it under 24h or blow up properly, and just now it could go either way. Another runner joins us out of nowhere on the climb, and he makes up for my silence with conversation.

As we are approaching Lundavra I am glad to hear David saying that if the ground was a bit better he feels he could still do some running, so when we hit the good path beyond, I pick up the pace a bit, but there is no response from behind me. At this point I think that's it, the 24h dream is gone. But in fact David perks up not much later. I turn around at the bottom of the big descent -- it's an amazing sight, a line of bobbing head torches as far as I can see.

I am concerned about the climb out, but it turns out David is still climbing well, and as we start the final descent to Fort William he gets a proper second wind. We are running at about 6-7min/km, overtaking quite a few people, and I am having a hard time keeping up with him. We lose some of the energy on the final stretch of road, which feels much longer than it should be, but that no longer matters: we are going to make it, and David eventually finishes in 23:42:31.

The Crew that Slept in

And then it's the prize giving the next day. This is hard to describe, it really needs to be experienced. 2017 was a particularly special year, with Rob Sinclair setting a new race record of 13:41:08. This is a truly amazing feat.

But as I sit there that morning, to my mind the new record is not as amazing as Nicole Brown, the last finisher, coming in just a few minutes earlier, in 34:40:28. Having been out the previous night in the awful weather for just five hours or so, I can honestly say I would not have stuck it out for another twenty hours of the same if you were paying me. And this, I think, is what the West Highland Way Race is ultimately about.

So yes, if you get a chance to crew on the West Highland Way, do so, it is worth it, unique, and unforgettable.

PS: The organisers recommend using two crew teams, and with hindsight this is wise. We just could not resist the temptation of seeing the start of the race, and underestimated the fatigue that would bring.

by tf at December 23, 2017 06:07 PM

December 20, 2017

Chris Lord

My Impossible Story

Keeping up my bi-yearly blogging cadence, I thought it might be fun to write about what I’ve been doing since I left Mozilla. It’s also a convenient time, as it coincides with our work being open-sourced and made public (and of course, developed in public, because otherwise what’s the point, right?) Somewhat ironically, I’ve been working on another machine-learning project, though I’m loath to call it that, as it uses no neural networks so far, and most people I’ve encountered consider those to be synonymous. I did also go on a month’s holiday to the home of bluegrass music, but that’s a story for another post. I’m getting ahead of myself here.

Some time in March I met up with some old colleagues/friends and of course we all got to chatting about what we’re working on at the moment. As it happened, Rob had just started working at a company run by a friend of our shared former boss, Matthew Allum. What he was working on sounded like it would be a lot of fun, and I had to admit that I was a little jealous of the opportunity… But it so happened that they were looking to hire, and I was starting to get itchy feet, so I got to talk to Kwame Ferreira and one thing led to another.

I started working for Impossible Labs in July, on an R&D project called ‘glimpse’. The remit for this work hasn’t always been entirely clear, but the pitch was that we’d be working on augmented reality technology to aid social interaction. There was also this video:

How could I resist?

What this has meant in real terms is that we’ve been researching and implementing a skeletal tracking system (think motion capture without any special markers/suits/equipment). We’ve studied Microsoft’s freely-available research on the skeletal tracking system for the Kinect and, filling in some of the gaps, implemented something that is probably very similar. We’ve not had much time yet, but it does work and you can download it and try it out now if you’re an adventurous Linux user. You’ll have to wait a bit longer if you’re less adventurous or you want to see it running on a phone.
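To give a flavour of what that research describes: at its core is a very simple per-pixel depth-comparison feature, fed into randomised decision trees. The sketch below is only my illustration of that feature, with made-up names and types; it is not code from glimpse, which may well do things differently.

    /* Sketch of the depth-comparison feature from the Kinect body-part
     * papers (Shotton et al.): probe the depth image at two offsets,
     * scaled by the depth at the pixel being classified, so the feature
     * is roughly invariant to how far the person is from the camera.
     * A decision-tree node simply thresholds the returned value. */

    typedef struct {
        const float *pixels;    /* depth in metres, row-major */
        int width;
        int height;
    } DepthImage;

    typedef struct {
        float ux, uy;           /* first probe offset */
        float vx, vy;           /* second probe offset */
    } DepthFeature;

    static float
    depth_at (const DepthImage *img, int x, int y)
    {
        /* Out-of-bounds probes read as "very far away", as in the paper */
        if (x < 0 || y < 0 || x >= img->width || y >= img->height)
            return 1e6f;
        return img->pixels[y * img->width + x];
    }

    static float
    depth_feature (const DepthImage *img, int x, int y, const DepthFeature *f)
    {
        float d = depth_at (img, x, y);

        return depth_at (img, x + (int) (f->ux / d), y + (int) (f->uy / d)) -
               depth_at (img, x + (int) (f->vx / d), y + (int) (f->vy / d));
    }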

I’ve worked mainly on implementing the tools and code to train and use the model we use to interpret body images and infer joint positions. My prior experience on the DeepSpeech team at Mozilla was invaluable to this. It gave me the prerequisite knowledge and vocabulary to be able to understand the various papers around the topic, and to realistically implement them. Funnily, I initially tried using TensorFlow for training, with the idea that it’d help us to train easily on GPUs. It turns out re-implementing it in native C was literally 1000x faster and allowed us to realistically complete training on a single (powerful) machine, in just a couple of days.

My take-away for this is that TensorFlow isn’t necessarily the tool for all machine-learning tasks, and also to make sure you analyse the graphs that it produces thoroughly and make sure you don’t have any obvious bottlenecks. A lot of TensorFlow nodes do not have GPU implementations, for example, and it’s very easy to absolutely kill performance by requiring frequent data transfers to happen between CPU and GPU. It’s also worth noting that a large graph has a huge amount of overhead that will be unrelated to the actual operations you’re trying to run. I’m no TensorFlow expert, but it’s definitely a particular tool for a particular job and it’s worth being careful. Experts can feel free to look at our repository history and tell me all the stupid mistakes I was making before we rewrote it 🙂

So what’s it like working at Impossible on a day-to-day basis? I think a picture says a thousand words, so here’s a picture of our studio:

Though I’ve taken this from the Impossible website, this is seriously what it looks like. There is actually a piano there, and it’s in tune and everything. There are guitars. We have a cat. There’s a tree. A kitchen. The roof is glass. As amazing as Mozilla (and many of the larger tech companies) offices are, this is really something else. I can’t overstate how refreshing an environment this is to be in, and how that impacts both your state of mind and your work. Corporations take note, I’ll take sunlight and life over snacks and a ball-pit any day of the week.

I miss my 3-day work-week sometimes. I do have less time for music than I had, and it’s a little harder to fit everything in. But what I’ve gained in exchange is a passion for my work again. This is code I’m pretty proud of, and that I think is interesting. I’m excited to see where it goes, and to get it into people’s hands. I’m hoping that other people will see what I see in it, if not now, sometime in the near future. Wish us luck!

by Chris Lord at December 20, 2017 01:00 PM

December 18, 2017

Tomas Frydrych

Strathfarrar Watershed (A View from the Sidelines)

Strathfarrar Watershed (A View from the Sidelines)

I suspect most of those reading this have never heard of John Fleetwood. Recently someone described John as 'quietly getting on with doing extraordinary mountain journeys with zero fanfare', which about sums him up. Behind that 'extraordinary' hide a few other adjectival phrases, of which perhaps the most important is 'preferably in winter', yet his accounts of these ventures are a bit understated. So here is one mortal's peripheral story of the Strathfarrar Watershed.

I first met John some fifteen years ago in the Christian Rock and Mountain Club. Hillrunning wasn't yet on my personal radar; the shared passion was mountains and climbing. John was a determined (some might even say driven!) winter climber and an alpinist, and though to my recollection I only ever climbed with him on the same rope once (he was climbing much harder stuff than I even aspired to), there were many shared trips, drams, songs and stories (and vegetarian curries; John was about the only vegetarian I knew in those days, and so was always volunteering to take care of the food).

As all of my friends, present and past, would readily confirm, I am not very good at keeping in touch, and so we lost contact for a number of years. Time passes rather fast, bringing with it some significant birthdays among the old CRMC crowd, and a reunion meet in the Yorkshire Dales a couple of years ago.

By then, hillrunning had become my main passion, and I was (still/again) training for the Assynt Traverse. John was just back from a rather epic traverse of the Alps, and there was much to talk about. I had never talked running with John before, and realised quickly that we share a very similar take on it, though we practise it at quite different levels. And he was the first (and last) person that I came across who knew exactly what the Assynt Traverse was!

Consequently, when John got in touch at the start of this year about his plans to attempt a winter traverse of the Strathfarrar watershed, I readily agreed to go along. All we needed was a good dump of snow, which a storm at the end of February helpfully provided. And so on the morning of the 27th we find ourselves at the gate on the Glen Strathfarrar private road. (And if you intend to read any further, I suggest you read John's account before carrying on; what follows will make more sense.)

There was never any question of me accompanying John. Even at the peak of my physical condition this outing would be well beyond my limits, and I am not even remotely at my peak. And so as John heads up the little farm track to gain the hills north of the glen, I assemble my bike and set off along the road. The plan is to cycle to Monar Lodge, run along Loch Monar to gain the high ground over Creag na Gaoithe, eventually joining John's route at Bidean an Eoin Deirg; follow it to the Maol-bhuidhe bothy where we will meet, for some warm food, dry clothes and spare batteries. Then, perhaps, I'll accompany John for a bit up to Sgurr na Lapaich, before making my own way down to Monar Lodge to pick up the bike ...

Strathfarrar Watershed (A View from the Sidelines)

But that's all still ahead of us. It is a crisp morning, promising a clear sunny day ahead. An inch or so of snow on the road makes the cycle quite arduous, though the stunning scenery is more than making up for that. But soon my feet are freezing, and I can't think of any explanation why I packed SIDI racing shoes rather than Specialized Defroster winter boots. By the time I reach the far side of Loch a'Mhuillidh, I can't feel my toes and have to stop to put on an extra pair of socks, which helps a bit.

Strathfarrar Watershed (A View from the Sidelines)

The glen is full of deer, there must be thousands of them, feeding on the hay the estate provides. They are somewhat unpredictable, particularly the younger bucks, and so care is needed, especially where the road splits the herd. I slowly gain height, there is more snow, and some pushing to be done, before I reach the lodge -- the 25k or so takes me some three hours, a lot longer than I expected.

After a quick early lunch basking in sunshine, I put on my mudclaws and set off along the loch. The sky is blue, no cloud to speak of, the loch like a mirror -- the centre of the high pressure must be bang on the top of here.

Strathfarrar Watershed (A View from the Sidelines)

The jog is very pleasant, though the temperature has crept up a bit, melting the snow, and so for the entire 10km I run in an inch or two of ice-cold water. I don't mind, days like these don't come around often, and I think the cold feet a price worth paying.

Strathfarrar Watershed (A View from the Sidelines)

When I eventually stop for some oatcakes and cheese at the foot of Creag na Gaoithe, a wisp of cloud appears from somewhere and it suddenly gets rather cold. I don't hang around and start plodding up.

The snow on the sun-exposed hillside is saturated with water, and my cold feet are doing my head in: I start worrying about the inevitable temperature drop on the higher ground, about how far I still have to go today ... in this game, the head is everything.

Nevertheless, the sun is back out on the ridge, surface frozen and runnable, my feet warming up quickly. I pause briefly at the foot of the arête that leads to the summit of Bidean an Eoin Deirg, wondering if I need to put on crampons, settling for an ice axe only, and quickly regretting it. Conditions are tricky, and exposure on both sides considerable. And, of course, now I am in a place where I can't put them on ... I carefully backtrack onto a small platform lower down -- how many times over the years have I got caught out like this?

A bite to eat on the summit, then over to the Sgurr a'Chaorachain trig point. There are some footprints here, and I nearly descend the N ridge by mistake, following them. But I realise quickly enough. The compass comes out to double check, for in the afternoon light the climb out of Bealach Coire Choinnich onto Sgurr Choinnich seems improbably steep and monolithic. I even briefly contemplate dropping down into Coire Choinnich to avoid it, but the slope there is obviously heavily loaded, the risk of triggering an avalanche high.

As it happens, the ascent is straightforward, the ground at an amenable gradient (as the map clearly shows), but the snow is deep, at times waist deep. I don't even pause on Sgurr Choinnich; I am well behind schedule, reaching Bealach Bhernais exactly at sunset.

Strathfarrar Watershed (A View from the Sidelines)

There are decisions to be made. The next section of John's route is difficult navigation-wise, and the deep snow will make progress hard. I have no idea how far behind me John is, but I do know that should he catch up with me I could not keep up with him. But more than anything, I am tired, have been on my feet for over nine hours, and my lack of fitness is beginning to show.

I decide to take the bailout route -- there really is only one such option today, and it's here in Bealach Bhernais. I should say, this is not something I am desperately devising here in the dropping temperature while watching the stunning sunset. Rather, it is something we discussed over a vegetable curry the previous night in the warmth of the (most excellent, would recommend to a friend) Black Isle Berries Bunkhouse. On these big winter ventures planning is key to survival, and John's planning is nothing if not meticulous.

The bailout route means heading west to pick up the stalker's path that leads into Coire Beithe, following this past Loch an Laoigh to eventually pick up the path into Coire na Sorna, past Loch Calavie and down to the bothy over open hill. It's still 17km or so to go, but an easy 17km compared to the watershed line.

I enjoy the sunset, then get the head torch out and reset the altimeter. The initial descent is awkward, and the stalker's path hard to locate, but once I do, it is a decent pony track, and to my surprise I am running rather well down the gentle gradient. Once I get beyond Loch an Laoigh, I find a huge track where the map indicates a path.

After a short while my head torch beam starts picking up some strange, spooky aberrations ahead. This turns out to be heavy machinery, with high vis jackets left hanging on the operator seats. Even in the darkness, I am saddened by the intrusion, we do not value landscape anywhere near enough in this wee country of ours.

There is only one small snag with the bailout route: when I printed out my map at home, I didn't print enough of it. I am missing perhaps no more than 1/2 km, but unfortunately it includes the place where the Coire na Sorna path leaves the track I am on. To avoid dropping too low and off my map, I decide to leave the unenjoyable track early and head for the open hillside. The Sorna path is, in fact, yet another big track, which I intersect at around the 300m contour line.

A short climb, then the track levels out, Loch Calavie should be on my right. Yet my (reasonably powerful BD Icon Mk I) head torch beam is not picking it up. There seems to be just a black bottomless abyss there. This is disconcerting. I stop to check the map. It turns out I am standing no further than a foot away from the edge of the water, I can hear it splashing when I am still, but somehow if I shine my beam further out, there is no reflection whatsoever. Spooky.

I carry on, and, not being able to see the loch that well, I miss its eastern end, where another path I want to take branches off. But I am sufficiently alert to realise almost immediately when the main track starts climbing again. The foot of the loch is a proper peat bog, and it takes me a while to negotiate it, before a brief spell on the new path. Then on a bearing down to Loch Cruoshie. There are a few obvious re-entrants here, serving as useful tick offs, and my navigation is bang on.

The final unknown is whether the outflows from Loch Cruoshie will be manageable to cross or not. There is an alternative, but it means a fair detour which I would prefer to avoid. They are freezing, knee deep, but very mercifully slow flowing. Not far to go now, perhaps the reason why I become somewhat complacent about navigation even though there is a thick mist hanging around. As a result locating the bothy takes me longer than it should have, not ideal after wading through the icy water. I am much relieved when I finally spot its outline at the far reach of my head torch beam.

It's 10.45pm and I am glad this 15h day is at an end. I have a quick look around. There is a wheel barrow in the 'utility room', with what looks like a sack of coal in it -- if wishes were horses ... I don't know who gets a bigger fright, whether me or the mice sheltering in it; alas no coal. I make my dinner, stick a candle into the window for John, and promptly fall asleep.

At 4.30am, roughly the time I think John might be arriving, I get up to scan the hillside for light. There is none, so I replace the nearly burnt-out candle with a fresh one, and crawl back into the sleeping bag till 7.30am. Time for porridge as the day breaks. Still no sign of John, but I am not concerned, not yet. Another glorious day is beginning, and there are pictures to be taken.

Strathfarrar Watershed (A View from the Sidelines)

Nevertheless, as time progresses I am aware there is a cut off point beyond which I can't stay here. I have only a small amount of food left, and a bit extra I left with the bike, but not much. I need to leave here no later than noon. But if John doesn't arrive by then, we have a more serious situation anyway, I suspect. I decide not to worry prematurely, and John appears half an hour later.

He is visibly exhausted and the bottoms of his walking poles have turned into giant ice balls. Yet, he doesn't stay long, just enough to eat, change some clothes, and have a go at the ice balls with an ice axe. In spite of the very hard snow conditions he is determined to carry on. Having been up there just a few hours earlier, I know exactly what he is up against, and I find the level of mental stamina required to carry on quite astonishing.

There is no question of me joining him for Sgurr na Lapaich. I am too spent, and my right heel is rather tender, and has been since midday the previous day. I suspected a giant blister to start with, but as there was nothing to be done about that until I got to the bothy, I didn't bother. But to my surprise, when I took my socks off the previous evening, there was no external damage, which is more disconcerting than a giant blister would have been. So I need an easy option.

The relatively easy exit route takes me into the bealach between An Cruachan and An Soccach, down a stalker's path along Allt Riabhachain and then through the Drochaid nam Meall Bhuidhe bealach to pick up the path leading into Glean Innis an Loichel. It's been overcast since midday, but the sun comes out for a bit in the afternoon just as I enter the glen. Then a bit of road running to get back to Monar Lodge. All in all just under six hours.

The pain in my right heel has got progressively worse during the day, and for some reason is particularly acute on the bike. But there is no snow left on the road, and I can see some serious weather coming in, so I push hard, taking just seventy-five minutes to get back to the car. But not before the weather arrives, the last part of the cycle spent in freezing rain and stinging hail. I spare a thought for John up there on the high ground; better him than me.

Off to Beauly where I devour a fish supper. I hope I'll be able to stay in the Bunkhouse again -- John was hopeful of a 36h finish, so we did not book another night. I am in luck. Early start next day, back at Struy for 5:30am as agreed. I try to sleep in the car for a bit more, but it gets too cold without the engine running, so I get up and go for a walk. I get a call from John at 6.30am -- he has only six miles left, but says he is moving slowly.

The big question is what 'slowly' means in Fleetwood parlance. I expect it's more like my fast than my slow, so I start walking up the forest track John will come down, to meet him. Another nice cold morning. Just before the track emerges from the forest there is a giant, iced over, and hard to avoid, puddle, and I lament not wearing walking boots.

Once on the open hillside I can see quite far, but there is no sign of John. I wonder if we might have somehow missed each other, and hurry back. As I do, I register another, quite appealing, forestry track going off to the right, which under different circumstances I'd explore, but the last thing I want is for John to have to wait for me.

No sign of John at the car, so I get the camera out and head back to the village, to take pictures of snowdrops and study the grave stones, as you do. There is lot of history here, but not much life. I chat to a couple of drivers of forestry trucks waiting for the time when they are allowed into the forest.

Time moves on, three hours and counting since the call. I keep an eye on the track on the hill, but no sign of John. Then a rather dishevelled figure emerges down the road; I do a double take, the direction is wrong, but yes, it is John, with tales of dead-end forestry tracks and dense sitka. I am very glad to see him; over the last couple of hours I was beginning to worry about him for the first time since he set off.

by tf at December 18, 2017 08:42 PM

November 03, 2017

Tomas Frydrych

Regarding Microspikes

Regarding Microspikes

Recently there has been some chatter about using lightweight footwear in the winter hills, and in that context microspikes have been mentioned. As someone who uses microspikes a lot, I'd really like to warn quite emphatically against taking microspikes into the hills as a substitute for crampons -- in some ways wearing microspikes can be considerably more dangerous than just wearing boots without crampons.

Don't get me wrong, I really like microspikes; they are an excellent tool for winter running.

Regarding Microspikes

But they only work in a very limited range of conditions. Specifically, they are only suitable for moderately steep slopes -- roughly speaking, slopes on which you can consistently keep the entire sole of your foot on the ground -- and they only work well on pure, exposed ice and hard neve. They do not work if the hard surface is covered by even a fairly small amount of loose, non-compacting snow (e.g. blown-on dry powder), and they do not work on the cruddy snow that much of Scottish winter is made of -- the 9mm spikes are too short to find purchase.

But the real problem with microspikes is not that they have limits, all tools do, but rather that (a) they go from a superb, secure grip to zero traction in a fraction of a second, and (b) this tends to happen on much steeper ground than it would if I were just wearing boots. With boots the loss of traction tends to be gradual, and I get plenty of warning to get the crampons out, or just to back off. In contrast, the microspikes will happily, and effortlessly, take me onto ground where in boots alone I would long since have been aggressively kicking steps. This means that slipping with microspikes is likely to be a much more serious proposition than slipping with just my boots on. What gradient are you comfortable self-arresting on? 10°? 30°? 45°?

This is not just some theoretical musing; it's something I have learnt the hard way. One January some years back I was doing my regular training run which takes in Ben Vorlich and Stuc a'Chroin from Braeleny Farm. The hills were in early winter conditions, and as was my habit at the time, I brought my standard winter gear of ice axe, microspikes and crocs (the latter for the several river crossings along the way). Ben Vorlich was nicely iced up and windswept, and the micros were working a treat. From a distance the Stuc a'Chroin 'Nordwand' did not look too bad, plenty of bare rock, and so I decided (to use a technical climbing term) 'to take a look at it'.

Regarding Microspikes

I gained height fairly quickly, and as I did the snow condition, and my traction, progressively deteriorated, until I reached an awkward steep groove where it was obvious that if I carried on any further I would not be able to back off. As I started down-climbing the true limitations of the microspikes became painfully obvious: if my traction going up was poor, it was nothing compared to going down. The next half hour, spent kicking in short step after step, was some of the tensest time I have ever experienced in winter hills (I once had a few awkward minutes in the Man Trap, nowhere near as bad, I dare say).

I made it safely to the foot of the buttress eventually and headed over to the broad corrie that in the summer is used to avoid the Nordwand. The snow conditions there were superb. The iced up neve put a big grin on my face as I made rapid progress up, though the upper section was way too steep for the micros, and I had to make great effort to keep at least my toebox on the slope over the final metres. But my axe placements were bombproof, the sun was shining, and my previous escapade was promptly forgotten.

Regarding Microspikes

It is perhaps the sunshine, so rare in a Scottish winter, that explains why a month later I am back, again wearing the micros. By now the winter is full on, and the Nordwand is plastered with snow -- I have no intention of heading up there, I have learnt my lesson. Or so I think.

The first warning signs come on the descent to Bealach an Dubh Choirein. There is more snow, and a short steep section that needs to be down-climbed proves very awkward. It is a sign of things to come. The conditions in the NE corrie are much changed as well: the line I took out of here last time is topped by a steep wall and a cornice, and is out of the question both because of the gradient and the avalanche risk. At the same time there is no sign of the perfect neve, and as I make my way up along the north edge of the corrie I am struggling for any sort of grip in the cruddy snow. I weave my way up through a series of awkward traverses and rocky steps, kicking and cutting, at times down to the vegetation. All that in the full knowledge that had I been wearing crampons, I wouldn't have given this sort of ground a second thought.

That day I decided to have a simple policy for my winter runs -- if the terrain is serious enough to require carrying an ice axe, I take crampons. No exceptions. At times it is tempting not to, all that extra weight. Indeed there have been times on a run I wished I had micros instead of crampons; it is almost invariably followed by a relief that I have brought the crampons, when a few miles on conditions change. And so when I am packing my gear and that temptation comes, I just think back to those days and the temptation goes away. Life is too precious, and the winter hills don't stand for hubris.

P.S. As I have mentioned elsewhere, I use the Kahtoola KTS crampon for running.

by tf at November 03, 2017 02:35 PM

October 17, 2017

Tomas Frydrych

To Eat or not to Eat (contd)

To Eat or not to Eat (contd)

The disillusionment with the M&S curry aside, the biggest factor that forced me to rethink camping food was running. While Scotland's hills provide superb playground from short jogs to long days, it is the linking of multiple days together that opens up, literally, whole new horizons. Alas, none of my previous approaches to cooking was suited to self-supported multiday runs.

The problem is twofold. On the one hand, running is affected far too much by the load we carry. I have never obsessed about weight, not beyond eliminating the unnecessary ('lightweight' is a synonym for 'short lasting', and I prefer durable), but for running the elimination approach was not enough. I found out that a load of up to about 6kg impacts my pace, but generally not the quality of my running. However, once it gets above 9kg or so, there is very little genuine running taking place. I managed to cut the base kit, including 0.5l of water carried, to about 6.5kg. That leaves about 2kg for food ... and brings me to the other issue.

The energy burn while running is just that little bit higher. At the same time I don't like running over multiple days on a large calorific deficit: feeling hungry takes away from the fun, impacts one's mental capacity, and makes subsequent recovery longer. Yet when running I can easily burn more than 6,000 kcal per day, while the theoretical (and unreachable) limit of what I can pack into 1kg of food is ~9,000 kcal (pure oil). In other words, I'll never carry enough food not to incur a deficit, which means I need to pay attention to the calorific density of the food I take to make the most of it.

Since we are talking calories and running, there is an additional issue to be aware of. The ultra-runner experience seems to suggest that while on the move we can only absorb ~250kcal/h. This is worth keeping in mind when planning the menu: the bulk of the calories needs to come from the evening meal, while during the day small but frequent food intake is the best strategy.

Doing it on the Cheap

Breakfast is easy -- 2 packets of plain instant porridge; no milk required, just add boiling water and 75g or so of 60% chocolate for extra calories. Stir thoroughly, let it sit for a couple of minutes.

During the day my staple food is nut and raisin mix (I like the Tesco Finest variety, but it's too expensive; you can make a nearly identical mix from the nuts and berries Lidl sells, at about half the price), and oatcakes and hard cheese (I am particularly fond of the rough Orkney oatcakes, and Comte). The benefit of oatcakes is their lower glycaemic index, which means a steadier supply of energy, plus they are relatively high in fat (the ones mentioned are about 120 kcal per oatcake). Hard cheese has probably the highest calorific density of any normal food, it does not perish quickly, and I happen to like it. If I need a sugar hit, I take Jelly Babies -- not as good as a gel in terms of the hit, but a lot cheaper, and more fun (4 Jelly Babies correspond to ~1 gel).

The evening meal is where the main challenge, but also the opportunity for eating well, lies. It takes no genius to realise that the M&S curry and Uncle Ben's rice combo fails badly on the calorific density count, for much of the content of both the rice packet and the tin is water, and water is dead weight, i.e., negative calories. Yet it is easy to prepare a good, cheap, home-made meal that is also a lot lighter.

My firm favourite is to make a tomato-based sauce, usually with chorizo, some olives, pine nuts, or whatever else I have around / take fancy to. I reduce this to a thick paste and simply pack it into a zip-lock bag. The trick is to use only as little fat/oil as is necessary for the cooking process, and then take some nice olive oil in a small bottle instead. This reduces the mess in case the zip-lock bag fails you (I confess, I double bag, just in case). Nalgene make small leak-proof utility bottles perfect for the oil; I find the 30ml bottle is about right for a single meal, and the 60ml for two (adding ~250/500 kcal respectively).

I normally tend to have this sauce with Chinese-style noodles. Ultimately, I want something that requires as little cooking as possible, for if I can reduce the amount of cooking that I do, I can significantly reduce the weight of the cooking paraphernalia (on that below). After much searching, I have settled on Sainsbury's own-brand noodles in round nests; they only require 3 min of boiling (which can be cut to less if I leave them to sit for a bit), and they fit neatly inside a Toaks 0.5l pot, which is just big enough to cook two of them.

(As far as reducing the cooking time goes, couscous is the best option, but while I love it, I find it does not fill me up, so I prefer some form of pasta.)

I don't bother heating up the sauce, I simply mix it with the noodles in my food bowl, and add the extra oil, depending on the sauce maybe bringing some Parmesan to sprinkle on the top (if you are anything like me, you will realise quickly that draining the noodles is an awful waste ... makes a great soup instead).

The main shortcoming of this approach is that food prepared this way does not keep very long; how long will depend on the ingredients (one of the reasons I like using chorizo) and the ambient temperature. Personally, I am happy with this approach for a two-night trip in the usual Scottish temperatures, but one needs to use common sense, and if in doubt, reheat everything thoroughly. The other issue is that I still end up carrying quite a bit of water in the food, making it hard to get more than a couple of days of food out of my 2kg allowance.

The answer to both of these problems lies in dehydration, which I shall come to in a third instalment of these posts of my camping food 'journey'.

A Side Note: The Kitchen Sink

I always take a 'bowl' to eat from, it means the pot is free for making coffee while eating -- the bottom of an HDPE milk carton makes a superb camping bowl; it is lightweight, it folds flat, the HDPE withstands boiling water, and it gets simply recycled at the end of the trip (for two nests of noodles, you will need the bottom of a six pint carton).

I don't bother with a cup. I carry a 0.5l Nalgene wide-mouth HDPE bottle: during the day this is my water bottle (I make it a 'policy' not to carry more than 0.5l at any time during the day, in Scotland it is rare that more is needed, particularly if I take the Sawyer mini filter), and in the evening it becomes my cup. It is fine with boiling water, the screw top means I don't spill it by accident in the tent, it holds heat rather well, and it can double up as a hot water bottle during the night.

Once I realised that I only need a 0.5l pot for one person (0.7l for two), it became obvious that the ubiquitous gas camping stove is a lot of dead weight to lug about (as well as bulk). The smaller canister weighs around 230g for 110g of gas, while a decent small stove weighs around 80g (there are smaller stoves on the market, e.g., the 25g Chinese BRS-3000T; mine flares out so dangerously when reducing the flame once it's hot that I will not use it again, and would advise against buying it -- the 55g saved compared to a proper stove from a reputable manufacturer is not worth it). There is also the high cost of gas, exacerbated by the accumulation of partially empty canisters after each trip (that these canisters are not refillable is an ugly blot on the outdoor equipment industry's green credentials).

I find that the most weight-, as well as cost-, efficient solution for short trips is cooking on alcohol. Alcohol stoves come in different shapes and forms, but my favourite is the 30ml burner made by this guy. It is spill-proof (the alcohol is soaked up into some sort of foam), and weighs 14g; together with the small stand he also sells, and a homemade aluminium foil windshield, it comes to around 30g. I need around 50g of alcohol per day, plus 50g extra to give myself a margin for spilling my coffee (or for pouring boiling water into my shoes when they freeze solid overnight). Small plastic bottles seem to invariably weigh 20g regardless of their size up to about 0.25l, so for a one-night outing this translates to about 160g less in weight (and about £4 cheaper) than gas (so I can treat myself to more chocolate!).
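For what it's worth, a quick back-of-the-envelope check of that one-night comparison, using the figures above (the ~30g alcohol set-up being the burner, stand and foil windshield together):

    gas:      230g canister + 80g stove               = ~310g
    alcohol:   30g set-up + 100g alcohol + 20g bottle = ~150g
    saving:                                             ~160g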

The things to be aware of regarding alcohol cooking:

  • It stops being weight-efficient after about 3-4 days (alcohol contains about 1/2 the energy of gas per weight; the savings come from being able to take only what you need and the low weight of the bottle).
  • It takes longer to boil water on the above linked stove than it does on a good quality gas stove, and you really need a windshield; but time to cook is something I am never short of on my trips.
  • Most importantly, alcohol stoves can produce fairly high amounts of CO if the oxygen supply is restricted by, e.g., a windshield, so always make sure there is enough oxygen getting through to the flame and the tent is adequately ventilated (the latter applies to all stoves, some gas stoves are considerably worse than others).

To be continued ... (on dehydrating food)

by tf at October 17, 2017 05:34 PM

October 16, 2017

Tomas Frydrych

To Eat or not to Eat (Well)

To Eat or not to Eat (Well)

I have always liked my food; perhaps it's because I come from a place that obsesses over wholesome home cooking. I also like my food now more than I once used to; perhaps it's because my adoptive homeland doesn't do food particularly well (doesn't really 'get' food).

A good meal is one of those little, simple, pleasures that can put a smile on your face when there isn't much else to smile about, and this fully applies to eating in the outdoors.

My overnight ventures into the woods started at a time and a place where camping stoves did not really exist, a camping mat was something that two muscle-bound men carried to a lake for kids to float on, and a good warm (fur) kidney belt was one's most treasured possession. I think of those days with bemusement as I mentally survey my current weekend camping kit list -- we were unwitting practitioners of 'extreme ultralight' (except there was nothing particularly light about the coveted US Army issue rucksack, the cotton tarp, or the draughty sleeping bag). But back to food and eating.

My standard fare during those days was a half-kilo shop-bought tin of meat and sauce, cooked on an open fire, in the tin, with bread on the side. All in all, it made a pretty decent evening meal. (I can't remember what we ever ate for breakfast, but my lunch was invariably a tin of Soviet-made sardines in tomato sauce; it became a running joke, for they did not agree with me, but I couldn't resist them.)

In my early teens one of my friends found a WWII Wehrmacht issue petrol stove in his loft. It was bulky, heavy, and caused much excitement when he brought it along one weekend. It roared mightily, and promptly burned a neat finger sized hole through the bottom of his tin -- it amused us greatly, as we stirred our own tins on the fire, watching him trying to salvage what he could from his dinner.

But that was an exception. The only readily available stove on the market was a clone of the folding German Esbit. The flame was feeble, it was impossible to keep the hygroscopic fuel tablets dry, and the moisture in them made them explode and shoot burning bits all around. Every so often some younger lad would turn up with one, and we would happily munch on our warm food watching him fighting it, before giving up and learning to cook the 'normal' way. The only time these solid fuel stoves came into play was on our summer treks through the Tatras (and farther); there open fires were banned, and/or there was no natural fuel.

The week or so long treks required a different approach to food. Tins were out of the question because of the weight, and the silly stoves forced us to keep the boiling of water to a minimum. Our rations for the week came to a loaf of bread and a foot or so of salami for lunches (the culinary highlight of each day), oats and raisins for breakfast, and pasta (usually with sugar and raisins) for tea. The oats were pre-soaked overnight to reduce the cooking time, and the pasta was only just brought to the boil and left to sit till it was soft enough, meaning it was never very warm when we ate it. (We had some savoury option on the menu as well, but I can't recall what it was; I suspect my mind blocked it out for sanity's sake. It might have been pasta with sardines.)

When I came to Scotland in the mid '90s, I had a brief fling with ready-made camping food -- all in all three dates, I recall; we broke up quietly, we were not a good match for each other. I did not like the food and could not afford the prices. It made me realise I like my food too much to suffer for no good reason. These foil packets offered nothing that the tins of my childhood did not offer, except with less flavour and at a premium price. And so I reverted to type. For a number of years my basic camping food became a tin of M&S curry, cooked in the tin, and a packet of Uncle Ben's microwavable rice (a trick I learnt from a friend -- it needs no cooking, just a little hot water to warm it up).

Then one day, after a cancelled trip, Linda away, I made the mistake of heating up the tin of curry for my tea at home. It was terrible. I decided there and then that I deserved better, and so began my quest for good, home made, food on the go.

To be continued ... (with the stuff this post was meant to be about in the first place)

by tf at October 16, 2017 11:04 AM

October 13, 2017

Emmanuele Bassi

GLib tools rewrite

You can safely skip this article if you’re not building software using enumeration types and signal handlers; or if you’re already using Meson.

For more than 15 years, GLib has been shipping with two small utilities:

  • glib-mkenums, which scans a list of header files and generates GEnum and GFlags types out of them, for use in GObject properties and signals
  • glib-genmarshal, which reads a file containing a description of marshaller functions, and generates C code for you to use when declaring signals

If you update to GLib 2.54, released in September 2017, you may notice that the glib-mkenums and glib-genmarshal tools have become slightly more verbose and slightly more strict about their input.

During the 2.54 development cycle, both utilities have been rewritten in Python from a fairly ancient Perl, in the case of glib-mkenums; and from C, in the case of glib-genmarshal. This port was done to address the proliferation of build time dependencies on GLib; the cross-compilation hassle of having a small C utility being built and used during the build; and the move to Meson as the default (and hopefully only) build system for future versions of GLib. Plus, the port introduced colorised output, and we all know everything looks better with colors.

Sadly, none of the behaviours and expected input or output of both tools have ever been documented, specified, or tested in any way. Additionally, it turns out that lots of people either figured out how to exploit undefined behaviour, or simply cargo-culted the use of these tools into their own project. This is entirely on us, and I’m going to try and provide better documentation to both tools in the form of a decent man page, with examples of integration inside Autotools-based projects.

In the interest of keeping old projects building, both utilities will try to replicate the undefined behaviours as much as possible, but now you’ll get a warning instead of the silent treatment, and maybe you’ll get a chance at fixing your build.

If you are maintaining a project using those two utilities, these are the things to watch out for, and ideally to fix by strictly depending on GLib ≥ 2.54.

glib-genmarshal

  • if you’re using glib-genmarshal --header --body to avoid the “missing prototypes” compiler warning when compiling the generated marshallers source file, please switch to using --prototypes --body. This will ensure you’ll get only the prototypes in the source file, instead of a whole copy of the header.

  • Similarly, if you’re doing something like the stanza below in order to include the header inside the body:

    foo-marshal.h: foo-marshal.list Makefile
            $(AM_V_GEN) \
              $(GLIB_GENMARSHAL) --header foo-marshal.list \
            > foo-marshal.h
    foo-marshal.c: foo-marshal.h
            $(AM_V_GEN) ( \
              echo '#include "foo-marshal.h"' ; \
              $(GLIB_GENMARSHAL) --body foo-marshal.list \
            ) > foo-marshal.c
    

    you can use the newly added --include-header command line argument, instead.

  • The stanza above has also been used to inject #define and #undef pre-processor directives; these can be replaced with the -D and -U newly added command line arguments, which work just like the GCC ones.

  • This is not something that came from the Python port, as it’s been true since the inclusion of glib-genmarshal in GLib, 17 years ago: the NONE and BOOL tokens are deprecated, and should not be used; use VOID and BOOLEAN, respectively. The new version of glib-genmarshal will now properly warn about this, instead of just silently converting them, and never letting you know you should fix your marshal.list file.
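
For example, a marshal.list entry using the deprecated tokens, and the spelling the new tool expects (the signature itself is illustrative):

    # deprecated spelling, now reported with a warning
    NONE:BOOL

    # preferred spelling
    VOID:BOOLEAN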

If you want to silence all messages outside of errors, you can now use the --quiet command line option; conversely, use --verbose if you want to get more messages.
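
Putting these pieces together, a minimal Autotools rule using the new options might look like the sketch below; the foo-marshal names are purely illustrative, and --prototypes --body can be used in the second rule instead of --include-header if you prefer emitting prototypes over including the header:

    foo-marshal.h: foo-marshal.list Makefile
            $(AM_V_GEN) $(GLIB_GENMARSHAL) --quiet \
              --header foo-marshal.list > foo-marshal.h

    foo-marshal.c: foo-marshal.list foo-marshal.h Makefile
            $(AM_V_GEN) $(GLIB_GENMARSHAL) --quiet \
              --include-header foo-marshal.h \
              --body foo-marshal.list > foo-marshal.c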

glib-mkenums

The glib-mkenums port has been much more painful than the marshaller generator one; mostly because there are many, many more ways to screw up code generation when you have command line options and file templates, and because the original code base relied heavily on Perl behaviour and side effects. Cargo culting Autotools stanzas is also much more of a thing when it comes to enumerations than marshallers, apparently. Imagine what we could achieve if the tools that we use to build our code didn’t actively work against us.

  • First of all, try and avoid having mixed encoding inside source code files that are getting parsed; mixing Unicode and ISO-8859 encoding is not a great plan, and C does not have a way to specify the encoding to begin with. Yes, you may be doing that inside comments, so who cares? Well, a tool that parses comments might.

  • If you’re mixing template files with command line arguments for some poorly thought-out reason, like this:

    foo-enums.h: foo-enums.h.in Makefile
            $(AM_V_GEN) $(GLIB_MKENUMS) \
              --fhead '#ifndef FOO_ENUMS_H' \
              --fhead '#define FOO_ENUMS_H' \
              --template foo-enums.h.in \
              --ftail '#endif /* FOO_ENUMS_H */' \
            > foo-enums.h
    

    the old version of glib-mkenums would basically build templates depending on the phase of the moon, as well as on some internal details of how Perl works. The new tool has a specified order (and if you do not need command line stanzas at all, see the template-only sketch at the end of this section):

    • the HEAD stanzas specified on the command line are always prepended to the template file
    • the PROD stanzas specified on the command line are always appended to the template file
    • the TAIL stanzas specified on the command line are always appended to the template file

Like with glib-genmarshal, the glib-mkenums tool also tries to be more verbose in what it expects.
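
If you do not actually need to mix the two, the simplest fix is to move everything into the template file and drop the command line stanzas altogether. A minimal sketch of such a template, using the standard glib-mkenums section markers (the FOO names and file layout are illustrative):

    /*** BEGIN file-header ***/
    #ifndef FOO_ENUMS_H
    #define FOO_ENUMS_H

    #include <glib-object.h>

    G_BEGIN_DECLS
    /*** END file-header ***/

    /*** BEGIN file-production ***/
    /* enumerations from "@filename@" */
    /*** END file-production ***/

    /*** BEGIN value-header ***/
    GType @enum_name@_get_type (void) G_GNUC_CONST;
    /*** END value-header ***/

    /*** BEGIN file-tail ***/
    G_END_DECLS

    #endif /* FOO_ENUMS_H */
    /*** END file-tail ***/

With the guards and includes living in the template, the rule reduces to a single --template invocation, and the output no longer depends on how command line stanzas are combined with the template.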


Ideally, by this point, you should have switched to Meson, and you’re now using a sane build system that generates this stuff for you.

If you’re still stuck with Autotools, though, you may also want to consider dropping glib-genmarshal, and use the FFI-based generic marshaller in your signal definitions — which comes at a small performance cost, but if you’re putting signal emission inside a performance-critical path you should just be ashamed of yourself.
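
For reference, dropping the generated marshallers boils down to passing NULL where a marshaller is expected; in modern GLib that makes signal emission fall back to the FFI-based g_cclosure_marshal_generic. A sketch, assuming a hypothetical FooWidget class and a signals array:

    /* NULL marshaller: GLib falls back to g_cclosure_marshal_generic */
    signals[CHANGED] =
      g_signal_new ("changed",
                    G_TYPE_FROM_CLASS (klass),
                    G_SIGNAL_RUN_LAST,
                    0,           /* class offset */
                    NULL, NULL,  /* accumulator and its data */
                    NULL,        /* marshaller */
                    G_TYPE_NONE, 1,
                    G_TYPE_INT);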

For enumerations, you could use something like this macro, which I tend to employ in all my projects with just a few, small enumeration types, where involving a whole separate pass at parsing C files is kind of overkill. Ideally, GLib would ship its own version, so maybe it’ll land in a new release.
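
GLib does not ship such a macro at the time of writing, and the one linked above is not reproduced here; purely as an illustration of the idea, a hand-rolled helper could look like the sketch below (all the FOO names are hypothetical):

    #include <glib-object.h>

    /* Expands to one GEnumValue entry; only the nick is hand-written. */
    #define FOO_ENUM_VALUE(EnumValue, EnumNick) \
      { EnumValue, #EnumValue, EnumNick },

    /* Defines type_name_get_type(), registering the enumeration once. */
    #define FOO_DEFINE_ENUM_TYPE(TypeName, type_name, ...) \
    GType \
    type_name ## _get_type (void) \
    { \
      static gsize enum_type_id__static = 0; \
      if (g_once_init_enter (&enum_type_id__static)) \
        { \
          static const GEnumValue values[] = { \
            __VA_ARGS__ \
            { 0, NULL, NULL }, \
          }; \
          GType id = g_enum_register_static (g_intern_static_string (#TypeName), values); \
          g_once_init_leave (&enum_type_id__static, id); \
        } \
      return enum_type_id__static; \
    }

    /* Usage, for a FooShape enumeration declared in the same header:
     *
     *   FOO_DEFINE_ENUM_TYPE (FooShape, foo_shape,
     *     FOO_ENUM_VALUE (FOO_SHAPE_SQUARE, "square")
     *     FOO_ENUM_VALUE (FOO_SHAPE_CIRCLE, "circle"))
     */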


Many thanks to Jussi Pakkanen, Nirbheek Chauhan, Tim-Philipp Müller, and Christoph Reiter for the work on porting glib-mkenums, as well as fixing my awful Parseltongue.

by ebassi at October 13, 2017 03:21 PM

October 09, 2017

Tomas Frydrych

Of Camera Bags

There is no end of acquiring them, the search for the perfect camera bag seems endless. Here are some of mine, and some thoughts on them.

Ortlieb Protect

The now discontinued (looks like Ortlieb stopped making camera bags altogether), but still available, Protect is a compact, waterproof bag in the tradition of Ortlieb robustness, with a slider closure which is easy to operate in big gloves. The inside of the bag is made of a thick closed-cell foam that gives it rigidity, but, unusually for a camera bag, is not lined with fabric. It is officially IP54 rated (though I am fairly certain that when I first got mine it was sold as IP67; I believe there were issues with the slider seal in cold temperatures). Size wise it is just big enough for my old Lumix GF-2 with a 14-70mm kit lens.

The great thing about this bag is that it can be comfortably hung with a couple of carabiners on backpack shoulder straps, providing fast and easy on-the-go access. This makes it an excellent mountain biking and skiing solution for smaller cameras.

I got the Protect on a recommendation of a friend about a decade ago, and it has served me faithfully ever since. I love its simplicity and wish it was just a little bit bigger to accommodate my Lumix GX-8 camera, which brings me to the next bag ...

Ortlieb Compact-Shot

The Compact-Shot is yet another great, but discontinued, bag from Ortlieb. It is slightly bigger than the Protect, just enough for my Lumix GX-8 with a 12-40mm zoom, but unlike the Protect, the internal padding is lined with a soft cloth, as is normal for camera bags, and there is a small internal pocket. The zip closure is not as easy / fast to open as the Protect slider, and is quite awkward to close fully, but when closed the bag is IP67 rated.

The Compact-Shot has become my default bag of choice when I don't need to carry any extra lenses, and, chest mounted with a couple of carabiners, the bag I use for ski touring.

Thule Perspektive Compact Sling

The Perspektive CS is a roomy bum-bag. It is made from a water-repellent fabric, uses water-resistant zippers, and comes with a detachable stowaway rain cover. It is big enough to take my Lumix GX-8 together with 12-40mm and 40-150mm lenses (with either lens fitted), has a padded iPad Mini-sized pocket inside, as well as a phone pocket on the outside of the lid, and comes with plenty of adjustable dividers for the inner space.

The waist strap with side stabilisers makes the bag very stable, enough to jog with. The bag is compact enough to combine with a small, high sitting, backpack, up to something like the OMM Adventure Light 20, which makes a good combination for fastpacking trips. The only thing I'd change on the belt is to extend the padding fully under the D-rings, as this would make it more comfortable (I have done a couple of very long fastpacking days with this bag, and was beginning to curse the D-rings near the end).

The one issue I have run into with this bag is that the rain cover is too easy to detach, and the connecting strap will often self-detach when the cover is on -- this makes it easy to lose when taking it off in windy conditions. But overall, this is a well thought out and well made bag.

LowePro Flipside Trek BP 350 AW

The Flipside is my 'pottering in the woods' backpack, but also the camera bag I am most ambivalent about. On the upside, it is very comfortable to carry, the camera compartment is spacious enough when I want to bring the big lens and more, and the through-the-back access is handy.

But there are some, to me at least, fairly significant design flaws. The non-gear storage space is very limited, enough for a sandwich, a small water bottle, a light-weight jacket and perhaps an extra thin layer. The lack of internal space is aggravated by the mesh side pockets being both small (i.e., too small for the likes of a litre Nalgene bottle) and rather shallow (the bottom half of the pockets is made from a non-stretchy material to make it more durable, but there is not enough of it, so, e.g., a normal 0.5l drinks bottle cannot be inserted all the way to the bottom). It is possible to strap things, such as a tripod, on the outside of the bag, but then you have to forgo the built-in rain cover, which is rather snug fitting.

Had there been another 3+ or so litres of non-gear space in this bag, this would have been my ideal camera day-bag. As is, I have strapped an external 5 litre pouch on the back of it, but like I said, that makes the rain cover useless, which is sub-optimal in the normal Scottish weather.

Tenba BYOB 9

Tenba get around the basic problem with camera backpacks (they never really work well enough; see above) by providing a range of minimal padded camera inserts that you put into a bag of your choice. The model number is the depth of the bag in centimetres, and the BYOB 9 is just big enough for my Lumix GX-8 with 12-40mm lens + another lens of a similar size, and either another pancake prime, or a few extra bits and bobs, such as a remote control and a blower.

The great thing about the BYOB is how the sizes of the bags in the range were chosen -- for a given camera size you get an optimally low-profile bag that is easy to place at the top of a normal-sized backpack. The main downside is that the padding is inexplicably thin (about half of that on my other camera bags); I'd prefer more protection for my kit. Also, although the fabric is water-repellent, the zip is not, so I always feel it necessary to put this inside a dry bag.

Crumpler Light Delight 200

My default running camera is a Lumix GM-5 with a 14mm pancake prime lens, and it's proven rather difficult to find a good pouch for it that could be shoulder mounted. The closest I have come is the Light Delight 200. It's slightly wider than ideal for the GM-5, so I padded it with a strip of an old sleeping mat to stop it from moving about when I run. On the upside, the depth is just enough for the camera with a 20mm pancake fitted.

Overall this pouch is well made and well padded. The back has a Velcro strap for attaching it to Crumpler backpacks, but it can be attached quite well to OMM packs with a bit of string and some creative knotting.

The main downside is that the bag is not even remotely rain proof. Also, the top zip has two sliders which annoyingly rattle when running, so I promptly removed one of them. With that modification, I have happily run hundreds of miles with it.

by tf at October 09, 2017 01:39 PM

September 07, 2017

Tomas Frydrych

Thoughts on the Dumyat Path

If, like me, you thought we saw the last of the heavy machinery on Dumyat, you were wrong. In the last few days diggers have arrived again to (at the expense of SP Energy Networks) graciously bestow upon us a new path from the Sheriff Muir road car park to the very summit.

Updated 9/9/2017, 09:15; see the end. Formal complaints to be addressed to SPEN on customercare@spenergynetworks.com

In broad strokes, the situation as it emerged is this: when SPEN was granted permission for the Beauly power line, it came on the condition that they would do some 'good work' for the locals in return; in the Stirling case this happens to include work on the Dumyat path.

That the main path is in need of some attention, and has been for some time, is not something I would dispute. There is a significant amount of erosion taking place, which I have written about at length before (complete with 60+ images documenting the erosion patterns). But what is happening on Dumyat just now is not the answer. As I see it, there are two big problems here: the contractor's approach, and the lack of understanding how the hill is used.

The contractor is rather heavy-handed and appears ill prepared. There is an apparent lack of proper planning (let's just bring a big digger, that will do it), a lack of understanding of the geology of the hill (didn't expect it to be this 'rocky', doh), and a lack of any sympathy for the natural features of the landscape (levelling uneven sections of the exposed bedrock, really?!). SNH has guidelines on how upland paths should be constructed, and this is not it.

The extent, and progress, of the erosion on the hill varies along its length, depending on the gradient and what is found immediately below the surface. On the steep sections, in some cases the erosion exposes very loosely bound rock and/or gravel deposits, which then suffer from bad water run off. These are the places that most require some stabilisation and mitigation, but in fact this is mainly limited to two locations, both on the upper part of the hill (to be precise, around NS 8278 9772 and NS 8352 9763). These places would benefit from some drainage work, and perhaps relocation of the path, but it needs to be done sensitively and with care, not with a bulldozer.

In other instances of the steep ground, the erosion relatively quickly exposes bare, but solid bedrock. While it's not pretty when it is happening, it simply stops there once the rain cleans up the rock. Yes, if you have been going up this hill for many years, the path has changed dramatically at these places. But it is questionable whether any intervention will achieve anything meaningful here. For example, the contractors seem set on evening out the level exposed section of bedrock around NS 8157 9788 with loose soil. It is not clear to me what the objective of that is, and why such resurfacing is needed at all -- this part has remained stable for many years.

On the easier angled sections the path suffers limited water run off. The damage here falls into two main classes. There are some boggy areas in the vicinity of natural springs (notably NS 8270 9777 and NS 8319 9774). These would benefit from board walks being constructed; the alternative is re-routing the path, but on that see below.

Apart from these natural bogs, the damage on the easily angled parts of the path is almost entirely due to soil being moved by feet and wheels, the effect of water run off is minimal. As such the path tends to broaden (see the link above for more on this), but remains stable in terms of its depth. It is, again, arguable that these parts will benefit from the work being done in any way. The only answer here would be confining the traffic to a narrow corridor, and that brings me to the second problem with the work being undertaken.

It would appear that whoever approved the current solution has absolutely no understanding of how the hill is used. Many of the regular local users are quite happy with the hill in its rugged, semi-natural state, and they will more likely than not avoid the new path. This will include many of the local hill runners, for whom the current landscape offers excellent and easily accessible training ground. And it will include the mountain bikers who really happen to like the hill the way it is, and even the way in which it evolves (mountain biking nowadays is generally not a means to travel distances, but rather it is about the challenge 'under the wheels').

And here lies the main problem. If the objective of this exercise is to provide the good citizens of Stirling with easy, all-ability access to Dumyat, then the contractors are following the wrong line. There is much to be said for such a path, but it would need to follow the natural contours of the hill. Such a path would have the benefit of splitting the descending bike traffic from those on the path. But sanitising the current path along its length will simply result in much of the current traffic being shifted to its immediate sides, and the erosion will continue spreading.

When path work is done without understanding, or while ignoring, the mountain bike use case, it will fail to relieve the erosion pressures; there are examples of this emerging elsewhere (Ben Lawers, Cairngorms). On Dumyat the bike use is well established; people have been riding bikes here for as long as mountain bikes have existed, i.e., for over 20 years. It is wholly unjustifiable not to take them into account.

More so, like it or not, mountain bikes are part of Scotland's outdoor landscape, and they are going to stay, accounting for a significant chunk of Scotland's tourism revenue. Dumyat is a fairly insignificant knoll above Stirling, but it foreshadows issues that are emerging elsewhere in Scotland's bigger hills. Mountain biking is no longer the niche pursuit it once was, and we need to start seriously talking about how it fits into the outdoor pursuit family and into our hills.

Update 8 Sep 2017

So, SPEN has now released a PR statement about the work, which includes a picture of a short segment of a path upgrade from Ben Vorlich, and states:

SP Energy Networks is undertaking works to sensitively restore the existing Dumyat Hill and Cocksburn Reservoir Paths. This project will employ established upland path techniques to create a naturally formed route allowing areas of erosion to organically regenerate.

The works will form an entirely natural upland path developed in soil and stone ... This will help to prevent further severe erosion ... The aim is not to create a formal path but to replicate the existing path using the same materials in a form that will support ever increasing users and user groups visiting the area.

(Emphasis mine.)

I'll simply invite the reader to compare the PR speak and the imagery with what is, in fact, happening on Dumyat:

Sensitive use of heavy machinery:

Established upland path techniques:

Not a formal, but entirely natural, path:

Controlling severe erosion (things are looking just great after one afternoon of rain):

This needs to stop now. If, like me, you are concerned about what is happening on Dumyat, please send a formal letter of complaint with your concerns to SPEN on customercare@spenergynetworks.com.

Update 8 Sep 2017, 23:34

I have just returned from a brief visit to Dumyat, and, as hard as it is to believe, things have taken a further turn for the worse today, as the following images will illustrate.

The first image shows the start of the track. An attempt has been made to neaten it up by laying down bits of turf along its sides. However, it should be noted that there is no topsoil present here. The area here was exposed down to bedrock, which during the Denny - Beauly construction was levelled out using grey industrial hardcore. The orange path in this picture is barely a couple of inches of soil that appears to have been scraped from the hollow on the left of the image, and the bits of turf were removed from elsewhere and simply laid on the old hardcore:

The next image shows the old hardcore and how the turf has been laid onto it. Considering we are now outside the growing season, it is very unlikely that much of this will survive the wet winter months.

The hollow that seems to have been used to excavate the soil for this section of the path, covered in badly damaged turf:

The next image shows the start of the first rise. The original surface here was bedrock, part of which is still visible left of the path, and a thin layer of intermittent turf, forming a pleasant green slope. The turf has been stripped, and the bedrock covered with a thin layer of topsoil brought from elsewhere:

Alongside the entire length of the track being worked on, there is extensive damage to the turf, which in places has been intentionally stripped for no obvious reason. The area in the first picture is of particular concern, because the loosely bonded gravel underneath has been exposed and will be subject to rapid water erosion -- this is the primary erosion pattern on the hill.

Looking back down the initial rise:

And the damage to the side of the track:

This area originally contained a natural rock step. This has been incomprehensibly levelled out with a large amount of material excavated from the left of the track:

The next image shows the excavation area. As a result of the excavation, the side of the hill has been exposed to water run off and will deteriorate rapidly during the winter months:

Another natural rocky feature being levelled out; the material for this seems to have been simply dug up from the ground to either side of it, leaving deep ditches on both sides. The track at this point is somewhere in the region of 7-8m wide:

The next image shows the same area as the second image I posted earlier today; in the course of the day the contractor piled a large amount of topsoil into this area, obliterating the natural step at the end of this section:

Detail of the fill; this is well over a foot deep:

Just for reference, this is the old path that we are fixing here:

And the area the top soil for the fill was excavated from:

The contractor is McGowan Ltd, and they seem to have a track record.

In case you find these images disturbing, let me assure you that they in fact don't do justice to the ugly reality; you might want to see for yourself if you are local.

Update 9 Sep 2017, 9:15

It appears the local representative of Cycling UK was given access to the plans for the path, available here. What is clear is that the work undertaken is not in keeping with the agreed plans. Notably, the section covered by the following images was supposed to be 'hand-built only' using stone pitching; what a mess:

I am also concerned that the plan for the natural bog area around NS 8309 9761 is 'raised hardcore'.

Formal complaints to be addressed to SPEN on customercare@spenergynetworks.com.

by tf at September 07, 2017 10:18 AM

September 04, 2017

Tomas Frydrych

GPS Accuracy and the Automation Paradox

It's been a busy summer for the UK's MRTs. Not a week has gone by without someone getting lost in our hills, without yet another call to learn how to use a map and compass and not to rely on phone apps. This in turn elicits other comments that the problem is not in the use of digital tools per se, but in not being able to navigate. True as this is, the calls for learning traditional navigation should not be dismissed as Luddite, for not being able to navigate competently and the use of digital technologies are intrinsically linked.

GPS Accuracy

Before getting onto the bigger problem, the question of GPS accuracy is perhaps worth digressing into. Our perception of what the GPS in our phone can do for us is skewed by our urban experience. We use mapping applications daily to locate street addresses, and we have got used to how accurate these things are in that context.

However, many of us do not appreciate that, because GPS does not work well at all in cities, mobile phones use so-called Assisted GPS. With A-GPS the accurate location is derived from the known positions of mobile phone masts and the presence of domestic wifi IDs, which street mapping vehicles collect and store in massive databases. And, obviously, A-GPS only works in cities and with a working Internet connection (which is why your phone will complain when you use the GPS while in airplane mode).

So how accurate is GPS alone?

First, there is the accuracy of the GPS service per se. This is the simple part: the US Government undertakes to operate the service in such a manner that a user in the worst location relative to the current position of the satellites can achieve grid accuracy of ±17m and altitude accuracy of ±37m in 95% of cases. You can often get better results, but you need to allow for even bigger errors 5% of the time.

Then there is how well the device on the ground can access and process that service. The above numbers assume a clear view of the sky down to 5° above horizon, allowing for the acquisition of 6 different satellite signals. They assume no weather interference, and a good quality receiver that makes full use of all the available information.

In the hills the real conditions often are nowhere near optimal, and the tiny GPS devices in watches and phones, with their tiny aerials, are not of the requisite standard. The real errors will be, possibly significantly, bigger (e.g., I have seen an error of some 200m on an iPhone 5 on one occasion in the upper Glen Nevis).

So how good are these numbers from the perspective of mountain navigation?

The altitude resolution error is potentially around a major contour line difference and deteriorates rapidly as the number of visible satellites drops. As such GPS estimated altitude is not much use for accurate navigation. (Altitude is very useful for mountain navigation, and a much better resolution is achievable with a barometric altimeter, when used correctly.)

But if you compare the location accuracy to what a moderately competent navigator in moderately challenging circumstances will be able to estimate without the GPS, the GPS wins hands down; this is what it's designed for. Nevertheless, the GPS based location cannot be assumed to be pinpoint accurate. In complex terrain the errors can be navigationally significant, and are not good enough to keep me safe -- there are many locations in the mountains where if I overshoot my target by 30m I will die.

This is, of course, no different than following a compass bearing. Neither the compass nor the GPS are magic bullets that will keep me safe. But with GPS we seem to be conditioned to trust the technology more than it merits. Competent navigation comes down not to the tools, but to making sound judgements based on the information provided by the tools, whether it's map, compass, or GPS. And that brings me to the Automation Paradox.

The Paradox of Automation

The Automation paradox can be formulated in different ways, but it comes down to this:

Automation leads to degradation of operator skills, while, at the same time, the skills required to handle automation failures are frequently considerably higher than average.

In an industrial field, the introduction of automation largely replaces a workforce of skilled craftsmen/women with a low-skilled one. This is unavoidable; the craft skills come from the practice of the craft. The automation of the process does away with that practice, and in doing so removes the opportunities for practising the skills.

But the bigger problem with automation is this: when automated processes fail and require manual intervention, they tend to do so in atypical, complex corner cases which require a higher level of skill to handle, skill that the workforce does not have. In industrial fields this leads to the development of a small number of exceptionally skilled (and highly paid) experts who get called in when the automatic process fails.

Navigating by GPS is subject to the Automation Paradox; it takes away the grind of reading maps, taking bearings, pacing and timing distances. This is great while it goes well, for it leaves more time to enjoy the great outdoors, and so we do. But in doing so it deprives us of the opportunities to develop the rudimentary navigation skills.

But when it fails, there is every chance it will not be on a nice sunny day with cracking visibility. It will be when the weather is awful enough to interfere with the radio waves, or in a location where no satellites are visible. The competent navigator will simply turn the unit off and carry on, for she has other tools in the bag and knows how to use them. The rest will have to call in the experts to get them off the hill (assuming their phone has a signal).

And this is the problem with the GPS. It's not that it's not a useful tool; it is, in the hands of a competent navigator. But that competency is developed through deliberate and ongoing practice of the basics. What it does not come from is following a GPS track downloaded from somewhere on the Internet, and, let's not delude ourselves, that's how the GPS generally gets used.

P.S. If you live in the Central Belt and don't know where to start, I offer Basic Mountain Navigation and Night-Time Navigation courses.

by tf at September 04, 2017 11:05 AM

August 11, 2017

Emmanuele Bassi

GUADEC 2017

Another year, another GUADEC — my 13th to date. Definitely not getting younger, here. 😉

As usual, it was great to see so many faces, old and new. Lots of faces, as well; attendance has been really good, this year.

The 20th anniversary party was a blast; the venue was brilliant, and watching people going around the tables in order to fill in slots for the raffle tickets was hilarious. I loved every minute of it — even if the ‘90s music was an assault on my teenage years. See above, re: getting older.

The talks were, as usual, stellar. It’s always so hard to choose from the embarrassment of riches that is the submission pool, but every year I think the quality of what ends up on the schedule is so high that I cannot be sad.

Lots and lots of people were happy to see the Endless contingent at the conference; the talks from my colleagues were really well received, and I’m sure we’re going to see even more collaboration spring from the seeds planted this year.


My talk about continuous integration in GNOME was well-received, I think; I had to speed up a bit at the end because I lost time while connecting to the projector (not enough juice when on battery to power the HDMI-over-USB C connector; lesson learned for the next talk). I would have liked to get some more time to explain what I’d like to achieve with Continuous.

Do not disturb the build sheriff

I ended up talking with many people at the unconference days, in any case. If you’re interested in helping out with the automated build of GNOME components and improving the reliability of the project, feel free to drop by on irc.gnome.org (or on Matrix!) in the #testable channel.


The unconference days were also very productive, for me. The GTK+ session was, as usual, a great way to plan ahead for the future; last year we defined the new release cycle for GTK+ and jump-started the 4.0 development cycle. This year we drafted a roadmap with the remaining tasks.

I talked about Flatpak, FlatHub, Builder, performance in Mutter and GNOME Shell; I wanted to attend the Rust and GJS sessions, but that would have required the ability to clone myself, or be in more than one place at once.

During the unconference, I was also able to finally finish the GDK-Pixbuf port of the build system to Meson. Testing is very much welcome, before we bin the Autotools build and bring one of the oldest modules in GNOME into the future.

Additionally, I was invited to the GNOME Release Team, mostly to deal with the various continuous integration build issues. This, sadly, does not mean that I’m one step closer to my ascendance as the power mad dictator of all of GNOME, but it means that if there are issues with your module, you have a more-or-less official point of contact.


I can’t wait for GUADEC 2018! See you all in Almería!

by ebassi at August 11, 2017 01:33 PM