Planet Closed Fist

June 25, 2020

Tomas Frydrych

Care for the Elderly

How many phone calls does it take to organise a COVID-19 test for a ninety+ year old who needs an emergency admission to a care home?

Linda’s been at it all day today ... and she is at 14 and counting. At this point she has a vague promise of a test, sometime. Today? Tomorrow? Next week? Nobody knows.

[The good news: while I was typing this a tester turned up and took the swabs; achievement unlocked.]

But here is the thing: sorting out this sort of stuff is what Linda does for a living; she knows what to say to whom, when to push, and, ultimately, when not to take no for an answer.

Following her progress over the course of the last eight hours it became clear to me that had it been me trying to get this organised I’d have fallen by the wayside well before lunch. And I don’t think the way things went today is atypical. A colleague of Linda’s went through a similar situation recently ... 13 calls were needed — I can’t see our parents ever managing.

The free social care for the elderly is great on the glossy paper of political manifestos, but the reality of it falls short of expectations. The councils don’t have enough money to implement it properly, and so not only is what is on offer rather meagre, but the system actively discourages uptake: the automated phone menus are endless, the forms are incomprehensible, the processes are protracted ad absurdum.

Call me cynical, but I don’t believe that this is just a bureaucratic inefficiency. We have been through this enough times in the last year and a half to start picking up the patterns, and the message is clear: you are a nuisance, be nice and go away. And I bet many do.

by tf at June 25, 2020 06:56 PM

June 10, 2020

Tomas Frydrych

'If You Elect a Clown ...

... expect a circus.'

Chessington World of Adventures by Beans on Toast, as good a summary of the UK situation as you will find (and some great photos by Pitlad).

by tf at June 10, 2020 08:38 AM

June 07, 2020

Tomas Frydrych

The Five Miles

There has been much hoohaa in recent days over people breaking the recommendation not to travel further than five miles for their recreation. But while the images of weekend scenes at Balmaha and Arrochar are quite ridiculous, the politicians need to get off their high horses and lose that self-righteous indignation, for the real problem is not with us people.

It has been clear from the beginning that the various do and don’t rules are being created by people living in some sort of an alternative reality cocoon. Take the, now thankfully abolished, 60min a day exercise rule—no wonder the nation is experiencing an obesity epidemic if our politicians reckon that’s all we need.

And let there be no doubt, these are political constraints. At no point have we been presented with any evidence to back the restrictions on our outdoor activities other than the need to keep 2m distance: no explanation why 60min and not 90min, why once a day and not twice a day; the very fact that the restriction didn’t apply to walking a dog is a demonstration of its arbitrariness.

The latest five mile rule is no different, at best a number conjured up by a bureaucratic pen, at worst made up on the fly by Nicola right there behind the pulpit. For many, if not most of us, a five mile radius doesn’t encompass our ‘local area’: we can’t do our shopping within five miles, meet our closest friends, never mind reach a proper outdoor space.

Now, I understand why the Scottish Government are reluctant to ease the lockdown: they don’t have the epidemic under control. But by now we all know that’s mainly the case because the lockdown was introduced way too late and because contact tracing was abandoned when it mattered most. And on top of it, for the better part of three months the Scottish Government, as much as the English one, didn’t make any preparations for eventually getting us out of lockdown.

So forgive me if I, for one, am sick and tired of being patronised, of being told that ‘we understand people want to head in to the places they love, but they will still be there when it’s over’. This shows so little grasp of what outdoor recreation is about that I don’t know whether to scream or just cry quietly. For a great many of us, we don’t head to the outdoors because we love it in some abstract platonic sense. We do so because we need to, because it’s the only way we know to maintain our sanity; and that’s why in turn we love these places. And by now, we are crawling up the walls.

If we are having a problem with controlling the spread of the virus, it’s not because we, the people, are being irresponsible, it’s because of the half baked guidelines coming to us from up on high. It is inevitable that when the official rules don’t make enough sense for real people living real lives, folk resort to making their own individual choices. And when we do, it becomes each one for themselves, and that’s the last thing we need.

by tf at June 07, 2020 06:09 AM

June 02, 2020

Emmanuele Bassi

Type instances

Let us assume we are writing a library.

The particular nature of our work is up for any amount of debate, but the basic fact of it comes with a few requirements, and they are by and large inevitable if you wish to be a well-behaved, well-integrated member of the GNOME community. One of which is: “please, think of the language bindings”. These days, luckily for all of us, this means writing introspectable interfaces that adhere to fairly sensible best practices and conventions.

One of the basic conventions has to do with types. By and large, types exposed by libraries fall into these two categories:

  • plain old data structures, which are represented by what’s called a “boxed” type; these are simple types with a copy and a free function, mostly meant for marshalling things around so that language bindings can implement properties, signal handlers, and abide by ownership transfer rules. Boxed types cannot have sub-types.
  • object types, used for everything else: properties, emitting signals, inheritance, interface implementation, the whole shebang.

Boxed and object types cover most of the functionality in a modern, GObject-based API, and people can consume the very same API from languages that are not C.
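
For reference, registering a plain boxed type is just a matter of handing the type system a copy and a free function; a minimal sketch, where the Point type and its fields are made up for illustration:

#include <glib-object.h>

// A hypothetical plain old data structure
typedef struct {
  double x;
  double y;
} Point;

static Point *
point_copy (Point *point)
{
  return g_memdup (point, sizeof (Point));
}

static void
point_free (Point *point)
{
  g_free (point);
}

// Registers point_get_type(), so Point can be used for properties,
// signal marshalling, and GValue boxing
G_DEFINE_BOXED_TYPE (Point, point, point_copy, point_free)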

Except that there’s a third, kind of niche data type:

  • fully opaque, with instance fields only known within the scope of the project itself
  • immutable, or at least with low-mutability, after construction
  • reference counted, with optional cloning and serialization
  • derivable within the scope of the project, typically with a base abstract class
  • without signals or properties

Boxing

One strategy used to implement this niche type has been to use a boxed type, and then invent some private, ad hoc derivation technique, with some structure containing function pointers used as a vtable, for instance:

boxed-type.c
/* {{{ Base */
typedef struct _Base            Base;
typedef struct _BaseClass       BaseClass;

struct _BaseClass
{
  const char *type_name;
  gsize instance_size;

  void  (* finalize)    (Base *self);
  void  (* foo)         (Base *self);
  void  (* bar)         (Base *self);
};

struct _Base
{
  const BaseClass *base_class;

  // Shared field
  int some_field;
};

// Allocate the instance described in the vtable
static Base *
base_alloc (const BaseClass *vtable)
{
  // Simple check to ensure that the derived instance includes the
  // parent base type
  g_assert (vtable->instance_size >= sizeof (Base));

  // Use the atomic refcounted allocator to allocate (and zero)
  // the requested instance size
  Base *res = g_atomic_rc_box_alloc0 (vtable->instance_size);

  // Store the vtable
  res->base_class = vtable;

  // Initialize the base instance fields
  res->some_field = 42;

  return res;
}

static void
base_finalize (Base *self)
{
  // Allow derived types to clear up their own instance data
  self->base_class->finalize (self);
}

Base *
base_ref (Base *self)
{
  return g_atomic_rc_box_acquire (self);
}

void
base_unref (Base *self)
{
  g_atomic_rc_box_release_full (self, (GDestroyNotify) base_finalize);
}

void
base_foo (Base *self)
{
  self->base_class->foo (self);
}

void
base_bar (Base *self)
{
  self->base_class->bar (self);
}

// Add a GType for the base type
G_DEFINE_BOXED_TYPE (Base, base, base_ref, base_unref)
/* }}} */

The code above lets us create derived types that conform to the base type API contract, while providing additional functionality; for instance:

boxed-type.c
/* {{{ DerivedA */
typedef struct {
  Base parent;

  char *some_other_field;
} DerivedA;

static void
derived_a_finalize (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  g_free (self->some_other_field);
}

static const BaseClass derived_a_class = {
  .type_name = "DerivedA",
  .instance_size = sizeof (DerivedA),
  .finalize = derived_a_finalize,
  .foo = derived_a_foo, // defined elsewhere
  .bar = derived_a_bar, // defined elsewhere
};

Base *
derived_a_new (const char *some_other_field)
{
  Base *res = base_alloc (&derived_a_class);

  DerivedA *self = (DerivedA *) res;

  self->some_other_field = g_strdup (some_other_field);

  return res;
}

const char *
derived_a_get_some_other_field (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  return self->some_other_field;
}
/* }}} */

Since the Base type is also a boxed type, it can be used for signal marshallers and GObject properties at zero cost.
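
For example, putting an instance into a GValue works out of the box; a small sketch, assuming the boxed-type.c listing above (base_get_type(), base_ref(), base_unref()), with store_base_in_value() being a made-up helper:

// Stores a Base instance in a GValue; g_value_set_boxed() "copies" the
// instance via base_ref(), and g_value_unset() releases it via base_unref()
static void
store_base_in_value (Base *base)
{
  GValue value = G_VALUE_INIT;

  g_value_init (&value, base_get_type ());
  g_value_set_boxed (&value, base);

  // ... hand the GValue to a property setter, a signal emission, etc. ...

  g_value_unset (&value);
}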

This whole thing seems pretty efficient, and fairly simple to wrap your head around, but things fall apart pretty quickly as soon as you make this API public and tell people to use it from languages that are not C.

As I said above, boxed types cannot have sub-types; the type system has no idea that DerivedA implements the Base API contract. Additionally, since the whole introspection system is based on conventions applied on top of some C API, there is no way for language bindings to know that the derived_a_get_some_other_field() function is really a DerivedA method, meant to operate on DerivedA instances. Instead, you’ll only be able to access the method as a static function, like:

obj = Namespace.derived_a_new()
Namespace.derived_a_get_some_other_field(obj)

instead of the idiomatic, and natural:

obj = Namespace.DerivedA.new()
obj.get_some_other_field()

In short: please, don’t use boxed types for this, unless you’re planning to hide this functionality from the public API.

Typed instances

At this point the recommendation would be to switch to GObject for your type; make the type derivable in your project’s scope, avoid properties and signals, and you get fairly idiomatic code, and a bunch of other features, like weak references, toggle references, and keyed instance data. You can use your types for properties and signals, and you’re pretty much done.
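
As a point of comparison, the GObject route is mostly declarative; a minimal sketch, with a made-up MyBase type whose class structure lives in a project-private header so that it is only derivable in the project’s scope:

/* my-base.h — private to the project */
#include <glib-object.h>

#define MY_TYPE_BASE (my_base_get_type ())
G_DECLARE_DERIVABLE_TYPE (MyBase, my_base, MY, BASE, GObject)

struct _MyBaseClass
{
  GObjectClass parent_class;

  void (* foo) (MyBase *self);
  void (* bar) (MyBase *self);
};

/* my-base.c */
typedef struct {
  int some_field;
} MyBasePrivate;

G_DEFINE_TYPE_WITH_PRIVATE (MyBase, my_base, G_TYPE_OBJECT)

static void
my_base_class_init (MyBaseClass *klass)
{
  // No properties, no signals: just the virtual functions declared above
}

static void
my_base_init (MyBase *self)
{
  MyBasePrivate *priv = my_base_get_instance_private (self);

  priv->some_field = 42;
}

void
my_base_foo (MyBase *self)
{
  g_return_if_fail (MY_IS_BASE (self));

  MY_BASE_GET_CLASS (self)->foo (self);
}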

But what if you don’t want to use GObject…

Well, in that case GLib lets you create your own type hierarchy, with its own rules, by using GTypeInstance as the base type.

GTypeInstance is the common ancestor for everything that is meant to be derivable; it’s the base type for GObject as well. Implementing a GTypeInstance-derived hierarchy doesn’t take much effort: it’s mostly low level glue code:

instance-type.c
typedef struct _Base            Base;
typedef struct _BaseClass       BaseClass;
typedef struct _BaseTypeInfo    BaseTypeInfo;

#define BASE_TYPE               (base_get_type())
#define BASE_GET_CLASS(obj)     (G_TYPE_INSTANCE_GET_CLASS ((obj), BASE_TYPE, BaseClass))

// Simple macro that lets you chain up to the parent type's implementation
// of a virtual function, e.g.:
//
//   BASE_SUPER (self)->finalize (obj);
#define BASE_SUPER(obj)         ((BaseClass *) g_type_class_peek (g_type_parent (G_TYPE_FROM_INSTANCE (obj))))

struct _BaseClass
{
  GTypeClass parent_class;

  void  (* finalize)    (Base *self);
  void  (* foo)         (Base *self);
  void  (* bar)         (Base *self);
};

struct _Base
{
  GTypeInstance parent_instance;

  gatomicrefcount ref_count;

  // Shared field
  int some_field;
};

// A structure to be filled out by derived types when registering
// themselves into the type system; it copies the vtable into the
// class structure, and defines the size of the instance
struct _BaseTypeInfo
{
  gsize instance_size;

  void  (* finalize)    (Base *self);
  void  (* foo)         (Base *self);
  void  (* bar)         (Base *self);
};

// GValue table, so that you can initialize, compare, and clear
// your type inside a GValue, as well as collect/copy it when
// dealing with variadic arguments
static void
value_base_init (GValue *value)
{
  value->data[0].v_pointer = NULL;
}

static void
value_base_free_value (GValue *value)
{
  if (value->data[0].v_pointer != NULL)
    base_unref (value->data[0].v_pointer);
}

static void
value_base_copy_value (const GValue *src,
                       GValue       *dst)
{
  if (src->data[0].v_pointer != NULL)
    dst->data[0].v_pointer = base_ref (src->data[0].v_pointer);
  else
    dst->data[0].v_pointer = NULL;
}

static gpointer
value_base_peek_pointer (const GValue *value)
{
  return value->data[0].v_pointer;
}

static char *
value_base_collect_value (GValue      *value,
                          guint        n_collect_values,
                          GTypeCValue *collect_values,
                          guint        collect_flags)
{
  Base *base = collect_values[0].v_pointer;

  if (base == NULL)
    {
      value->data[0].v_pointer = NULL;
      return NULL;
    }

  if (base->parent_instance.g_class == NULL)
    return g_strconcat ("invalid unclassed Base pointer for "
                        "value type '",
                        G_VALUE_TYPE_NAME (value),
                        "'",
                        NULL);

  value->data[0].v_pointer = base_ref (base);

  return NULL;
}

static gchar *
value_base_lcopy_value (const GValue *value,
                        guint         n_collect_values,
                        GTypeCValue  *collect_values,
                        guint         collect_flags)
{
  Base **base_p = collect_values[0].v_pointer;

  if (base_p == NULL)
    return g_strconcat ("value location for '",
                        G_VALUE_TYPE_NAME (value),
                        "' passed as NULL",
                        NULL);

  if (value->data[0].v_pointer == NULL)
    *base_p = NULL;
  else if (collect_flags & G_VALUE_NOCOPY_CONTENTS)
    *base_p = value->data[0].v_pointer;
  else
    *base_p = base_ref (value->data[0].v_pointer);

  return NULL;
}

// Register the Base type
GType
base_get_type (void)
{
  static volatile gsize base_type__volatile;

  if (g_once_init_enter (&base_type__volatile))
    {
      // This is a derivable type; we also want to allow
      // its derived types to be derivable
      static const GTypeFundamentalInfo finfo = {
        (G_TYPE_FLAG_CLASSED |
         G_TYPE_FLAG_INSTANTIATABLE |
         G_TYPE_FLAG_DERIVABLE |
         G_TYPE_FLAG_DEEP_DERIVABLE),
      };

      // The gunk for dealing with GValue
      static const GTypeValueTable value_table = {
        value_base_init,
        value_base_free_value,
        value_base_copy_value,
        value_base_peek_pointer,
        "p",
        value_base_collect_value,
        "p",
        value_base_lcopy_value,
      };

      // Base type information
      const GTypeInfo base_info = {
        // Class
        sizeof (BaseClass),
        (GBaseInitFunc) NULL,
        (GBaseFinalizeFunc) NULL,
        (GClassInitFunc) base_class_init,
        (GClassFinalizeFunc) NULL,
        NULL,

        // Instance
        sizeof (Base),
        0,
        (GInstanceInitFunc) base_init,

        // GValue
        &value_table,
      };

      // Register the Base type as a new, abstract fundamental type
      GType base_type =
        g_type_register_fundamental (g_type_fundamental_next (),
                                     g_intern_static_string ("Base"),
                                     &base_info, &finfo,
                                     G_TYPE_FLAG_ABSTRACT);

      g_once_init_leave (&base_type__volatile, base_type);
    }

  return base_type__volatile;
}

Yes, this is a lot of code.

The base code stays pretty much the same:

instance-type.c
static void
base_real_finalize (Base *self)
{
  g_type_free_instance ((GTypeInstance *) self);
}

static void
base_class_init (BaseClass *klass)
{
  klass->finalize = base_real_finalize;
}

static void
base_init (Base *self)
{
  // Initialize the base instance fields
  g_atomic_ref_count_init (&self->ref_count);
  self->some_field = 42;
}

static Base *
base_alloc (GType type)
{
  g_assert (g_type_is_a (type, base_get_type ()));

  // Instantiate a new type derived by Base
  return (Base *) g_type_create_instance (type);
}

Base *
base_ref (Base *self)
{
  g_atomic_ref_count_inc (&self->ref_count);

  return self;
}

void
base_unref (Base *self)
{
  if (g_atomic_ref_count_dec (&self->ref_count))
    BASE_GET_CLASS (self)->finalize (self);
}

void
base_foo (Base *self)
{
  BASE_GET_CLASS (self)->foo (self);
}

void
base_bar (Base *self)
{
  BASE_GET_CLASS (self)->bar (self);
}

except:

  • the reference counting is explicit, as we must use g_type_create_instance() and g_type_free_instance() to allocate and free the memory associated with the instance
  • you need to get the class structure from the instance using the GType macros instead of direct pointer access

Finally, you will need to add code to let you register derived types; since we want to tightly control the derivation, we use an ad hoc structure for the virtual functions, and we use a generic class initialization function:

instance-type.c
static void
base_generic_class_init (gpointer g_class,
                         gpointer class_data)
{
  BaseTypeInfo *info = class_data;
  BaseClass *klass = g_class;

  klass->finalize = info->finalize;
  klass->foo = info->foo;
  klass->bar = info->bar;

  // The info structure was copied, so we now need
  // to release the resources associated with it
  g_free (class_data);
}

// Register a derived type of Base
static GType
base_type_register_static (const char         *type_name,
                           const BaseTypeInfo *type_info)
{
  // Simple check to ensure that the derived instance includes the
  // parent base type
  g_assert (type_info->instance_size >= sizeof (Base));

  GTypeInfo info;

  // All derived types have the same class and cannot add new virtual
  // functions
  info.class_size = sizeof (BaseClass);
  info.base_init = NULL;
  info.base_finalize = NULL;

  // Fill out the class vtable from the BaseTypeInfo structure
  info.class_init = base_generic_class_init;
  info.class_finalize = NULL;
  info.class_data = g_memdup (type_info, sizeof (BaseTypeInfo));

  // Instance information
  info.instance_size = type_info->instance_size;
  info.n_preallocs = 0;
  info.instance_init = NULL;
  info.value_table = NULL;

  return g_type_register_static (BASE_TYPE, type_name, &info, 0);
}

Otherwise, you could re-use the G_DEFINE_TYPE macro—yes, it does not require GObject—but then you’d have to implement your own class initialization and instance initialization functions.
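
A rough sketch of that G_DEFINE_TYPE route for a derived type, reusing the derived_a_* functions from the listings and assuming all derived types share the BaseClass structure (this is not part of the original example):

// All derived types share the Base class structure in this design
typedef BaseClass DerivedAClass;

typedef struct {
  Base parent;

  char *some_other_field;
} DerivedA;

G_DEFINE_TYPE (DerivedA, derived_a, BASE_TYPE)

static void
derived_a_class_init (DerivedAClass *klass)
{
  klass->finalize = derived_a_finalize; // as in the listing below
  klass->foo = derived_a_foo;           // defined elsewhere
  klass->bar = derived_a_bar;           // defined elsewhere
}

static void
derived_a_init (DerivedA *self)
{
}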

After you have defined the base type, you can structure your types in the same way as the boxed type code:

instance-type.c
typedef struct {
  Base parent;

  char *some_other_field;
} DerivedA;

static void
derived_a_finalize (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  g_free (self->some_other_field);

  // We need to chain up to the parent's finalize() or we're
  // going to leak the instance
  BASE_SUPER (self)->finalize (base);
}

static const BaseTypeInfo derived_a_info = {
  .instance_size = sizeof (DerivedA),
  .finalize = derived_a_finalize,
  .foo = derived_a_foo, // defined elsewhere
  .bar = derived_a_bar, // defined elsewhere
};

GType
derived_a_get_type (void)
{
  static volatile gsize derived_type__volatile;

  if (g_once_init_enter (&derived_type__volatile))
    {
      // Register the type
      GType derived_type =
        base_type_register_static (g_intern_static_string ("DerivedA"),
                                   &derived_a_info);

      g_once_init_leave (&derived_type__volatile, derived_type);
    }

  return derived_type__volatile;
}

Base *
derived_a_new (const char *some_other_field)
{
  Base *res = base_alloc (derived_a_get_type ());

  DerivedA *self = (DerivedA *) res;

  self->some_other_field = g_strdup (some_other_field);

  return res;
}

The nice bit is that you can tell the introspection scanner how to deal with each derived type through annotations, and keep the API simple to use in C while idiomatic to use in other languages:

instance-type.c
/**
 * derived_a_get_some_other_field:
 * @base: (type DerivedA): a derived #Base instance
 *
 * Retrieves the `some_other_field` of a derived #Base instance.
 *
 * Returns: (transfer none): the contents of the field
 */
const char *
derived_a_get_some_other_field (Base *base)
{
  DerivedA *self = (DerivedA *) base;

  return self->some_other_field;
}

Cost-benefit

Of course, there are costs to this approach. In no particular order:

  • The type system boilerplate is a lot; the code size more than doubled from the boxed type approach. This is quite annoying, but at least it is a one-off cost, and you won’t likely ever need to change it. It would be nice to have it hidden by some magic macro incantation, but it’s understandably hard to do so without imposing restrictions on the kind of types you can create; since you’re trying to escape the restrictions of GObject, it would not make sense to impose a different set of restrictions.

  • If you want to be able to use this new type with properties and you cannot use G_TYPE_POINTER as a generic, hands-off container, you will need to derive GParamSpec, and add ad hoc API for GValue, which is even more annoying boilerplate. I’ve skipped it in the example, because that would add about 100 more lines of code (the GValue side of it is sketched after this list).

  • Generated signal marshallers, and the generic one using libffi, do not know how to marshal typed instances; you will need custom written marshallers, or you’re going to use G_TYPE_POINTER everywhere and assume the risk of untyped boxing. The same applies to anything that uses the type system to perform things like serialization and deserialization, or GValue boxing and unboxing. You decided to build your own theme park on the Moon, and the type system has no idea how to represent it, or access its functionality.

  • Language bindings need to be able to deal with GTypeInstance and fundamental types; this is not always immediately necessary, so some maintainers do not add the code to handle this aspect of the type system.
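
The GValue half of that ad hoc API can stay fairly small; a minimal sketch, assuming the Base fundamental type and value table from the listings above, with the base_value_set()/base_value_get() names made up:

// Replaces the Base instance held by an initialized GValue of BASE_TYPE
void
base_value_set (GValue *value,
                Base   *base)
{
  g_return_if_fail (G_VALUE_HOLDS (value, BASE_TYPE));

  Base *old = value->data[0].v_pointer;

  value->data[0].v_pointer = base != NULL ? base_ref (base) : NULL;

  if (old != NULL)
    base_unref (old);
}

// Retrieves the Base instance held by the GValue, without taking a reference
Base *
base_value_get (const GValue *value)
{
  g_return_val_if_fail (G_VALUE_HOLDS (value, BASE_TYPE), NULL);

  return value->data[0].v_pointer;
}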

The benefit is, of course, the fact that you are using a separate type hierarchy, and you get to make your own rules on things like memory management, lifetimes, and ownership. You can control the inheritance chain, and the rules on the overridable virtual functions. Since you control the whole type, you can add things like serialization and deserialization, or instance cloning, right at the top of the hierarchy. You could even implement properties without using GParamSpec.

Conclusion

Please, please use GObject. Writing type system code is already boring and error prone, which is why we added a ton of macros to avoid people shooting themselves in both their feet, and we hammered away all the special snowflake API flourishes that made parsing C API to generate introspection data impossible.

I can only recommend you go down the GTypeInstance route if you’ve done your due diligence on what that entails, and are aware that it is a last resort if GObject simply does not work within your project’s constraints.

by ebassi at June 02, 2020 03:06 PM

May 25, 2020

Tomas Frydrych

England’s Favourite Ned*

... did nothing wrong. Of course not. They never do, do they? No matter the vandalism, the destruction in their wake, the trail of buckie shards left behind, someone else is to blame.

Personally, I am not sure whether I am more concerned that our collective fate lies in the hands of someone who at the first whiff of trouble runs to mummy and daddy, or someone who thinks driving 30 miles is an appropriate way to ascertain he is fit to drive.

But more importantly, if we learnt anything at all from this episode, it’s that the Government’s Herd Immunity strategy really was nothing other than a cavalier disregard for the lives of other (ordinary) people. For politicians who wish to pursue a Coventry strategy in the name of the greater good cannot shrink away from paying a personal price for that greater good. And one thing is sure, this lot doesn’t have the moral backbone, if they have any backbone at all.

Of course, England got the Prime Minister it wanted, and Dom was always part of that package. As for the rest of us ... dog shite someone else’s shoes have trodden into the house.

[*] I think down south they call them chavs.

by tf at May 25, 2020 06:37 PM

May 11, 2020

Tomas Frydrych

Stay Alert!

Take a comfortable lockdown seat, stick some popcorn into the microwave and watch: Remember the overweight BoJo stuck on a zip wire waving the Union Jack? You’ve seen nothing yet, England and her Tories are about to find out the true cost of putting an inept clown in charge.

Of course, this is not a third rate comedy, but a national tragedy in which we happen to be caught up and in which some 30,000-50,000 of us (depending on whose counting you believe) have already died. And, predictably, that human cost is being disproportionately distributed across the socioeconomic spectrum, which suits the Tories to the bone. I guess not much of a surprise there, Compassionate Conservatism is as much of an oxymoron as Socialism with a Human Face was.

I am glad to see that the Scottish Government have finally shed that pathological fear of being accused of politicising the pandemic and stood up for Scotland. There is little doubt it’s far too early to lift the lockdown.

We hear a lot about the R number: a great theoretical indicator, which unfortunately can’t be accurately calculated. But there is another number, one which is not an estimate, that’s worth paying attention to: the number of newly diagnosed cases. As of yesterday this stood at 181 in Scotland; I expect for practical reasons this will need to drop into low double digits in order to bootstrap the test and trace strategy (which is why suddenly dropping tracing in early March was grossly negligent).

So, yeah, I can’t but ask again, are the tracers getting recruited and being trained, so Scotland can hit the ground running when the conditions are right?

by tf at May 11, 2020 07:18 AM

May 02, 2020

Tomas Frydrych

Netflix and Chill

At the top of this week’s Netflix ‘for you’ list came the star studded 1995 classic Outbreak. I cringed a bit, but watched it in the end, being the lockdown and all that, and I am glad I did, for I finally understand where the UK Government got its COVID-19 strategy from! This Hollywood disaster cliche has it all: the politicians’ cavalier attitude to loss of life, the war imagery, the pseudoscience, the vaccine produced in days, the triumphal ending.

Also this week, in order to meet Hancock’s arbitrary, and entirely meaningless, 100,000 test target, we saw the UK Government clumsily fiddling the figures to get to that magic number. While the UK might not be particularly high up in the world’s school leavers’ literacy and maths skills rankings, not even a furloughed UK six year old would be fooled by this particular sleight of hand—yet another confirmation that our dear leaders don’t care about anything but loss of face, the Government’s daily briefings a continuous stream of inane slogans, spin and outright lies.

There is one phrase in particular that sums up the moral bankruptcy of the UK Government: Protect the NHS — it is not the job of the public to be protecting the NHS! It’s for the NHS to protect us, that’s its job description and its vocation.

The fact that the health service is unable to cope is solely the fault of the Tory governments of the past decade, making the daily routine of hollow thanks from the governmental pulpit a hypocrisy of the highest order. (And the NHS is not coping; a slow suffocation is a terrible way to die. In a supposedly civilised society nobody should be allowed to go through it without appropriate hospital care—the next time you hear how many people are dying of this virus outside of a hospital, ask yourself why are they not in one if things are so splendid?)

Luckily the situation is a wee bit better in Scotland than south of the border. There are two reasons for this: (a) we are somewhat behind the English curve here, so the start of the lockdown came relatively earlier up here, and (b) the Scottish Government is spending a bit more on the NHS than they do down south. But, of course, there are limits to how much we can spend on public services up here, for we are chained up to the spending policies of our English overlords by the Barnett formula, our budget hostage to the Tory worldview. This cannot go on; this crisis is making that, yet again, clear.

And still no practical steps taken toward ending the lockdown ... we can’t go on like this indefinitely, not even for months. When it is obvious our Governments have no plan, it is unavoidable that we start exercising our individual judgement on what we have to do to get ourselves and our families safely through this, and that’s not ideal. So rather than lamenting the rise in traffic, it’s time for our politicians to pull their finger out.

by tf at May 02, 2020 08:54 AM

April 25, 2020

Tomas Frydrych

A Grownup Conversation?

This week’s attempt of the Scottish Government to start a grown up conversation on how we deal with the ongoing COVID-19 crisis is most welcome, to the extent to which it goes, which is not far enough.

I have a lot of time and respect for Nicola, and I have been a loyal SNP voter for over 20 years. But when it comes to this crisis even I am beginning to lose patience with the Scottish Government’s reluctance to break away from the ideologically driven policies of England’s Prime Minister and the ineffectual English Parliament.

For there can be no doubt that ideology is at the bottom of the present UK crisis. The current state of the NHS and the lack of epidemic readiness, those are the products of a decade of ‘Compassionate Conservatism’. And for all that talk of ‘best scientific advice’, even without today’s revelation that Cummings takes part in SAGE meetings the ideological paw prints on the No 10 handling of the crisis have been evident since the beginning, the herd immunity strategy a convenient solution to the growing costs of social and medical care.

Furthermore, I know many scientists don’t want to hear this, but science is not apolitical. No human endeavour is, the very assertion of being apolitical is in itself a profound political statement. We only need to look back a bit to see how science is marked by politics: alongside the long list of amazing scientific advances of the 20th century comes a long list of scientists who ended up on the wrong side of history, working for unsavoury ideological causes, both left and right.

Science costs money, lots of money. And in our world money is concentrated in the hands of a very few who pull the strings of power; democracy, as we know it in the west today, is an illusion; in reality we live in a plutocracy. The plutocrats are the people who will emerge from this crisis richer at the expense of the rest of us, and they are the people who shape the focus and direction of scientific endeavour. It is for this reason that the science the Government appeals to must be open to scrutiny, for science without peer review is no science at all, but mere alchemy.

The problem with having a grown up conversation is that it needs to lead to grown up answers, and in this instance there is only one such answer, and we have known what it is since the beginning: mass testing, rigorous contact tracing and targeted precision isolation. This is the only viable way we can get from where we are to the point where a vaccine might fix this problem permanently.

The vaccine solution is a long way away, for it’s not just how long the science of developing an effective safe vaccine might take, but also a question of costs (some people will be getting very rich off this, make no mistake), and mass global vaccination of some 7 billion people (for vaccination to be effective, around 70% of the population must receive it) — I’d be very surprised if life is back to any sort of normality before 2025.

Ongoing mass lockdown is not a solution. So if we are to have an adult conversation, the first thing we need to discuss is why, three months into this thing, we are still nowhere near being able to test, trace and isolate. And no more alchemy bullshit, please.

by tf at April 25, 2020 10:41 AM

April 11, 2020

Tomas Frydrych

Reaping the Herd Immunity Strategy Costs

With BoJo out of critical care, and yesterday’s UK COVID-19 deaths just shy of 1,000, the highest ever recorded in Europe, it’s time to ask the PM ‘how’s the herd immunity working for ya?’ For as Marina Hyde noted, the people dying today were most likely infected in that critical time when BoJo and co. were telling us this was nothing serious to worry about.

More disconcertingly, Herd Immunity remains the Government’s strategy; the only thing that has really changed is that the ‘herd immunity’ designation is not verbalised, but that’s all. The press really need to pull their collective finger out and start doing their job of holding the politicians accountable, instead of endlessly speculating about how many more weeks of lockdown there might be ...

(The answer to that one is quite simple: unless the Government starts mass testing, contact tracing and targeted isolating, the lockdown will have to last until the nation has been vaccinated, for that’s the only way to achieve herd immunity without deaths in six figures. So that would be in the order of years, not weeks.)

by tf at April 11, 2020 07:13 AM

April 05, 2020

Tomas Frydrych

One Set of Rules for Us, Another for Them

Seems the lockdown rules don’t apply to Dr Calderwood, Scotland’s Chief Medical Officer. This is incredibly serious and there is only one credible way in which Nicola Sturgeon can handle the situation and that is to sack the CMO.

The excuse that the CMO has been working hard and took the opportunity for a break will not wash. There are scores of others working as hard, if not harder, than any government bureaucrat — the nurses, the doctors, the cleaners, the administrators desperately trying to keep the NHS afloat, the supermarket workers, the delivery drivers. The majority of them get paid a fraction of what the CMO earns and can but dream of second homes. And all of them are expected to follow the rules or else.

The lock down is the lynchpin of the Government’s strategy, there really is only one thing Nicola can do here. To do otherwise is to signal that the lockdown is an unnecessary overreach.

Since I wrote this earlier today, Dr Calderwood has apologised for an ‘error of judgement’, while Nicola Sturgeon has said we all make ‘mistakes’. Neither is an appropriate description of what has happened.

It comes down to this: either Dr Calderwood doesn’t believe the current policy is necessary to fight the pandemic, in which case she should say so, and the Government should put in place a better policy for all of us. Or she does believe this is the necessary and correct policy, in which case this seems to me like serious professional misconduct.

It has further emerged that she visited the family dacha last weekend as well, and that she was merely issued with a police warning. Sorry Nicola, this really does look like different rules for different people.

by tf at April 05, 2020 09:07 AM

April 04, 2020

Tomas Frydrych

The Lies are Killing Us

I don't know about you, but I have reached a point where I can't take the daily No. 10 briefings. There is no end to the half-truths, omissions and outright lies; it's the £350m red bus all over again.

Earlier this week much was made of the rate of hospital admissions becoming, for a couple of days, constant -- an indication that the Government is doing the right thing, that the rate of infections is slowing down. While a possible explanation, it is, as has been the Government's habit throughout, a case of the most optimistic interpretation possible.

There is, of course, another explanation, one that, given what we know about the state of the NHS in general, and what we are hearing from people on the cutting edge of things, seems quite likely: we might have reached the real NHS capacity.

This capacity is not simply the number of hospital beds, nor even the number of ventilators. It is also how many qualified nurses and doctors there are to staff those beds, how many of them are well enough to work. It's how many ventilators the oxygen supplies in a hospital can cope with, the number of ambulances to bring sick people into hospitals, the number of paramedics to drive them, the space in the mortuaries, etc., etc.

At last night's briefing the question was asked whether by the time the peak comes there will be enough ventilators. Not to worry, we are nowhere near that limit. Really? How stupid do they take us for? By the Government's own figures, there are some 40,000 confirmed cases. Given that testing is pretty much limited to people who are seriously ill enough to require hospital admission and the UK has some 8,000 ventilators, no computer model is needed to get the picture.

When Hancock declared that the peak is expected in a week's time, not even Nicola Sturgeon could fall in line (I really wish the Scottish Government had the guts to break away from the ideologically-driven English strategy). The curse of computer models.

As I said before, mathematical models are only as good as the data they are built from and that is fed into them, and the UK data quality is very poor. It is now emerging that even the numbers of deaths are not what they seem to be, e.g., the 159 deaths reported on 30 March have now been revised to some 460, i.e., the curve we are seeing at the daily briefings, and that so much is being made of, is completely disconnected from reality -- this virus cannot, will not, be defeated by computer modelling.

Then there is the whole masks business. We are told masks don't work, the virus is not airborne, plus most people don't put them on correctly anyway. That might be, but it's only half the story, the half where you wear a mask not to get infected. But here is the other half: wearing a mask makes it pretty hard to sneeze all over the apples in Tesco.

There is, of course, a more pragmatic reason why the general population shouldn't wear masks just now -- there aren't enough of them in the UK for the front line staff; I get that, and I imagine the majority of the UK population do as well, but why not be straight with us?

Every evening, again and again, much is being made of the drop in the number of people traveling, a sure sign the infection rate is slowing down. Again, there is an obvious problem with this. By the Government's own figures, which I imagine are, as has been the general trend, on the optimistic side of things, there are some 400,000 infected people in the community. That means that every day thousands of infected key workers go on spreading the virus (the NHS alone employs nearly 1% of the UK population). Folk on the cutting edge are reporting 25%, 30%, even more NHS staff being currently off ... this can only get worse.

Now, I do believe the lock down is the right thing to do, and no matter how frustrating you might find it, how angry and distrusting you are of the Government, please, please, stick with it. Not for them, but for mum, dad, granny, the old guy next door, the asthmatic pal.

But lock down is not going to save us; the way to get ahead of this is to follow the WHO protocol: test, contact trace, isolate; test, contact trace, isolate. And in spite of the inevitable u-turn on testing, the basis of the Government strategy is still the misguided idea of herd immunity.

But the biggest problem the Government is facing is the erosion of trust. I, for one, don't trust them as far as I could throw them, and if BoJo wants to turn this around, then here are the things that need to happen at the political end:

  1. Cummings (as the Government's chief strategist), Vallance and Whitty (as the two scientists ultimately responsible for the scientific advice), and Hancock (as the health minister on whose watch this mess came to be), need to be sacked, as does the posse of 'nudgers' and behavioural quacks. (This is not a question of whether they acted in good faith, I expect they did, but we can leave that for the Public Enquiry that is undoubtedly coming.)

  2. We need an experienced epidemiologist in charge, someone who is not so arrogant as to think the WHO advice is for third world countries only (as was actually said at one of the recent No. 10 briefings).

  3. The Government needs to start being straight with us. This is not the '40s, and we are not a nation of idiots.

Do I expect this to happen? Given the blame game has started already (and no, the UK situation is neither the fault of the Chinese, nor the Germans), I suspect not, but it's what it would take for me to give the Government some benefit of the doubt.

PS: The attentive reader has undoubtedly observed that all the links in this piece are from the Guardian. FWIW, I am not a Guardian reader, and I take an issue with lots of its politics. But as far as COVID-19 journalism goes, I am finding theirs to be the most comprehensive coverage of any of the UK media outlets.

by tf at April 04, 2020 11:01 AM

April 01, 2020

Tomas Frydrych

The Race is On

Which of our politicians is going to come up with the worst COVID-19 joke today, I wonder? All bets are off on this one, of fools we have no shortage.

by tf at April 01, 2020 06:03 AM

March 26, 2020

Tomas Frydrych

Not Dyson, FFS!

The UK Government has yet to put a foot right in its handling of the COVID-19 epidemic, and the decision to order 10,000 yet to be invented ventilators from a vacuum cleaner manufacturer while refusing to join the EU procurement effort is almost certainly another misstep on the way to the mother of all Public Inquiries.

If you need to rapidly manufacture a large number of pieces of medical equipment, then you need to do two things: (1) you need to provide (unlimited) support to the existing manufacturers of such equipment to scale up production, and (2) you need some temporary solution that can be constructed with minimal investment and expertise from existing off-the-shelf components (e.g., the OxVent), to tide you over while (1) is ramping up. The one thing (3) you absolutely don't want to do is to set out to invent a completely new solution, and commission it from people with zero expertise and experience in the highly specialised field.

One doesn't have to be a medical scientist to grasp that pursuing option (3) instead of (2) makes no sense (though, in the interest of full disclosure, I do have a degree in Biomedical Engineering). This is yet another giant gamble, the UK Government yet again basing policy on the most optimistic outcomes. (Or perhaps just BoJo, in his Churchillian self-delusion, on a quest for his 'Spitfire', combined with a dose of Brexiteer cronyism.)

We need to, and can, do better in Scotland. Hopefully the establishment of a separate scientific committee to advise the Scottish Government is a signal that Bute House is, finally, willing to break away from London's inadequate policies, and I hope they will look seriously at the OxVent project.

by tf at March 26, 2020 10:05 AM

March 20, 2020

Tomas Frydrych

SLPOTY 2019 Commendation

The Scottish Landscape Photographer of the Year 2019 results have been announced, and I am glad to note I got a commendation in the monochrome category for Assynt Evening.

Must do better next time ... no, seriously, I am pretty chuffed, the standard this year was clearly very high, as you will be able to judge for yourself.

I am rather taken by the Wild Caledonia image from Kenny Muir's winning portfolio, it's beautifully, sensitively, done and represents the Scotland that I know, love and seek out, yet, also the Scotland that could be but isn't really.

However, my firm favourite from this year's winners gallery is Rocky Shore by Katrina Brayshaw, the image that won the Monochrome category.

by tf at March 20, 2020 09:38 AM

March 19, 2020

Tomas Frydrych

7 Weeks in ...

... and still not following the WHO guidelines. But what does the WHO know, eh? It takes an astonishing level of arrogance to ignore the WHO and at the same time to claim to follow the best scientific advice. But then, this is the guy who gave us the £350m red bus.

by tf at March 19, 2020 07:16 AM

March 17, 2020

Tomas Frydrych

Why Testing Matters

As I noted yesterday morning, testing is at the heart of the WHO procedures for dealing with the COVID-19 pandemic, and it is rather disappointing that both the UK and Scottish Governments are still choosing to ignore this. We are already reaping the consequences of this, and it will rapidly snowball.

Why is WHO so emphatic about the need to test, test, test? I can think of at least four good reasons:

  1. Computer models are only as good as the data they are built from. If the data is poor, so will the predictions, and any policies built on them. Not testing means poor models and no idea what to expect next.

  2. Not testing all suspected cases leads to unnecessary self-isolation. This has become particularly acute with the new 14-day household isolation policy, due to which a large number of NHS and other critical services workers have not been able to go to work today. How many of these would have been able to work if their unwell family members had been tested?

  3. Currently there is no test to tell if a person has had the infection and recovered. UK officials are assuming once recovered we will have some immunity, meaning we could return to work, not need to self-isolate if other family members develop symptoms, etc. But without being tested while ill, we won't be able to tell, and so end up in an endless circle of self-isolation.

  4. There is emerging evidence from the Chinese data that suggests that the majority (perhaps as much as 90%) of all infections are spread by people who are asymptomatic. If this is the case, there is absolutely no way to get the spread under control without testing.

In other words, we can neither control the spread of the virus, nor keep our society working unless we follow the WHO guidelines and test, test, test.

by tf at March 17, 2020 05:21 PM

March 16, 2020

Tomas Frydrych

Governed by Misfits and Weirdos

If, like me, you were curious what to expect from a government seeking to employ misfits and weirdos, the wait is over. It means nothing more than ignoring prevailing scientific opinion when formulating government policy.

The UK Government's disregard for the World Health Organisation guidelines when dealing with the COVID-19 pandemic has laid this bare beyond any reasonable doubt. These guidelines are the best of the current science; there is no way to spin this otherwise.

The only reason the Government's Herd Immunity strategy (and there is little doubt in my mind that until this weekend that was The Strategy, it is the one thing that explains everything this government has, or rather has not, done until now), has not made us the laughing stock of the world is because there is nothing funny in the potential outcomes.

(Though we have come close; a Harvard epidemiologist and his colleagues assumed at first that the reports coming from the UK were satire).

Sadly, we know that the WHO approach works, for the Asian countries that are rigorously sticking to it have managed to get or keep their epidemics under control. We also know the opposite is true, we only need to look at Italy for an example of what a slow initial response leads to.

The WHO strategy hinges on large scale testing and contact tracing to provide effective isolation prior to individuals becoming symptomatic. In the UK we seem to have failed to apply these guidelines in the critical early stages, and now have already given up on them again; the official UK guidelines seem to amount to little more than the 'wash your hands' and 'self-isolate'.

The early laxness I am inclined to put down to Johnson's natural contrarianism, the current mess to lack of NHS capacity -- successive Tory Governments have systematically run down the NHS with a view to cheap privatisation; we are about to reap the fruit of that.

So when I hear those speaking on behalf of the Government declaring the UK is following the best scientific advice, I can't avoid thinking this is nothing but desperate spin.

For in my mind the disregard of the WHO guidelines raises questions not only of our Government's competency (no surprises there), but ultimately of whether Profs Vallance and Whitty are the right people to spearhead the UK's response; I really have no desire for myself and my elderly relatives to be guinea pigs in a high risk public health experiment, no matter how individually brilliant the dissenting scientists might be.

P.S. When the car makers are producing those ventilators, could the UK Government please take steps to ensure they don't include any defeat devices?

by tf at March 16, 2020 10:52 AM

February 16, 2020

Tomas Frydrych

Updated Photies Site

I finally revamped my photos site. I have meant to do this for over a year now, but didn't get to it until now. The biggest change is no more SmugMug.

SmugMug seemed like a good idea a couple of years ago, but it failed to live up to expectations. Like so many other services with a similar licensing model, you don't get to find out what the service is really like until you take out a paid subscription, and it turns out the really good features, such as the Loxley Colour integration, are severely crippled below the (prohibitively expensive) top tier subscription.

The main issues for me were that even with the (not cheap) tier-2 Portfolio subscription I could not control printing (e.g., paper selection) on a per-image basis, which made it completely unusable. In general customising the site was awkward, and SmugMug would roll out updates that would break your customised site without you even knowing. And, last, but certainly not least, you cannot get rid of that stupid name; even at the Portfolio level it's always there, and it keeps popping up in the browser bar if you use a custom domain.

Anyway, I don't have particularly complex requirements, so I ended up using a self-hosted Ghost, which has served me well in the past, with a bespoke theme called Elphin specifically focusing on imagery (WIP, but you are welcome to make use of it).

And yes, you can now get a normal RSS feed for my photos!

by tf at February 16, 2020 03:54 PM

February 12, 2020

Tomas Frydrych

Updating the Photos Site

I am updating https://foto.aye.tf just now, and had to take the site off-line for a bit, so if you have landed here looking for it, it will be back in a day or two.

by tf at February 12, 2020 07:28 PM

January 29, 2020

Tomas Frydrych

Scotland the Invisible

All in all 2019 wasn’t a bad year by any means, but nevertheless, for a variety of reasons I needn’t bore you with, it sticks in mind as a year of journeys unrealised and photographs untaken, of unfinished business and, not least, a year of blogs unpublished. Free time was harder to find, and even harder to set aside.

As such I was reluctant to waste my life on long car journeys, and much of my time outdoors was spent exploring what I call ‘The Invisible Places’: places without an obvious charisma to merit our closer attention, places I’d not usually think of visiting (and nobody else seems to either). Spots that perhaps caught my attention reading a map knowing I had no more than three hours to spare, or ones that I got curious about in years gone by from the window of the car going some place officially worthy. Or just somewhere I found myself in for whatever other reasons.

I suspect that to the contemporary adventurer this might sound sad, for such places offer limited opportunities to deploy all that necessary paraphernalia a serious outdoor person can’t be seen without. Nor do they promise to offer a suitable backdrop for the jaw dropping audiovisuals our modern day heroic deeds require. But in fact these little ventures not only brought me a lot of pleasure, but turned out to be a bit of an eye opener.

I watched eagles soar and listened to the noiseless flight of owls. There were curlews and snipes, ravens and snow buntings. I had my coffee and pieces by the most picturesque of streams, came across stunning gorges in the unlikeliest of places, and saw waterfalls that by their size and majesty can compete with any in the land, yet you will find in no guide book (and I, for sure, ain’t telling).

These little outings made me realise that there is much more to this land of ours than the Glencoes, the An Teallachs, the Lairig Ghrus. And at the same time I came to see that the perceived beauty of this land, the one that we get from the places where award winning photographs are made and faked, the places our outdoor culture revolves around, where the influencers make their money, and hence campaign for, that beauty is but skin deep.

For I have also seen feed buckets in the prettiest of streams and would have dared to drink from none. There were giant tyres right by ‘Please Take Your Litter Home’ signs, and washing machines at the bottom of dramatic ravines. Quaint beaches decorated by discarded fishing nets and fish farm detritus; I marvelled over decomposing cars in the middle of postcard vistas.

Strongbow tinnies abounded, and alongside them I collected enough balloons to have a party of my own. I have, I kid you not, stumbled on half a shotgun buried deep in a river bed, and lost count of the land wrecked by hoofs, bulldozers and forestry machinery; of bogs churned up by the argocats of hunters too posh to walk.

And so, reluctantly, I came to see that this, not the other, is the real Scotland. That for over two decades my preoccupation with the other, the tourist honeypots, blinded me to the sorry state most of this land is really in. If you only stop and listen, you can hear it groaning, praying for redemption.

There is a lot of talk of Climate Change and CO2 reduction, but that’s only half of the picture. We are facing a second, equally serious, existential crisis in a rapid loss of biodiversity, an issue that is not getting enough practical attention. And, unfortunately, biodiversity is at times directly at odds with the steps we take to address Climate Change (e.g., in Scotland we are obsessed with the expansion of renewables, which invariably come at the loss of habitat; this will get worse with growing adoption of electric vehicles).

But the real problem is not so much that renewables are encroaching on biodiversity (though better checks and balances are needed, and the Scottish Government appears to be failing to see the bigger picture here), but rather that the land as a whole is in a very poor ecological condition to start with, because for a very long time we have collectively not given a crap. What is needed is a radical, nationwide program of rapid ecological restoration, e.g., along the lines of the Land Stewardship model proposed two and a half years ago by the Scottish Wildlife Trust. Yet this is not happening, and isn’t even on the distant horizon.

The state of the land reflects how we, as a nation, really feel about it, and the fact that we are dragging our feet doesn’t say much for us. Go and see for yourself, you might be surprised at what you find.

 

P.S. The tile image is called Ceci n'est pas une microbead, and is not from one of the ‘invisible’ places.

by tf at January 29, 2020 05:32 PM

January 28, 2020

Tomas Frydrych

The Cheery World of B&W

My current interest in black and white photography goes back to a three night wild camping trip in my favourite part of the world a couple of years ago. I had planned it for some time, with one particular image in mind, but alas, happened to run into an early heatwave, with all that it brings -- largely cloudless skies, haze that even a polarizer can't do much about, thunder, and rapidly forming, all encompassing, evening inversions.

The weather was so unconducive to photography that on day 2 I simply left the camera in the bag just after breakfast. This, as often, turned out to be a mistake, as I nearly missed the one worthwhile image that came from this trip.

Now the colour original of this picture is rather uninspiring, and my first response seeing it on the big screen was a disappointed ‘doh’. But a B&W conversion opened up different possibilities, and ultimately made me wonder what it would be like to do this properly, i.e., work with B&W as a landscape medium in its own right, rather than using it as a last ditch effort to save a disappointing picture.

So what have I learnt since? Two things: (a) B&W landscape photography is very hard, and (b) taking a half decent B&W picture generally requires thinking in B&W terms before taking it.

Key Issues in B&W Landscape

When the colour dimension is eliminated, the building blocks that I am left with for my image are luminance, texture, and structure. Should be plenty, right? Well, it’s not that simple.

I came to realise rather quickly that the two dominant colour tones that make up the bulk of Scottish landscape for much of the year -- the yellow-greens and orange-browns -- happen to have quite similar luminance levels, i.e., they translate into rather similar shades of grey.

More so, colours of similar luminance cannot be separated very effectively by the use of traditional B&W filters if they are close to each other on the colour wheel (filters work best for colours across the wheel, e.g., yellow filter for blue and yellow; digital, with its ability to manipulate isolated narrow colour bands, has a definite advantage here, but that comes with its own pitfalls, on that shortly).

And so the quality of the grey scale in a B&W image comes mostly from the quality of the external light, and while good light is key to all photography, it’s doubly so for B&W. I will go as far as saying that I learnt that without interesting light there can be no interesting B&W landscape.

The other thing I have discovered is that the various textures naturally found in landscape, particularly those associated with vegetation, are in fact also quite homogeneous -- it would seem that what I often perceive as a different texture at first glance is really a difference in colour. This leaves structure (the larger patterns) in the landscape as just about the only WYSIWYG attribute of the landscape as I initially perceive it.

Visualisation, Visualisation, ... Visualisation

The necessary fallout from the above observations is that it is virtually impossible to take a good B&W landscape without giving some advance thought to what the result will look like in grey scale.

Like the image that started my current quest, most of the B&W images I tend to come across these days are, quite obviously, a final attempt to salvage something from a picture that has not come out too well in colour. The aforementioned ability of digital post processing to manipulate narrow bands of colour in the B&W mix often makes things worse, because it allows me to break the natural tonality of the image; sometimes this works, but more often it doesn’t.

I have been reflecting on why that is, and come to the conclusion that when we are looking at a monochrome image, our brain is processing it in a similar way in which it processes natural low light scenes, and it baulks when it comes across something it doesn’t expect. For example, my brain seems quite unperturbed by solid black sky, but a steep contrast curve in the clouds (the sort of ‘dramatic’ sky you get with a bit of Lightroom ‘clarity’ thrown in) jars to no end.

Digital does make it possible to experiment with what might and might not work, but personally I prefer working with B&W film as it forces me to focus on the fundamentals and on the landscape itself. (And yes, I do prefer the film look.)

It’s been said that practice makes perfect, and learning to see in B&W certainly comes one image at a time. To me that’s part of the fun, and one of the things that draws me to it.

by tf at January 28, 2020 10:02 PM

November 18, 2019

Tomas Frydrych

The Camera Needn’t Lie

The Camera Needn’t Lie

The question of the role and the importance of realism in landscape photography is one that keeps coming up over and over again. On the one hand we have the purists, holding onto old fashioned notions of truth, arguing that landscapes should be true to the land; on the other the free creatives, raising that millennia old question ‘What is Truth?’. And, broadly speaking, the former seem to have long lost the argument.

This was brought into a clear focus for me about a year ago listening to the Alex Neil and Erin Babnik debate on the Matt Payne podcast. I was rooting for Alex, for I tend to identify with the traditionalist side of the argument (in spite of, or perhaps because of, being a child of postmodernity, and having long given up on simplistic notions of objective truth). But that afternoon Erin won the argument, and did so comprehensively.

So over the last year, while waiting for the light, or lugging the gear up somewhere in a hope of a good picture, I have been pondering why it is that, in spite of all the arguments (and in spite of understanding the inadequacy of the old modernist and positivist models of truth), realism in landscape still matters to me, and why its widespread absence in contemporary photography hasn’t stopped bothering me. So here are some thoughts.

The gist of the creative side of the argument is that photography is always a distortion of reality, and hence the quest for realism is naive and bound to fail. This assertion is, on the most basic level, true. That is, it is true the same way as the assertion that the earth is not a sphere. However, it fails the same way naive postmodernism did, by dismissing the question of a degree of distortion as inconsequential.

The distortion of reality in a photographic image has several different origins, and in my own thinking I have come to split it into five categories: fundamental, systematic, compensatory, interpretative and re-interpretative.

Fundamental Distortion is one that comes from the most basic aspects of producing a photographic image, and is always present, regardless of how, or by whom, the image is taken. The most obvious fundamental distortion is the flattening of a three dimensional reality into a two dimensional space.

Systematic Distortion is down to the characteristics of the technological process used to generate the image, that is, it is not introduced by the photographer per se but by the equipment. It will vary between different equipment, but is independent of the operator. This includes things such as the loss of colour information when using B&W film, lack of dynamic range and/or colour shift due to the physical limitations of a CMOS sensor, or a geometrical distortion introduced by the lens.

Compensatory Distortion is the result of steps taken to counteract the effects of the fundamental and systematic distortion to better match the perception of a human observer. Some of this is done automatically by the equipment (lens profile, sensor calibration, auto ISO, etc.), some of it as a conscious choice of the photographer (graduated filters, colour cast removal, etc.). Given several photographers with different equipment, following compensatory adjustments the images would all be quite similar (but not identical, hence this too is a form of distortion).

Interpretative Distortion is where the photographer’s ego fully enters the equation, it’s where we are no longer taking a photograph, but start making it. Interpretative changes take different forms and involve different tools, but they are essentially about emphasis and mood.

Re-interpretative Distortion is where the connection between the image and reality is severed and the image becomes an expression of the creator’s vision inspired by a place, rather than being an image of the place per se.

The categorisation above is not about the tools used, but simply about the kind and degree of distortion they introduce. And while the boundaries are somewhat fuzzy, they are distinguishable from each other, so that, e.g., if I were running a photographic competition, I would be able to write up a set of rules such as to clearly define acceptable entries.

For example, a +/- 1 stop HDR, or a 1 stop burn-in can be well argued to be compensatory for the dynamic range limitations of sensor or photographic paper, but +/- 5 stop HDR takes us well beyond the usable dynamic range of the human eye, and hence beyond that which can be humanly observed. A removal of a transient object (a passing car) can be interpretative, since the object is not a property of the landscape itself, but the removal of a building is re-interpretative, for it presents a scene which cannot be observed.

To put it differently, for me the basic question with reference to the landscape genre is this: if a third party observer present at the taking of the image were shown the final product, would they say ‘yes, this is what we saw’? If the answer is yes, then the distortion in the image is no more than interpretative; if the answer is no, then we have departed the genre of landscape photography by any meaningful definition, for a definition loose enough to admit such images could be applied to any and every photographic image.

(The other point worth noting here is that the fundamental and systematic distortions are qualitatively different from the rest: you don't get to remove a building and justify it by the limited dynamic range of your sensor, this kind of argument is incongruous, yet incredibly common in this debate.)

But why should any of this matter? If my images are simply about expressing my creative urge, or, perhaps, bringing pleasure to others (for me photography is largely about the former), then all of this is of no importance. But is this really all there is to a landscape photograph?

To come back to the aforementioned podcast, the most important point in the discussion there is made by Erin Babnik noting that unrealistic expectations of realism in photography by the viewer are dangerous: if we believe photographs to represent reality, we are sooner or later going to be deceived.

This observation is valid, but the underlying analysis, and the conclusions drawn from it, are not. The issue at hand is very similar to the problem of ambiguity of language in human communication, and just as the fact that language is always imprecise and ambiguous doesn’t mean that we can’t ever understand each other, so the inherent distortion present in all photography doesn’t mean that all images of a place are equally false representations of it, or that no photograph can ever be a realistic representation of an external reality (you do have a photograph in your passport, don’t you?).

The problem with landscape photography for me is that, whether I like it or not, a landscape photograph is not merely an image, but also an act of communication. It doesn’t just entertain, but also informs, lets the viewer experience land that they might not have first hand knowledge of. Furthermore, a landscape photograph is a record not solely in space but also in time -- our knowledge of land as it once was is entirely dependent on such imagery.

The earlier observation that severing the tangible connection with observable reality voids the landscape genre of meaning leads to the necessary conclusion that the landscape genre comes with an implicit contract between the viewer and the photographer that what is being shown is real. The issue here is not the legitimacy of creative post-processing; I have no quibble with that, do what makes you happy. What I do object to is presenting creative fantasies, no matter how good and aesthetically pleasing, as landscapes.

Breaching the genre contract has consequences for the viewer: on the more innocuous end of the spectrum is the disappointment when the wild place you travel a long distance to looks nothing like the photographs that brought you there (someone thoughtfully cloned out all the car parks), on the more insidious end our carefully manicured ‘photos’ obscure the real state of the planet, its environment and our impact on it. (The former doesn't bother me too much, but the latter at least should give us a pause for thought.)

But there are also collective consequences for the photographer: the contemporary cavalier attitude toward reality is breeding a generation of cynical viewers who increasingly see less and less value in photography -- I suspect contemporary landscape photographers are unwittingly Photoshopping themselves out of existence.

by tf at November 18, 2019 06:40 PM

August 15, 2019

Emmanuele Bassi

Another layer

Five years (and change) ago I was looking at the data types and API that were needed to write a 3D-capable scene graph API; I was also learning about SIMD instructions and compiler builtins on IA and ARM, as well as a bunch of math I didn’t really study in my brush offs with formal higher education. The result was a small library called Graphene.

Over the years I added more API, moved the build system from Autotools over to Meson, and wrote a whole separate library for its test suite.

In the meantime, GStreamer started using Graphene in its GL element; GTK 3.9x is very much using Graphene internally and exposing it as public API; Mutter developers are working on reimplementing the various math types in their copies of Cogl and Clutter using Graphene; and Alex wrote an entire 3D engine using it.

Not bad for a side project.

Of course, now I’ll have to start maintaining Graphene like a proper grownup, which also means reporting its changes, bug fixes, and features when I’m at the end of a development cycle.

While the 1.8 development cycle consisted mostly of bug fixes with no new API, there have been a few major internal changes during the development cycle towards 1.10:

  • I rewrote the Euler angles conversion to and from quaternions and matrices; the original implementation I cribbed from here and there was not really adequate, and broke pretty horribly when you tried to roundtrip from Euler angles to a transformation matrix and back. This also affected the conversion between Euler angles and quaternions. The new implementation is more correct, and as a side effect it now includes not just the Tait–Bryan angles, but also the classic Euler angles. All possible orders are available in both the intrinsic and extrinsic axes variants.
  • We’re dealing with floating point comparison and with infinities a bit better, now; this is usually necessary because the various vector implementations may have different behaviour, depending on the toolchain in use. A shout out goes to Intel, who bothered to add an instruction to check for infinities only in AVX 512, making it pointless for me, and causing a lot more grief than necessary.
  • The ARM NEON implementation of graphene_simd4f_t has been fixed and tested on actual ARM devices (an old Odroid I had lying around for ARMv7 and a Raspberry Pi3 for Aarch64); this means that the “this is experimental” compiler warning has been removed. I still need to run the CI on an ARM builder, but at least I can check if I’m doing something dumb, now.
  • As mentioned in the blog posts above, the whole test suite has been rewritten using µTest, which dropped a dependency on GLib; you still need GLib to get the integration with GObject, but if you’re not using that, Graphene should now be easier to build and test.

On the API side:

  • there are a bunch of new functions for graphene_rect_t, courtesy of Georges Basile Stavracas Neto and Marco Trevisan
  • thanks to Marco, the graphene_rect_round() function has been deprecated in favour of the more reliable graphene_rect_round_extents()
  • graphene_quaternion_t gained new operators, like add(), multiply() and scale()
  • thanks to Alex Larsson, graphene_plane_t can now be transformed using a matrix
  • I added equality and near-equality operators for graphene_matrix_t, and a getter function to retrieve the translation components of a transformation matrix
  • I added interpolation functions for the 2, 3, and 4-sized vectors
  • I’m working on exposing the matrix decomposition code for Gthree, but that requires some untangling of messy code so it’ll be in the next development snapshot.

On the documentation side:

  • I’ve reworked the contribution guide, and added a code of conduct to the project; doesn’t matter how many times you say “patches welcome” if you also aren’t clear on how those patches should be written, submitted, and reviewed, and if you aren’t clear on what constitutes acceptable behaviour when it comes to interactions between contributors and the maintainer
  • this landed at the tail end of 1.8, but I’ve hopefully clearly documented the conventions of the matrix/matrix and matrix/vector operations, to the point that people can use the Graphene API without necessarily having to read the code to understand how to use it

This concludes the changes that will appear with the next 1.10 stable release, which will be available by the time GNOME 3.34 is out. For the time being, you can check out the latest development snapshot available on Github.

I don’t have many plans for the future, to be quite honest; I’ll keep an eye out for what GTK and Gthree need, and I expect that once Mutter starts using Graphene I’ll start receiving bug reports.

One thing I did try was moving to a “static” API reference using Markdeep, just like I did for µTest, and drop yet another dependency; sadly, since we need to use gtk-doc annotations for the GObject introspection data generation, we’re going to depend on gtk-doc for a little while longer.

Of course, if you are using Graphene and you find some missing functionality, feel free to open an issue, or a merge request.

by ebassi at August 15, 2019 09:16 AM

June 20, 2019

Tomas Frydrych

The Fitzroys

My shoulder hurts from the seatbelt, plumes of steam billowing from underneath the bonnet; I am sure I can smell petrol. Shaking fingers fumbling with the buckle. The driver side door won’t budge. Scrambling out over the gear stick. I put some distance between myself and the car, expecting it to burst into flames any moment.

Too many Hollywood movies.

There in the boot are some of my most treasured worldly possessions, not least a near new pair of Scarpa Fitzroy boots; even at the heavily discounted £99, a purchase I could ill afford. As the initial adrenaline spike wears off in the chill of the crisp winter morning, I pluck up the courage to retrieve them, then watch a glorious dawn over the distant hills. It will be a cracker, as Heather predicted.

In the years that followed the Fitzroys and I had trodden all over the Scottish hills, hundreds of miles of fine ridges, and even finer bogs; in the heat of the summer, and through the winters. Later on I added the Cumbraes for the winter climbing days, and Mescalitos for the summer, but the latter turned out to be terrible for walking, and so the Fitzroys remained my main summer boot. 20+ years later I still have them, still use them from time to time. One of the better £99 I have ever spent; they don’t make them like they used to.

Some twenty minutes later a car comes down the icy ungritted road, and the driver kindly lends me his mobile phone to call the AA. When there is no sign of the recovery truck an hour later I walk over to a nearby farm where the lights are now on, and ask to use their phone. As I step out back into the fine morning a couple of minutes later, the yellow truck rolls over the hill — ‘Wow, that was quick!’ says the farmer. I just laugh.

It’s a slow journey home, and I prattle, worried what Linda will say about me writing off our car; money is tight and the car is (or rather ‘was’) the key to maintaining our sanity. The AA man brings me back to reality: ‘You are lucky, son, I have seen less damage where the people did not make it.’

For a long time that is to be my first waking up thought; every day in the hills a bonus, none of them taken for granted.

by tf at June 20, 2019 11:03 AM

June 14, 2019

Tomas Frydrych

North Coast 500 — An Alternative View

The NC500 is a travesty. The idea of car touring holidays harkens back to the environmental ignorance of mid 20th century and is wholly unfit for these days of an unfolding environmental catastrophe. VisitScotland, the Scottish Government, and all those who lend their name to promoting this anachronism, should be ashamed of themselves.

The principal contribution of the NC500 to the northwest highlands is pollution — particulates from tyres and Diesel engines, chemical toilet waste dumped into roadside streams and on beaches, camping detritus and fire rings decorating any and every scenic spot reachable by car. Quiet villages along the north coast have been turned into noisy thoroughfares. During the main season the stream of traffic is incessant.

Campervans wherever you look. Invariably two up front (plus a dog), the back loaded with stacks of Heinz beans from supermarkets down south. The professional types in their shiny T5s with eye-wateringly expensive conversions, the retired couples in ever bigger mobile homes, at times towing a small car behind. Even now, still off season, you will find them hogging passing places along every minor Assynt road from an early afternoon on.

Once upon a time the quiet roads of the northwest highlands provided some of the best cycle touring in the country. No more, thanks to the ever growing volume of traffic and vehicle size. Many of these vehicles are too wide for the single track roads to safely pass a pedestrian, never mind a cyclist. An opportunity missed.

This is not sustainable, the putative benefits to the local communities are failing to materialise (or so I hear from the locals), the real beneficiaries the car makers, the campervan hire places, the oil companies. Politicians congratulating themselves on being seen doing something for the Highlands without spending any money. Meanwhile the roads crumble.

It will not (cannot) work. We only need to look at the Icelandic experience to see what’s coming.

Following the economic crash of 2008 the Icelanders threw themselves wholeheartedly into developing tourism, it was all that was left. Myriads of camper van hire companies sprang up chasing the tourist buck. It didn’t take long for the country to be overrun and completely overwhelmed by them. It dawned rapidly that they bring nothing to local communities while causing no end of environmental damage; legislation banning all forms of car-camping outwith officially designated campsites followed.

This is what needs to happen in Scotland, now.

Some will undoubtedly come out to blame the locals and the Highland Council for the associated problems: lack of adequate (read ‘free’) campervan facilities. Let’s think about this for a moment: people in purpose built luxury vehicles worth tens of thousands of pounds expecting the highlanders to, directly, or indirectly through their council tax, subsidise their holidays. Not sure what the right word for this is. Entitlement? Greed?

If the economic model for the revival of the highlands is to be tourism (which shouldn’t be automatically assumed to be the right answer, BTW), it needs to be built on bringing people in, rather than, as the NC500 does, taking them through. The Scottish Government and VisitScotland should be promoting places and communities, not roads; physical activities such as walking, cycling, paddling, not driving. This area has so much to offer that reducing it to iPhone snaps taken from a car window, as the NC500 ultimately does, is outright insulting.

I am sure my view that we should ban all roadside car camping as they did in Iceland, and heavily tax camper vans and mobile homes, will not be very popular. The campervan has become the ultimate middle class outdoor accessory, virtually all my friends have one. You will be hard pressed to find an outdoor influencer that doesn’t; vested interests creating blind spots big enough to park a Hymer in.

But I’ll voice it anyway, the Highlands deserve better.

by tf at June 14, 2019 03:36 PM

June 12, 2019

Damien Lespiau

jk - Configuration as code with TypeScript

Of all the problems we have confronted, the ones over which the most brain power, ink, and code have been spilled are related to managing configurations.

-- Borg, Omega, and Kubernetes: Lessons learned from three container-management systems over a decade

This post is the first of a series introducing jk. We will start the series by showing a concrete example of what jk can do for you.

jk is a javascript runtime tailored for writing configuration files. The abstraction and expressive power of a programming language makes writing configuration easier and more maintainable by allowing developers to think at a higher level.

Let’s pretend we want to deploy a billing micro-service on a Kubernetes cluster. This micro-service could be defined as:

service:
  name: billing
  description: Provides the /api/billing endpoints for frontend.
  maintainer: damien@weave.works
  namespace: billing
  port: 80
  image: quay.io/acmecorp/billing:master-fd986f62
  ingress:
    path: /api/billing
  dashboards:
    - service.RPS.HTTP
  alerts:
    - service.RPS.HTTP.HighErrorRate

From this simple, reduced definition of what a micro-service is, we can generate:

  • Kubernetes Namespace, Deployment, Service and Ingress objects.
  • A ConfigMap with dashboard definitions that grafana can detect and load.
  • Alerts for Prometheus using the PrometheusRule custom resource defined by the Prometheus operator.
apiVersion: v1
kind: Namespace
metadata:
  name: billing
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: billing
  name: billing
  namespace: billing
spec:
  revisionHistoryLimit: 2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: billing
    spec:
      containers:
      - image: quay.io/acmecorp/billing:master-fd986f62
        name: billing
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: billing
  name: billing
  namespace: billing
spec:
  ports:
  - port: 80
  selector:
    app: billing
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: billing
  namespace: billing
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: billing
          servicePort: 80
        path: /api/billing
---
apiVersion: v1
data:
  dashboard: '[{"annotations":{"list":[]},"editable":false,"gnetId":null,"graphTooltip":0,"hideControls":false,"id":null,"links":[],"panels":[{"aliasColors":{},"bars":false,"dashLength":10,"dashes":false,"datasource":null,"fill":1,"gridPos":{"h":7,"w":12,"x":0,"y":0},"id":2,"legend":{"alignAsTable":false,"avg":false,"current":false,"max":false,"min":false,"rightSide":false,"show":true,"total":false,"values":false},"lines":true,"linewidth":1,"links":[],"nullPointMode":"null","percentage":false,"pointradius":5,"points":false,"renderer":"flot","repeat":null,"seriesOverrides":[],"spaceLength":10,"stack":false,"steppedLine":false,"targets":[{"expr":"sum
    by (code)(sum(irate(http_request_total{job=billing}[2m])))","format":"time_series","intervalFactor":2,"legendFormat":"{{code}}","refId":"A"}],"thresholds":[],"timeFrom":null,"timeShift":null,"title":"billing
    RPS","tooltip":{"shared":true,"sort":0,"value_type":"individual"},"type":"graph","xaxis":{"buckets":null,"mode":"time","name":null,"show":true},"yaxes":[{"format":"short","label":null,"logBase":1,"max":null,"min":null,"show":true},{"format":"short","label":null,"logBase":1,"max":null,"min":null,"show":true}]},{"aliasColors":{},"bars":false,"dashLength":10,"dashes":false,"datasource":null,"fill":1,"gridPos":{"h":7,"w":12,"x":12,"y":0},"id":3,"legend":{"alignAsTable":false,"avg":false,"current":false,"max":false,"min":false,"rightSide":false,"show":true,"total":false,"values":false},"lines":true,"linewidth":1,"links":[],"nullPointMode":"null","percentage":false,"pointradius":5,"points":false,"renderer":"flot","repeat":null,"seriesOverrides":[],"spaceLength":10,"stack":false,"steppedLine":false,"targets":[{"expr":"histogram_quantile(0.99,
    sum(rate(http_request_duration_seconds_bucket{job=billing}[2m])) by (route) *
    1e3","format":"time_series","intervalFactor":2,"legendFormat":"{{route}} 99th
    percentile","refId":"A"},{"expr":"histogram_quantile(0.50, sum(rate(http_request_duration_seconds_bucket{job=billing}[2m]))
    by (route) * 1e3","format":"time_series","intervalFactor":2,"legendFormat":"{{route}}
    median","refId":"B"},{"expr":"sum(rate(http_request_total{job=billing}[2m])) /
    sum(rate(http_request_duration_seconds_count{job=billing}[2m])) * 1e3","format":"time_series","intervalFactor":2,"legendFormat":"mean","refId":"C"}],"thresholds":[],"timeFrom":null,"timeShift":null,"title":"billing
    Latency","tooltip":{"shared":true,"sort":0,"value_type":"individual"},"type":"graph","xaxis":{"buckets":null,"mode":"time","name":null,"show":true},"yaxes":[{"format":"ms","label":null,"logBase":1,"max":null,"min":null,"show":true},{"format":"short","label":null,"logBase":1,"max":null,"min":null,"show":true}]}],"refresh":"","schemaVersion":16,"style":"dark","tags":[],"time":{"from":"now-6h","to":"now"},"timepicker":{"refresh_intervals":["5s","10s","30s","1m","5m","15m","30m","1h","2h","1d"],"time_options":["5m","15m","1h","6h","12h","24h","2d","7d","30d"]},"timezone":"browser","title":"Service
    \u003e billing","uid":"","version":0}]'
kind: ConfigMap
metadata:
  labels:
    app: billing
    maintainer: damien@weave.works
  name: billing-dashboards
  namespace: billing
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    app: billing
    maintainer: damien@weave.works
    prometheus: global
    role: alert-rules
  name: billing
spec:
  groups:
  - name: billing-alerts.rules
    rules:
    - alert: HighErrorRate
      annotations:
        description: More than 10% of requests to the billing service are failing
          with 5xx errors
        details: '{{$value | printf "%.1f"}}% errors for more than 5m'
        service: billing
      expr: |-
        rate(http_request_total{job=billing,code=~"5.."}[2m])
            / rate(http_request_duration_seconds_count{job=billing}[2m]) * 100 > 10
      for: 5m
      labels:
        severity: critical

What’s interesting to me is that such an approach shifts writing configuration files from a big flat soup of properties to a familiar API problem: developers in charge of the platform get to define the high level objects they want to present to their users, and can encode best practices and hide details in library code.

For the curious minds, the jk script used to generate these Kubernetes objects can be found in the jk repository.
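
To give a flavour of what such a script can look like, here is a minimal, hypothetical sketch along the same lines. Only the export default [{ value, file }] convention (the same one used in the Hello World example below) is actual jk API; the helper functions and file layout are made up for illustration:

// Hypothetical sketch: expand a reduced service definition into
// Kubernetes objects. Only `export default [{ value, file }]` is real
// jk API; the helpers below are illustrative, not library code.
const service = {
  name: 'billing',
  namespace: 'billing',
  port: 80,
  image: 'quay.io/acmecorp/billing:master-fd986f62',
};

// Plain objects mirroring the Kubernetes kinds we want to generate.
function namespace(svc) {
  return {
    apiVersion: 'v1',
    kind: 'Namespace',
    metadata: { name: svc.namespace },
  };
}

function deployment(svc) {
  return {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: { name: svc.name, namespace: svc.namespace, labels: { app: svc.name } },
    spec: {
      selector: { matchLabels: { app: svc.name } },
      template: {
        metadata: { labels: { app: svc.name } },
        spec: {
          containers: [
            { name: svc.name, image: svc.image, ports: [{ containerPort: svc.port }] },
          ],
        },
      },
    },
  };
}

// Tell jk which files to write, exactly as in the Hello World example.
export default [
  { value: namespace(service), file: `${service.name}/namespace.yaml` },
  { value: deployment(service), file: `${service.name}/deployment.yaml` },
];

The point is that the expansion is ordinary code: a platform team can grow these helpers into a shared library that encodes its best practices, while service owners only ever touch the small definition at the top.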

Built for configuration

We’re building jk in an attempt to advance the configuration management discussion. It offers a different take on existing solutions:

  • jk is a generation tool. We believe in a strict separation of configuration data and how that data is being used. For instance we do not take an opinionated view on how you should deploy applications to a cluster and leave that design choice in your hands. In a sense, jk is a pure function transforming a set of inputs into configuration files.

  • jk is cross domain. jk generates JSON, YAML, HCL as well as plain text files. It allows the generation of cross-domain configuration. In the micro-service example above, grafana dashboards and Kubernetes objects are part of two different domains that are usually treated differently. We could augment the example further by defining a list of AWS resources needed for that service to operate (eg. an RDS instance) as Terraform HCL.

  • jk uses a general purpose language: javascript. The configuration domain attracts a lot of people interested in languages and the result is many new Domain Specific Languages (DSLs). We do not believe those new languages offer more expressive power than javascript and their tooling is generally lagging behind. With a widely used general purpose language, we get many things for free: unit test frameworks, linters, api documentation, refactoring tools, IDE support, static typing, ecosystem of libraries, …

  • jk is hermetic. Hermeticity is the property of producing the same output given the same input, no matter the machine the program is being run on. This seems like a great property for a tool generating configuration files. We achieve this with a custom v8-based runtime exposing as little as possible from the underlying OS. For instance you cannot access the process environment variables nor read files anywhere on the filesystem with jk.

  • jk is fast! By being an embedded DSL and using v8 under the hood, we’re significantly faster than the usual interpreters powering DSLs.

Hello, World!

The jk “Hello, World!” example generates a YAML file from a js object:

// Alice is a developer.
const alice = {
  name: 'Alice',
  beverage: 'Club-Mate',
  monitors: 2,
  languages: [
    'python',
    'haskell',
    'c++',
    '68k assembly', // Alice is cool like that!
  ],
};

// Instruct to write the alice object as a YAML file.
export default [
  { value: alice, file: `developers/${alice.name.toLowerCase()}.yaml` },
];

Run this example with:

$ jk generate -v alice.js
wrote developers/alice.yaml

This results in the developers/alice.yaml file:

beverage: Club-Mate
languages:
- python
- haskell
- c++
- 68k assembly
monitors: 2
name: Alice

Typing with TypeScript

The main reason to use a general purpose language is to benefit from its ecosystem. With javascript we can leverage typing systems such as TypeScript or flow to help authoring configuration.

Types help in a number of ways, including when refactoring large amounts of code or defining and documenting APIs. I’d also like to show how typing helps at authoring time by providing context-aware auto-completion:

Types - autocompletion

In the screenshot above we’re defining a container in a Deployment and the IDE only offers the fields that are valid at the cursor position along with the accompanying type information and documentation.

Similarly, typing can provide some level of validation:

Types - validation

The IDE is telling us we haven’t quite defined a valid apps/v1 Deployment. We are missing the mandatory selector field.
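
The same check can be reproduced outside the IDE with the TypeScript compiler alone. Below is a rough, self-contained sketch; the interfaces are hand-written simplifications for illustration, not jk’s or Kubernetes’ real type definitions:

// Heavily simplified, hypothetical typings -- for illustration only.
interface LabelSelector {
  matchLabels: { [key: string]: string };
}

interface PodTemplateSpec {
  metadata: { labels: { [key: string]: string } };
  spec: { containers: { name: string; image: string }[] };
}

interface DeploymentSpec {
  selector: LabelSelector; // mandatory in apps/v1
  template: PodTemplateSpec;
}

interface Deployment {
  apiVersion: 'apps/v1';
  kind: 'Deployment';
  metadata: { name: string; namespace?: string };
  spec: DeploymentSpec;
}

const billing: Deployment = {
  apiVersion: 'apps/v1',
  kind: 'Deployment',
  metadata: { name: 'billing', namespace: 'billing' },
  spec: {
    // Removing this line makes tsc report that 'selector' is missing
    // from the spec, the kind of error shown in the screenshot above.
    selector: { matchLabels: { app: 'billing' } },
    template: {
      metadata: { labels: { app: 'billing' } },
      spec: {
        containers: [{ name: 'billing', image: 'quay.io/acmecorp/billing:master-fd986f62' }],
      },
    },
  },
};

export default billing;

Running tsc over this file with the selector line deleted produces a “'selector' is missing” complaint equivalent to what the IDE shows inline.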

Status and Future work

Although still young, jk is, we believe, already useful enough to be a contender in the space. There’s definitely a lot of room for improvement though:

  • Helm integration: we’d like jk to be able to render Helm charts client side and expose the result as js objects for further manipulation.
  • Jsonnet integration: similarly, it should be possible to consume existing jsonnet programs.
  • Native TypeScript support: currently developers need to run the tsc transpiler by hand. We should be able to make jk consume TypeScript files natively a la deno.
  • Kubernetes strategic merging: the object merging primitives are currently quite basic and we’d like to extend the object merging capabilities of the standard library to implement Kubernetes strategic merging.
  • Expose type generation for Kubernetes custom resources.
  • More helper libraries to generate Grafana dashboards, custom resources for the Prometheus operator, …
  • Produce more examples: it’s easy to feel a bit overwhelmed when facing a new language and paradigm. More examples would make jk more approachable.

Try it yourself!

It’s easy to download jk from the github release page and try it yourself. You can also peruse the (currently small set of) examples.

June 12, 2019 10:21 AM

June 01, 2019

Emmanuele Bassi

More little testing

Back in March, I wrote about µTest, a Behavior-Driven Development testing API for C libraries, and that I was planning to use it to replace the GLib testing API in Graphene.

As I was busy with other things in GTK, it took me a while to get back to µTest—especially because I needed some time to set up a development environment on Windows in order to port µTest there. I managed to find some time over various weekends and evenings, and ended up fixing a couple of small issues here and there, to the point that I could run µTest’s own test suite on my Windows 10 box, and then get the CI build job I have on Appveyor to succeed as well.

Setting up MSYS2 was the most time consuming bit, really

While at it, I also cleaned up the API and properly documented it.

Since depending on gtk-doc would defeat the purpose, and since I honestly dislike Doxygen, I was looking for a way to write the API reference and publish it as HTML. As luck would have it, I remembered a mention on Twitter about Markdeep, a self-contained bit of JavaScript capable of turning a Markdown document into a half decent HTML page client side. Coupled with GitHub pages, I ended up with a fairly decent online API reference that also works offline, falls back to a Markdown document when not running through JavaScript, and can get fixed via pull requests.

Now that µTest is in a decent state, I ported the Graphene test suite over to it, and now I can run it on Windows using MSVC—and MSYS2, as soon as the issue with GCC gets fixed upstream. This means that, hopefully, we won’t have regressions on Windows in the future.

The µTest API is small enough, now, that I don’t plan major changes; I don’t want to commit to full API stability just yet, but I think we’re getting close to a first stable release soon; definitely before Graphene 1.10 gets released.

In case you think this could be useful for you: feedback, in the form of issues and pull requests, is welcome.

by ebassi at June 01, 2019 11:15 AM

April 19, 2019

Tomas Frydrych

Let It Burn

Let It Burn

A flame stretching up to heaven. The newsrooms can’t get enough, a journalist’s dream come true. You, me, everyone, glued to our screens, riveting stuff. (Honey, make us some popcorn, will you?)

“We shall rebuild it!”, echoes through the corridors of power, “Money is no object!”.

I have no doubt.

A flame stretching up to heaven. The newsroom’s embarrassed. A 15 year old making noise, a traffic jam, a pensioner chained to railings. (Nutters. Collective shrugging of shoulders.)

“Full force of the law!”, echoes through the corridors of power (money is the object).

I have no doubt.

A skeleton thin polar bear 400 miles from home.

by tf at April 19, 2019 07:25 AM

April 14, 2019

Emmanuele Bassi

(New) Adventures in CI

One of the great advantages of moving the code hosting in GNOME to GitLab is the ability to run per-project, per-branch, and per-merge request continuous integration pipelines. While we’ve had a CI pipeline for the whole of GNOME since 2012, it is limited to the master branch of everything, so it only helps catching build issues post-merge. Additionally, we haven’t been able to run test suites on Continuous since early 2016.

Being able to run your test suite is, of course, great—assuming you do have a test suite, and you’re good at keeping it working; gating all merge requests on whether your CI pipeline passes or fails is incredibly powerful, as it not only keeps you from unknowingly merging broken code, but it also nudges you in the direction of never pushing commits to the master branch. The downside is that it lacks nuance; if your test suite is composed of hundreds of tests you need a way to know at a glance which ones failed. Going through the job log is kind of crude, and it’s easy to miss things.

Luckily for us, GitLab has the ability to create a cover report for your test suite results, and present it on the merge request summary, if you generate an XML report and tell the CI machinery where you put it:

  artifacts:
    reports:
      junit:
        - "${CI_PROJECT_DIR}/_build/report.xml"

Sadly, the XML format chosen by GitLab is the one generated by JUnit, and we aren’t really writing Java classes. The JUnit XML format is woefully underdocumented, with only an unofficial breakdown of the entities and structure available. On top of that, since JUnit’s XML format is undocumented, GitLab has its own quirks in how it parses it.

Okay, assuming we have nailed down the output, how about the input? Since we’re using Meson on various projects, we can rely on machine parseable logs for the test suite log. Unfortunately, Meson currently outputs something that is not really valid JSON—you have to break the log into separate lines, and parse each line into a JSON object, which is somewhat less than optimal. Hopefully future versions of Meson will generate an actual JSON file, and reduce the overhead in the tooling consuming Meson files.
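
To make the shape of that translation concrete, here is a rough sketch. It is not the actual meson-junit-report.py (that one is a Python script), and the testlog.json field names used here (name, result, duration) are assumptions rather than a documented contract; it simply parses the line-delimited log and emits one <testcase> element per unit:

// Sketch only: assumes each line of testlog.json is a JSON object with
// "name", "result" ("OK"/"FAIL"/"SKIP") and "duration" fields.
import { readFileSync, writeFileSync } from 'fs';

interface TestEntry {
  name: string;
  result: string;
  duration: number; // seconds
}

const xmlEscape = (s: string): string =>
  s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/"/g, '&quot;');

// testlog.json is line-delimited: parse one JSON object per line.
const entries: TestEntry[] = readFileSync('testlog.json', 'utf8')
  .split('\n')
  .filter((line) => line.trim().length > 0)
  .map((line) => JSON.parse(line));

// One <testcase> per unit; failures and skips get a child element.
const cases = entries.map((t) => {
  let body = '';
  if (t.result === 'FAIL') body = '<failure/>';
  if (t.result === 'SKIP') body = '<skipped/>';
  return `  <testcase name="${xmlEscape(t.name)}" time="${t.duration}">${body}</testcase>`;
});

const failures = entries.filter((t) => t.result === 'FAIL').length;

writeFileSync('report.xml', [
  '<testsuites>',
  `<testsuite name="gtk" tests="${entries.length}" failures="${failures}">`,
  ...cases,
  '</testsuite>',
  '</testsuites>',
  '',
].join('\n'));

A real converter also has to care about grouping into suites, escaping captured output, and the GitLab parser quirks mentioned above, but the overall structure stays the same.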

Nevertheless, after an afternoon of figuring out Meson’s output, and reverse engineering the JUnit XML format and the GitLab JUnit parser, I managed to write a simple script that translates Meson’s testlog.json file into a JUnit XML report that you can use with GitLab after you ran the test suite in your CI pipeline. For instance, this is what GTK does:

set +e

xvfb-run -a -s "-screen 0 1024x768x24" \
    meson test \
        -C _build \
        --timeout-multiplier 2 \
        --print-errorlogs \
        --suite=gtk \
        --no-suite=gtk:gsk \
        --no-suite=gtk:a11y

# Save the exit code, so we can reuse it
# later to pass/fail the job
exit_code=$?

# We always run the report generator, even
# if the tests failed
$srcdir/.gitlab-ci/meson-junit-report.py \
        --project-name=gtk \
        --job-id="${CI_JOB_NAME}" \
        --output=_build/${CI_JOB_NAME}-report.xml \
        _build/meson-logs/testlog.json

exit $exit_code

Which results in this:

Some assembly required; those are XFAIL reftests, but JUnit doesn’t understand the concept

The JUnit cover report in GitLab is only shown inside the merge request summary, so it’s not entirely useful if you’re developing in a branch without opening an MR immediately after you push to the repository. I prefer working on feature branches and getting the CI to run on my changes without necessarily having to care about opening the MR until my work is ready for review—especially since GitLab is not a speed demon when it comes to MRs with lots of rebases/fixup commits in them. Having a summary of the test suite results in that case is still useful, so I wrote a small conversion script that takes the testlog.json and turns it into an HTML page, with a bit of Jinja templating thrown into it to avoid hardcoding the whole thing into string chunks. Like the JUnit generator above, we can call the HTML generator right after running the test suite:

$srcdir/.gitlab-ci/meson-html-report.py \
        --project-name=GTK \
        --job-id="${CI_JOB_NAME}" \
        --output=_build/${CI_JOB_NAME}-report.html \
        _build/meson-logs/testlog.json

Then, we take the HTML file and store it as an artifact:

  artifacts:
    when: always
    paths:
      - "${CI_PROJECT_DIR}/_build/${CI_JOB_NAME}-report.html"

And GitLab will store it for us, so that we can download it or view it in the web UI.

There are additional improvements that can be made. For instance, the reftests test suite in GTK generates images, and we’re already uploading them as artifacts; since the image names are stable and determined by the test name, we can create a link to them in the HTML report itself, so we can show the result of the failed tests. With some more fancy HTML, CSS, and JavaScript, we could have a nicer output, with collapsible sections hiding the full console log. If we had a place to upload test results from multiple pipelines, we could even graph the trends in the test suite on a particular branch, and track our improvements.

All of this is, of course, not incredibly novel; nevertheless, the network effect of having a build system in Meson that lends itself to integration with additional tooling, and a code hosting infrastructure with native CI capabilities in GitLab, allows us to achieve really cool results with minimal glue code.

by ebassi at April 14, 2019 04:02 PM

April 12, 2019

Tomas Frydrych

The ‘Truly Clean Green Energy’ Fallacy

The ‘Truly Clean Green Energy’ Fallacy

A recent UKH Opinion piece dealing with the ecological cost of the forthcoming Glen Etive micro hydro, and micro hydro in general, includes this statement: ‘[we can] produce large amounts of truly “clean and green” energy ... through solar, offshore ... and tidal energy solutions’. I have come across permutations of this argument before, and it strikes me that our assessment of the environmental cost of renewables, and our understanding of renewables in general, is somewhat simplistic, glossing over what it is renewables actually do.

Power plants, like the rest of our world, are subject to the law of conservation of energy. They don’t make energy, they convert energy from one form into another, so that it can be transported and released elsewhere. Renewables differ in one important respect: their source energy is being extracted directly from the ecosystems of their installation. In other words, renewables are principally mining operations of a resource that is an intrinsic part of active ecological processes; the fact that we cannot see the thing being mined by the naked eye doesn’t make it any less so.

Thus, renewables have a built-in ecological cost, for it is not possible to extract significant amounts of energy from an ecosystem without effecting a material change in it -- there are no ‘truly clean and green’ renewables. This is something that is not talked about, I suspect not least because the renewables industry is more about the source energy being ‘free’ than ‘clean’. Yet, we cannot afford to assume this to be inconsequential.

The scale of the problem is easier to grasp when viewed from the other end of the transaction, so let’s talk wind. A typical wind turbine today is rated a bit over 2MW, so a large farm of 150 turbines, such as the Clyde Wind Farm alongside the M74, generates around 350MW. We know that when the 300,000 homes this represents consume this energy, it radically alters the ecosystem around them, changing ambient temperature, humidity, light and noise levels, etc. It is unreasonable to expect that the ecosystems from which this energy was extracted to start with will not experience a transformation of a comparable magnitude.

Sticking with wind, it’s been known for some time that wind farms create their own micro-climates, with one of the easily measurable effects being an increase in temperature at ground level. A study by Harvard researchers Miller and Keith published last year concluded that if all US power generation was switched to wind, this would lead to an overall continental US temperature increase of 0.24C.

The above study has been misrepresented by the popular press as ‘wind turbines cause global warming’, which is, obviously, not what it says; what it does show is that renewables have their own, significant, ecological impact. As Keith summarised it,

If your perspective is the next 10 years, wind power actually has—in some respects—more climate impact than coal or gas, if your perspective is the next thousand years, then wind power is enormously cleaner than coal or gas.

The problem is that the 1,000 year perspective currently focuses strictly on combating (the easily commercialised) effects of warming, while glossing over more subtle questions of ecological integrity. This needs to change.

There is an additional issue with renewables. Since no engineering process is, or can be, 100% efficient, not all of the source energy extracted is converted to electricity. Some of it leaks back into the ecosystem in other forms. The classic example of this is noise. We know that people living in proximity to wind turbine installations report health issues, but the effect is far wider. We are aware of major effects on birds, bats, and, moving off shore, marine mammals; and, of course, these ecosystems are home to a myriad of other tightly interconnected species.

This is not meant to be a diatribe against renewables. Given that the public opinion is firmly against nuclear energy, renewables are all we have in combating Climate Change; for Climate Change does change everything, and if we don’t get it under control, nothing else is of consequence. Nor is this a diatribe against wind power; I have used wind as a convenient example, these problems are, I believe, intrinsic to the whole class of renewable energy, and I expect we will be hearing increasingly more on this subject in the future.

What I want to do here is simply to draw attention to the fact that all renewables come with intrinsic ecological costs that go beyond the obvious and visible things like loss of habitat. I draw two conclusions from these observations:

  1. The single most important thing in addressing Climate Change is not switching to renewables but a radical cut to our energy consumption. Renewables might allow us to keep the planet cooler, but they do not necessarily preserve its ecosystems.

  2. I find the ‘renewable X bad, renewable Y good’ line of reasoning a form of NIMBY-ism, missing the big picture. We can’t afford to exclude any form of renewables en masse, because when it comes to renewables too much of any one thing is going to be bad news for something somewhere. And in Scotland our renewables portfolio is too heavily weighted toward wind, it needs to be more balanced.

by tf at April 12, 2019 09:29 AM

April 06, 2019

Tomas Frydrych

On Delayed Gratification

On Delayed Gratification

Some of the photographers of old get rather upset when folk say ‘film slows you down’, so I won’t say that, but I’ll say it slows me down for sure. It’s not just the ‘on location’ pace, but also the time it takes before I get to see what I tried to visualise.

It starts with the negative development. While the process itself is not particularly time consuming, for reasons both environmental and economic, I tend to develop my B&W negatives in batches of two, and I rarely shoot more than a roll a week, and sometimes life permits a lot less than that.

The delay between imagining and seeing becomes even more pronounced with colour. The chemicals are designed to be mixed in batches for six rolls each, and once mixed don’t have a very long shelf life. And so in the fridge the exposed films go, until I have at least four of them, which, considering I am mostly focusing on B&W these days, can take a while (this weekend I am developing slides I took in October and November of last year; quite exciting, a couple of frames there that I thought at the time had some promise).

And then there is the printing (for me photography is mainly about the print, I have nothing against digital display of photos, but it doesn’t do it for me personally). Sometimes the print takes four hours in the darkroom, sometimes fifteen, and I can maybe find a day or two a month for this. All in all, in the last six months I have made eight photographs, and have another four or so waiting to be printed.

I expect, you, the reader, might be asking why on earth would anyone do this in this day and age? I could give here a whole list of reasons but the simple and most honest answer is ‘because I enjoy it’; the tactile nature of it, the very fact it doesn’t involve a computer (which I spend far too much time with as is).

Of course, this delayed gratification can (and does) at times turn into a delayed disappointment; the cover image is my witness. But all in all I am finding that in this instant world of ours the lack of immediate feedback is more of a benefit than a hindrance, the inability to see the result right here and now makes photography sort of an exercise in patience, even faith, for as it was said a long time ago

Faith is confidence in what we hope for and assurance about what we do not see. This is what the ancients were commended for.

And faith, in turn, inspires dreams, and dreams are what the best photographs are made of.

by tf at April 06, 2019 03:08 PM

March 14, 2019

Emmanuele Bassi

A little testing

Years ago I started writing Graphene as a small library of 3D transformation-related math types to be used by GTK (and possibly Clutter, even if that didn’t pan out until Georges started working on the Clutter fork inside Mutter).

Graphene’s only requirement is a C99 compiler and a decent toolchain capable of either taking SSE builtins or supporting vectorization on appropriately aligned types. This means that, unless you decide to enable the GObject types for each Graphene type, Graphene doesn’t really need GLib types or API—except that’s a bit of a lie.

As I wanted to test what I was doing, Graphene has an optional build time dependency on GLib for its test suite; the library itself may not use anything from GLib, but if you want to build and run the test suite then you need to have GLib installed.

This build time dependency makes testing Graphene on Windows a lot more complicated than it ought to be. For instance, I need to install a ton of packages when using the MSYS2 toolchain on the CI instance on AppVeyor, which takes roughly 6 minutes each for the 32bit and the 64bit builds; and I can’t build the test suite at all when using MSVC, because then I’d have to download and build GLib as well—and just to access the GTest API, which I don’t even like.


What’s wrong with GTest

GTest is kind of problematic—outside of Google hijacking the name of the API for their own testing framework, which makes looking for it a pain. GTest is a lot more complicated than a small unit testing API needs to be, for starters; it was originally written to be used with a specific harness, gtester, in order to generate a very brief HTML report using gtester-report, including some timing information on each unit—except that gtester is now deprecated because the build system gunk to make it work was terrible to deal with. So, we pretty much told everyone to stop bothering, add a --tap argument when calling every test binary, and use the TAP harness in Autotools.

Of course, this means that the testing framework now has a completely useless output format, and with it, a bunch of default behaviours driven by said useless output format, and we’re still deciding if we should break backward compatibility to ensure that the supported output format has a sane default behaviour.

On top of that, GTest piggybacks on GLib’s own assertion mechanism, which has two major downsides:

  • it can be disabled at compile time by defining G_DISABLE_ASSERT before including glib.h, which, surprise, people tend to use when releasing; thus, you can’t run tests on builds that would most benefit from a test suite
  • it literally abort()s the test unit, which breaks any test harness in existence that does not expect things to SIGABRT midway through a test suite—which includes GLib’s own deprecated gtester harness

To solve the first problem we added a lot of wrappers around g_assert(), like g_assert_true() and g_assert_no_error(), that won’t be disabled depending on your build options and thus won’t break your test suite—and if your test suite is still using g_assert(), you’re strongly encouraged to port to the newer API. The second issue is still standing, and makes running GTest-based test suite under any harness a pain, but especially under a TAP harness, which requires listing the amount of tests you’ve run, or that you’re planning to run.

The remaining issues of GTest are the convoluted way to add tests using a unique path; the bizarre pattern matching API for warnings and errors; the whole sub-process API that relaunches the test binary and calls a single test unit in order to allow it to assert safely and capture its output. It’s very much the GLib test suite, except when it tries to use non-GLib API internally, like the command line option parser, or its own logging primitives; it’s also sorely lacking in the GObject/GIO side of things, so you can’t use standard API to create a mock GObject type, or a mock GFile.

If you want to contribute to GLib, then working on improving the GTest API would be a good investment of your time; since my project does not depend on GLib, though, I had the chance of starting with a clean slate.


A clean slate

For the last couple of years I’ve been playing off and on with a small test framework API, mostly inspired by BDD frameworks like Mocha and Jasmine. Behaviour Driven Development is kind of a buzzword, like test driven development, but I particularly like the idea of describing a test suite in terms of specifications and expectations: you specify what a piece of code does, and you match results to your expectations.

The API for describing the test suites is modelled on natural language (assuming your language is English, sadly):

  describe("your data type", function() {
    it("does something", () => {
      expect(doSomething()).toBe(true);
    });
    it("can greet you", () => {
      let greeting = getHelloWorld();
      expect(greeting).not.toBe("Goodbye World");
    });
  });

Of course, C is more verbose than JavaScript, but we can adopt a similar mechanism:

static void
something (void)
{
  expect ("doSomething",
    bool_value (do_something ()),
    to_be, true,
    NULL);
}

static void
greet (void)
{
  const char *greeting = get_hello_world ();

  expect ("getHelloWorld",
    string_value (greeting),
    not, to_be, "Goodbye World",
    NULL);
}

static void
type_suite (void)
{
  it ("does something", something);
  it ("can greet you", greet);
}


  describe ("your data type", type_suite);

If only C11 got blocks from Clang, this would look a lot less clunky.

The value wrappers are also necessary, because C is only type safe as long as every type you have is an integer.
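
As a rough illustration of the idea (this is not µTest’s actual implementation, just the general shape of it, and every name below is hypothetical), a wrapper such as bool_value() can return a small tagged value, so the variadic expect() knows which type it was handed:

#include <stdbool.h>

/* Illustration only: a tagged value that a wrapper can build,
 * so a variadic matcher knows what it is comparing.
 */
typedef enum
{
  EXAMPLE_VALUE_BOOL,
  EXAMPLE_VALUE_INT,
  EXAMPLE_VALUE_STRING
} example_value_kind;

typedef struct
{
  example_value_kind kind;
  union {
    bool v_bool;
    int v_int;
    const char *v_str;
  } value;
} example_value;

static example_value
example_bool_value (bool v)
{
  example_value res = { .kind = EXAMPLE_VALUE_BOOL, .value = { .v_bool = v } };
  return res;
}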

Since we’re good C citizens, we should namespace the API, which requires naming this library—let’s call it µTest, in a fit of unoriginality.

One of the nice bits of Mocha and Jasmine is the output of running a test suite:

$ ./tests/general 

  General
    contains at least a spec with an expectation
      ✓ a is true
      ✓ a is not false

      2 passing (219.00 µs)

    can contain multiple specs
      ✓ str contains 'hello'
      ✓ str contains 'world'
      ✓ contains all fragments

      3 passing (145.00 µs)

    should be skipped
      - skip this test

      0 passing (31.00 µs)
      1 skipped


Total
5 passing (810.00 µs)
1 skipped

Or, with colors:

[Screenshot of the coloured test suite output; using colors means immediately taking this more seriously.]

The colours automatically go away if you redirect the output to something that is not a TTY, so your logs won’t be messed up by escape sequences.
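
For the record, colour detection like this usually boils down to a check on the standard output stream (a sketch of the common approach, not necessarily µTest’s exact logic):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Only emit ANSI colour sequences when stdout is a terminal
 * that is likely to understand them.
 */
static int
use_colors (void)
{
  const char *term = getenv ("TERM");

  if (!isatty (fileno (stdout)))
    return 0;

  if (term == NULL || strcmp (term, "dumb") == 0)
    return 0;

  return 1;
}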

If you have a test harness, then you can use the MUTEST_OUTPUT environment variable to control the output; for instance, if you’re using TAP you’ll get:

$ MUTEST_OUTPUT=tap ./tests/general
# General
# contains at least a spec with an expectation
ok 1 a is true
ok 2 a is not false
# can contain multiple specs
ok 3 str contains 'hello'
ok 4 str contains 'world'
ok 5 contains all fragments
# should be skipped
ok 6 # skip: skip this test
1..6

Which can be passed through to prove to get:

$ MUTEST_OUTPUT=tap prove ./tests/general
./tests/general .. ok
All tests successful.
Files=1, Tests=6,  0 wallclock secs ( 0.02 usr +  0.00 sys =  0.02 CPU)
Result: PASS

I’m planning to add some additional output formatters, like JSON and XML.


Using µTest

Ideally, µTest should be used as a sub-module or a Meson sub-project of your own project; if you’re using it as a sub-project, you can tell Meson to build a static library that won’t get installed on your system, e.g.:

mutest_dep = dependency('mutest-1',
  fallback: [ 'mutest', 'mutest_dep' ],
  default_options: ['static=true'],
  required: false,
  disabler: true,
)

# Or, if you're using Meson < 0.49.0
mutest_dep = dependency('mutest-1', required: false)
if not mutest_dep.found()
  mutest = subproject('mutest',
    default_options: [ 'static=true', ],
    required: false,
  )

  if mutest.found()
    mutest_dep = mutest.get_variable('mutest_dep')
  else
    mutest_dep = disabler()
  endif
endif

Then you can make the tests conditional on mutest_dep.found().
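
For example (the target and file names here are made up), the test can be declared only when the dependency is available:

if mutest_dep.found()
  general = executable('general', 'tests/general.c',
    dependencies: mutest_dep,
  )

  test('general', general)
endif

Since mutest_dep falls back to a disabler() when the dependency cannot be found, Meson should also disable any target that uses it directly; the explicit found() check just keeps the intent obvious.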

µTest is kind of experimental, and I’m still breaking its API in places, as a result of documenting it and trying it out, by porting the Graphene test suite to it. There’s still a bunch of API that I’d like to land, like custom matchers/formatters for complex data types, and a decent way to skip a specification or a whole suite; plus, as I said above, some additional output formats.

If you have feedback, feel free to open an issue—or a pull request wink wink nudge nudge.

by ebassi at March 14, 2019 03:01 PM

February 24, 2019

Emmanuele Bassi

Episode 2.a: Building GNOME

In the GNOME project community, the people building the code are represented by two separate but equally important groups: the maintainers, who release the code, and the release team, who release GNOME. These are their stories.


Developing a software project can be hard, but building one ought to be simpler, right? After all, it’s kind of a binary state for software projects after any change: either the change broke the build, or it didn’t—and broken builds do not get released, right?

Oh, you sweet summer child. Of course it’s not that simple.

Building software gets even more complicated when it comes to complex, interdependent projects composed of multiple modules like GNOME; each module has dependencies lower in the stack, and reverse dependencies—that is, modules that depend on the interfaces it provides—higher in the stack.

If you’re working on a component low in the stack, especially in 2002, chances are you only have system dependencies, and those system dependencies are generally shipped by your Linux distribution. Those dependencies do not typically change very often, or at the very least they assume you’re okay with not targeting the latest, bleeding edge version. If push comes to shove, you can always write a bunch of fallback code that gets only ever tested by people running on old operating systems—if they decide on a whim to try and compile the latest and greatest application, instead of just waiting until their whole platform gets an upgrade.

Moving upwards in the GNOME stack, you start having multiple dependencies, but generally speaking those dependencies move at the same speed as your project, so you can keep track of them. Unlike other projects, the GNOME platform never had issues with the multiplication of dependencies—something that will come back to bite the maintainers later in the 2.x development cycle—but even so, with GNOME offering code hosting to like-minded developers, it didn’t use to be hard to keep track of things; if everything happens on the same source code repository infrastructure you can subscribe to the changes for your dependencies, and see what happens almost in real time.

Applications, finally, are another beast entirely; here, dependencies can be many, spanning different system services, different build systems, different code hosting services, and different languages. To minimise the causes of headaches, you can decide to target the versions packaged by the Linux distribution you’re using, unless you’re in the middle of a major API version shift in the platform—like, say, the one that happened between GNOME 1 and 2. No Linux distribution packager in their right mind would ship releases for highly unstable core platform libraries, especially when the development cadence of those libraries outpaces the cadence of the distribution packaging and update process. For those cases, using snapshots of the source under revision control is the only option left for you as an upstream maintainer.

So, at this point, your options are limited. You want to install the latest and greatest version of your dependencies—unstable as they might be—on your local system, but you don’t want to mess up the rest of the system in case the changes introduce a bug. In other words: if you’re running GNOME as your desktop, and you upgrade a dependency for your application, you might end up breaking your own desktop, and then spend time undoing the unholy mess you made, instead of hacking on your own project.

This is usually the point where programmers start writing scripts to modify the environment, and build dependencies and applications into their own separate prefix, using small utilities that describe where libraries put their header files and shared objects in order to construct the necessary compiler and linker arguments. GNOME libraries standardised to one of these tools, called pkg-config, before the 2.0 release, replacing the per-project tools like glib-config or gtk-config that were common during the GNOME 1.x era.

Little by little, the scripts each developer used started to get shared, and improved; for instance, instead of hard coding the list of things to build, and the order in which they should be built, you may want to be able to describe a whole project in terms of a list of modules to be built—each module pointing to its source code repository, its configuration and build time options, and its dependencies. Another useful feature is the ability to set up an environment capable of building and running applications against the components you just built.

The most successful script for building and running GNOME components, jhbuild, emerged in 2002. Written by James Henstridge, the maintainer of the GTK bindings for Python, jhbuild took an XML description of a set of modules, built the dependency tree for each component, and went through the set in the right order, building everything it could, assuming it knew how to handle the module’s build system—which, at the time, was mostly a choice between Autotools and Autotools. Additionally, it could spawn a shell and let you run what you built, or compile additional code as if you installed every component in a system location. If you wanted to experience GNOME at its most bleeding edge, you could build a whole module set into a system prefix like /opt, and point your session manager to that location. Running a whole desktop environment out of a CVS snapshot: what could possibly go wrong, right? Well, at least you had the option of going back to the safe harbours of your Linux distribution’s packaged version of GNOME.

Little by little, over the years, jhbuild acquired new features, like the ability to build projects that were not using Autotools. Multiple module sets, one for each branch of GNOME, and one for tracking the latest and greatest, appeared over the years, as well as additional module sets for building applications, both hosted on gnome.org repositories and outside the GNOME infrastructure. Jhbuild was even used on non-Linux platforms, like macOS, to build the core GNOME platform stack, and let application developers port their work there. Additionally, other projects, like X11 and the freedesktop.org stack that we’ll see in a future episode, would publish their own module sets, as many of the developers in GNOME moved through the stack and brought their tools with them.

With jhbuild consuming sets of modules in order to build the GNOME stack, the question becomes: who maintained those sets? Ideally, the release team would be responsible for keeping the modules up to date whenever a new dependency was added, or an old dependency removed. As the release team was responsible for deciding which modules belonged in the GNOME release, they would be the ones best positioned to update the jhbuild sets. There was a small snag in this plan, though: the release team already had its own tool for building GNOME from release archives produced and published by the module maintainers, in the correct order, to verify that the whole GNOME release would build and produce something that could be packaged by Linux (and non-Linux) distributors.

The release team’s tool was called GARNOME, and was based on the GAR architecture designed by Nick Moffitt, which itself was largely based on the BSD port system. GARNOME was developed as a way for the release team to build and test alpha releases of GNOME during the 2.0 development cycle. The main difference between jhbuild and GARNOME was the latter’s focus on release archives, compared to the former’s focus on source checkouts. The main goal of GARNOME was really to replicate the process of distributors taking various releases and packaging them up in their preferred format. While editing jhbuild’s module sets was a simple matter of changing some XML, the GARNOME recipes were a fairly complicated set of Make files, with magic variables and magic include directives that would lead to building and installing each module, using Make rules to determine the dependencies. All of this meant that there was not only no overlap between who used jhbuild and who used GARNOME, but also no overlap between who contributed to which project.

Both jhbuild and GARNOME assumed you had a working system and development toolchain for all the programming languages needed to build GNOME components; they also relied on a whole host of system dependencies, especially when it came to talking to system services and hardware devices. While this was relatively less important for GARNOME, whose role was simply to build the whole of GNOME, jhbuild started to suffer from these limitations as soon as GNOME projects began interacting much more with the underlying services offered by the operating system.

It’s important to note that none of this stuff was automated; it all relied on human intervention for testing that things would not blow up in interesting ways. We were far, far away from any concept of a continuous integration pipeline. Individual developers had to hunt down breakage in library releases that would have repercussions down the line when building other libraries, system components, or applications. The net result was that building GNOME was only possible if everything was built out of release archives; anything else was deeply unstable, and proved to be hard to handle for seasoned developers and new contributors alike as more and more complexity was piled onto the project.

GARNOME was pretty successful, and ended up being used for a majority of the GNOME 2 releases, until it was finally retired in favour of jhbuild itself, using a special module set that pointed to release archives instead of source code repositories. The module set was maintained by the release team, and published for every GNOME release, to let other developers and packagers reproduce and validate the process.

Jhbuild is still used to this day, mostly for the development of system components like GTK, or the GNOME Shell; application building has largely shifted towards containerised systems, like Flatpak, which have the advantage of being easily automated in a CI environment. These systems are also much easier to use from a newcomer’s perspective, and are rather more reliable when it comes to the stability of the underlying middleware.

The release team switched away from jhbuild for validating and publishing GNOME releases in 2018, long into the GNOME 3 release cycle, using a new tool called BuildStream, which not only builds the GNOME components but also the lower layers of the stack, including the compiler toolchain, to ensure a level of build reproducibility that jhbuild and GARNOME never had.


by ebassi at February 24, 2019 05:00 PM

February 14, 2019

Emmanuele Bassi

Episode 2.2: Release Day

For all intents and purposes, the 2.0 release process of GNOME was a reboot of the project; as such, it was a highly introspective event that just so happened to result in a public release of a software platform used by a large number of people. The real result of this process was not in the bits and bytes that were compiled, or interpreted, into desktop components, panel applets, or applications; it was, instead, a set of design tenets, a project ethos, and a powerful marketing brand that exist to this day.

The GNOME community of developers, documenters, translators, and designers, was faced with the result of Sun’s user testing, and with the feedback on documentation, accessibility, and design, and had two choices: double down on what everyone was doing, and maintain the existing contributor and user bases; or adapt, take a gamble, and change—possibly even lose contributors, in the hope of gaining more users, and more contributors, down the road.

The community opted for the latter gamble.

The decision was not without internal strife.

This kind of decision is fairly common in all projects that reach a certain critical mass, especially if they don’t have a central figure keeping everything together and acting as the ultimate arbiter of taste and direction. Miguel de Icaza was already really busy with Ximian, and even if the Foundation created the release team in order to get a “steering committee” to make informed, executive decisions in case of conflicts, or decide who gets to release what and when, in order to minimise disruption over the chain of dependencies, this team was hardly an entity capable of deciding the direction of the project.

If effectively nobody is in charge, the direction of the project becomes an emergent condition. People have a more or less vague sense of direction, and a more or less defined end goal, so they will move towards it. Maybe not all at the same time, and maybe not all on the same path; but the sum of all vectors is not zero. Or, at least, it’s not zero all the time.

Of course, there are people that put more effort in order to balance the equation, and those are the ones that we tend to recognise as “the leaders”, or, in modern tech vernacular, “the rock stars”.

Seth Nickell and Calum Benson clearly fit that description. Both worked on usability and design, and both worked on user testing—with Calum specifically working on the professional workstation user given his position at Sun. Seth was the GNOME Usability project lead, and alongside the rest of the usability team, co-authored the GNOME Human Interface Guidelines, or HIG. The HIG was both a statement of intent on the design direction of GNOME, as well as a checklist for developers to go through and ensure that all the GUI applications would fit into the desktop environment, by looking and behaving consistently. At the very basic core of the HIG sat a few principles:

  1. Design your application to let people achieve their goals
  2. Make your application accessible to everyone
  3. Keep the design simple, pretty, and consistent
  4. Keep the user in control, informed on what happens, and forgive them for any mistake

These four tenets tried to move the needle of the design for GNOME applications from the core audience of standard nerds to the world at large. In order to design your application, you need to understand the audience that you wish to target, and how to ensure you won’t get in their way; this also means you should never limit your audience to people that are as able-bodied as you are, or coming from the same country or socio-economic background; your work should be simple and reliable, in the sense that it doesn’t do two similar things in two different ways; and users should be treated with empathy, and never with contempt. Even if you didn’t have access to the rest of the document, which detailed how to deal with whitespace, or alignment, or the right widget for the right action, you could already start working on making an application capable of fitting in with the rest of GNOME.

Consistency and forgiveness in user interfaces also reflected changes in how those interfaces should be configured—or if they should be configured at all.

In 2002 Havoc Pennington wrote what is probably the most influential essay on how free and open source software user interfaces ought to be designed, called “Free software UI”. The essay was the response to the evergreen question: “can free and open source software methodology lead to the creation of a good user interface?”, posed by Matthew Paul Thomas, a designer and volunteer contributor at Mozilla.

Free and open source software design and usability suffer from various ailments, most notably:

  • there are too many developers and not enough designers
  • designers can’t really submit patches

In an attempt to fix these two issues, the GNOME project worked on establishing a design and usability team, with the help of companies to jump start the effort; the presence of respected designers in leadership positions also helped create and foster a culture where project maintainers would ask for design review and testing. We’re still a bit far from asking for design input instead of a design review, which usually comes with pushback now that the code is in place. Small steps, I guess.

The important part of the essay, though, is on the cost of preferences—namely that there’s no such thing as “just adding an option”.

Adding an option, on a technical level, requires adding code to handle all the potential states of the option; it requires handling failure cases; a user interface for setting and retrieving the option; it requires testing, and QA, for all the states. Each option can interact with other options, which means a combinatorial explosion of potential states, each with its own failure mode. Options are optimal for a certain class of users, because they provide the illusion of control; they are also optimal for a certain class of developers, because they tickle the instinct of making general solutions to solve classes of problems, instead of fixing the actual problem.

In the context of software released “as is”, without even the implied warranty of being fit for purpose, like most free and open source software is, it removes stress because it allows the maintainer to abdicate responsibility. It’s not my fault that you frobnicated the foobariser and all your family photos have become encoded versions of the GNU C library; you should have been holding a lit black candle in your left hand and a curved blade knife in your right hand if you didn’t want that to happen.

More than that, though: what you definitely don’t want is having preferences to “fix” your application. If you have a bug, don’t add an option to work around it. If somebody is relying on a bug for their workflow, then: too bad. Adding a preference to work around a bug introduces another bug because now you encoded a failure state directly into the behaviour of your code, and you cannot ever change it.

Finally, settings should never leave you in an error state. You should never have a user break their system just because they put invalid data in a text field; or toggled the wrong checkbox; or clicked the wrong button. Recovering is good, but never letting the user put bad values into the system in the first place is the better approach, because it is more resilient.

From a process standpoint, the development cycle of GNOME 1, and the release process of GNOME 2.0, led to a complete overhaul of how the project should be shepherded into releasing new versions. Releasing GNOME 2.0 led to many compromises: features were not complete, known bugs ended up in the release notes, and some of the underlying APIs provided by the development platform were not really tested enough before freezing them for all time. It is hard to decide when to shout “pencils down” when everyone is doing their own thing in their own corner of the world. It’s even harder when you have a feedback loop between a development platform that provides API for the rest of the platform, and needs validation for the changes while they can still be made; and applications that need the development platform to sit still for five minutes so that they can be ported to the new goodness.

GNOME was the first major free software project to switch away from a feature-based development cycle and towards a time-based one, thanks to the efforts of Jeff Waugh on behalf of the release team; the whole 2.0 development cycle was schedule driven, with constant reminders of the various milestones and freeze dates. Before the final 2.0 release, a full plan for the development cycle was proposed to the community of maintainers; the short version of the plan was:

The key elements of the long-term plan: a stable branch that is extremely locked-down, an unstable branch that is always compilable and dogfood-quality, and time-based releases at 6 month intervals.

Given that the 2.0 release happened at the end of June, that would put the 2.2 release in December, which would have been problematic. Instead, it was decided to have a slightly longer development cycle, to catch all the stragglers that couldn’t make the cut for 2.0, and release 2.2 in February, followed by the 2.4 release in September. This period of adjustment led to the now familiar release cadence of a March and a September release every year. Time-based releases and freezing the features, API, translatable strings—and code, around the point-zero release date—ensured that only features working to the satisfaction of the maintainers, designers, and translators would end up in the hands of the users. Or, at least, that was the general plan.

It’s important to note that all of these changes in the way GNOME as a community saw itself, and the project they were contributing to, are probably the biggest event in the history of the project itself—and, possibly, in the history of free and open source software. The decision to focus on usability, accessibility, and design, shaped the way people contributing to and using GNOME think about GNOME; it even changed the perception of GNOME for people not using it, for good or ill. GNOME’s brand was solidified into one of care about design principles, and that perception continues to this day. If something user-visible changes in GNOME it is assumed that design, usability, or accessibility had something to do with it—even when it really didn’t; it is assumed that designers sat together, did user studies, finalized a design, and then, in one of the less charitable versions, lobbed it over the wall to module maintainers for its implementation with no regard for objections.

That version of reality is so far removed from ours it might as well have superheroes flying around battling monsters from other dimensions; and, yet, the GNOME brand is so established that people will take it as an article of faith.

For all that, though, GNOME 2 did have usability studies conducted on it prior to release; the Human Interface Guidelines were written in response to those studies, and to established knowledge in the interaction design literature and community; the changes in the system settings and the menu structures were done after witnessing users struggle with the equivalent bits of GNOME 1. The unnecessary settings that littered the desktop, whether as a way to escape making decisions or to provide some sort of intellectual challenge to the developers, were removed because, in the end, settings are not the goal for a desktop environment that’s just supposed to launch applications and provide an environment for those applications to exist.

This was peak GNOME brand.

On June 27, 2002, GNOME 2.0 was released. The GNOME community worked days and nights for more than a year after releasing 1.4, and for more than two years after releasing 1.2. New components were created, projects were ported, documentation was written, screenshots were taken, text was translated.

Finally, the larger community of Linux users and enthusiasts would be able to witness the result of all this amazing work, and their reaction was: thanks, I hate it.

Well, no, that’s not really true.

Yes, a lot of people hated it—and they made damn well sure you knew that they hated it. Mailing lists, bug trackers, articles and comments on news websites were full of people angrily demanding their five clocks back; or their heavily nested menu structure; or their millisecond-precision animation settings; or their “Miscellaneous” group of settings in a “Miscellaneous” tab of the control centre.

A lot of people simply sat in front of GNOME 2, and managed to get their work done, before turning off the machine and going home.

A few people, though, saw something new; the potential of the changes, and of the focus of the project. They saw beyond the removed configuration options; the missing features left for a future cycle; the bugs caused by the massive changes in the underlying development platform.

Those few people were the next generation of contributors to GNOME; new developers, sure, but also new designers; new documentation writers; new translators; new artists; new maintainers. They were inspired by the newly refocused direction of the project, by its ethos, to the point of deciding to contribute to it. GNOME needed to be ready for them.


Next week the magic and shine of the release starts wearing off, and we’re back to flames and long discussions on features, media stacks, inclusion of applications in the release, and what happens when Novell decides to go on a shopping spree, in the episode titled “Honeymoon phase”.

by ebassi at February 14, 2019 04:00 PM

January 24, 2019

Emmanuele Bassi

Episode 2.1: On Brand

In the beginning of the year 2000, with the 1.2 release nearly out of the door, the GNOME project was beginning to lay down the groundwork for the design and development of the 2.0 release cycle. The platform had approached a good level of stability for the basic, day to day use; whatever rough edges were present, they could be polished in minor releases, while the core components of the platform were updated and transitioned incrementally towards the next major version. Before mid-2000, we already started seeing a 1.3 branch for GTK, and new libraries like Pango and GdkPixbuf were in development to provide advanced text rendering and replace the old and limited imlib image loading library, respectively. Of course, once you get the ball rolling it’s easy to start piling features, refactorings, and world breaking fixes on top of each other.

If X11 finally got support for loading True Type fonts, then we would need a better API to render text. If we got a better API to render text, we would need a better API to compose Unicode glyphs coming from different writing systems, outside of the plain Latin set.

If we had a better image loading library, we could use better icons. We could even use SVG icons all over the platform, instead of using the aging XPM format.

By May 2000, what was supposed to be GTK 1.4 turned into the development cycle for the first major API break since the release of GTK 1.0, which happened just the year before. Of course, the 1.2 cycle would continue, and would now cover not just GNOME 1.2, but the 1.4 release as well.

It wasn’t GTK itself that led the charge, though. In a way, breaking GTK’s API was the side effect of a much deeper change.

As we’ve seen in the first chapter’s side episode on GTK, GLib was a simple C utility library for the benefit of GTK itself; the type system for writing widgets and other toolkit-related data using object orientation was part of GTK, and required depending on GTK for writing any type of object oriented C. It turns out that various C projects in the GNOME ecosystem—projects like GdkPixbuf and Pango—liked the idea of having a type system written in C and, more importantly, the ability to easily build language bindings for any API based on that type system. Those projects didn’t really need, or want, a dependency on a GUI toolkit, with its own dependency on image loading libraries, or windowing systems. Moving the type system to a lower level library, and then reusing it in GTK would have neatly solved the problem, and made it possible to create a semi-standard C library shared across various projects.

Of course, since no solution survives contact with software developers, people depending on GLib for basic data structures, like a hash table, didn’t want to suddenly have a type system as well. For that reason, GLib acquired a second shared library containing the GTK type system, upgraded and on steroids; the “signal” mechanism to invoke a named list of functions, used by GTK to deliver windowing system events; and a base object class, called GObject, to replace GTK’s own GtkObject. On top of that, GObject provided properties associated with object instances; interface types; dynamic types for loadable modules; type wrappers for plain old data types; generic function wrappers for language bindings; and a richer set of memory management semantics.

Another important set of changes finding its way down to GLib was the portability layer needed to make sure that GTK applications could run on both Unix-like and non-Unix-like operating systems. Namely, Windows, for which GTK was getting a backend, alongside additional backends for BeOS, macOS, and direct framebuffer rendering. The Windows backend for GTK was introduced to make GIMP build and work on that platform, and increase its visibility, and possibly the amount of contributions—something that free and open source software communities always strive towards, even if it does increase the amount of feature requests, bug reports, and general work for the maintainers. It’s important to note that GTK wasn’t suddenly becoming a cross-platform toolkit, meant to be used to write applications targeting multiple platforms; the main goal was always to allow extant Linux applications to be easily ported to other operating systems first, and write native, non-Linux applications as a distant second.

With a common, low level API provided by GLib and GObject, we start to see the beginning of a more comprehensive software development platform for GNOME; if all the parts of the platform, even the ones that are not directly tied to the windowing system, follow the same semantics when it comes to memory management, type inheritance, properties, signals, and coding practices, then writing documentation becomes easier; developing bindings for various programming languages becomes a much more tractable problem; creating new, low level libraries for, say, sending and receiving data from a web server, and leveraging the existing community becomes possible. With the release of GLib and GTK 2.0, the GNOME software development platform moves from a collection of libraries with a bunch of utilities built on top of GTK to a comprehensive set of functionality centered on GLib and GObject, with GTK as the GUI toolkit, and a collection of libraries fulfilling specific tasks to complement it.

Of course, that still means having libraries like libgnome and libgnomeui lying around, as a way to create GNOME applications that integrate with the GNOME ecosystem, instead of just GTK applications. GTK gaining more GNOME-related features, or more GNOME-related integration points, was a source of contention inside the community. GTK was considered by some GNOME contributors to be a “second party” project; it had its own release schedule, and its own release manager; it incorporated feedback from GNOME application and library developers, but it was also trying to serve non-GNOME communities, by providing a useful generic GUI toolkit out of the box. On the other side of the spectrum, some GNOME developers wanted to keep the core as lean as possible, and accrue functionality inside the GNOME libraries, like libgnome and libgnomeui, even if those libraries were messier and didn’t receive as much scrutiny as GLib and GTK.

Most of 2001 was spent developing GObject and Pango, with the latter proving to be one of the lynchpins of the whole platform release. As we’ve seen in episode 1.5, Pango provided support for complex, non-Latin text, a basic requirement for creating GUI applications that could be used outside of the US and Europe; in order to get there, though, various pieces of the puzzle had to come together first.

The first big piece was adding the ability for applications to render TrueType fonts on X11, using fontconfig to configure, enumerate, and load the fonts installed in a system, and the Xft library, for rendering glyphs. Before Xft, X applications only had access to core bitmap fonts, which may look impressive if all you have is a thin terminal in 1987, but compared to the font rendering available on Windows and macOS they were already painfully out of date by about 10 years. During the GNOME 1.x cycle some components with custom rendering code already started using Xft directly, instead of going through GTK’s text rendering wrappers around X11’s core API; this led to the interesting result of, for instance, the text rendering in Nautilus pre-1.0 looking miles better than every other GTK 1 application, including the rest of the desktop components.

The other big piece of the puzzle was Unicode. Up until GTK 2.0, all text inside GTK applications was pretty much passed as it was to the underlying windowing system text rendering primitives; that mostly meant either ASCII, or one of the then common ISO encodings; this not only imposed restrictions on what kind of text could be rendered, but it also introduced additional hilarity when it came to running applications localized by somebody in Europe on a computer in the US, or in Russia, or in Japan.

Taking advantage of Unicode to present text meant adding various pieces of API to GLib, mostly around the Unicode tables, text measurement, and iteration. More importantly, though, it meant changing all text used inside GTK and GNOME applications to UTF-8. It meant that file system paths, translations, and data stored on non-Unicode systems had to be converted—if the original encoding was available—or entirely rewritten, if you didn’t want your GUI to be a tragedy of unintelligible text.

If the written word was in the process of being transformed to another format, pictures were not in a better position. The state of the art for raster images in a GTK application was still XPM, a bitmap format that was optimised for storage inside C sources, and compiled with the rest of the application. Sadly, the results were less than stellar, compared to more common formats like JPEG and PNG; additionally, the main library used to read image assets, imlib, was very much outdated, and mostly geared towards loading image data into X11 pixmaps. Imlib provided entry points for integrating with the GTK drawing API, but it was a separate project, part of the Enlightenment window manager—which was already moving to its replacement, imlib2. The work to replace imlib with a new GNOME-driven library, called GdkPixbuf, began in 1999, and was merged into the GTK repository as a way to provide API to load images in various formats like PNG, GIF, JPEG, and Windows BMP and ICO files directly into GTK widgets. As an additional feature, GdkPixbuf had an extensible plugin system which allowed writing out-of-tree modules for loading image formats without necessarily adding new dependencies to GTK. Another feature of GdkPixbuf was a transformation and compositing API, something that imlib did not provide.

All in all, it took almost 2 years of development for GTK 1.3 to turn into GTK 2.0, with the first stable releases of GLib, Pango, and GTK cut in March 2002, after a few months of feature freeze needed to let GNOME and application developers catch up with the changes, and port their projects to the new API. The process of porting went without a hitch, with everyone agreeing that the new functionality was incredibly easy to use, and much better than what came before. Developers were so eager to move to the new libraries that they started doing so during the development cycle, and constantly kept up with the changes so that every source code change was just a few lines of code.

Yes, of course I’m lying.

Porting proceeded in fits and jumps, and it was either done at the very beginning, when the difference between major versions of the libraries were minimal and gave a false impression of what the job entailed; or it happened at the very end of the cycle, with constant renegotiation of every single change in the platform, a constant barrage of questions of why platform developers were trying to impose misery on the poor application developers, why won’t you think of the application developers…

Essentially, like every single major version change in every project, ever.

On top of the usual woes of porting, we have to remember that GNOME 1.4 started introducing technology previews for GNOME 2.0, like GConf, the new configuration storage. While Havoc Pennington had written GConf between 2000 and 2001, and concentrated mostly on the development of GTK 2 after that, it was now time to start using GConf as part of the desktop itself, as a replacement for the configuration storage used by components like the Panel, and by applications using libgnome, which is when things got heated.

Ximian developer Dieter Maurer thought that some of the trade-offs of the GConf implementation, mostly related to its use of CORBA’s type system, were enough of a road block that they decided to write their own configuration client API, called bonobo-config, which re-implemented the GConf design with a more pervasive use of CORBA types and interfaces, thanks to the libbonobo work that was pursued as part of the GNOME componentisation effort. Bonobo-config had multiple backends, one of which would wrap GConf, as, you might have guessed already, a way to route around a strongly opinionated maintainer, working for a different company.

The last bit is probably the first real instance of a massive flame war caused by strife between commercial entities vying for the direction of the project.

The flames started off as a disagreement in design and technical direction between GConf and bonobo-config maintainers, confined to the GConf development mailing list, but soon spiralled out of control once libgnome was changed to use bonobo-config, engulfing multiple mailing lists in fires that burned brighter than a thousand suns. Accusations of attempting to destroy the GNOME project flew through the air, followed by rehashing of every single small roadblock ever put in the way of each interested party. The newly appointed release manager for GNOME 2.0 and committer of the dependency on bonobo-config inside libgnome, Martin Baulig, dramatically quit his role and left the community (only to come back a bit later), leaving Anders Carlsson to revert the controversial change—followed by a new round of accusations.

Totally normal community interactions in a free software project.

In the end, concessions were made, hatred simmered, and bonobo-config was left behind, to be used only by applications that decided to opt into its usage, and even then it was mostly used as a wrapper around GConf.


“Fonts, Unicode, and icons” seems like a small set of user-visible features for a whole new major version of a toolkit and desktop environment. Of course, that byline kind of ignores all the work laid down behind the scenes, but end users don’t care about that, right? We’ll see what happens when a major refactoring pays down the technical debt accrued over the first 5 years of GNOME, and clears the rubble to build something that didn’t make designers, documentation writers, and QA testers cry tears of blood, in the next episode, “Release Day”, which will be out in two weeks’ time, as next week I’m going to be at the GTK hackfest, trying to pay down the technical debt accrued over the 3.0 development cycle of the toolkit. I’m sure it’ll be fine, this time. It’s fine. We’re going to be fine. It’s fine. We’re fine.


by ebassi at January 24, 2019 04:00 PM

January 17, 2019

Emmanuele Bassi

Episode 2.0: Retrospective

Hello, everyone, and welcome back to the History of GNOME! If you’re listening to this in real time, I hope you had a nice break over the end of 2018, and are now ready to begin the second chapter of our main narrative. I had a lovely time over the holidays, and I’m now back working on both the GNOME platform and on the History of GNOME, which means reading lots of code by day, and reading lots of rants and mailing list archives in the evening—plus, the occasional hour or so spent playing videogames in order to decompress.

Before we plunge right back into the history of GNOME 2, though, I wanted to take a bit of your time to recap the first chapter, and to prepare the stage for the second one. This is a slightly opinionated episode… Well, slightly more opinionated than usual, at least… As I’m trying to establish the theme of the first chapter as a starting point for the main narrative for the future. Yes, this is an historical perspective on the GNOME project, but history doesn’t serve any practical purposes if we don’t glean trends, conflicts, and resolutions of past issues that can apply to the current period. If we don’t learn anything from history then we might as well not have any history at all.

The first chapter of this history of the GNOME project covered roughly the four years that go from Miguel de Icaza’s announcement in August 1997 to the release of GNOME 1.4 in April 2001—and we did so in about 2 hours, if you count the three side forays on GTK, language bindings, and applications that closed the first block of episodes.

In comparison, the second chapter of the main narrative will cover the 9 years of the 2.x release cycles that went from 2001 to 2010. The duration of the second major cycle of GNOME is not just twice as long as the first, it’s also more complicated, as a result of the increased complexity for any project that deals with creating a modern user experience—one not just for the desktop but also for the mobile platforms that were suddenly becoming more important in the consumer products industry, as we’re going to see during this chapter. The current rough episode count is, at the moment I’m reading this, about 12, but as I’m striving to keep the length of each episode in the 15 to 20 minutes range, I’m not entirely sure how many actual episodes will make up the second chapter.

Looking back at the beginning of the project we can say with relative certainty that GNOME started as a desktop environment in a time when desktops were simpler than they are now; at the time of its inception, the bar to clear was represented by Windows 95, and while it was ostensibly a fairly high bar to clear for any volunteer-driven effort, by the time GNOME 1.4 was released to the general public of Linux enthusiasts and Unix professionals, it was increasingly clear that a new point of comparison was needed, mostly courtesy of Apple’s OS X and Microsoft’s Windows XP. Similarly, the hardware platforms started off as simpler iterations over the PC compatible space, but vendors quickly moved the complexity further and further into the software stack—like anybody with a WinModem in the late ‘90s could tell you. Since Linux was a blip on the radars of even the most widespread hardware platforms, new hardware targeted Windows first and foremost, and support for Linux appeared only whenever some enterprising volunteer would manage to reverse engineer the chipset du jour, if it appeared at all.

As we’ve seen in the first episode of the first chapter, the precursors to what would become a “desktop environment” in the modern sense of the term were made of smaller components, bolted on top of each other according to the needs, and whims, of each user. A collection of LEGO bricks, if you will, if only the bricks were made by a bunch of different vendors and you had to glue them together to build something. KDE was the very first environment for Linux that tried to mandate a more strict integration between its parts, by developing and releasing all of its building blocks as comprehensive archives. GNOME initially followed the same approach, with libraries, utilities, and core components sharing the same CVS repositories, and released inside shared distribution archives. Then, something changed inside GNOME; and figuring out what changed is central to understanding the various tensions inside a growing free and open source software project.

If desktop environments are the result of a push towards centralisation, and comprehensive, integrated functionality exposed to the people using, but not necessarily contributing to them, splitting off modules into their own repositories, using their own release schedules, their own idiosyncrasies in build systems, options, coding styles, and contribution policies, ought to run counter to that centralising effort. The decentralisation creates strife between projects, and between maintainers; it creates modularisation and API barriers; it generates dependencies, which in turn engender the possibility of conflict, and barriers to not just contribution, but to distribution and upgrade.

Why, then, does this happen?

The mainstream analytical framework of free and open source software tells us that communities consciously end up splitting off components, instead of centralising functionality, once it reaches critical mass; community members prefer delegation and composition of components with well-defined edges and interactions between them, instead of piling functionality and API on top of a hierarchy of poorly defined abstractions. They like small components because maintainers value the design philosophy that allows them to provide choice to people using their software, and gives discerning users the ability to compose an operating system tailored to their needs, via loosely connected interfaces.

Of course, all I said above is a complete and utter fabrication.

You have no idea of the amounts of takes I needed to manage to get through all of that without laughing.

The actual answer would be Conway’s Law:

organizations which design systems […] are constrained to produce designs which are copies of the communication structures of these organizations

We have multiple contributors, typically highly opinionated, typically young or, at least, without lots of real world experience. Worst case, the only experience available comes from years of computer science lessons, where object orientation reigns supreme, and it’s still considered a good idea despite all the evidence to the contrary.

These multiple contributors end up carving their own spaces, because the required functionality is large, and the number of people working on it is always smaller than 100% coverage. New functionality is added; older modules are dropped because “broken”, or “badly designed”; new dependencies are created to provide shared functionality, or introduced as abstraction layers to paper over multiple modules offering slightly different takes on how some functionality ought to be implemented, or what kind of dependencies they require, or what kind of language or licensing terms ought to be used.

Complex free software projects with multiple contributors working on multiple components, favour smaller modules because it makes it easier for each maintainer to keep stuff in their head without going stark raving mad. Smaller modules make it easier to insulate a project against strongly opinionated maintainers, and let other, strongly opinionated maintainers, route around the things they don’t like. Self-contained modules make niche problems tractable, or at least they contain the damage.

Of course, if we declared this upfront, it would make everybody’s life easier as it would communicate a clear set of expectations; it would, on the other hand, have the side effect of revealing the wardrobe malfunction of the emperor, which means we have to dress up this unintended side effect of Conway’s Law as “being about choice”, or “mechanism, not policy”, or “network object model”.

The first chapter in the history of the GNOME project can be at least partially interpreted within this framework; the idea that you can take a complex problem space and partition it until each issue becomes tractable individually, and then build up the solution out of the various bits and pieces you managed to solve, letting it combine and recombine as best as it can to suit the requirements of the moment, platform, or use case. Throw in CORBA as an object model for good measure, and you end up with a big box of components that solve arbitrarily small issues on their own, and that can theoretically scale upwards in complexity. This, of course, ignores the fact that combinatorial explosions of interactions make things very interesting for anybody developing, testing, and using these components—and I use “interesting” in the “oh god oh god we’re all going to die” sense of the word.

More importantly, and on a social level, this framework allows project maintainers to avoid having to make a decision on what should work and what shouldn’t; what is supported and what isn’t; and even what is part of the project and what falls outside of it. If there is some part of the stack that is misbehaving, wrap it up; even better, if there are multiple competing implementations, you can always paper over them with an abstraction layer. As long as the API surface is well defined, functionality is somebody else’s problem; and if something breaks, or mysteriously doesn’t work, then I’m sure the people using it are going to be able to fix it.

Well, it turns out that all the free software geeks capable of working on a desktop environment are already working on one, which by definition means that they are the only ones that can fix the issues they introduced.

Additionally, and this is a very important bit that many users of free and open source software fail to grapple with: volunteer work is not fungible—that is, you cannot tell people doing things on their spare time, and out of the goodness of their hearts, to stop doing what they are doing, and volunteer on something else. People just don’t work that way.

So, if “being about choice” is on the one end of the spectrum, what’s at the other? Maybe a corporate-like structure, with a project driven by the vision of a handful of individuals, and implemented by everyone else who subscribes to that vision—or, at least, that gets paid to implement it.

Of course, the moment somebody decides to propose their vision, or work to implement it, or convince people to follow it, is the moment when they open themselves up to criticism. If you don’t have a foundational framework for your project, nobody can accuse you of doing something wrong; if you do have it, though, then the possibilities fade away, and what’s left is something tangible for people to grapple with—for good or ill.

At the beginning of the GNOME project we had very few individuals, with a vision for the desktop; while it was a vision made of components interoperating to create something flexible and adaptable to various needs, it still adhered to specific design goals, instead of just putting things together from disparate sources, regardless of how well the interaction went. This led to a foundational period, where protocols and interfaces were written to ensure that the components could actually interoperate, which led to a somewhat lacklustre output; out of three 1.x minor releases all we got was a panel, a bunch of clock applets, and a control centre. All the action happened on the lower layers of the stack. GTK became a reasonably usable free software GUI toolkit for Linux and other Unix-like operating systems; the X11 world got a new set of properties and protocols to deal with modern workflows, in the form of the EWMH; applications and desktop modules got shared UI components using CORBA to communicate between them.

On a meta level, the GNOME project established a formal structure on itself, with the formation of a release team and a non-profit foundation that would work as a common place to settle the internal friction between maintainers, and the external contributions from companies and the larger free and open source software world.

Going back to our frame of reference to interpret the development of GNOME as a community of contributors, we can see this as an attempt to rein in the splintering and partition of the various components of the project, and as a push towards its new chapter. This tension between the two efforts—one to create an environment with a singular vision, even if driven by multiple people; and the other, to create a flexible environment that respected the various domains of each individual maintainer, if not each individual user—defined the first major cycle, as it would (spoiler alert) every other major cycle.

Now that the foundational period was over, though, and the challenges provided by commercial platforms like Windows and OS X had been renewed, the effort to make GNOME evolve further was not limited to releasing version 2.0, but to establish a roadmap for the future beyond it.


Next week we’re going to dive right back into the development of GNOME, starting with the interregnum period between 1.4 and 2.0, in which our plucky underdogs had finally become mainstream enough to get on Sun and IBM radars, and had to deal with the fact that GNOME was not just a hobby any more, in the episode titled: “On Brand”.


by ebassi at January 17, 2019 02:45 PM

December 31, 2018

Tomas Frydrych

Why I like Film Photography

Why I like Film Photography

Film is experiencing something of a renaissance these days, as attested on social media (#ishootfilm #filmisnotdead), and, perhaps more importantly, by the reappearance of numerous previously discontinued film emulsions—there is, again, money to be made from film.

The contemporary Church of Analogue Photography is a broad one. On the one end sit the old hands, the ‘patriarchs’ who never gave up on film in the first place. Theirs is the voice of experience, of continuity in the old quest for handmade excellence. They worry about tonality and longevity, the perfect negative, the perfect print. They are, for the most part, whether consciously or not, practitioners of fine art, understanding the potential of film, but also that analogue on its own doesn’t mean the result is any good.

Then there are the ‘youngsters’, the new generation of photographers who grew up with both feet firmly planted in the digital world. They seem drawn to film because of the mystique of the past, because it is different, less sterile, less predictable, less mainstream. They have stumbled through the wardrobe into this weird magical world, and they are in love.

At times their passion gets the better of them, and the patriarchs sometimes look down on them—they are yet to discover that film alone doesn’t a worthwhile picture make. But by the same token they are less constrained by the received wisdom, reverently passed down from the old masters, as to what makes a good picture, and that’s not such a bad thing. And let’s face it, they are the reason companies like Kodak are resurrecting emulsions.

And of course, there are the born again film photographers, those of us who didn’t keep faith and abandoned film for the convenience of digital some years back, but somehow, for a variety of reasons, found our way back into the fold—perhaps because in some ways it is an opportunity to wind back the clock a bit, perhaps because we have run into insurmountable limitations of digital in our own creative quest. Or perhaps, surprisingly, it’s just pure old economics. (‘What?!’, I hear you gasp.)

When I started using film again this year after a break of some 17 years, it was definitely the latter. I had for some time pined after a medium format camera, and had my eye on the Fujifilm GFX50S. Alas, there was a small snag. Even though the Fuji is at the ‘entry level’ end of the medium format spectrum, I was looking at the better part of £15k to ‘enter’; in the end not even a looming significant birthday was enough to justify such an expense.

Film to the rescue! A very decent medium format film setup, plus a scanner, can be had for a tenth of the cost. ‘But, surely, the film is expensive!’

Is it? The £13.5k I have saved will buy so much film that by the time I break even with the upfront cost of an MF digital setup, the digital camera will be obsolete several times over; film cameras don’t suffer from that problem: an excellent camera from the mid 20th century is still an excellent camera today.

Yet, while my reasons for returning to film were entirely pragmatic, and my intention was merely the hybrid workflow (scan, then post process as ‘normal’), film rather quickly got to me as a medium in its own right; there is just something about it.

The old hands will tell you the something is tonality: film can represent a considerably greater number of shades of colour, or grey scale, than a 12- or 14-bit digital sensor. This is, of course, impossible to show online, but is obvious when comparing a darkroom print with a digital one. But to be honest, for me that’s not it. For me that ‘something’ is the process that film brings with it, the enforced learning.

The modern digital camera is a wonderful tool. Things like multi-point exposure metering mean that in 90% of cases it will take an acceptable picture, which can then usually be muscled into a reasonable shape later. The cheapness of the digital image means that to photograph a sunrise I can simply take a few hundred pictures over a couple of hours, then pick one later. With a mirrorless camera, I can do exposure compensation by eye. Autofocus and manual focus assist mean my images are rarely out of focus.

But all such automation brings with it a dumbing down of operator skills, a phenomenon that has come to be known as the Automation Paradox. It means I need not understand how light gets turned into an image, what ‘exposure’ represents, how the morning gets born. Film lays that bare. You proudly work that hand-held light meter, expecting a perfect image, and bang: it’s too dark, or too light, or, in the B&W case, just an uninteresting monolithic grey. You don’t know why, because you are not yet used to taking detailed notes.

And so I find that with film everything becomes more deliberate: I think more about what I see, what I want to convey, about light, structure, texture. About how that sunrise is going to unfold. The whole process becomes less about ‘taking’ and more about ‘seeing’; I stop being on the outside of the picture, and get sucked right in.

It’s not that I can’t do all of that with a digital camera, but I don’t have to, and so, being a lazy git at heart, I don’t; film forces me to, and that makes it, if nothing else, a wonderful learning medium.

But the process doesn’t end there. I enjoy developing films, learning how the choice of developer and time impacts the result (not to mention home development significantly reducing the costs). And the real fun comes in the darkroom. The tactile nature of it all, working out how to unlock the potential glimpsed in the negative: deciding what to burn and what to dodge, crafting the tools to do that, choosing what multigrade combination will best represent the mental image I have. The whole trial and error on the way to The Print.

I don’t necessarily believe that film is better, merely that film and digital are significantly different media. The true strength of film comes out when printed, while digital has a natural impedance match to digital presentation in an online world—I think there is little to be gained by film if the intended end is the screen. There are also subjects at which digital excels—wildlife and sports photography come immediately to mind. It would be silly to limit oneself to one or the other. But all in all, I find film to be more fun, and in particular B&W film.

In the digital world B&W is more often than not just a last ditch effort to rescue a bad image, and it shows; with film it’s always a fully independent form, one that requires a different way of looking at the world, an ability to see in monochrome. Without the safety net of arbitrary colour remixing in post, irreversible decisions need to be made at the point of shooting: light assessed, filters chosen. But there is an upside: it is precisely that indiscriminate remixing of colour that makes so many a digital B&W image jar, feel artificial, unreal; IR aside, film B&W images tend to always look somehow ‘natural’.

Perhaps the biggest thing I have learnt this year taking pictures is that B&W landscape is very hard to do well, very unforgiving. Hence most of my better B&W images this year have been of trees, and even people; there are only two or three true landscapes that I am happy with—here is my challenge for 2019.

Have a good one!

by tf at December 31, 2018 08:35 PM

December 25, 2018

Tomas Frydrych

'18 through the Lens

I was going to write the usual annual retrospective, but sometimes life just gets in the way. They say a picture is worth a thousand words; perhaps it is, so anyway, here are a few.

A bit of solitude in the hills is good for the soul. An early morning at Shenavall, waiting for John during his epic winter round.

Discovering the joys of snowshoeing in the Trossachs.

I spent a lot of time this year exploring places closer to home, discovering little overlooked gems of nature here and there.

And thinking about trees. One of my favourite, almost local, spots.

And farther from home.

After years of talking about it, Linda and I visited Iceland this year; of course, while Scotland was basking in sun, we picked the worst Icelandic summer in 100+ years, but we did glimpse the sun once or twice.

And while we liked Iceland a lot, this still remains our favourite place in the world.

For reasons, I spent a lot more time taking pictures this year, mostly landscapes, treescapes, and birds. This unremarkable picture of a greenshank is one that I had to work the hardest for -- bird photography in the wild is very hard.

And after a 17 year break, I started using film again. Not out of nostalgia, but as a means into the world of medium format and bigger cameras; yet it really got to me as a medium in its own right. There just is something about film.

I have been particularly drawn into the wonderful world of Black and White film photography. Had it not been for the Iceland trip, most of my pictures this year would have been B&W.

That's it really. Have good holidays everyone!

by tf at December 25, 2018 10:32 AM

December 14, 2018

Emmanuele Bassi

And I’m home

It’s almost the end of the year, so it’s time for a recap of the previous episodes, I guess.

The tl;dr of 2018: started pretty much the same; massive dip in the middle; and, finally, got better at the very end.

The first couple of months of the year were pretty good; had a good time at the GTK hackfest and FOSDEM, and went to the Recipes hackfest in Yogyakarta in February.

In March, my wife Marta was diagnosed with breast cancer; Marta already had (different types of) cancer twice in her life, and had been in full remission for a couple of years, which meant she was able to cope with the mechanics of the process, but it was still a solid blow. Since she had already gone through a round of radiotherapy 20 years ago—which likely had a hand in the cancer appearing now—her only option was surgery to remove the whole breast tissue and the associated lymph nodes. Not fun, but surgery went well, and she didn’t even need chemotherapy, so all in all it could have been way, way worse.

While Marta and I were dealing with that, I suddenly found myself out of a job, after working five years at Endless.

To be fair, this left me with enough time to help out Marta while she was recovering—which is why I didn’t come to GUADEC. After Marta was back on her feet, and was able to raise her right arm above her head, I took the first vacation in, I think, about four years. I relaxed, read a bunch of books, played some video games, built many, many, many Gundam plastic models, recharged my batteries—and finally ended up having time to spend on a project that I had pushed back for a while, because apparently what I really needed was to add writing and producing 15 to 20 minutes of audio every week to my schedule, after perusing thousands of archived emails and old web pages on the Wayback Machine. Side note: donate to the Wayback Machine, if you can. They provide a fundamental service for everybody using the Web, and especially for people like me who want to trace the history of things that happen on the Web.

Of course I couldn’t stay home playing video games, recording podcasts, and building gunplas forever, and so I had to figure out where to go to work next, as I do enjoy being able to have a roof above my head, as well as buying food and stuff. By a crazy random happenstance, the GNOME Foundation announced that, thanks to a generous anonymous donation, it would start hiring staff, and that one of the open positions was for a GTK developer. I decided to apply, as, let’s be honest, it’s basically the dream job for me. I’ve been contributing to GNOME components for about 15 years, and to GTK for 12; and while I’ve been paid to contribute to some GNOME-related projects over the years, it was always as part of non-GNOME related work.

The hiring process was really thorough, but in the end I managed to land the most amazing job I could possibly hope for.

If you’re wondering what I’ll be working on, here’s a rough list:

  • improving performance, especially on less powerful devices
  • identifying and landing new features
  • identifying and fixing pain points for current consumers of GTK

On top of that, I’ll try to do my best to increase the awareness of the work being done on both the GTK 3.x stable branch, and the 4.x development branch, so expect more content appearing on the development blog.

The overall idea is to ensure that GTK gets more exposure and mindshare in the next 5 years as the main toolkit for Linux and Unix-like operating systems, as well as better functionality for application developers that want to make sure their projects work on other platforms.

Finally, we want to make sure that more people feel confident enough to contribute to the core application development platform; if you have your pet feature or your pet bug inside GTK, and you want guidance, feel free to reach out to me.


Hopefully, the next year will not look like this one, and will be a bit better. Of course, if we in the UK don’t all die in the fiery chaos that is the Brexit circus…

by ebassi at December 14, 2018 07:00 PM

Episode 1.c: Applications

First of all, I’d like to apologise for the lateness of this episode. As you may know, if you follow me on social media, I’ve started a new job as GTK core developer for the GNOME Foundation—yes, I’m actually working at my dream job, thank you very much. Of course this has changed some of the things around my daily schedule, and since I can only record the podcast when ambient noise around my house is not terrible, something had got to give. Again, I apologise, and hopefully it won’t happen again.


Over the course of the first chapter of the main narrative of the history of the GNOME project, we have been focusing on the desktop and core development platform produced by GNOME developers, but we did not really spend much time on the applications—except when they were part of the more “commercial” side of things, like Evolution and Nautilus.

Looking back at Miguel’s announcement, though, we can see “a complete set of user friendly applications” in the list of things that the GNOME project would be focusing on. What good are a software development platform and environment, if you can’t use them to create and run applications?

While GIMP and GNOME share a great many things, it’s hard to make the case for the image manipulation program to be part of GNOME; yes: it’s hosted on GNOME infrastructure, and yes: many developers contributed to both projects. Nevertheless, GIMP remains fairly independent, and while it consumes the GNOME platform, it tends to do so in its own way, and under its own direction.

There’s another issue to be considered, when it comes to “GNOME applications”, especially at the very beginning of the project: GNOME was not, and is not, a monolithic entity. There’s no such thing as “GNOME developers”, unless you mean “people writing code under the GNOME umbrella”. Anyone can come along, write an application, and call it “a GNOME application”, assuming you used a copyleft license, GTK for the user interface, and the few other GNOME platform libraries for integrating with things like settings. At the time, code hosting and issue trackers weren’t really a commodity like nowadays—even SourceForge, which is usually thought to have always been available, would become public in 1999, two years after GNOME started. GNOME providing CVS for hosting your code, infrastructure to upload and mirror release archives, and a bug tracker, was a large value proposition for application developers that were already philosophically and technologically aligned with the project. Additionally, if you wanted to write an application there was a strong chance that you had contributed, or you were at least willing to contribute, to the platform itself, given its relative infancy. As we’ve seen in episode 1.4, having commit access to the source code repository meant also having access to all the GNOME modules; the intent was clear: if you’re writing code good enough for your application that it ought to be shared across the platform, you should drive its inclusion in the platform.

As we’ve seen all the way back in episode 1.1, GNOME started off with a few “core” applications, typically utilities for the common use of a workstation desktop. In the 1.0 release, we had the GNOME user interface around Miguel de Icaza’s Midnight Commander file manager; the Electric Eyes image viewer, courtesy of Carsten Haitzler; a set of small utilities, in the “gnome-utils” grab bag; and three text editors: GXedit, gedit, and gnotepad+. I guess this decision to ship all of them was made in case a GNOME user ended up on a desert island, and once saved by a passing ship after 10 years, they would be able to say: “this is the text editor I use daily, this is the text editor I use in the holidays, and that’s the text editor I will never use”.

Alongside this veritable text editing bonanza, we could also find a small PIM suite, with GnomeCal, a calendar application, and GnomeCard, a contacts application; and a spreadsheet, called Gnumeric.

The calendar application was written by Federico Mena in 1998, on a dare from Miguel, in about ten days, and it attempted to replicate the offerings of commercial Unix operating systems, like Solaris. The contacts application was written by Arturo Espinosa pretty much at the same time. GnomeCal and GnomeCard could read and export the standard vCal and vCard formats, respectively, and that allowed integration with existing software on other platforms, as an attempt to “lure away” users from those platforms and towards GNOME.

Gnumeric was the brainchild of Miguel, and the first real attempt at pushing the software platform forward; the original GNOME canvas implementation, based on the Tk canvas, was modified not only to improve the performance, but also to allow writing custom canvas elements in order to have things like graphs and charts. The design of Gnumeric was mostly borrowed from Excel, but right from the start the idea was to ensure that the end result would surpass Excel and its limitations—which was a somewhat tall order for an application developed by volunteers; it clearly demonstrates the will to not just copy commercial products, but to improve on them, and deliver a better experience to users. Gnumeric, additionally, came with a plugin infrastructure that exposed the whole workbook, sheet, and cells to each plugin.

Both the PIM applications and the spreadsheet application integrated with the object model effort, and provided components to let other applications embed or manipulate their contents and data.

While Gnumeric is still active today, 20 years and three major versions later, both GnomeCal and GnomeCard were subsumed into what would become one of the centerpieces of Ximian: Evolution.

GNOME 1.2 remained pretty much similar to the 1.0, from an application perspective. Various text editors were moved out of the release, and went along at their own pace, with gedit being the main survivor; Electric Eyes fell into disrepair, and was replaced by the Eye of GNOME as an image viewer. The newly introduced ggv, a GTK-based GUI layer around ghostscript, was the postscript document viewer. Finally, for application developers, a tool called “Glade” was introduced as a companion to every programmer’s favourite text editor. Glade allowed creating a user interface using drag and drop from a palette of components; once you were done with it, it would generate the C code for you—alongside the needed Autotools gunk to build it, if it couldn’t find any. The generated code was limited to specific files, so as long as you didn’t have the unfortunate idea of hand editing your user interface, you could change it through Glade, generate the code, and then hook your own application logic into it.

Many projects of that era started off with generated code, and if you’re especially lucky, you will never have to deal with it, unless, of course, you’re trying to write something like the history of the GNOME project.

For GNOME 1.4 we only see Nautilus as the big change in the release, in terms of applications. Even with an effort to ensure that applications, as well as libraries, exposed components for other applications to reuse, most of the development effort was spent on laying down the groundwork of the desktop itself; applications came and went, but were leaf nodes in the graph of dependencies, and as such required less coordination in their development, and fewer formalities when it came to releasing them to the users.

It would be a long time before somebody actually sat down, and decided what kind of applications ought to be part of the GNOME release.


With this episode, we’ve now reached the end of the first chapter of the history of the GNOME project. The second chapter, as I said a few weeks ago, will be on January 17th, as for the next four weeks I’m going to be busy with the end of the year holidays here in London.

Once we’re back, we’re going to have a little bit of a retrospective on the first chapter of the history of GNOME, before plunging directly into the efforts to release GNOME 2.0, and what those entailed.

So, see you next year for Chapter 2 of the History of GNOME.

by ebassi at December 14, 2018 06:00 PM

December 06, 2018

Emmanuele Bassi

Episode 1.b: Bindings

Back in episode 1.1 and 1.2 we talked a bit about “language bindings” as a resource available to GNOME application developers that did not want to deal with C as the programming language for their projects.

Language bindings for GTK appeared pretty much as soon as the GIMP adopted it, in order to write plugins for custom image processing effects; adding new filters just by dropping a file in a well-known location made GIMP extensible without requiring recompiling the whole application. So it’s not weird that the original announcement for the GNOME project mentions applications and desktop components written in Guile: since the very beginning, the intent was to only use C for core libraries of the GNOME platform, in order to have an application development stack consumable through programming languages other than C.

Of course, best laid plans, and all that.

If we look at the history of the bindings in GNOME we can divide it into two major eras: before introspection, and after introspection. We’re going to concentrate on the former, and leave the latter when we move on to the second chapter of the main narrative.

The first few bindings for GTK appearing on the scene were Objective C, C++, and Guile. The first two were definitely easier to achieve, as both Objective C and C++ were compatible with the C standard of the time; it was easy to build shallow abstractions over the C API, and if you needed something more complicated, you could always drop into C. Guile, on the other hand, used a LISP grammar; and even though it managed to call into C just the same, you could not really expose a pointer to a C data structure and tell developers to go to town on it, so you had to be careful in both what you exposed, and what you abstracted away.

After Objective C, C++, and Guile, Perl and Python bindings started to appear, as the market share for both those languages increased.

The shared problem among all of them was that GTK was growing as a toolkit, and its API surface started to exceed the capacity for a human being to hand code the various entry points by reading a C header file and translating it into the intended code for each programming language to trampoline into the underlying library.

It is a well established fact that programmers, left to their own devices, will try to program their way out of every problem. In this particular case, each binding started growing its own set of tools to generate code from C headers; then the scripts were modified to read C headers and ancillary metadata, needed to handle cases where C was sadly not enough to describe the API and the relation between widgets, like inheritance; or the idiosyncrasies of the GTK code base, like constructor arguments (the precursor to GObject’s properties) and signals.

The C++ bindings grew a veritable cornucopia of exceedingly 1997 decisions, like:

  • a set of hand coded header files that included parsing directives and metadata, alongside real C++ code
  • a set of m4 macros to do text processing over the generated files to replace some of the directives
  • a lexer and a parser, generated via flex and bison, to generate C++ code out of the generated headers

I looked at the early C++ bindings and I’m still in awe of the Rube Goldberg machine that was used to build them; the only things missing in the project build were a small marble, a hamster wheel, two pool cues, and a bowling ball.

Every binding, though, had its own way of generating code, and thus its own way of storing metadata to describe the GTK API. To avoid the proliferation of ad hoc formats, and to provide a relatively official description of the API, GTK developers decided to settle on a common definition file format, borrowed from the Guile bindings.

The definition file used an S-expression syntax, and it described:

  • enumeration types, which contained not only the C symbol, but also a “nickname”, that is, a string that could be used to map it to the numeric value
  • boxed types, that is plain old data structures
  • object types, that is classes in the type system
  • functions, each with its return type and arguments

The definition files were originally a mix of hand written changes on top of the output generated by a script that would parse the C header files; this meant that they would go out of sync pretty quickly. Additionally, they were lacking a lot of ancillary information because the script did not know anything about the GTK type system. Various language bindings took the definition files and copied them into their own source repositories, to tweak them to fit their code generation steps; this introduced an additional drift into the already cumbersome process of keeping language bindings up to date with a fast moving library like GTK.

An attempt at improving and standardising the definition file format came in late 1999 and early 2000, courtesy of Elliot Lee and Havoc Pennington. The s-expression syntax was maintained, but the data set was extended to allow matching objects with namespaces; methods with classes; and parameters with their types, names, and default values.

Of course, this still required generating C code out of a formally agnostic S-expression, generated from C code; this meant that bindings for dynamic languages were anything but dynamic, and that additions to the GTK API would still require an additional step in order to be consumed by anything that wasn’t C, or C adjacent. For a while, though, this was good enough for application developers, and the ability to quickly write small tools or large applications, in Python, or Perl, or PHP, or Ruby, made the GNOME platform quite attractive.

Still, many applications in the GNOME project were written in C because that’s what C developers will do when given the chance — and that’s what we’re going to see in next week’s side episode.

by ebassi at December 06, 2018 04:00 PM

November 29, 2018

Emmanuele Bassi

Episode 1.a: The GIMP Toolkit

The history of the GNOME project is also the history of its core development platform. After all, you can’t have a whole graphical environment without a way to write not only applications for it, but its own components. Linux, for better or worse, does not come with its own default GUI toolkit; and if we’re using GNU as the userspace environment we are still left without something that can do the job of putting things on the screen for people to use.

While we can’t tell the history of GNOME without GTK, we also cannot tell the history of GTK without GNOME; the two projects are fundamentally entwined, both in terms of origins and in terms of direction. Sure, GTK can be used in different environments, and on different platforms, now; and having a free-as-in-free software GUI toolkit was definitely the intent of the separation from GIMP’s own code base; nevertheless, GTK as we know it today would not exist without GNOME, just like GNOME would not have been possible without GTK.

We’ve talked about GTK’s origin as the replacement toolkit for Motif created by the GIMP developers, but we haven’t spent much time on it during the first chapter of the history of the GNOME project.

If you managed to fall into a time hole, and ended up in 1996 trying to write a GUI application on Linux or any commercial Unix, you’d have different choices depending on:

  • the programming language you wish to use in order to write your application
  • the license you wish to use once you release your application

The academic and professional space on Unix was dominated by OpenGroup’s Motif toolkit, which was mostly a collection of widgets and utilities on top of the X11 toolkit, or Xt. Xt’s API, like the API of all pre-Xorg X11 libraries, is very 1987: pointers hidden inside type names; global locks; explicit event loop. Xt is also notable because it lacks basically any feature outside of “initialise an application singleton”; “create this top level window”, either managed by a window manager compatible with the Inter-Client Communication Conventions Manual, or an unmanaged “pop up” window that is left to the application developer to handle; and an event dispatch API, in order to write your own event loop. Motif integrated with Xt to provide everything else: from buttons to text entries; from menus to scroll bars.

Motif was released under a proprietary license, which required paying royalties to the Open Group.

If you wanted to release your application under a copyleft license, you’d probably end up writing something using another widget collection, released under the same terms as X11 itself, and like Motif based on the X toolkit, called the “X Athena widgets”—which is something I would not wish on my worst enemy; you could also use the GNUstep project toolkit, a reimplementation of the NeXT frameworks; but like on NeXT, you’d have to use the (then) relatively niche Objective C language.

If you didn’t want to suffer pain and misery with the Athena widgets, or you wanted to use anything except Objective C, you’d have to write your own layout, rendering, and input handling layer on top of raw Xlib calls — an effort commonly known amongst developers as: “writing a GUI toolkit”.

GIMP developers opted for the latter.

Since GTK was a replacement for an extant toolkit, some of the decisions that shaped its API were clearly made with Motif terminology in mind: managed windows (“top levels”), and unmanaged windows (“pop ups”); buttons and toggle buttons; menus and menu shells; scale and spin widgets; panes and frames. Originally, GTK objects were represented by opaque integer identifiers, like objects in OpenGL; that was, fortunately, quickly abandoned in favour of pointers to structures. The widget hierarchy was flat, and you could not derive new widgets from the existing ones.

GTK as a project provided, and was divided into, three basic and separate libraries:

  • GLib, a C utility library, meant to provide the fundamental data structures that the C standard library does not have: linked lists, dynamic arrays, hash tables, trees
  • GDK, or the GIMP drawing toolkit, which wrapped Xlib function calls and data types, like graphic contexts, visuals, colormaps, and display connections
  • GTK, or the GIMP toolkit, which contained the various UI elements, from windows, to buttons, to labels, to menus

Once GTK was spun off into its own project in late 1996, GTK gained the necessary new features needed to write applications and libraries. The widget-specific callback mechanism to handle events was generalised into “signals”, which are just a fancy name for the ability to call a list of functions tied to a well known name, like “clicked” for buttons, or “key-press-event” for key presses. Additionally, and more importantly, GTK introduced a run time type system that allowed deriving new widgets from existing ones, as well as creating new widgets outside of the GTK source tree, to allow applications to define their own specialised UI elements.
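
To make the signal idea concrete, here is a minimal sketch of the model described above, written against the later GTK 2/3 spelling of the API (g_signal_connect), since that is the form you can still compile today; the GTK 1.x entry points were named slightly differently, but the model, a callback attached to a well-known name like “clicked”, is the same. The labels and the callback are made up for illustration.

    /* Sketch: attaching a callback to the "clicked" signal of a button.
     * Build (GTK 3 assumed): cc signals.c $(pkg-config --cflags --libs gtk+-3.0) */
    #include <gtk/gtk.h>

    /* Invoked every time the button emits its "clicked" signal. */
    static void
    on_clicked (GtkButton *button, gpointer user_data)
    {
      g_print ("Button clicked\n");
    }

    int
    main (int argc, char *argv[])
    {
      gtk_init (&argc, &argv);

      GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      GtkWidget *button = gtk_button_new_with_label ("Click me");
      gtk_container_add (GTK_CONTAINER (window), button);

      /* Tie our function to the "clicked" signal, and quit the main
       * loop when the window is destroyed. */
      g_signal_connect (button, "clicked", G_CALLBACK (on_clicked), NULL);
      g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);

      gtk_widget_show_all (window);
      gtk_main ();

      return 0;
    }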

The new GTK gained a “plus”, to distinguish it from the in-tree GTK of the early GIMP.

Between 1996 and 1997, GTK development was mostly driven by the needs of GIMP, which is why you could find specialised widgets such as a ruler or a color wheel in the standard API, whereas things like lists and trees of widgets were less developed, and amounted to simple containers that would not scale to large sets of items—as any user of the file selection widget at the time would be able to tell you.

Aside from the API being clearly inspired by Motif, the appearance of GTK in its 0.x and 1.0 days was very much Motif-like; blocky components, 1px shadows, aliased arrows and text. Underneath it all, X11 resources like windows, visuals, server-side allocated color maps, graphic contexts, and rendering primitives. Yes, back in the old days, GTK was fully network transparent, like every other X11 toolkit. It would take a few more years, and two major API cycles, for GTK to fully move away from that model, and towards its own custom, client-side rendering.

GTK’s central tenets about how the widget hierarchy worked were also borrowed in part from Motif; you had a tree of widgets, starting from the top level window, down to the leaf widgets like text labels. Widgets capable of holding other widgets, or containers, were mostly meant to be used as layout managers, that is UI elements encoding a layout policy for their children widgets—like a table, or a horizontal box. The layout policies provided by GTK eschewed the more traditional “pixel perfect” positioning and sizing, as provided by the Windows toolkits; or the “struts and springs” model, provided by Apple’s frameworks. With GTK, you packed your widgets inside different containers, and those would be sized by their contents, as well as by packing options, such as a child filling all the available space provided by its parent; or allowing the parent widget to expand, and aligning itself within the available space.
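
As a rough illustration of this packing model (again in the later GTK 2/3 API, with made-up widgets), the sketch below packs an entry and a button into a horizontal box: the entry is asked to expand and fill whatever space the parent hands out, while the button keeps its natural size.

    /* Sketch: the box packing model, with expand/fill options per child. */
    #include <gtk/gtk.h>

    int
    main (int argc, char *argv[])
    {
      gtk_init (&argc, &argv);

      GtkWidget *window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      GtkWidget *box = gtk_box_new (GTK_ORIENTATION_HORIZONTAL, 6);
      GtkWidget *entry = gtk_entry_new ();
      GtkWidget *button = gtk_button_new_with_label ("Open");

      /* expand = TRUE, fill = TRUE: the entry grows with the window. */
      gtk_box_pack_start (GTK_BOX (box), entry, TRUE, TRUE, 0);
      /* expand = FALSE, fill = FALSE: the button keeps its natural size. */
      gtk_box_pack_start (GTK_BOX (box), button, FALSE, FALSE, 0);

      gtk_container_add (GTK_CONTAINER (window), box);
      g_signal_connect (window, "destroy", G_CALLBACK (gtk_main_quit), NULL);

      gtk_widget_show_all (window);
      gtk_main ();

      return 0;
    }

Note that nothing in the sketch specifies a width in pixels; the final sizes fall out of the children’s size requests and the packing flags, which is precisely the point of the model described above.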

As we’ve seen in episode 1.2, one of the first things the Red Hat Advanced Development labs did was to get GTK 1.0 out of the door in advance of the GNOME 1.0 release, in order to ensure that the GNOME platform could be consumed both by desktop components and by applications alike. While that happened, GTK started its 1.2 development cycle.

The 1.2 development effort went into stabilisation, performance, and the occasional new widget. GLib was split off from GTK’s repository at the start of the development cycle, as it was useful for other, non-GUI C projects.

As a result of “Project Bob”, Owen Taylor took the code initially written by Carsten Haitzler, and wrote a theming engine for GTK. GTK 1.0 gave you API inside GDK to draw the components of any widget: lines, polygons, ovals, text, and shadows; the GDK API would then call into the Xlib one, and send commands over the wire to the X server. In GTK 1.2, instead of using the API inside GDK, you’d go through a separate layer inside GTK; you had access to the same primitives, augmented by a state tracker for things like colors, background pixmaps, and fonts. That state was defined via an ancillary configuration file, called a “resource” file, with its own custom syntax. You could define the different colors for each different state for each widget class, and GTK would store that information inside the style state tracker, ready to be used when rendering.

Additionally, GTK allowed loading “engines” at run time: small pieces of compiled C code that would be injected into each GTK application, and that replaced the default drawing implementation provided by GTK. Engines would have access to the style information, and, critically, as we’re going to see in a moment, to the windowing system surface that the widget would be drawn into.

It’s important to note that theme engines in GTK 1.2 could only influence colors and fonts; additionally, widgets would be built in isolation one from the other, from disjoint primitives. As a theme engine author you had no idea if the text you’re drawing is going to end up in a button, or inside a label; or if the background you’re rendering is going to be a menu or a top level window. Which is why theme engine authors tried to be sneaky about this; due to how GTK and GDK were split, the GDK windowing system surface needed to have an opaque back pointer to the GTK widget that owned it. If you knew this detail, you could obtain the GTK widget currently being drawn, and determine not only the type of the widget, but also its current position within the scene graph. Of course, this meant that theme engines, typically developed in isolation, would need to be cognisant of the custom widgets inside applications, and that any bug inside an engine would either break new applications, which had no responsibility towards maintaining an internal state for theme engines to poke around with, or simply crash everything that loaded it. As we’re going to see in the future, this approach to theming—at the same time limited in what it could achieve out of the box, and terrifyingly brittle as soon as it was fully exploited—would be a problem that both GTK developers and GNOME theme developers would try to address; and to the surprise of absolutely nobody, the solution made a bunch of people deeply unhappy.

Given that GTK 1.2 had to maintain API and ABI compatibility with GTK 1.0, nothing much changed over the course of its development history. By the time GNOME 1.2 was released, the idea was to potentially release GTK 1.4 as an interim version. A new image loading library, called GdkPixbuf, was written to replace the aging imlib, using the GTK type system. Additionally, as we saw in episode 1.5, Owen Taylor was putting the finishing touches on a text shaping library called Pango, capable of generating complex text layouts through Unicode strings. Finally, Tim Janik was hard at work on a new type system, to be added to GLib, capable of being used for more than just GUI development. All of these changes would require a clear cut with the backward compatibility of the 1.x series, which meant that the 1.3 development cycle would result in GTK 2.0, and thus would be part of the GNOME 2.0 development effort—but this is all in the future. Or in the past, if you think fourth dimensionally.

Next week, we’re going to see what happens when people that really don’t want to use C to write their applications need to deal with the fact that the whole toolkit and core platform they are consuming is written in object oriented C, in the side episode about language bindings.

by ebassi at November 29, 2018 04:00 PM

November 22, 2018

Emmanuele Bassi

Episode 1.5: End of the road

With version 1.2 released, and the Foundation, well, founded, the GNOME community was free to spend its time on planning the next milestones for the project, namely: the last release of the 1.x cycle, and the first of the 2.x one.

GNOME 1.4 was supposed to be a technology preview for some of the changes in the platform that were going to be the centerpiece of the GNOME 2.0 core platform; additionally, applications such as Nautilus, Evolution, and Gnumeric were going to be added to the release as official GNOME components. Maciej Stachowiak volunteered to work as the release manager, and herd all the module maintainers in order to put all the cats in a row for long enough to get distribution-ready tarballs out of them.

From the technological side, one of the long term integration jobs was finally coming to fruition: a specification for interoperation between GNOME and the various window managers available on X11. The work, which was started back in the pre-1.0 beta days by Carsten Haitzler with the goal to make Enlightenment work with the GNOME components, attempted to specify various properties and protocols to be implemented by window managers and toolkits, in order to allow those toolkits to negotiate capabilities and perform operations that involved the window manager, such as making windows full screen, for media players and presentation tools; listing the number of virtual workspaces available, and moving windows across them; defining “struts”, or areas of the screen occupied by special parts of the desktop, such as panels, and that would not be covered when resizing or maximising windows; specifying properties for icons, window titles, and other ancillary information to be displayed by session components such as windows and task lists; creating protocols for embedding windows from different processes, used to implement panel applets, and shared components.

The Extended Window Manager Hints specification, built on top of the original Inter-Client Communication Convention Manual, or ICCCM, was the first attempt at real cross-desktop collaboration, with developers for toolkits, window managers, and desktop environments working together in order to let applications behave somewhat consistently, regardless of the environment in which they were being used. Alongside the EWMH, additional specifications like the XEMBED protocol, the clipboard specification, and the XSETTINGS specification, were created as foundations to the concept of a “free desktop” stack, and we’ll see where that will lead in the GNOME 2 days.

The creation of the EWMH gave a further push towards the establishment of a default window manager for GNOME. Enlightenment had steadily fallen out of favour, given its increase in complexity and its own push towards becoming a full environment in its own right. Additionally, its release planning had become somewhat erratic, with Linux distributions often forced to package development snapshots of dubious quality. While Enlightenment was never considered the default window manager for the GNOME desktop environment, it was the most integrated for a long while, especially compared to the older, or more “free spirited” ones, meant mostly to be used by people building their own environments out of tinfoil, toothpicks, hopes, and prayers. Nevertheless, it became clear that having at least a sensible default, released alongside the rest of the project’s modules and maintained within the same community, and more importantly with the same goals, was necessary. The choice for a window manager fell on Sawmill, a window manager written in a Lisp-like language called “rep”, and originally released around the same time as GNOME 1.2. Sawmill, later renamed Sawfish to avoid a naming collision with a log analysis software for Linux, was, like many Lisp-based projects, programmable by the user; it was possible to write rules to match windows created by specific applications, and apply policies, such as automatic maximisation, or moving to specific workspaces at launch, or saving the geometry and state of the windows when closing them. The window manager decorations were also themable, because of course they were.

Another change introduced for GNOME 1.4 as a technology preview in preparation for GNOME 2.0 was GConf, a configuration storage and notification subsystem written by Havoc Pennington. For the 0.x snapshots and 1.0 releases, GNOME provided a simple key/value configuration system called gnome-config. Each application would use the gnome-config API to store and retrieve its settings, which were saved inside flat, INI-like files in a hidden location under the user’s home directory. The system was fairly simple, but not very reliable, as it could theoretically leave files in an inconsistent state; it wasn’t at all performant, as it needed to traverse hierarchies on the file system to get to a key/value pair; there was no database schema for the keys and values, and it made it impossible to validate the contents of the configuration. Additionally, storing complex values required a fair amount of manual serialisation and deserialisation, so application developers typically ended up writing their own code for storing settings in flat configuration files under their own directory. It was in theory possible to use gnome-config to store and read settings for the desktop itself, but without a schema to define the contents of the settings and without any type of access arbitration there was no guarantee that different system components would not end up stomping on each other’s toes, and there was no way for interested components or applications to be notified of changes to the configuration. The limitations of the API led to limitations in the user interface: all settings and preferences were by necessity applied explicitly, to allow saving them to a file, and ensure a consistent state of the underlying storage.

The goal of GConf was to provide a consistent storage for preferences. Values would be addressed via a path, like a file system; read and write operations would go through a session daemon, which was the only component responsible for reading and writing the data to disk, and would be able to subscribe listeners to changes to a specific path. This way, applications could listen to system configuration changes and update themselves in response to them; not only that, but utilities could read and write well-known configuration keys, decentralising the configuration of the environment and application from the system settings control panel. Notification and daemon-mediated writes also made it possible to write instant-apply preferences dialogs, getting rid of the “Apply” button.
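
To give a feel for the model, here is a rough sketch of how an application might have talked to GConf through its C client API, written from memory; GConf is long obsolete, and the “/apps/example” directory and key names are made up purely for illustration. Reads and writes go through the daemon, and a registered callback fires whenever a watched key changes, which is what made instant-apply preference dialogs possible.

    /* Sketch only: the (long obsolete) GConf client API, from memory.
     * Build roughly: cc gconf-demo.c $(pkg-config --cflags --libs gconf-2.0) */
    #include <gconf/gconf-client.h>

    /* Invoked by the client library whenever the daemon reports a change
     * under the directory we subscribed to. */
    static void
    on_key_changed (GConfClient *client, guint cnxn_id, GConfEntry *entry, gpointer user_data)
    {
      g_print ("key changed: %s\n", gconf_entry_get_key (entry));
    }

    int
    main (int argc, char *argv[])
    {
      GConfClient *client = gconf_client_get_default ();

      /* Watch everything under our (hypothetical) application directory. */
      gconf_client_add_dir (client, "/apps/example",
                            GCONF_CLIENT_PRELOAD_NONE, NULL);
      gconf_client_notify_add (client, "/apps/example/greeting",
                               on_key_changed, NULL, NULL, NULL);

      /* Reads and writes go through the session daemon, never straight
       * to the files on disk. */
      gconf_client_set_string (client, "/apps/example/greeting", "hello", NULL);
      gchar *value = gconf_client_get_string (client, "/apps/example/greeting", NULL);
      g_print ("greeting is now: %s\n", value ? value : "(unset)");
      g_free (value);

      /* A main loop is needed for change notifications to be delivered. */
      GMainLoop *loop = g_main_loop_new (NULL, FALSE);
      g_main_loop_run (loop);

      return 0;
    }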

All seemed to be going well for the 1.4 release, and for the 2.0 planning.

In January 2001, Helix Code renamed itself to Ximian, once it couldn’t secure a trademark on the former name; nothing outside of the name of the company really changed.

In March 2001 Eazel released, at long last, Nautilus 1.0, and simultaneously dropped a bomb on the community: the company was going to lay off most of its employees, after failing to raise enough capital to keep the operations going. Eazel finally closed down in June 2001, after only two years of operations. The Dot-com bubble that propelled the IT industry into unsustainable heights was finally bursting, and everything was crashing down, with the startups that did not manage to get acquired by bigger companies closing left and right, leaving engineers scrambling to get a new job in a bearish market, and leaving cheap hardware, as well as cheap Herman Miller chairs, in the rubbish pile behind their vacant offices.

Eazel published all their remaining code, and its employees promised to remain involved in the maintenance of their projects while the community picked up what they could; which, in fairness, they did, until most of the engineering team moved to Apple’s web browser team.

Nautilus did languish after GNOME 1.4, but as a centerpiece for GNOME 2.0, and with the port to GTK 2.0, it got progressively whittled down to remove the more startuppy bits, while remaining a file manager with first class remote volume browsing; the GNOME community was left with a lot of the technical debt that remained unpaid for the following 15 years, as a result of Eazel’s abrupt end.

The second edition of GUADEC, held in Copenhagen in April 2001, right in the middle of the Eazel drama, nevertheless featured more attendees and more presentations, including one that would prove fundamental in setting the future direction of the project. Sun’s usability engineer Calum Benson presented the results of the first round of user testing on GNOME 1.4, and while the results were encouraging in some areas, they laid bare the limits of the current design approach of a mish-mash of components. If GNOME wanted to be usable by professionals, not curating the offering of the desktop was not a sustainable option any more.

Sun had shipped GNOME 1.4 as a technology preview for Solaris 8, and conducted the first extensive set of user tests on it by sitting professional users in front of GNOME and asking them to perform certain tasks, while recording their actions. The consistency, or lack thereof, of the environment was one of the issues that the testing immediately identified: identical functionality was labelled differently depending on the component; settings like the fonts to be used by the desktop were split across various places; duplication of functionality, mostly for the sake of duplication, was rampant. Case in point: the GNOME panel shipped with not one, not two, but five different clock applets—including, of course, the binary clock, because nothing says “please quickly tell me if it’s time for a very important meeting with my boss” like having to read the answer in binary, from fake LEDs on a panel 48 pixels tall. Additionally, all those clocks, like all the panel applets, could be added and removed by sheer accident, clicking around on the desktop. The panel itself could simply go away, and there was no way to revert to a working state without nuking the settings for the user. The speed of the few animated components, like automatically hiding the panel to increase the available on screen space, could be controlled down to the millisecond, just in case you cared, even if the refresh rate of your display could never have the same resolution. Nautilus had enough settings that they had to be split into a class-based system, with Novice, Intermediate, and Expert levels, as if managing your files required finishing RPG campaigns, and acquiring enough experience points from your dungeon master.

The most important consequence of the usability study, though, was that in order for GNOME to succeed as a desktop environment for everyone, it had to stop being made just for the people that wrote it, or that could spend hours of time tweaking the options, or learning how the environment worked first. Not everyone using GNOME would be a white male geek in their 20s, writing free software. Web designers, hardware engineers, financial analysts, musicians, artists, writers, scientists, school teachers; GNOME 2.0 would need a set of guidelines for applications and for the desktop, to ensure a consistent experience for every user, similar to how the platforms provided by Apple and Microsoft defined a common experience for the operating system.

In 2001, though, the Linux desktop encountered an unexpected problem, in the same way a sentence encounters a full stop. Apple, fresh from the return of its co-founder Steve Jobs and the subsequent successful product launches for new lines of personal computers, and after releasing what was nominally a beta in late 2000, unveiled its new operating system, OS X. Sure, it was slow, buggy, and definitely nothing more than an alpha release; but it was a shiny, nice looking desktop environment working on top of a Unix-like user space—precisely what the free software equivalents were trying to achieve. The free desktop community was so worried about Microsoft that it never really considered Apple to be a contender.

September 2001 saw a new point release of OS X, and in just six months from its initial unveiling, Apple had managed to fix most of the instability of the previous snapshot; on top of that, they announced that all new Apple computers starting from January 2002 would be sold with OS X as the default operating system. GNOME 2.0 was released in June 2002, and in August 2002 Apple released the second point release of OS X, which improved performance, hardware support, and added accelerated compositing to the stack. Apple rose back from the ashes of the ‘90s with a new, compelling software platform, and Linux found itself a new, major competitor outside of Windows. A competitor hungry for the same kind of professional and amateur markets that Linux was trying to target; and, unlike Linux, Apple had both a hardware platform and a single driving force, with enough resources to sustain this effort.

The fight for the survival of GNOME, and of the Linux desktop, was back on, and GNOME had to start from 2.0.


We have now reached the point in the story where the GNOME community of volunteers and the ecosystem of companies around them is working on a new major release of the core platform, the desktop, and the applications around it, so it’s a good place to take a break. Writing the History of GNOME is quite intensive: there’s a lot of material that needs to be researched in order to write the 2500 words of an episode, edit them, record them, and edit the recording.

For the next three weeks we’re going to have some short side episodes on technologies and applications relevant to GNOME; the first side episode is going to be about GTK; the second will be about how GNOME wrote language bindings in the early years; and the third episode will be about the first GNOME applications that entered the project.

After these side episodes, we’re going to pause for four weeks, while I gather my notes for the next chapter in our history of the GNOME project; we are going to come back on January 17th, with the story of how GNOME got its groove back with version 2.0, in the next chapter: “Perfection, improved”.

by ebassi at November 22, 2018 02:00 PM

November 17, 2018

Richard Purdie

Post exercise collapse

It’s no secret that I’ve some kind of energy problem. What is less known is how horrible the effects can be, or how plain weird the pattern is. People see me manage the motorcycling, the mountain biking or cycling but they don’t see what happens afterwards.

My current working model is that exercise generates some kind of toxin. That toxin builds up in the body and its effects peak around 24-28 hours after the exercise that generated it. The symptoms are best described as “flu like”: lethargy, feeling cold, a brain fog, inability to concentrate, feeling washed out, muscle and joint pains, tinnitus, teeth nerve pain and general mood changes. Those feelings can go on for around 2 weeks. There is no respiratory side to it and it never seems to be actual flu.

How serious is it? At its worst, after two days of motorcycling I collapsed semi-conscious and was out of it for 18 hours. My body had run so low on energy it lost the ability to regulate its temperature properly (thankfully I collapsed onto a duvet).

Whatever the toxin is, it appears to attack the nervous system, hence the various joint/muscle pains, tinnitus, teeth sensitivity and so on. If I kept going and didn’t give in to it (through sheer willpower or painkillers), I ended up with a permanent hand/limb tremor. Thankfully I also discovered that biotin (Vitamin B7) appears to accelerate healing of that, and after around a year I managed to stop shaking, a major win.

The closest thing in medical knowledge that matches is paracetamol overdose, the key match being the timeframe. You don’t see symptoms of that for 24-48 hours after overdose. This led to the realisation that its treatment also seems to help me. It’s treated with NAC (N-acetyl-Cysteine), which is thankfully freely available and is a non-essential amino acid, so it’s comparatively safe with low side effect risks.

Have I talked to a doctor about this? In short, yes, a lot. They have run a ton of tests over around a decade and there are few types of specialists I’ve not seen at some point. There are a ton of things we know it isn’t and some abnormalities. They don’t match anything they recognise. The two interesting data points are that my liver is unhappy about something (always raised GGT and sometimes raised ALT/AST/ALP) and prolactin is elevated. No idea about the prolactin (a story in its own right) but the liver fits the toxin poisoning model.

It’s taken me that decade to figure out the pattern and to come up with the current coping strategy, which cuts the recovery from two weeks to 2-3 days. At one point I felt like I was accumulating damage (the tremor in particular); I’m pleased to say that it feels much less so now.

There are swings and roundabouts, as the NAC appears to help a huge amount but may expose or cause other vitamin deficiencies (B2?).

I personally suspect there is some genetic glitch somewhere, not enough to threaten life but enough to mean backup pathways which are less efficient are being used. There is solid science behind NAC increasing levels of Glutathione which is a master anti-oxidant and the way the body cleans up toxins. B2 is needed for recycling Glutathione and biotin is being studied in nervous system disease.

I guess I’m putting this out there in case it helps anyone else or that someone with knowledge of biochemistry could give any further insight into this.

by Richard at November 17, 2018 11:06 AM

November 15, 2018

Emmanuele Bassi

Episode 1.4: Founding the Foundation

As is the case for many free and open source software projects, GNOME’s planning and discussions mostly happen online, on a variety of mediums, like IRC, mailing lists, and issue trackers. Some of those mediums fade out over time, replaced by other, newer ones, or simply fall out of use because they become less efficient.

The first 10 to 15 years of GNOME can be clearly traced and reconstructed from mailing list archives, which is a very good thing; without them, this podcast would only have been possible by spending hundreds of hours compiling an oral history of the project, with the chance of getting things wrong—or, at least, wronger than I do—or simply omitting details that have long since been forgotten by the people involved.

Nevertheless, it’s clear that some of the planning and discussion cannot really happen over a written medium; the bandwidth is simply not there. Shared, physical presence is much more efficient for human beings, in order to quickly communicate or iterate over an idea; the details can be summarised later, once everyone is on the same page.

By the year 2000, the GNOME project was already more than 2 years old, had seen its first major release, and now encompassed various companies, as well as volunteers, around the globe.

While some of those people may have ended up sharing an office while working for the same company, and of course everyone was on IRC pretty much 24/7, the major stakeholders and contributors had yet to meet in one place.

Through some aggressive fundraising, what was supposed to be a small conference organised by Mathieu Lacage in March for the benefit of the students of the École Nationale Supérieure des Télécommunications in Paris was scaled up to allow the attendance of 40 GNOME developers from around the world for four days of presentations and discussions.

The main theme of the meeting was fairly ambitious: laying down the foundation for the next major release of GNOME, starting from the core platform libraries, with a new major version of GTK; the introduction of a comprehensive text rendering API called “Pango”; and a more pervasive use of Bonobo components in the stack.

Additionally, it allowed companies like Eazel, Ximian, and Red Hat, to present their work to the larger community.

Owen Taylor and Tim Janik presented their plans for GTK 1.4, including a new type system and improved integration with language bindings; Owen also presented Pango, a library for text rendering in non-Latin localisations, with Unicode support, and support for bidirectional and complex text. GTK was also going to be available on Windows and BeOS, thanks to the efforts of Tor Lillqvist and Shawn Amundson, respectively. Havoc Pennington was working on a new text editing widget, based on Tk’s multi-line text entry. Language bindings, for C++, Python, and Ada, were also presented, as well as applications targeting the GNOME platform.

Out of the four days of presentations, planning, discussions, and hacking came two major results:

  • the creation of a “steering committee”, with the goal of planning and directing the development efforts for GNOME 2.0
  • the push for the creation of a legal entity capable of collecting donations on behalf of the GNOME project, and of acting as a point of reference between the community and the commercial entities that wanted to contribute to GNOME

As a side note, Telsa Gwynne’s report of GUADEC’s first edition is also the first time I’ve seen an explicit mention of the “Old Farts Club”, as well as the rule of being over 30 in order to enter it; I think we can add that to the list of major achievements of the conference.

After GUADEC, in May 2000, GNOME 1.2 was released, as part of the stabilisation of the 1.x platform, and GNOME 1.4 was planned by the steering committee to be released in 2001, with the 2.0 development happening in parallel.

The process of creating the GNOME Foundation would take a few additional months of discussions, and in July 2000 the foundation mailing list was created for various stakeholders to outline their positions. The initial shape of the Foundation was modelled on the Apache Software Foundation, as both a forum for the technical direction of the project, and a place for corporations to get involved with the project itself. The goals for this new entity, as summarised by Bart Decrem, an Eazel co-founder, were:

  1. Providing a forum to determine the overall technical direction of GNOME
  2. Promoting GNOME
  3. Fostering collaboration and communication among GNOME developers
  4. Managing funds donated to the GNOME project

There was a strong objection to corporations being able to dictate the direction of the project, so one of the stated non-goals was that the Foundation would not be an industry consortium, similar to The Open Group. The Foundation would also not hire developers directly.

In order to avoid corporate dominance, no company would be allowed to be a member of the Foundation: if a company wanted to have a voice in the direction of the project it could hire a Foundation member, and thus have a representative in the community. Additionally, there would be a limit on the number of directors on the board working for the same company.

As the Foundation was going to be incorporated in the US, one way to avoid both the under-representation of non-US members and the potential fragmentation into separate entities for each large geographical region was to be more open about both the membership and the board election process. GNOME contributors would be able to join the Foundation as long as they were actively involved with the project, and each member would be eligible to be elected as a director. Companies would be part of an advisory organ, not directly involved in shaping the project.

The idea of having the Foundation be the structure for setting the technical direction of the project was dropped fairly quickly, replaced by its function to be the place for settling controversial decisions, leaving the maintainers of each module in charge of their project.

It is interesting to note that many of the discussions that were part of the Foundation’s initial push have yet to be given an answer, more than 15 years later. If the Foundation is meant to be a forum for module maintainers, how do we define which modules should be part of GNOME, and which ones shouldn’t? Is being hosted on GNOME infrastructure enough to establish membership of a module? And, if so, who gets to decide that a module should be hosted on GNOME infrastructure? Is GNOME the desktop, or is that just a project under the GNOME umbrella? Are applications part of GNOME? The GNOME project is, to this day, still re-evaluating those questions.

Alongside the push from Eazel, Red Hat, and Ximian to get the Foundation going came the announcement that Sun was going to support GNOME as the desktop for its Solaris operating system, in order to replace the aging CDE. To that end, Sun was going to share the resources of its engineering, design, and QA teams with the GNOME project. Additionally, IBM, HP, and Dell wanted to support GNOME through the newly created Foundation.

Surprisingly, the discussions over the Foundation proceeded quickly; the self-imposed deadline for the announcement was set for August 15, 2000, three years after the first announcement of the GNOME project, to be presented at the Linux World Expo, a trade fair with a fair amount of media exposure. The creation of the actual legal entity, an initial set of bylaws, and the election of a board of directors would follow.

Having a few of the hot startups in the Linux space, as well as well-established companies in the IT sector, come together and announce they were putting their weight behind the GNOME project would, of course, be spun in a way that was adversarial to Microsoft, and so it was. The press release at the LWE pushed the angle of a bunch of companies joining together to challenge Microsoft, using a bunch of free code written by hacker weirdos to do so.

The announcement of the GNOME Foundation did not impress the KDE project, which released a statement trying to both downplay the importance of GNOME and of the companies that pledged resources to the GNOME project.

In November 2000, after finalising the initial set of bylaws for the Foundation and opening the membership to the people contributing to the project, GNOME held the first ever elections for the position of director of the board. With 33 candidates, a pool of 370 possible voters, and 330 valid ballots in the box, the first eleven directors were:

  • Miguel de Icaza (Helix Code)
  • Havoc Pennington (Red Hat)
  • Owen Taylor (Red Hat)
  • Jim Gettys (Compaq)
  • Federico Mena Quintero (Helix Code)
  • Bart Decrem (Eazel)
  • Daniel Veillard (W3C)
  • Dan Mueth (Eazel)
  • Maciej Stachowiak (Eazel)
  • John Heard (Sun Microsystems)
  • Raph Levien (Eazel)

Additionally, the advisory board was composed as follows:

  • Compaq
  • Eazel
  • Free Software Foundation
  • Gnumatic
  • Helix Code
  • Henzai
  • IBM
  • Object Management Group
  • Red Hat
  • Sun Microsystems
  • VA Linux

After the election, the new board started working in earnest on the process for incorporating the foundation and registering it as a non-profit entity; this took until March 2001, after a couple of false starts. In the meantime, the main topics of discussions were:

  • the foundation bylaws, needed for the incorporation, the tax-exempt status, and for opening a bank account in order to receive membership fees from the advisory board
  • the GNOME 1.4 release management, handled by Maciej Stachowiak
  • the preparation for the 2nd edition of GUADEC, to be held in Denmark

Additionally, the GNOME Foundation was going to work on establishing a trademark for the project, both as the name GNOME and for the project’s logo.

Originally, the GNOME logo was not a logo at all. It was part of a repeating pattern for one of the desktop backgrounds, designed by Tuomas Kuosmanen, who also designed Wilber, the GIMP mascot. Tuomas reused the foot pattern as an icon for the panel, namely the button for the launcher menu, which contained a list of common applications, and let the user add their own.

In a typical free software spirit, and with a certain amount of bravery considering the typical results of such requests, Red Hat decided to host a competition for the logo, setting a graphics tablet as the prize for the winning submission; they also asked contestants to use GIMP to create the logo, which, sadly, precluded the ability to get vector versions of it. In the end, many good submissions notwithstanding, the decision fell on a modified version of the original foot, also done by Tuomas—only instead of a right foot, it was a left foot, shaped like a “G”.

Leaving aside the administrivia of the Foundation for a moment, let’s go back to the technical side of the GNOME project, and take a small detour to discuss the tools used by GNOME developers. Over the years these tools have changed, in many cases for the better, but it helps to understand why these changes were made in the first place, especially for newcomers who did not experience how life was way back when developers had to bang rocks together to store the code, use leaves and twigs to compile it, and send pigeons to file bug reports.

GNOME code repositories started off using CVS. If you know, even in passing, what Git is, you can immediately think of CVS as anything that Git isn’t.

CVS was slow; complicated; obscure; unforgiving; not extensively documented; with a terrible user experience; and would fail in ways that could leave both the local copy of the code and the remote one in a sorry state for everyone.

No, hold on.

Sorry, that’s precisely like Git.

Well, except “slow”.

Unlike Git, though, all the operations on the source revisions were done on the server, which meant that you didn’t have access to the history of the project unless you were online, and that you couldn’t commit intermediate states of your work without sending them to the server. Branching was terrible, so it was only done when strictly necessary. These limitations influenced many of the engineering practices of the time: you had huge change log files in place of commit logs; releases were only marked as such by virtue of having generated an archive, as tagging was atrocious; the project history was stored per file, so you could not see a change in its entirety unless you manually extracted a patch between two revisions; and conflicts between developers working on the same tree were a daily occurrence, which made integrating different people’s work a pain.

It was not odd to have messy history in the revision control, as well as having to ask the CVS administrators to roll back a change to a previously backed up version, to compensate for some bad commit or source tree surgery.

Due to how GNOME components were initially developed — high level modules with shared functionality which were then split up — having commit access to one module’s repository allowed access to every other repository. This allowed people to work on multiple modules, and encouraged contributions across the whole code base, especially from newcomers. As a downside, it would lead to unreviewed commits and flames on mailing lists.

All in all, though, the “open doors” policy for the repositories worked well enough, and has been maintained over the years, across different source revision control software, and has led to not only many “drive by” patches, but also to a fair number of bugs being fixed.

Talking about bugs, the end of 2000 was also the point when the GNOME project moved to Bugzilla as their bug tracking system.

Between the establishment of the project and October 2000, GNOME used the same software platform also used by Debian to track bugs in the various modules. The Debian Bug Tracking System was, and still is, email based. You’d write an email, fill out a couple of fields with the module name and version, add a description of the issue and the steps to reproduce it, and then send it to submit@bugs.gnome.org. The email would be sent to the owner of the module, who would then be able to reply via email, add more people to the email list, and in general control the status of the bug report through commands sent, you guessed it, by email. The web interface at bugs.gnome.org would show the email thread, and let other people read it and subscribe to it by sending an email, if they were experiencing the same issue or were simply interested in it.

By late 2000, the amount of traffic was enough to make the single machine dedicated to the BTS keel over and die; so a new solution was sought, and it presented itself in the form of Bugzilla, a bug tracking system originally developed by Mozilla as a replacement for Netscape’s original in-house bug tracker once Netscape published their code in the open.

The web-friendly user interface; the database-backed storage for bug reports; and the query system made Bugzilla a very attractive proposition for the GNOME project. Additionally, Eazel and Ximian were already using Bugzilla for their own projects, which made the choice much easier to make. Bugzilla went on to be the bug tracking system for GNOME for the following 18 years.

By the end of the millennium, GNOME was in a good position, with a thriving community of developers, translators, and documentation writers; taking advantage of the licensing woes of the “Kompetition”, and with a major release under its belt, the project now had commercial backing and a legal entity capable of representing, and protecting, the community. The user base was growing on Linux, and with Sun’s commitment to move to GNOME for the next version of Solaris, GNOME was one step away from becoming a familiar environment for millions of users.

This is usually when the ground falls beneath your feet.


While GNOME developers, community members, and companies were off gallivanting in the magical world of foundations and transformative projects, the rest of the IT world was about to pay its dues. The bubble that had been propelling the unsustainable amount of growth of the previous 3 years was about to burst spectacularly.

We’re going to see the effects of the end of the Dot com bubble on the GNOME project in next week’s episode, “End of the road”.

by ebassi at November 15, 2018 04:00 PM

November 08, 2018

Emmanuele Bassi

Episode 1.3: Land of the bonobos

With the GNOME 1.0 release, and the initial endorsement of the project by Linux distributors like Red Hat and Debian, it was only a matter of time before a commercial ecosystem would start to coalesce around the GNOME project. The first effort was led by Red Hat, with its Red Hat Advanced Development laboratories. Soon, others would follow.

Two companies, in particular, shaped the early landscape of GNOME: Ximian and Eazel.

Ximian was announced alongside GNOME 1.0, at the Linux Expo 1999, by none other than the project creator and lead, Miguel de Icaza, and his friend and GNOME contributor, Nat Friedman; it was meant to be a company that would work on GNOME, and provide support for it, in a model similar to Red Hat’s, but with a more focused approach than a whole Linux distribution. Initially, the name of the company was “International GNOME Support”, before it was renamed first to Helix Code, and then to Ximian in 2001, when it proved impossible to secure a trademark on Helix Code. For the benefit of clarity, I’ll use “Ximian” throughout this episode, even if we’re going to cover events that transpired while the company was still called “Helix Code”.

Ximian’s initial effort was mostly spent on raising capital to hire developers to work on GNOME and its applications, using the extant community as a ready-made talent pool, like many other companies did, and still do to this day.

Ximian focused on:

  • providing GNOME as a commercially supported independent platform on top of existing Unix-like operating systems
  • jump-starting an application ecosystem that would target GNOME and focus on enterprise-oriented platforms

The results of these two areas of focus were Red Carpet and Ximian GNOME, for the former; and Evolution, for the latter.

Red Carpet was a way to distribute software developed at its own pace, on top of different Linux and Unix-like operating systems. Whether you were running Red Hat Linux, SuSE, Debian, Mandrake, or Solaris; whether your platform was Intel-based or PPC-based; whether you had a fully supported OS or you were simply an enthusiast and early adopter; you’d be able to download a utility that checked the version of a known list of components on your system; downloaded the latest version of those components and their dependencies from the Ximian server; installed the packages that would fit with your system; and kept them up to date every time a new version was released upstream. All of this was managed through a user-friendly interface, definitely nicer than the alternatives provided at the time by any other distribution, with clear indications of software sources; out of date packages; and suggested updates.

Through Red Carpet, Ximian created Ximian GNOME, a software distribution channel that delegated the maintenance of the desktop environment and selected applications to an entity outside the one providing the core OS, and made it possible to keep GNOME updated at the pace of upstream development, minus the time needed for QA.

Of course, this whole approach relied on the desktop being fully separate from the core OS, something that was possible only back in the early days of Linux as a competitive workstation operating system; additionally, it wasn’t without risks. Linux installations have always leaned towards heavy customisation post-installation, with a combinatorial explosion of packages available for every user and system administrator to install on top of a base system, and without a clear separation of namespaces between the core OS, the graphical environment, and the user applications. Bolting a whole new desktop environment on top of a fluid base system could be problematic at best, and it made upgrading the base system, the desktop environment, and the applications an interesting challenge—with “interesting” defined as “we’re all going to die”. Red Carpet did allow, though, for keeping a stable base underneath a fast moving target like GNOME, assuming that you only cared about upgrading GNOME.

Another advantage of Red Carpet was the ability to provide a standard upgrade path for a fleet of systems, which is what made it attractive for system administrators.

Outside of Red Carpet, Ximian was working on Evolution, an email client and groupware platform for enterprise environments.

Email clients for Linux, like text editors and IRC clients, are a dime a dozen, but mostly terminal-based, and a poor fit in an enterprise environment, as they would not integrate with groupware services, like calendars, events, or shared address books. Sure, you could script your way out of most anything, if you were playing at being a BOFH, but Carol in Human Resources would not be able to get away with not receiving her email because Charlie made an error writing a procmail rule, and sent all the notifications halfway to Siberia.

Groupware suites are pretty much a requirement in corporate environments, and since contracts with those corporations can make or break a company, having a client capable of not only providing the same features, but also integrating with existing infrastructure is a fundamentally interesting proposition.

While Ximian tried to break into the commercial space through the tried and tested route of support contracts and enterprise software, Eazel chose a different, and somewhat riskier, route.

Eazel was a case of building a company around an idea 10 years too early; the idea in question being: providing remote storage for files and applications, complete with browsing remote volumes and folders from your file manager as if they were on local storage. All of this would happen on the nascent Linux desktop, which meant creating a fair amount of the necessary infrastructure and shaping the design and user interaction at the same time.

Eazel was founded by many former Apple employees, including Andy Hertzfeld and Darin Adler, who were the technical leads of the original Macintosh team; and Susan Kare, the designer of the Macintosh icons and typefaces.

The main entry point for users of Eazel services was a new file manager, called Nautilus, coupled with a virtual file system abstraction designed specifically for graphical user interfaces, to avoid blocking the UI while file operations on high-latency resources, like a network volume, were in progress. Given the fuzzy licensing situation around KDE and Qt, Eazel decided to work with the GNOME community, and took the remarkable step of developing Nautilus in the open.

Eazel started working their way upstream with the GNOME community around late 1999/early 2000, thanks to Maciej Stachowiak, an early hire and a GNOME developer who had worked on Guile, as well as on various GNOME applications and components. Of course, the first thing the core GNOME developers did when approached by Eazel was to ask them to port their code from its early C++ incarnation to C, to fit in with the rest of the platform. The interesting thing that happened was that the Eazel developers complied with that request, and stuck with the project.

At the time, GNOME’s file manager was a GUI layer around Miguel de Icaza’s Midnight Commander, a Norton Commander clone; MC was mostly used as a terminal application, even if it could integrate with different GUI toolkits, like Tk and GTK, when running under X11. The accretion of various GUIs led to cruft accumulating as well, and some of the design requirements for a responsive UI in a desktop environment fit poorly with how a terminal UI worked. Additionally, maintenance was, as is its wont, mostly volunteer-based. Federico Mena spent time, while working at the RHAD labs, on the GNOME integration, and that was the closest thing to somebody being paid to work on the MC code base. As the work on Nautilus progressed, both from Eazel and the GNOME community, the scale slowly tipped in favour of the more integrated, desktop-oriented file manager with paid maintenance.

Of course, what we call Nautilus (or “Files”) today was a very different beast than what it was when Eazel introduced it to the GNOME community in February 2000. Folder views could have annotations, you could add emblems on files and folders, set custom icons, and even custom backgrounds. The file view was a canvas, with the ability to zoom in and out the grid of icons, or even stretch the icons and change their sizes, as well as their position. You could have live preview of files directly inside Nautilus without opening a different application, and thanks to the work done at Red Hat by Christopher Blizzard on integrating the Mozilla web rendering engine with GTK, Nautilus could also browse the web, or more likely your company’s intranet, or WebDAV shares.

While working on Nautilus, Eazel also provided functionality to the core GNOME platform. The GNOME VFS library was made available outside of Nautilus as a way to perform file operations beyond the simple POSIX API, and other applications quickly started using it to access remote volumes, or to integrate functionality like copying and moving files; libeel, a library of custom widgets used in Nautilus, and librsvg, a library for rendering SVG files—mostly the graphical assets used by Nautilus for its icons—also saw adoption across the GNOME stack.
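
To give a flavour of what “file operations beyond the simple POSIX API” meant for application code, here is a rough, hypothetical sketch of reading from a URI through gnome-vfs. The call names are recalled from the gnome-vfs documentation rather than taken from any GNOME module, and the URI is made up, so treat it as an illustration of the URI-based style of the API rather than as an authoritative reference.

    #include <libgnomevfs/gnome-vfs.h>
    #include <stdio.h>

    /* Hypothetical sketch: read a few bytes from a (possibly remote) URI
     * using gnome-vfs instead of open()/read(). The same code path works
     * for file://, ftp://, or WebDAV URIs, which was the whole point. */
    int
    main (int argc, char *argv[])
    {
      GnomeVFSHandle *handle = NULL;
      GnomeVFSResult result;
      GnomeVFSFileSize bytes_read = 0;
      char buffer[256];

      gnome_vfs_init ();

      /* The URI below is just an example. */
      result = gnome_vfs_open (&handle,
                               "http://www.gnome.org/index.html",
                               GNOME_VFS_OPEN_READ);
      if (result != GNOME_VFS_OK)
        {
          fprintf (stderr, "open failed: %s\n",
                   gnome_vfs_result_to_string (result));
          return 1;
        }

      result = gnome_vfs_read (handle, buffer, sizeof (buffer), &bytes_read);
      if (result == GNOME_VFS_OK)
        fwrite (buffer, 1, bytes_read, stdout);

      gnome_vfs_close (handle);
      gnome_vfs_shutdown ();

      return 0;
    }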

Despite the differences in the originating companies, both Nautilus and Evolution shared a common design philosophy: componentisation. Not just plugins and extension modules, but whole components for integrating functionality into, and from, those applications.

The “object model environment” that begat the GNOME project’s acronym, and was simmering in the background, started getting into full swing between GNOME 1.0 and 1.2, with complex applications exposing components for other applications to reuse and re-arrange, as well as taking components themselves and adding new functionality without necessarily adding new dependencies.

Instead of raw CORBA, Eazel and Ximian worked on OAF, the object activation framework, a way to enumerate, activate, and watch components on a system; and Bonobo, a library that made it easy to write GUI components for GNOME applications to use, reducing the CORBA boilerplate.

The idea was that applications would mostly be “shells” that instantiated components, either provided by themselves or by other projects, and built their UI out of them. Things like menus, and the actions associated with those menus, could be described in XML, everyone’s favourite markup language, and exposed as part of the component. GNOME and its applications would be a collection of libraries and small processes, combined and recombined as the users needed them.

Of course, the first real application of this design was to embed the Minesweeper game into Gnumeric, the spreadsheet application, because why wouldn’t you?

The effort didn’t stop there, though; Evolution was, in fact, a shell with email client, calendar, and address book components. Nautilus used Bonobo to componentise functionality outside the file management view, like the web rendering, or the audio playback and music album view.

This was the heyday of the component era of GNOME, and while its promises of shiny new functionality were attractive to both platform and application developers, the end result was by and large one of untapped potential. Componentisation requires a large initial effort in designing the architecture of an application, and it’s really hard to introduce after the fact without laying waste to working code. As an initial roadblock, it fits poorly with the “scratch your own itch” approach of free and open source software. Additionally, it requires not just a higher level of discipline in component design and engineering, but also comprehensive and extensive documentation, something that has always been the Achilles’ heel of many an open source project.

In practice, the componentisation started and stopped around the same time as GNOME 2 was being developed, and for a long while in the history of the project it was the proverbial dead albatross stuck around the neck of the platform.

For GNOME developers in mid-2000, though, this was a worthy goal to pursue, so they worked hard on stabilising the platform while applications iterated over it.

The project’s efforts moved along two prongs: minor releases for the 1.x platform, and planning towards the 2.0 release.

GNOME 1.2 was released as a minor improvement over the existing 1.0 platform in May 2000, and a similar 1.4 minor release was planned for mid-2001; meanwhile, the effort of integrating the functionality spun off in libraries like Bonobo and OAF, gnome-vfs and librsvg, into the core platform was well underway.


Ximian and Eazel helped shape GNOME’s future not just by creating products based on the GNOME desktop and platform, or by hiring GNOME developers; they also contributed to establishing two very important parts of the GNOME community, which exist to this day: GUADEC, the GNOME conference; and the GNOME Foundation.

Next week we’re going to witness the birth of both GUADEC and the Foundation, and we’ll take a small detour to look at the tools used by the GNOME project to host their code and track the bugs contained in said code, in “Founding the Foundation”.

by ebassi at November 08, 2018 04:00 PM

November 01, 2018

Emmanuele Bassi

Episode 1.2: Desktop Wars

The year is 1998.

In an abandoned warehouse in San Francisco, in a lull between the end of the previous rave and the beginning of the next, the volume of the electronica has been turned all the way down; the strobes and lasers have been turned off, and somebody cracked open one of the black tinted windows, to let some air in. On one of the computers, built from parts scavenged here and there, with a long since forgotten beer near its keyboard, a script is running to compile a release archive of GNOME 0.20. The script barely succeeds, and the results are uploaded to an FTP server, just in time for the rave to start. There’s no need to write an announcement, the Universe will provide.

At the same time, somewhere in Europe, in a room dominated by large glass windows, white walls with geometric art hanging off of them, and lots of chrome finish, the hum of 50 developers with headphones working in concert quiets down after the project leader, like an orchestra conductor, rises to his feet. He looks at every young developer, from the 16-year-old newcomer with a buzz haircut to the 25-year-old grizzled old-timer who will soon leave for his 5 years of mandatory military service; he then looks down, and presses a key that runs the build of a pre-release of KDE 1.0. The build will of course succeed without a hitch, and the announcement will be prepared later, in the common room, with a civilised discussion between all project members.

The stage is thus evenly divided.

The players are:

  • KDE, a commune of software developers using a commercially backed toolkit to write a free software desktop environment
  • GNOME, a rag tag band of misfits and university students, writing a free software toolkit and desktop environment

These are the desktop wars.

I jest, of course. The reality was wildly different from the memes. Even calling them “the desktop wars” is really a misnomer — after all, we don’t call the endless, tiresome arguments between Emacs and vi users “the text editor wars”, or the equally exhausting diatribe between spaces and tabs aficionados “the code indentation wars”. At most, you could call this a friendly competition between two volunteer projects in the same problem space, but that doesn’t make for good, click-bait-y, tribal division.

Far from being a clinical, cold, and corporate-like project, KDE was started by volunteers across the globe, even if it did have a strong European centre of mass; while it did use a version of Qt not released under the terms of a proper free software license, KDE had a strong ethos towards user freedom from the very beginning, being released under the terms of the GNU General Public License; and while it was heavily centralised, its code base was not a machine of perfect harmony that never failed to build.

GNOME, on the other hand, was not a Silicon Valley-by-way-of-Burning Man product of acid casualties and runaways; its centre of mass was nearer to the East coast of the US than to the West, even if GIMP was initially developed at Berkeley. GNOME was indeed perceived as the underdog, assembled out of a bunch of components developed at different paces, but its chaotic initial form was both the result of the first few months of alpha releases, and of the more conscious decision of supporting and integrating existing projects.

Nevertheless, the memes persisted, and the battle lines were drawn by the larger free and open source software community pretty much immediately, like it happened many times before, and will happen many times after.

The programming language was one of the factors of the division, certainly, bringing along the extant fights between C and C++ developers, with the latter claiming the higher technical ground by using a modern-ish language, compared to the portable assembly of the former. GNOME used the existence of bindings for other languages, like Perl, Python, Objective C, and Guile, as a way to counter-balance the argument, by including other communities and programming paradigms into the mix. Writing GNOME libraries surely was a game for C developers, but writing GNOME applications was supposed to be all about choice. Or, at least, it would have been, once the GNOME libraries were done; while the project was in its infancy, though, the same people writing the libraries ended up writing the applications, which meant a whole lot of C being unleashed unto the unsuspecting world.

From a modern perspective, relying on C as the main programming language was probably the most contentious choice, but in the context of 1997 it would be hard to call it surprising. Sure, C++ was already fairly well known as a system level language, but the language itself was pretty much stuck to the 2nd edition of “The C++ Programming Language” book, published in 1989; the first ISO C++ standardisation came in 1998, followed by another in 2011, 13 years later. Additionally, programmers had been bitten by binary incompatibilities across compilers, as well as across different versions of the same compiler, while the language evolved; and the standard library was found lacking in both design and performance, to the point that any major C++ library, like Qt or Boost, ended up re-implementing large chunks of the same basic data types. In 1997, writing a complex, interdependent project like a desktop environment in C++ was the “edgy” effort, for lack of a better word, comparable to writing a desktop environment in, say, Rust in 2018.

Another consideration, tied into the support for multiple languages, was that basically all high level languages exposed the ability to interface their internals using the C binary interface, as it was needed to handle the OS-level integration, like file and network operations.

We could debate forever if C was the right choice — and there are people that still do that to this day, so we would be in good company — but in the end the choice was made, and it can’t be unmade.

By and large, though, the deciding factor that put somebody in either the KDE or the GNOME camp was social and political; fittingly, as the free and open source software movement is a social and political movement. The argument boiled down to a very simple fact: the toolkit chosen by the KDE project was, at the time, not distributed under a license that fit the definition of free software, and that made redistributing KDE a pain for everyone that was not the KDE project themselves.

The original Qt license, the Qt Free Edition License, required that your own project never depended on modifications of Qt itself, and that you licensed your own code under the terms of the GPL, the LGPL, or a BSD-like license. Writing libraries that depended on Qt also required jumping through additional hoops, like sending a description of the project to Trolltech.

Of course, that put the KDE project in the clear: they were consuming Qt mostly as a black box, and they were releasing their own code under the terms of the GPL. It did place the distributors of KDE binaries on less certain grounds, though, with Debian outright refusing to package KDE as it would put the terms of the GPL used by KDE in direct conflict with the terms of the Qt Free Edition License; the license itself was really not conforming to the Debian Free Software Guidelines, so distributing Qt itself (as well as any other project that decided to use its license) was out of the question. If you wanted to use pre-built packages for KDE 1.0 on Debian, you had to download and install them from a third party repository, maintained by KDE themselves.

Other distributions, such as SuSE and Mandrake, were less finicky about the details of the interaction between different licenses, and decided to ship binary builds of KDE as part of their main package repositories.

The last big name in the Linux distributions landscape at the time was Red Hat, and things were afoot there.

Just like Debian, Red Hat was less than enthused by the licensing issues of Qt and KDE, and saw a fully GPL and LGPL desktop environment as a solution for their commercial contracts. Of course, GNOME was mostly alpha quality software at the time, and had to catch up pretty quickly if it wanted to be a viable alternative to, well, shipping nothing and supporting roughly every possible combination of window managers and utilities.

Which is why, in 1998, Red Hat created the Red Hat Advanced Development Laboratories.

The RHAD labs were “an independent development group to work on problems of usability of the Linux operating system”: a few developers embedded in upstream communities, tasked with both polishing the many, many, many rough edges of GNOME, and taking over some of the unglamorous aspects of maintenance in a large project.

Under the watchful eye of Mark Ewing and Michael Fulbright, RHAD labs hired Elliot Lee, who wrote the CORBA implementation library ORBit, and worked on the componentisation of GNOME; Owen Taylor, who co-maintained GTK and shepherded the 1.0 release; Carsten “the rasterman” Haitzler, who wrote the Enlightenment window manager and worked on specifying how other window managers should integrate with GNOME; Jonathan Blandford, who worked on the GNOME control centre; and Federico Mena, who worked on GIMP, GTK, and on the GNOME GUI for the Midnight Commander file manager, as well as writing the first GNOME calendar application. In time, the RHAD would acquire the talents of other well-known GNOME contributors, like Havoc Pennington and Tim Janik, to work on GTK 2.0; Christopher Blizzard, to work on the then newly released Mozilla web browser; and David Mason, to work on the GNOME user documentation.

In September 1998, GNOME released version 0.30, to be shipped by Red Hat Linux 5.2 as a technology preview, thanks to the work of the people at the RHAD labs. Of course, it was not at all smooth sailing, and everyone there had to fight to keep GNOME from getting cut — mostly by convincing the Red Hat co-founder and CEO, Robert Young, that the project was in a much better state than it looked. The now infamous “Project Bob” was a mad dash of 36 hours for all the members of the RHAD labs to create and present a demo of GNOME, making sure it would work — at least, within the strict confines of the demo itself. Additionally, Carsten Haitzler would write a cool new theme using the newly added loadable module support in GTK, to show off the capabilities of the toolkit in its upcoming 1.2 release. Up until then, GTK looked fairly similar to Motif, but counting on his experience on the Enlightenment window manager, Haitzler added the ability to customise the appearance of each UI element in a variety of ways.

Of course, in the end, it was the theme that won over Red Hat management, and without it, “Project Bob” may have failed and spelled doom for the whole RHAD labs and thus the commercial viability of the GNOME project.

In December 1998, Trolltech announced that Qt would be released under the terms of a new license, called the “Q Public License”, or QPL. The QPL was meant to comply with the Debian Free Software Guidelines and lead to Debian being able to ship packages of Qt, but it did not really solve the issue of GPL compatibility, so, in essence, nothing changed: Debian and Red Hat would not ship KDE until its 2.0 release, 2 years later, when Trolltech relented, and decided to ship Qt 2.0 under the terms of both the QPL and of the GNU GPL.

By the beginning of 1999, KDE was about to release version 1.1, whereas GNOME locked down the features for a 1.0 release, which was announced in March 1999. In April 1999, Red Hat Linux 6.0 shipped with GNOME as its default desktop environment. The 1.0 release was presented at the Linux Expo, in May; the presentation was hectic, and plagued by the typical first release issues; GMC, the file manager, for instance, would crash when opening a new directory, but the session manager was modified to restart it immediately, so all you’d see was a window closing and re-opening in the desired location.

The splash of the first major release attracted attention; it was a signal that the GNOME developers were ready for a higher form of war, one that involved commercial products that could integrate with, and be based off of GNOME.

The desktop wars were entering a new phase, one where attrition and fights by proxy would dominate the following years.


Next week we’re going to zoom into the nascent ecosystem of companies that were born around GNOME, and focus on two of them: Ximian, or Helix Code as it was called at the time; and Eazel. Both companies defined GNOME’s future in more ways than just by adopting GNOME as a platform; they worked within the GNOME community to create products for the Linux desktop, and they shaped the technical and social decisions of the GNOME project well after the first chapter of its history.

Alongside that, we’re also going to look at the effort to bring about the era of components that was initially outlined with the GNOME acronym: a desktop and an object model. We’re going to see what happened to that, once we step into “The land of the bonobos”.

by ebassi at November 01, 2018 05:00 PM

October 25, 2018

Emmanuele Bassi

The History of GNOME

I’ve done a thing which may be of interest if you’re following the GNOME community.

As I said on Twitter, I have spare time, and I like boring people to death by talking about things that matter to me a lot; one of the things that matter to me is GNOME and its community—and especially its history.

Of course, I had to go and make it about liminal spaces and magic rituals, because that’s what makes it fun. This, though, is a magic ritual. I’m holding a seance, and I’m calling forth the past of the GNOME project for the people that live down its light-cone.

GNOME has the luxury of having a lot of people that stuck around—some even from the early days when there was no GNOME; there are also other people, though, some of them born after Miguel’s announcement, that are now starting to contribute to GNOME. I guess that means that it’s time to look back a bit, and give some more context to the history of the project.

I hope I won’t bore you that much with this; I hope that people will learn something new, or re-discover something that was forgotten. In general, I do hope people will have fun with it.

by ebassi at October 25, 2018 06:00 PM

Episode 1.1: GNOME

It is a long-running joke in the GNOME community that the acronym, or, more accurately, the backronym, that serves as the name of the project does not apply any more.

The acronym, though, has been there since the very first announcement of the project: GNOME, the GNU Network Object Model Environment.

The history of GNOME begins on August 15th, 1997.

NASA landed the Pathfinder on Mars just the month before.

Diana, Princess of Wales, would tragically die at the end of the month in Paris.

The number one song in the US charts is “I’ll be missing you”, by Puff Daddy.

On the 15th of August, Miguel de Icaza, a Mexican university student, announced the creation of the GNOME project, an attempt at creating “a free and complete set of applications and desktop tools, similar to CDE and KDE but based entirely on free software”.

To understand each part of that sentence, I’m afraid we will have to go back to a time forgotten by the laws of gods and men: the ‘80s.

In 1984, Richard Stallman started the GNU project. Don’t bother trying to expand the acronym, it’s one of those nerdy things for which the explanation is just not as clever when said out loud as it feels in one’s head. Incidentally, GNU is the reason why the G in GNOME is not silent. The history of GNU is an interesting topic, but we’ll avoid covering it here; if you want to, you can read all about it on the Free Software Foundation’s website.

GNU was, and it still is, an attempt at creating a Unix-compatible system that is completely free as in “software freedom”; the freedom in question is actually four different freedoms:

  • the freedom to use this operating system on whatever machine you want
  • the freedom to study it, down to its source code
  • the freedom to share it with others, without going through a single vendor
  • the freedom to modify it, if it does not do something you want

These four freedoms are enshrined in the “Copyleft” movement, which uses distribution licenses such as the GNU General Public License and Lesser General Public License, to create software programs and libraries, that respect those four freedoms.

GNU is a collection of tools, like the compiler suite and Unix-like command line utilities, in search of a kernel capable of running them — the attempt at creating a working, mainstream GNU kernel is, shall we say, still in progress to this day. In 1991, though, Linus Torvalds, a student from Finland, created the Linux kernel, which was quickly adopted as the major platform for GNU, and the rest, as they say, is history — though, like GNU’s history, also not the one you’re going to hear in this podcast.

While the development tools and console utilities were mostly getting taken care of, the GUI landscape on Linux was composed of a heterogeneous set of tools, typically starting with a window manager; some task management tool, like a list of running programs and a way to switch between them; and smaller utilities, like launchers for common applications or monitoring tools for local and network resources.

Each environment was typically built from the ground up and customised within an inch of its life, a system tailored to the levels of micro-management and pain tolerance of each Linux user, and in those days, the levels of both were considerably high.

Large, integrated desktops were the remit of commercial Unix systems, like SunOS, AIX, HP/UX. One of those environments was the Common Desktop Environment, or CDE.

Created in 1993 by the Open Group, an industry consortium founded by the likes of IBM, Sun, and HP, CDE was built around Motif, a commercial widget toolkit written in the late 80s by HP and DEC for the X display server, the mainstream graphics architecture on Unix and other Unix-compatible operating systems.

The Open Group quickly standardised CDE, and until the early 2000s, when Linux had started eating most of their lunches, it was considered the de facto standard desktop environment for commercial Unix platforms.

One of the things that Linux and the commercial Unix systems running on the Gibsons had in common was the X graphics system. This meant that you could write an application on your Unix system at university, or at work, and then run it on Linux at home, and vice versa.

As is often the case, Linux users wanted to bring some of the tools used on the Big Irons to their little platform, and in 1996 a group of C++ developers led by Matthias Ettrich created the KDE project, a desktop environment in the same vein as CDE, as the name implies. Since Motif was written in C and released under an expensive proprietary license, they based the desktop on a different widget toolkit, written in C++, called “Qt” (pronounced “cute”) and made by a Norwegian company called Trolltech.

Qt was released under a license called “The Qt Free Edition License”, which limited the redistributability of the code: you could get the source of the toolkit, but you could not redistribute modified versions of it. While this was good enough if you wanted to download and build the source on your personal computer, it put some strain on the people distributing Linux and its software, and it went against the Copyleft ethos of the GNU operating system that was the basis of Linux distributions — to the point that an effort to reimplement a Qt-compatible widget toolkit and releasing it under a Free Software license was started by the GNU project, called “Project Harmony”.

In the meantime, Motif was proving to be a hurdle for other free software projects as well.

In 1996, Spencer Kimball and Peter Mattis, two students of the University of California at Berkeley, wrote a raster graphics image editing tool using the C programming language, as part of a university project, and called it the “GNU image manipulation program”, or GIMP, as many have come to regret when searching the name on the Internet. As university students, they had access to Motif for free, so the initial implementation of the GIMP used that toolkit, but redistribution to the world outside university, as well as technical issues with Motif, led them to write an ad hoc GUI widget library for their application, called “The GNU Image Manipulation Program Toolkit”, or “GTK” for short.

A community of software developers, including people who would be influential to GNOME, like Federico Mena and Owen Taylor, started to coalesce around GIMP; they saw the value of an independent, free software widget toolkit, so GTK was split off from the main GIMP code repository in early 1997, and began a new life as an independent project, released under the terms of the GNU Library General Public License. The licensing terms of GTK allowed it to follow the four software freedoms — use, study, share, and modify — but they also allowed the creation of less-free software on top; as long as you distributed any changes you made to GTK under the same license, your application’s code could be released under any other license you wanted. This major distinction from Motif and Qt pushed the newer, volunteer-driven GTK forward, while it closed the gap with the older, commercially supported toolkits.

GTK had a fairly lean API, and its use of C quickly allowed developers to write “bindings” that let other programming languages use the underlying C API with a minimal translation layer to pass values around. Soon after, programmers of Perl, Python, C++, and Guile, an implementation of a dialect of the LISP programming language called Scheme, could use GTK to write plugins for GIMP, or complete, stand-alone applications. Compared to KDE’s choice of Qt, which only supported C++ and Python, it was a clear advantage, as it exposed GTK to different ecosystems.
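
To make the “lean API” point a bit more concrete, here is a minimal, hypothetical sketch of what GTK code of that era roughly looked like, reconstructed from memory rather than lifted from any GNOME module: every widget is created by a plain C function, and every operation takes the widget as its first argument, which is exactly the shape that made it feasible to generate bindings for Perl, Python, and friends almost mechanically.

    #include <gtk/gtk.h>

    /* A minimal GTK 1.x-style program: a window containing a single button.
     * Note the flat, object-as-first-argument C API; a binding simply wraps
     * each of these functions in the target language. */
    int
    main (int argc, char *argv[])
    {
      GtkWidget *window;
      GtkWidget *button;

      gtk_init (&argc, &argv);

      window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
      button = gtk_button_new_with_label ("Hello, GNOME");

      /* Signals are connected by name; clicking the button quits the main loop. */
      gtk_signal_connect (GTK_OBJECT (button), "clicked",
                          GTK_SIGNAL_FUNC (gtk_main_quit), NULL);

      gtk_container_add (GTK_CONTAINER (window), button);
      gtk_widget_show (button);
      gtk_widget_show (window);

      gtk_main ();

      return 0;
    }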

What was GNOME like, back in late 1997/early 1998?

The answer to that question is: a heterogeneous collection of tools, mostly sharing dependencies and developed together, that occasionally got released and even more rarely built out of the box without resorting to random snapshots from the source revision control.

You had a panel, with launchers for common applications, and with a list of running programs. There were the beginnings of a set of core applications: a help browser; a file manager; a suite of small utilities, mostly GUI ports of command line tools; games; an image viewer; a web browser based on an embeddable HTML renderer; a text editor. Notably, and unlike KDE with its own KWM, not a window manager.

The question of what kind of window manager should be part of a GNOME environment was punted to the users — it’s actually the first instance of a controversial thread on the GNOME mailing list, with multiple calls for an “officially sanctioned” window manager, typically opposed by people happy to let everyone use whatever they wanted — the externally developed Enlightenment was the most common choice at the time, but you could literally run GNOME with WindowMaker; GNUStep; or XF Window Manager 2; and you could still call it “GNOME”.

From a code organization perspective, GNOME started off as a single blob, which got progressively spun into separate components:

  • gnome-libs, a series of core libraries based off of GTK
  • gnome-core, which contained the session manager and the panel
  • gnome-graphics, which contained the Electric Eyes image viewer
  • gnome-games, a mixed bag of simple games
  • gnome-utils, a mixed bag of GUI utilities, like gtop
  • gnome-admin, which contained an SNMP monitoring tool and Gulp, the line printer configuration utility

While GNOME at the time was still a loosely connected set of components, the overall direction of the design was far more grandiose. If you remember the GNOME acronym from the announcement, it mentioned a “network object model”.

But what is an “object model”?

One of the things that Microsoft and Apple did with much fanfare, back in the early ‘90s, was introducing the concept of embeddable components provided by both the OS and applications.

You could create a spreadsheet, then take a section of it, embed it into your word processor document, and edit the table from within the word processor itself, instead of flip flopping between the two applications on the limited screen real estate available. The idea was that documents and applications would just be built out of blocks of data and controls, shared across the whole operating system. Those components were available to third party developers in programming suites like Visual Basic, Visual C++, or Delphi, and you could quickly create a well integrated application out of them.

Sun tried to push Java as the foundation for this design; Apple called their short-lived implementation of this technology “OpenDoc”; Microsoft called its much more successful version “OLE”, or: Object Linking and Embedding; and, of course, any self-respecting desktop environment competing with Windows needed something similar to match features.

In order to implement an infrastructure of objects, and of remote procedure calls that could be invoked on those objects, you needed a communication system; OLE used COM, the Component Object Model; GNOME decided to use CORBA, or the Common Object Request Broker Architecture, which was not only meant to be used on local systems but also worked over the network. As a CORBA implementation, GNOME started by using MICO, a C++ library, and then replaced it with ORBit, a C library written specifically for the project and addressing some of MICO’s shortcomings.

While ideally GNOME would provide components for everything, the initial beneficiary of this architecture was the only recognisable bit of the desktop environment that was visible to the user all the time: the panel.

The contents of the panel were small, self-contained applications that would use CORBA as a mechanism to negotiate being embedded into the panel’s own window to display their state, or to pop up things like menus and other windows on user request.

The core applications started getting CORBA-based components, but we’ll have to wait until after GNOME 1.0 to get to a widespread adoption of this architecture.

Between August 1997 and May 1998, GNOME released various 0.10 snapshots to mark the progress of the project. By June 1998, GNOME 0.20 was released as a “pseudo-beta”, followed, in September 1998, by 0.30, the first named release of GNOME, called “Bouncing Bonobo”.

Between 0.20 and 0.30, though, something had happened: Red Hat, a Linux distribution vendor, founded the Red Hat Advanced Development Labs, hired a bunch of software developers that happened to contribute to GNOME, amongst other things, and the benevolent corporate overlords started taking notice of this Linux desktop.

Nobody really knew it at the time, but the First Desktop Wars had begun.

by ebassi at October 25, 2018 03:27 PM

Prologue

Prologue

History is one of my passions; more than offering a narrative of the past, it gives us context, and with context comes a better understanding of the people that made history, and of their choices.

This is true for every human endeavour, and so it’s also true of software projects.

Free and Open Source Software is a social and political movement before a technological one; as such, the history of a free software project should help us contextualise not just the technological decisions, but also the personalities, the goals, and the constraints that influenced those decisions.

It’s often the case, especially in the field of software development, that newcomers to a specific project, or a subset of a project, end up facing some code that does not work; or that barely works, some of the time. The immediate reaction is typically trying to understand that code, that technology, that whole stack, in order to fix it, or replace it. Just learning what the code does in that specific instant, though, gives only a partial picture of how that code, that technology, that whole stack came into being.

It’s said that programmers think that bad code comes from bad programmers; in reality, bad code often comes from good people operating under different constraints than ours. Understanding those constraints, and how they changed over time, is the first step in understanding how to write better code.

This is especially true of projects that involve hundreds of developers, operating under wildly variable constraints, for decades.

Projects like GNOME.

GNOME is a free software project that aims to create an environment for computer users that is

  • easy to use
  • accessible for everyone
  • under a license that allows everyone to modify and contribute to it

As far as free and open source software projects go, it’s fairly old; it’s old enough to vote, and to buy a drink in the US. GNOME, as a community, attracts not only volunteer contributions, but commercial ones as well, and it provides the technological underpinnings for both volunteer and commercially driven products. It is a recognised brand, for better or worse, and it comes with opinionated decisions, as well as an ethos that not only drives the individuals that make it, but also the ones that use it.

More importantly, GNOME comes with a varied history, and a lot of baggage, deeply rooted in the choices made by thousands of people. It’s something that newcomers to the project often deal with by asking questions of whoever appears to be knowledgeable — but it’s not something that the project itself offers.

So, I wanted to work on recontextualising GNOME.

Why me, though?

My history with GNOME does not date as far back as the project’s origins. I did start using Linux in 1997, around the time GNOME was created, but for the first couple of years I mostly used the VT for emails, IRC, Usenet, and the occasional Napster download (kids, ask your parents); a GUI was something I used to launch Netscape Navigator. I experimented with KDE, but I ended up settling on WindowMaker, because I enjoyed the non-Windows-y look of the desktop.

As a user, I switched to GNOME full time by the tail end of 1.x series, and went through the 2.0 transition with a stronger interest in the direction of the project — to the point that I started contributing to it.

GNOME is the reason I became a passionate believer not just in the freedom of releasing the source code of your work, but also in the necessity of a working environment for the people using a server, a workstation, a laptop, or even a phone. I learned principles of software development; architecture; security; privacy; hardware enablement; design and user experience; quality assurance and testing; accessibility. I got my first job through GNOME, and I met wonderful people that I consider friends, peers, and inspirations. I lived through a huge chunk of the history of the GNOME project, and in some cases I can kid myself into thinking I helped shape that history. This does not make me the most neutral or dispassionate person to lend his perspective on GNOME, but you’ll have to make do with me, I guess.

In the beginning this was to be a presentation, but Jonathan Blandford did an amazing keynote on the history of GNOME in Manchester, at GUADEC 2017, for the 20th anniversary of the project, where he went through most of the milestones and events of the previous two decades. I realised, at that point, that it would take somebody like me, who’s less talented than Jonathan, much more than an hour to do the same, let alone go through GNOME’s history in depth.

I started writing a series of blog posts, but halfway through I realised I was using the same kind of voice I use when writing presentation notes; at that point it became obvious I should probably just read them out loud. Thus, a podcast.

So, let’s lay down some of the ground rules for this.

The first question is: who is this podcast for?

When I write, I have three broad categories of people that I want to reach; the first is composed of people that are new, or relatively new, to the GNOME project, and want to add more context to the history of the project and of the community they just joined. In a large, established project like GNOME, there’s a lot of insider information that you typically have to suss out in person, or that you end up gleaning if you hang out in the social channels long enough. Why is using flags in a UI such a problem? Why are settings expensive? What’s up with the clocks? There is a lot of established knowledge that is only evident in its final results, but the context for those results is missing. It’s already hard to get started in a new community, so this podcast aims at lowering the curve for newcomers at least a bit.

The second audience is made up of people that are not familiar with the GNOME project beyond having heard about it; they are familiar enough with Linux, but lack the “inside baseball” knowledge of both the community and the technology. They may know that GNOME is a desktop environment, but have no idea what a desktop environment is made of.

The third audience is the GNOME developers themselves; people that have been embedded in the community long enough to know some, or many, of the ins and outs of the decisions made in the past, but who may be unfamiliar with the whole history of the project. I hope they’ll forgive me for going on tangents, especially at the beginning.

The podcast is divided into chapters, roughly corresponding to each major version of the desktop. There will be one episode per week, with breaks between chapters. I’m not the fastest, nor the most adept at this medium, so I want to allow some leeway to maintain both quality and quantity.

Each episode will go through events in the history of the GNOME project, but I’ll try to take some time to expand on the context in the Free Software world, as well as the rest of the happenings around the same time.

In some cases, I may have to give some technological definitions, mostly to ensure that we have a common vocabulary, especially for people who are not heavily invested in the stack itself, or for people who learned English as a second language.

The primary sources for this podcast are public mailing list archives; additionally, I’ll refer to blogs from GNOME developers, as well as other public sources, like interviews and presentations given at conferences. As I said earlier, I do have direct experience of a chunk of the GNOME timeline, but I’ll try my best to stick to public sources for the events that involved me as well; this is not a “tell-all” kind of podcast. Using only public sources has the advantage that I can refer to specific information, but has the downside that I might miss some of the backstory; I don’t want to create an oral history of the project, as that would be its own endeavour. I’m sure people involved in some of the events will have their own version, and I’ll gladly accept corrections.

Each episode will have a companion article, which will contain the script and the sources I’ve used.

Hopefully, this podcast will be interesting for both newcomers to the GNOME project, and for old timers as well. Looking back to what happened helps us shape what’s to come; and having a better understanding of the past can give us confirmation of our choices, or a chance to revisit them.

Finally, I wanted to have an excuse to say out loud, with apologies to Mike Duncan:

Hello, and welcome to the history of GNOME.

by ebassi at October 25, 2018 03:26 PM

History of GNOME: Table of Contents

  1. Episode 0: Prologue
    • What
    • Why
    • Who
    • How

Chapter 1: Perfection, achieved

  1. Episode 1.1: The GNU Network Object Model Environment
    • Linux and Unix
    • X11, and the desktop landscape
    • KDE and the Qt licensing
    • GNU Image Manipulation Program
    • GTK
    • Guile and language bindings
    • Network Object Model
  2. Episode 1.2: Desktop Wars
    • KDE vs GNOME
    • C++ vs C vs Perl vs Scheme vs ObjC vs …
    • Qt vs GTK
    • Red Hat vs the World
    • Project Bob
    • GTK 1.2 and themes
    • GNOME 1.0
  3. Episode 1.3: Land of the bonobos
    • Helix Code, and Red Carpet
    • Eazel, and Nautilus
    • Components, components, components
    • GNOME 1.2
  4. Episode 1.4: Founding the Foundation
    • GUADEC
    • The GNOME Foundation
    • The GNOME logo
    • Working on GNOME: CVS
    • Working on GNOME: Bugzilla
  5. Episode 1.5: End of the road
    • The window manager question
    • Sawmill/Sawfish
    • GConf
    • GNOME 1.4
    • Sun, accessibility, and clocks
    • Dot bomb
    • Eazel’s last whisper
    • Outside-context problem: OSX
  6. Side episode 1.a: GTK 1
  7. Side episode 1.b: Language bindings
  8. Side episode 1.c: GNOME applications

Chapter 2: Perfection, Improved

  1. Episode 2.0: Retrospective
  2. Episode 2.1: On brand
    • GTK 2.0: GTK Harder
    • Fonts, icons, and Unicode
    • Configuration woes
  3. Episode 2.2: Release day
    • Design, usability, accessibility, and brand
    • Human Interface Guidelines
    • The cost of settings
    • Time versus features
    • GNOME 2.0 reactions
  4. Side episode 2.a: Building GNOME
  5. Episode 2.3: Honeymoon phase
    • Honeymoon’s over
    • Metacity
    • Galeon and Epiphany
    • GStreamer and media
    • Novell and Ximian
    • Evolution
    • Bug master

by ebassi at October 25, 2018 11:00 AM

October 07, 2018

Tomas Frydrych

Dr Beeching’s Unicorns

Dr Beeching’s Unicorns

Glen Ogle. Most of the time a place on the way to somewhere else, somewhere more exciting. Yet, for me also a special, magical place where years ago my inner eye first really glimpsed the beauty of this land.

We just got our first car, a decade old Astra Belmont, and were on a first road trip out of the Central Belt. Up to Loch Ness and Skye, if I recall correctly—truth be said, I no longer recall much of it, a vague memory of the Nessie exhibition, and a boat trip on the loch, zero memories of Skye. But I do remember Glen Ogle. The way it opened up in front of my eyes for the first time as we drove up from Lochearnhead. It cast a spell over me, one that has lasted throughout the years. Looking back, I think this was the moment this land became My Scotland.

We returned to explore the old railway line. Back then there was no path to speak of, the old embankment shrouded in trees. Nature doing its best to erase man’s intrusion. Years later I watched with dismay as so many of those trees were cleared during the Route 7 construction—a valuable project for sure, but such chainsaw enthusiasm.

But before that it was like being transported into another, wild, mythical world, half expecting a ghost train to appear at any moment from around the corner underneath the dark canopy.

It didn’t. Instead we run into a giant locked up gate worthy of Alcatraz. A gate clearly meant to keep people, not animals, out. (Not such an uncommon sight in the days before the Land Reform Act; the ‘good old days’ weren’t really.) Undeterred, we climbed over for a wee bit more exploring, till an industrial size manure heap finally stopped us.

‘That would be one of Dr Beeching’s, then’, said my mother-in-law as we discussed our trip over the dinner table.

Dr Beeching, 1913 - 1985. An overpaid political appointee to the chair of the British Railways Board. His lasting legacy the decimation of UK’s railway infrastructure. All in all some 6000 miles of railway track, over a third of the UK total, decommissioned.

The construction of the Callander to Oban railway involved 13,000 workers and took fifteen years; undone by a stroke of a pen. A story repeated over and over across the land. Political expediency masquerading as fiscal prudence. A lack of long term strategic vision, the consequences of which are felt acutely more than half a century later, and will be much longer, perhaps for ever.

Today Linda’s doing her recce for the Glen Ogle race, and I am out for a wander with a camera on the other side of the glen. A blustery autumnal day. The colours are beautiful, the showers frequent, the wind gusts strong enough to knock my tripod over and send my bag of lenses rolling down the steep hill. I spend most of my time ‘waiting for the light’, left to my thoughts.

Another heavy patch of rain has painted a bright rainbow across the top of the glen. Below its arch a steady stream of relentless traffic slowly making its way up the road, the frustration of the drivers almost palpable in the air. Dr Beeching’s unicorns.

My Glen Ogle. A place of internal inter-generational conflict. There on the other side, standing on the old viaduct, the younger me protesting the tree felling, the loss of (perceived) wildness. On this side the present me, wishing the working railway was back. Not a change of values as such, merely of perspective.

Later today when I develop the film I will find out that nothing worthwhile came out of the pictures. But it never is about the pictures.

by tf at October 07, 2018 09:59 AM

September 14, 2018

Tomas Frydrych

A Prince Holding Court

A Prince Holding Court

I saw him as soon as I came over the rise. Hard not to, perched on a rock some hundred yards ahead, completely out of scale in this landscape.

I had heard of them being seen around here from time to time, but this is the first one I have laid my eyes on, the first time I have come close to one at all. A young bird judging by the colouring, just sitting there, that unmistakable long neck and large beak, the pronounced shoulders -- the royal posture of a sea eagle.

Of course, I have neither binoculars nor a camera, not even a phone. I am three hours into a short run that got a bit out of hand when, on the spur of a moment, I decided to wade through the loch into the no-man's land. But,

Don't need no Real-to-Reel
    Recorder
to tell me I've been there,
I ken that fine.*

And so I forget about the midges (and the dozens of ticks crawling over me) and just stand here watching, in awe.

The thing that takes me by surprise the most is not the eagle, but the other birds. There are about a dozen, perhaps more, mostly crows (hooded and not) sitting in a circle around it. A tight circle, two, three yards away, quite unconcerned, preening their feathers. Like a scene from the Jungle Book: a Prince holding Court.

I am seen. The eagle flaps lazily and flies in my direction, passing maybe thirty yards away, checking me out. An about turn, back at my eye level, this time no farther than fifteen yards. You read about the two-plus meter wing span, but o' my, pictures and numbers didn't prepare me for the reality.

The eagle lands back at its perch, the other birds settling once more around it; the Court is back in session. Eventually the Prince gets bored and takes off once more in the direction of the sea, and the Court rises again.

One of the carrion crows follows it, trying to peck its back.

Keep your friends close but your enemies closer.

The difference in size makes it look rather comical! The eagle seems unperturbed and, with a nonchalance of an apex predator, keeps rising and rising until it's just a dot high up in the sky.

And me? I'll be back here an hour later, with two cameras and five lenses, and the immutable Time laughing behind my back. Never mind, I ken that fine. (And you? You get a picture of a sunset instead.)

--

[*] Andrew Greig, Men on Ice.

by tf at September 14, 2018 08:50 AM

September 10, 2018

Tomas Frydrych

Of Eagles and Men

Of Eagles and Men

There are three of them up there, and what a racket! Correction: the racket, that’s just the two of them. She is soaring silent, near motionless, regal; aloof. Her path seemingly unalterable. On a mission permitting no distractions.

--

I am transported years back to a different place, studying a black and white photograph above a fireplace. Old Mr Thornburn shuffling around a table behind me. ‘Such a camaraderie I had not known before, nor have since,’ is all he (ever) said about it.

Up on the mantelpiece a young Mr Thornburn. In a glass bubble on the tail end of a (seemingly immovable) Lancaster in a cloudless sky. To me, a two generations removed casual observer, a picture of peace and tranquility, of adventure.

See? The camera does lie! And what an illusion! For this is an image of nothing less than Life Itself suspended. For a few hours? For eternity? For a mission permitting no distractions.

--

The screeching grows louder. Lots of posturing, then just one brief clash. More posturing, but we (me, her, the two of them) know it’s all over. Age and experience triumph over the virility of youth. The upstart, minus a few feathers, retreating; he will not be back.

Yet, we (me, her, the two of them) know it’s far from over. Merely the beginning of the end. The upstart will return, perhaps next year, perhaps the year after. Bigger, craftier, having the upper claw.

I watch the two of them soar higher and higher. And we (me, her, him) know that when that day comes it will be the other she takes back to her nest. Her mission allows no sentiment. But, for now at least, the inevitability of the future has been deferred.

[Though we (you and me, if not her and them) know chances are one or the other, if not all three, will get shot, trapped, poisoned, or just fly miles out to sea to drown—Scottish eagles seem to prefer such a fate to longevity, some (useless wastrels) opine.]

--

Mr Thornburn is gone now, and with him the memories he never spoke of. I didn’t then, but I understand now that some experiences are too profound to trivialise by telling, in turn making other experiences too trivial to merit it. My great grandfather’s journal, from yet an earlier war, contains such stories—stories that could never be shared with those who couldn’t understand, and didn’t need telling to those who did.

The younger me, too preoccupied with the (untold) tales of heroic deeds. The older me, too late, wanting to ask the (heavily pregnant) question that back then was hanging in the air.

Mine is a sheltered generation, collectively short on stories of substance. And so we have become compulsive tellers of trivialities, serial manufacturers of pseudo-heroic deeds, dressed up in multicoloured cloaks of fake profundity. Entertaining distractions from the imperative of the (ultimate) mission.

Tall tales of denial.

--

All that is left of the eagles are their distant calls, leaving me alone to my thoughts. ‘Generation goes, and generation comes, but the earth lasts forever.’

Perhaps.

The Earth is groaning underneath us. We can count on the going, but can we on the coming? How many more cycles are there left? Two, three?

If only the eagles could talk.

by tf at September 10, 2018 07:16 AM

September 04, 2018

Emmanuele Bassi

Reference counting in modern GLib

Reference counting is a fairly common garbage collection technique used in many projects; the core GNOME platform uses it pretty much all the time, from container data types, to GObject.

Implementing reference counting in C is typically fairly easy. Add a field to your data type:

typedef struct {
  int ref_count;

  char *name;
  char *address;
  char *city;

  int age;
} Person;

Then initialise it when creating a new instance:

Person *
person_new (void)
{
  Person *res = g_new0 (Person, 1);

  res->ref_count = 1;

  return res;
}

Instead of a person_copy() and a person_free() pair of functions, you need to write a person_ref() and a person_unref() pair:

Person *
person_ref (Person *p)
{
  // Acquire a reference
  p->ref_count++;

  return p;
}

static void
person_free (Person *p)
{
  // Free the data
  g_free (p->name);
  g_free (p->address);
  g_free (p->city);

  // Free the instance
  g_free (p);
}

void
person_unref (Person *p)
{
  // Release a reference
  p->ref_count--;

  // If this was the last reference, free the data
  // associated to the instance and then the instance
  // itself
  if (p->ref_count == 0)
    {
      person_free (p);
    }
}

Of course, trivial doesn’t mean correct. For instance, the code above assumes that all reference acquisition and release operations will happen from the same thread; if that’s not the case, you’ll have to use atomic integer operations to increase and decrease the reference count. Additionally, the code above does not check for overflows in the reference counting, which means that the value could saturate and lead to leaks.
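
For the multi-threaded case, this is a minimal sketch of what the manual version might look like, using GLib’s atomic integer operations (still with no overflow checking, which is exactly the kind of boilerplate the types below remove):

Person *
person_ref (Person *p)
{
  // Atomically acquire a reference
  g_atomic_int_inc (&p->ref_count);

  return p;
}

void
person_unref (Person *p)
{
  // Atomically release a reference; g_atomic_int_dec_and_test()
  // returns TRUE when the counter drops to zero
  if (g_atomic_int_dec_and_test (&p->ref_count))
    person_free (p);
}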

Reimplementing all the checks and behaviours is not only boring, but it inevitably leads to mistakes along the way. For this reason, GLib 2.58 introduced two new types:

  • grefcount, to implement simple reference counting
  • gatomicrefcount, to implement atomic reference counting

Both come with their own API, which leads to this code:

typedef struct {
  grefcount ref_count;
  // or: gatomicrefcount ref_count;

  // same as above
} Person;

Person *
person_new (void)
{
  Person *res = g_new0 (Person, 1);

  g_ref_count_init (&res->ref_count);
  // or: g_atomic_ref_count_init (&res->ref_count);

  return res;
}

Person *
person_ref (Person *p)
{
  g_ref_count_inc (&p->ref_count);
  // or: g_atomic_ref_count_inc (&p->ref_count);

  return p;
}

void
person_unref (Person *p)
{
  if (g_ref_count_dec (&p->ref_count))
  // or: if (g_atomic_ref_count_dec (&p->ref_count))
    {
      person_free (p);
    }
}

The grefcount and gatomicrefcount make it immediately obvious that the field is used to implement reference counting semantics; the API checks for saturation of the counters, and will emit a warning; the atomic operations are verified. Additionally, the API checks if you’re trying to pass an atomic reference count to the grefcount API, and vice versa, so you have a layer of protection there during eventual refactoring, even if the types are both integer aliases.

We did not stop there, though.

Adding a reference count field to a structure is not always possible; for instance, if you have ABI compatibility restrictions, or if the type definition is public, adding a field may just not be something you can do within the same API version. By way of an example: you may have a type meant to be typically placed on the stack, so it needs a public, complete declaration, in order to have a well-defined size at compile time. You may also want to pass around an instance of the type as the argument for a GObject signal, or as a property — but it may be expensive to copy the data around, so you really want to have an optional reference counting mechanism that is invisible to the vast majority of the use cases.

This is why we also added an allocator function that adds reference counting semantics to the memory areas it returns, called GRcBox:

typedef struct {
  char *name;
  char *address;
  char *city;

  int age;
} Person;

Person *
person_new (void)
{
  return g_rc_box_new0 (Person);
}

Person *
person_ref (Person *p)
{
  return g_rc_box_acquire (p);
}

void
person_unref (Person *p)
{
  // person_free() is copied from the code above; we use
  // g_rc_box_release_full() because we have data to free
  // as well, but there's a variant for structures that
  // do not have internal allocations
  g_rc_box_release_full (p, person_free);
}

As you can see, this cuts down the boilerplate and repetition considerably.

The data returned to you by g_rc_box_new() and friends is exactly the same as you’d get from g_new(), so you can pass it around to your API exactly the same — but it is transparently augmented with reference counting semantics. You acquire and release references without having an explicit reference counter. The only restriction is that you cannot reallocate the data, so calling realloc() is not allowed; and, of course, you cannot free the memory directly with free() — you need to release the last reference.

Similar to GRcBox, there’s a GAtomicRcBox, which provides atomic reference counting semantics, with a similar API.

Both GRcBox and GAtomicRcBox are Valgrind-safe, so you won’t get false positives or unreachable memory if you run your code under Valgrind.

You don’t even need to have a structure to allocate: you can use GRcBox to allocate any memory area with a non-zero size. Incidentally, this is how we implemented an oft-requested feature: reference counted strings, which are also available in GLib 2.58.
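
As an illustration of the structure-less case, here is a small sketch of a reference counted byte buffer; the buffer size and the variable names are just made up for the example:

// Allocate a 4 kB reference counted buffer
guint8 *buffer = g_rc_box_alloc (4096);

// Acquire a second reference to pass around
guint8 *extra = g_rc_box_acquire (buffer);

// Release both references; the memory is freed together
// with the last reference
g_rc_box_release (extra);
g_rc_box_release (buffer);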

Reference counted strings work exactly like any other C string, but instead of copying them and freeing them, you acquire and release references to them:

char *s = g_ref_string_new ("hello, world!");

// "s" is just like any other C string
g_print ("s['%s'] length = %d\n", s, (int) strlen (s));

g_ref_string_release (s);

Reference counted strings can also be interned, which means that calling g_ref_string_new_intern() with the same string twice will give you a new reference the second time around, instead of allocating a new string; once the last reference to the interned string is dropped, the string is un-interned, and the resources allocated for it are freed.
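
A quick sketch of what interning looks like in practice; the string content is arbitrary, and both calls should hand back the same underlying allocation:

char *a = g_ref_string_new_intern ("x-content/image-dcf");
char *b = g_ref_string_new_intern ("x-content/image-dcf");

// The second call returned a new reference to the same string
g_assert (a == b);

// Once both references are released the string is un-interned
// and its resources are freed
g_ref_string_release (b);
g_ref_string_release (a);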

Since you may be wary of passing around char * for reference counted strings, there’s a handy GRefString C type you can use to improve readability, like we have GStrv for char **:

GRefString *s = g_ref_string_new ("hello");

// ...

g_ref_string_release (s);

GRefString also has an autocleanup function, so you can do:

{
  g_autoptr(GRefString) s = g_ref_string_new ("hello!");

  // ...
}

and the string will automatically be released once it goes out of scope.

by ebassi at September 04, 2018 03:35 PM

August 18, 2018

Tomas Frydrych

M4/3: The Outdoor Camera System

M4/3: The Outdoor Camera System

It’s been 10 years since the birth of the M4/3 camera system. I got my first M4/3 (Lumix GF2) in 2010 and never looked back. Indeed, I am about to argue that over that decade M4/3 has become the best camera system for the landscape photographer on the move and the wildlife photographer alike, hitting the sensor size sweet spot. And yet, it’s completely overlooked by the outdoor movers and shakers!

There are reasons for that. Some historical (there is a large contingent of outdoor photographers who switched to digital when it was being grafted into legacy 35mm systems). Some to do with misunderstanding of what determines digital picture quality and where the technology is. And let’s not be shy, there is some pure ‘mine is bigger than yours’.

Compared to all the other removable lens systems out there M4/3 has a whole bunch of things in its favour; here are some of them off the top of my head.

  • M4/3 is light on the back. My ‘complete’ outdoor camera kit, which consists of a camera, 24-80mm eq. zoom, a fisheye, a 200-800mm eq. ‘bird’ lens, a full-sized tripod, spare batteries and a handful of filters, weighs 5.5kg. Apart from the tripod it fits inside a BYOB 10 insert – my mother-in-law goes around with a (considerably) bigger handbag than that! Now try that with a full-frame system; the equivalent ‘bird’ lens alone weighs more than the whole lot and measures over 50cm.

  • M4/3 is light on the wallet. At £1,200 the aforementioned Leica ‘bird’ zoom is not cheap by any means, but it’s not out of the reach of us mere mortals. And that’s basically as expensive as it gets. Equivalent full-frame lenses start at ~£6,500 ... Similarly, top, professional grade, M4/3 cameras such as the Lumix G9 will set you back ~£1,200 ... need I say more?

  • The M4/3 dual image stabilisation (5-axis in camera + in lens) is second to none. It means not only that I can comfortably hand-hold shots down to 1/25s, but that if required I can hand-hold that 800mm eq. beast, and that is priceless.

  • If you want a camera that can take both excellent stills and video, the M4/3 is without competition, and most of the lenses have been designed with this in view (i.e., zooms with constant aperture when zooming).

  • Ask yourself why there is such an abundance of sensor cleaning kits for full-frame cameras, but nothing for M4/3. As it happens, M4/3 comes with built-in sensor cleaning as standard. Taking pictures outdoors? Invaluable!

‘But the image quality, the bokeh ...’, I hear you say. OK, let’s talk some geekery.

Size Matters

Megapixels sell cameras, for the belief that image quality is to do with pixel count is more deeply entrenched in the psyche of the digital era photographer than anything else. Yet the reality is a lot more complex, and in fact, more pixels are not necessarily better (back to that in a moment).

On its own, the MP value has a bearing only on how much the image can be enlarged. To get a good quality photographic print you need around 200 dpi (dots per inch); if you are looking for truly bespoke work, and have a capable enough printer (a person, not a gadget), then 300 dpi. But that means an old 6MP camera is enough for an excellent print around 10"-15" wide, while 20MP is good enough up to 17"-25" – very few photos, even professional ones, will ever be printed to anything near that.
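
To make the arithmetic behind those print sizes explicit, here is a small illustrative C sketch; the 4:3 aspect ratio and the megapixel counts are simply the ones discussed above:

#include <math.h>
#include <stdio.h>

// Maximum print width, in inches, for a 4:3 sensor with the given
// megapixel count, printed at the given dpi
static double
max_print_width (double megapixels, double dpi)
{
  double width_px = sqrt (megapixels * 1e6 * 4.0 / 3.0);

  return width_px / dpi;
}

int
main (void)
{
  // An old 6MP camera: roughly 14 inches at 200 dpi
  printf ("6MP at 200 dpi: %.0f in\n", max_print_width (6, 200));

  // A 20MP camera: roughly 26 inches at 200 dpi, 17 inches at 300 dpi
  printf ("20MP at 200 dpi: %.0f in\n", max_print_width (20, 200));
  printf ("20MP at 300 dpi: %.0f in\n", max_print_width (20, 300));

  return 0;
}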

But what do I know? Here is what Colin Prior has to say on the subject of megapixels:

What do you need a 100MPs for? ... What do we need that resolution for? Nobody can answer that question! Anything more than 30MPs is total bonkers. (An interview on The Togcast Podcast, episode #26)

So, if you do the sort of photography that requires you to regularly produce prints at A2 or bigger then the M4/3 might not be a good fit, but do you? Most of us don’t; the vast majority of today’s imagery is intended for digital consumption, which puts much less demand on pixel density. A 27" iMac retina display has a resolution of ~200 dpi, a typical consumer monitor ~120 dpi, and ‘4k’ means roughly 4000 pixels wide – a 20.3MP M4/3 image exceeds all of those.

‘But lots of pixels mean you can crop a lot!’ I prefer to compose my pictures on location instead, don’t you?

In contrast, the physical size of the sensor matters a great deal. M4/3 cameras have a sensor with (a clue in the name) a 4:3 ratio, and a diagonal exactly 1/2 of a 35mm film negative (what in the digital age, to the amusement of medium and large format photographers everywhere, has come to be known as ‘full frame’, FF for short). The relationship between the FF and M4/3 diagonals is referred to as a crop factor of 2.

The physical size of the sensor determines three things in particular: the total amount of light that falls on the sensor at any given light conditions, the focal length of a lens needed to achieve a given angle of view, and the perceived depth of field.

Equivalent Aperture

Because the M4/3 sensor area is a little bit over 1/4 of the FF sensor area, the total amount of light (the number of photons) falling on the sensor is ~1/4 of what would fall on a FF sensor if both lenses were set at the same f-stop, or the same amount of light that would fall on the FF sensor if its lens was closed down by 2 f-stops. This relationship is referred to as equivalent aperture.

The amount of light that falls on the sensor has a bearing on noise. There is a whole field of science dedicated to the subject (signal theory), but in essence, in any real-world system that transmits information, the useful bits (signal) are always mixed with bits that should not be there (noise; think of it as the crackling of an old vinyl). The closer the size (amplitude) of the signal is to that of the noise, the more intrusive the latter becomes (quiet music, loud crackling). This relationship is described by the signal-to-noise ratio (SNR).

In a digital sensor the amplitude of the noise is largely given by the quality of the circuitry, while the amplitude of the useful signal is given by the intensity of the light. When it’s dark, the signal is small and has to be amplified. This is where the ISO number comes in. With a digital camera the ISO number represents the amplification factor: the bigger, the more amplification. But the problem with amplification is that it increases not only the amplitude of the signal, but also of the noise.

But the important thing to understand, and I get the distinct impression that a lot of photographers don’t, is that when it comes to digital, noise is not determined by the amount of light that falls on the sensor as a whole (indicated by the equivalent aperture) but by the light falling on an individual pixel. In other words, noise depends not only on the equivalent aperture, but also on the overall pixel count.

So, at the same f-stop a pixel on a 45MP FF sensor receives not 4x (the equivalent aperture figure) but only ~1.7x the amount of light that a pixel on a 20MP M4/3 sensor does, i.e., all other things being equal, the M4/3 lens needs to be opened up by less than a stop to achieve the same performance – the fact that the current generation of M4/3 cameras has stuck with sensible sensor pixel counts, whereas FF sensors are chasing ‘bonkers’ pixel densities, works hugely in M4/3’s favour.
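
The ~1.7x figure follows directly from the sensor areas and the pixel counts; a small sketch of the arithmetic, using the nominal sensor dimensions and the pixel counts quoted above:

#include <stdio.h>

int
main (void)
{
  double ff_area = 36.0 * 24.0;   // full frame sensor area, mm^2
  double m43_area = 17.3 * 13.0;  // M4/3 sensor area, mm^2

  double ff_mp = 45.0;            // FF pixel count, in millions
  double m43_mp = 20.0;           // M4/3 pixel count, in millions

  // Light per pixel scales with the sensor area divided by the
  // number of pixels sharing that light
  double ratio = (ff_area / ff_mp) / (m43_area / m43_mp);

  // Prints ~1.7
  printf ("per-pixel light ratio: %.2fx\n", ratio);

  return 0;
}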

In practical terms, the noise of the system determines two things: how quickly the image degrades when the ISO value is pushed up, and the dynamic range of the sensor (the difference between the lightest and darkest light level the sensor can capture). So what does this look like in the real world?

Regarding high ISO usability it looks pretty good. The current crème de la crème of the M4/3 cameras, the Lumix G9, doesn’t show significant amounts of noise until above ISO 3200. In terms of outdoor photography this is very decent. And even my now somewhat ageing GX8 is, in my experience, perfectly fine to at least ISO 800.

At the same time, M4/3 lenses tend to be fast. A typical zoom lens will come at f2.8, good enough to shoot any daytime landscape at ISO 100, and it gets even better with primes. The Olympus M.Zuiko fisheye opens up to f1.8, which is good enough for night time photography at ISO 100. Personally, I rarely need to set my M4/3 cameras to anything other than ISO 100, the main exception being bird photography with the big telephoto at fast shutter speeds (1/2500s or so); but even for that ISO 3200 is usually enough.

Then, of course, there is the excellent performance of current noise reduction algorithms that can be used in post-processing. In other words, it’s just like the megapixels – in terms of low light noise the M4/3 cameras are well beyond what is necessary to take that perfect landscape image.

What about the dynamic range? The active range of the human eye is estimated to be ~10-14EV (the absolute dynamic range is about 24EV, but this is not actively usable since the eye takes the better part of an hour to fully adjust to a transition from light to darkness). Top end FF (as well as MF) sensors come in a bit under 15EV, while top end M4/3 cameras deliver a bit over 13EV.

‘Aha!’, you say, ‘the FFs are almost 1.5 stops better!’

My answer to that is twofold: on the one hand, 13 and a bit EV is more than enough in most landscape scenarios, and when it is not, it is easily remedied (either by an old fashioned graduated filter, or by that wonderful feature of modern digital cameras, exposure bracketing). On the other hand, 1.5 stops is not enough of a difference; I routinely use a 3EV GND filter, which means I’d still have to do something even with a top end FF (or even MF) camera.

So to sum up, I don’t believe the ‘better image quality’ argument for FF sensors holds. It’s not that sensor size doesn’t come at a cost of quality, it does. But once the quality of the hardware gets beyond a certain point, it no longer matters, and the quality of the image comes down purely to what we do with the camera. And today’s M4/3 cameras are, in my personal view at least, well beyond that point.

Equivalent Focal Length

For a given angle of view, the focal length of the lens is proportional to the crop factor. So, if the ‘normal’ angle of view for a FF sensor is created by a 50mm lens, for M4/3 the same is achieved with a 25mm lens, which would be said to be a ‘50mm equivalent’.

The big consequence of this, and the real strength of M4/3 as a system for the photographer on the move, as I already noted, is that the lenses are much smaller and lighter (not to mention cheaper). As Jim Frost once put it, ‘you don’t need a wheelbarrow to carry it around’. Nothing to add to that really.

Depth of Field

While equivalent focal length is useful to compare the angle of view, two lenses of such equivalent focal length are not equivalent in other important respects. The most notable difference is the depth of field. Lenses of equivalent focal length have the same depth of field at apertures proportionate to the crop factor, i.e., the depth of field of a M4/3 25mm lens at f2.8 is the same as that of a 50mm FF lens at f5.6. In other words, M4/3 has a noticeably greater depth of field than a FF camera for the same angle of view and aperture.

The increased depth of field is the single biggest, and I think the only practically significant, difference between modern M4/3 and FF cameras; whether the M4/3 is deemed to be a good fit will depend on how much value shallow depth of field adds to one’s images.

There are photographic disciplines where a shallow depth of field is definitely a must, but personally I don’t think outdoor photography is necessarily one of them. First of all, it’s not so much of an issue with closeups, for at short distances even the M.Zuiko f1.8 fisheye will produce a decent bokeh (plus closeups can, more often than not, be done using a longer focal length). At long focal lengths this is not an issue at all, e.g., the depth of field on the Leica 100-400mm zoom is perfect for wildlife photography.

So in practical terms the depth of field only makes a noticeable difference at mid foreground distances with normal to wide angle lenses. To be specific, with an M4/3 12mm lens (24mm eq.) at f2.8 and the far focus limit stretching to infinity, the near focus limit will never be more than ~3.5m. A FF 24mm lens at f2.8 can push that near limit to around 6.5m. But how much difference does this really make to me? If nothing else, if I really want that extra 3m of unfocused foreground, and I can’t achieve the desired composition at a longer focal length, I can fix that in post-processing.
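
Those ~3.5m and ~6.5m figures can be sanity checked with the standard hyperfocal distance formula; a rough sketch, assuming the commonly used circle of confusion values of 0.015mm for M4/3 and 0.03mm for full frame, prints roughly 3.4m and 6.9m, in the same ballpark as the numbers above (the exact result depends on the circle of confusion you pick):

#include <stdio.h>

// Hyperfocal distance in metres; with the far focus limit at
// infinity, the near focus limit can never exceed this value
static double
hyperfocal_m (double focal_mm, double f_stop, double coc_mm)
{
  return (focal_mm * focal_mm / (f_stop * coc_mm) + focal_mm) / 1000.0;
}

int
main (void)
{
  // M4/3 12mm (24mm eq.) at f2.8, CoC ~0.015mm
  printf ("M4/3 12mm f2.8: %.1f m\n", hyperfocal_m (12.0, 2.8, 0.015));

  // Full frame 24mm at f2.8, CoC ~0.03mm
  printf ("FF 24mm f2.8: %.1f m\n", hyperfocal_m (24.0, 2.8, 0.03));

  return 0;
}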

In Conclusion

As far as I am concerned the M4/3 does it all. It’s a great system for outdoor photography, whether it’s landscape or wildlife, and after eight years of using it, I can’t envisage ever wanting to ‘step up’ to a full-frame system.

It’s not that I don’t understand the benefits of a bigger medium, having shot thousands of pictures in years gone by on 35mm film, I do. But for the vast majority of my pictures the M4/3 is the optimal system. And when it’s not? Well that’s where medium format comes in, of course. Full frame? But a relic of disposable cameras and holiday snaps from a long gone era of film. 😉


A Few Notes on M4/3 Cameras

Just a few comments on the cameras and lenses I have owned, with a random selection of pictures for illustration. Probably worth saying, the pictures have had a varied degree of post processing applied in Lightroom, but no Photoshop involved (life is too short for that).

Lumix DMC-GF2

My first M4/3 camera. Introduced in 2010 as a somewhat dumbed down successor to the hugely popular GF1, the GF2 delivered 12.1MP resolution with a dynamic range of 10.3EV. By today’s standards not much, but nevertheless in combination with a 14mm pancake lens it proved to be an excellent setup for my outdoor activities. The RAW output made it amenable to a decent amount of post processing, compensating somewhat for the limited dynamic range, and with a little care it could produce pretty decent pictures.

Loch Voil (Lumix DMC-GF2, Lumix G 20mm/f1.7)

Larch Tree (Lumix DMC-GF2, Lumix G 14mm/f2.5)

Lumix DMC-GM5

The main limitation of the GF2 for me was that it did not have a viewfinder, only a fixed LCD screen. This made it hard to use in bright light, and also to compose images properly in general. Eventually I had enough ‘if only this was a bit to the left’ pictures to look for a replacement. The GM1 ticked all the boxes, but by the time I made up my mind (the GM1 was quite dear), its successor, GM5, came out in 2014.

To me the GM5 was a true marvel of technology. It was much smaller and much lighter than the GF2; at 260g (with the 14mm pancake lens) this was a camera I could fit into a shoulder strap pouch and run with. Yet it addressed what I thought were the main deficiencies of the GF2: it has a decent viewfinder and even some external mechanical controls, e.g., for exposure settings. And the 16MP sensor had a much improved dynamic range of 11.7EV – this was a compact-like camera packing some serious punch.

In spite of being four years old now, which in digital sensor technology is a lifetime, I still use the GM5, and indeed over the years it has produced some of the pictures that have made me most happy. In many ways it remains the perfect outdoor camera for me. These days I usually refer to it as ‘the running camera’ on account of its size and weight, for it’s what I take when weight matters. Nevertheless, the dynamic range is enough for most outdoor photography scenarios, and with minimal post-processing the GM5 is capable of producing excellent quality prints of up to ~18" or so.

Sgurr an Fhidleir and Lochan Tuath (Lumix DMC-GM5, Lumix G 14/f2.5)

Bynack More (Lumix DMC-G5, Lumix G 14/2.5)

Lumix DMC-GX8

The Lumix GX series came as the spiritual successor of the GF1, a class of camera aimed at the serious amateur. The GX8, introduced in 2015, is arguably a pinnacle of that series (for reasons only known to Panasonic, the GX9 is a dumbed down version of the GX8, lacking some of the things that make the GX8 so great, not least the mechanical controls and weatherproofing).

The GX8 has well thought out external controls, including a dedicated EV compensation dial big enough to work with in winter gloves, a decent resolution tilting viewfinder, and a tilting screen; it is weatherproof, and all in all a pleasure to use. The in-body 5-axis stabilisation works extremely well, and the 20.3MP sensor has an excellent dynamic range of 12.6EV – more often than not, the GX8 produces great looking images straight out of the camera.

The main limitation of the GX8 is the presence of an anti-aliasing filter on the sensor, which results in some loss of sharpness in fine detail. Nevertheless it produces excellent prints up to ~24".

Loch Leven (Lumix DMC-GX8, Olympus M.12-40mm / f2.8)

Bealach Bhearnais (Lumix DMC-GX8, Olympus M.12-40mm / f2.8; hand-held at ISO 800)

Lumix DC-G9

The Lumix G series is aimed at the professional user. The much improved 20.3MP sensor has a dynamic range of over 13EV, no AA filter, and produces very sharp images that hold their own even alongside today’s best FF cameras. Like the GX8, the G9 is weatherproof, but the body is somewhat bigger and heavier (mostly due to an enlarged hand grip).

Some of the nice features of the GX8 are missing (no dedicated EV compensation dial, the viewfinder doesn’t tilt), but the number of customisable buttons is greater, and the little on-camera LCD is a nice touch. The viewfinder resolution has gone up, its image very bright and clear. The 5-axis stabilisation has been further improved, there is a sensor shifting high resolution mode, delivering 80MP (cool, but awkward in real life), and 4k / 6k mode for shooting stills.

I haven’t had the G9 for long, but the step up from the GX8 is palpable; while the GX8 is a contender for a serious use camera, the G9 is that camera without a question.

Kolla, Iceland (Lumix DC-G9, Olympus M.12-40mm / f2.8)

Ísafjörður, Iceland (Lumix DC-G9, Olympus M.12-40mm / f2.8)

A Few Notes on Lenses

Over the years, I have accumulated a collection of M4/3 lenses, but the ones I most often use are the following.

Lumix H-H014 14mm Pancake

When weight really matters, this is the sole lens I’ll pack, together with the GM5. At f2.5 it is reasonably fast, light, and very low profile.

Olympus M.Zuiko Digital ED Pro 12-40mm

This is my default lens, combined with either the GX8 or G9. It is weatherproof, f2.8, constant aperture while zooming (great for video), and, like the whole M.Zuiko series, of excellent optical quality.

Shenavall (DMC-GX8, Olympus M.12-40mm / f2.8)

Beinn a'Chlaidheimh (DMC-GX8, Olympus M.12-40mm / f2.8)

Olympus M.Zuiko Digital ED Pro 8mm Fisheye

A true, 180 degree, fisheye. At f1.8, this is a great lens for both night time photography and daytime landscapes.

Lochain Coire an Lochain, Breariach (DMC-GX8, Olympus M.8mm / f1.8; exposure bracketed)

Bealach Beinn an Eoin (Lumix DC-G9, Olympus M.8mm / f1.8)

Leica DG Vario Elmar 100-400mm Zoom

The aforementioned ‘bird’ lens, brilliant for wildlife photography. F4-6.3, in-lens image stabilisation, excellent optical quality. At 1kg, not the lightest but, more often than not, worth it.

Tawny Owl (DMC GX8, Leica DG 100-400 / f4.0-6.3; shot at dusk from a monopod at ISO 6400, with noise cleaned up in post processing)

Greenshank (Lumix DC-G9, Leica DG 100-400 / f4.0-6.3; hand-held at ISO 400)

by tf at August 18, 2018 12:11 PM

August 14, 2018

Tomas Frydrych

More on the Triad & Decagon Stoves

More on the Triad & Decagon Stoves

I have mentioned the Vargo Triad and Decagon in an earlier post on Cooking with Alcohol. I have now had a chance to use both stoves for real, and the experience was, unfortunately, not so good.

The Triad

The Vargo Triad is a beautifully made pressurised titanium stove with a capacity of ~40ml. I bought it specifically with a view to a two week trip to Iceland this summer. When it arrived, it was love at first sight, it is a real thing of beauty. And as you can see from the earlier post, it performed well both on the initial tests in our kitchen, as well as during experimental snow melting on a very nice, calm, day in the Ochils -- I felt confident the Triad was going to be perfect for us.

However, before going to Iceland I happened to take it on an overnight outing in the hills. It was during the heatwave, and so, unusually for me, I decided to just bivvy rather than take a tent, and, somewhat unexpectedly, it turned out to be a fairly windy evening, hitting around 40mph during the night on the high ground.

Not a big deal. I chose relatively sheltered locations to cook in, and also used some large stones as an external windbreak, with the stove itself inside a home made metal windshield. This should have been fine (and would have been with my Speedster stove). But both in the evening and the morning a gust of wind lasting a few seconds caused the stove to flare up quite uncontrollably, with flames of about 30cm. In both instances the stove would not return to a normal burn until the fuel burned out, and the second time this happened it was so bad, I decided to have my porridge cold rather than risk setting the hill on fire.

While I suspect this behaviour might have, to some extent at least, been exacerbated by the high and tight-fitting windshield I was using, it is I think primarily caused by the low profile of the stove (more on this below).

Based on that experience I would never use this stove unless it was on a very large non-flammable surface (at least a meter in diameter), and certainly not inside a tent. We ended up taking a Trangia to Iceland instead (and it really grew on me during those two weeks, but that's for another time).

The Decagon

The Vargo Decagon is another beautifully made, but unfortunately very poorly designed stove. It seems that the overriding objective was to create a stove that would be indestructible, but in achieving this, Vargo created one that fails miserably at its primary function. The design suffers from three flaws, each one bad enough on its own.

Priming

The Decagon is impossible to prime. The packaging suggests that the stove will prime in 1-2 minutes, but that is just wishful thinking. Even in the controlled environment of my kitchen, it takes on average 4-5 minutes to bloom. Vargo suggest that a faster alternative priming method is to splash some alcohol around the base -- I find it hard to believe that they really suggest this, for while it indeed speeds things up, it results in a large uncontrollable flame, so it is only possible if your stove is inside a large, completely non-flammable area.

This is a pity, because this flaw can be easily remedied by adding a priming ring; the picture above shows the stove with a priming ring made of glass fibre cable insulation, with which it can be primed in under 1 min.

Excessive Burner Cooling by the Pot

To make the Decagon indestructible, Vargo did not include any pot supports. Instead the pot sits on three small bumps on the top of the burner. The result is that the pot conducts heat directly away from the burner itself, with a clear impact on the gasification of the fuel; the amount of heat generated by the burner visibly grows as the pot warms up (just about the very opposite of ideal burner behaviour, I think).

One of the consequences of this design is that the pot cannot be placed on the stove until the burner has bloomed (as is noted in the instructions). But in fact the heat loss is so bad that if you put a pot of ice cold water on even once the stove is in full bloom, it will de-bloom in a matter of seconds, and go completely out shortly after. (So no, this is not a stove that could be used to melt snow, as I had hoped.)

This flaw can, again, be worked around by using external pot supports that lift the pot slightly above the burner, but this can lead to the burner overheating and flaring up, so I would not recommend it. Which brings me to the biggest problem with the Decagon.

Safe Fill Level

The thing that really appealed to me about this stove is the large capacity of just under 60ml. Unfortunately, the stove cannot be safely filled to this level. If you do, liquid fuel will spew out of the jets during priming. This then creates a very strong positive feedback loop, causing the stove to get into an uncontrollable rocket-like flare; throw a tight windshield into the mix, and it will generate enough heat to not only pulverise silicone matting but also, literally, melt aluminium (but unlike the Triad, the Decagon doesn't need a windshield to get there).

When in this state the stove cannot be blown out, you either have to cover it with a large pot to deprive it of oxygen, or let it burn out. My experiments suggest that the Decagon safe fill is only about 40ml (but this makes it impossible to prime without a priming ring!).

In spite of all this, we took the Decagon with us to Iceland as a secondary stove, and used it a couple of times with a frying pan. Assuming it is not filled with more than 40ml, and precautions are taken so it can be safely left to burn out just in case, it is OK for that; I might use it again that way now that I have it but, as you have probably guessed by now, I'd not buy it again.

Lessons Learnt

I think the basic problem with both of these stoves is that the jets are too close to the fluid level. It's worth noting that the Trangia instructions say not to fill their burner to more than 3/4 of the depth of the bath, which means there is at least a 1cm gap between the fluid and the jets. In contrast, the instructions for the Vargo stoves say they must be filled to capacity to prime (and, indeed, they will not prime otherwise), but that puts the jets very close to the fuel surface, and makes them susceptible to spewing out liquid fuel when the alcohol boils. Whichever way you look at it, and whatever the burner design, this can never be considered a good thing.

So my quest for a winter alcohol stove goes on -- I think I am going to give the Trangia a shot in spite of its bulk. But in any case, I am definitely going to avoid any pressurised alcohol stove where there is not a sufficient gap between the jets and the fluid level.

by tf at August 14, 2018 12:44 PM

July 28, 2018

Tomas Frydrych

The Wind

The Wind

I saw the front in a distance. A solid wall of water, just obscuring where Kings House once stood (a view improved, I dare say). It was upon me before I had the tent up, a scramble to get inside, wait it out by candle light.

Half an hour or so and it’s over. A tiny light on Beinn Bheag, a kindred spirit I imagine. But there is no sign of the blood moon (unlikely as it was, the reason I rushed here this evening after work). Though the cloud has cleared somewhat, so there is still a tiny glimmer of hope, some time left.

Alas, it’s not to be. I notice a white cloud oozing from Lairig Gartain, then a rather menacing black one emerges from behind the Buchaille. But the real weather is behind my back. The wind has done a half circle turn and another wall of rain just passed Beinn Fhada and is advancing my way, fast. The torrents hit just as I am unzipping the tent. More time for idle thoughts by the candle light; much to be recommended.

The first lightning takes me by surprise. Sure, it was forecast, but out there just now it did not feel like a stormy sort of a day. One elephant, two elephants, three elephants, four elephants. One elephant, two elephants, ... eight elephants. Good. One elephant, ... three elephants. Less good; glad I didn’t camp any higher up, sparing a thought for the tiny light on Beinn Bheag.

It lasts another half hour. Time to put the boots on again, just in case the moon has appeared; there is still a little time left. But, naturally, there is no moon, just some sheet lightning far to the east. For some reason the road is very busy just now, so I decide to take some cliche long exposure pictures, but the rain returns before I am ready, so back into the tent. More lightning. This is it, out of time, turn in for the night.

When I wake up in the morning, the first thing I see are two unfamiliar blotches. As my eyes manage to focus, I make out two birch seeds stuck to the outside of the inner tent. They must have come on the wind during the night, possibly a long way – I can’t recall any birches around here.

They make me smile. Symbols of new life, of change; of the very possibility of change. But they make me sad at the same time, for, unwittingly, by my being here, their progress came to a sudden halt. Another hard to escape symbolism.

I know, it’s just a couple of birch seeds. But symbolism matters. As every religion ever understood, symbolism makes it possible to participate in what we do not see, in what is not (yet) here. Symbols turn the abstract and theoretical, into personal and tangible; they turn thoughts into surrogate experiences.

Take the plastic straw. It’s been pointed out (unhelpfully) that if you, and I, and everyone else, give up on plastic straws, nothing much will change, for most of that plastic pollution in our seas comes from fishing nets. Perhaps, but that is to miss the point.

The moment I decide ‘no more plastic straws’ is the moment I, personally, start owning the bigger problem, the moment I accept my complicity. The symbolic value of this act is immense, for without such an ownership, both individual and collective, real change cannot happen.

Perhaps that is also the reason we need the lynx back in Scotland.

I will be honest, I have read the NGO materials, the economic forecasts, the description of the methodologies used to derive these. They leave me cold – they do not read like economic forecasts from people rushing to make money out of the lynx. They read like statements from people who believe it is the right thing to do, but the only way to achieve that is to convince the world at large there is money to be made. I, for one, am not sold on this Lynx-the-tourist-attraction, not least because should the economic case fail, the whole project is doomed, and doomed for a long time after.

But the argument that, ecologically, and I think also morally, it’s the right, even necessary, thing to do, seems to me very strong. The lynx as a symbol of change, of accepting that our natural resources cannot be managed purely from the perspective of primitive market economics, and that repairing damage done in the past, rather than merely maintaining the status quo, has to be part of a modern environmental ethos: that, I think, is very potent and could have ramifications beyond the few roe deer that will not need to be culled.

Perhaps. I am neither an ecologist, nor a sheep farmer, but, FWIW, I know a bit about religion and symbolism.

I forgot all about my two seeds until just now, drying the tent in the garden. They are no longer there, just two small stains on the fabric. And now I am not even sure they were there in the first place. Did my eyesight deceive me? It would not be the first time.

It doesn’t matter. Just now there are birch seeds everywhere I look. There are thousands of them in the air, in my hair, they fall behind my shirt, and land in my pockets, my shoes are full of them. And the wind? The wind is picking up again!

by tf at July 28, 2018 06:23 PM

April 30, 2018

Tomas Frydrych

Six Months with Cotton Analogy®


When the row over the National Trust for Scotland trademarking the name ‘Glencoe’ erupted last summer, I had never heard of a company called Hilltrek. But I had for a while been on the lookout for some clothes for pottering about the woods with binoculars and a camera during the winter months, and had not seen anything well suited to the (sodden) Scottish conditions. And I liked what I saw on the Hilltrek website.

Hilltrek are a tiny Scottish company offering a range of clothing made from Ventile®. If you, like me, have not heard of Ventile® before, it’s a cotton fabric developed in the 1930s, essentially for fire hoses. When subjected to water, its very dense weave swells so much that it prevents water penetration. The swelling is not instantaneous, so a single layer of the material is not enough to keep the wearer completely dry under a lot of water, but two layers, the so-called Double Ventile®, are.

Hilltrek make clothes in three fabric options: Single Ventile®, Double Ventile®, and Cotton Analogy®. The latter is a single layer of Ventile® combined with the Nikwax Analogy® lining, also used by the Paramo® range of clothes. It was this that caught my attention, for the Nikwax Analogy® lining is well proven, and while I have never owned any Paramo® clothing (I am more of a Buffalo man myself), I know many who swear by it, and I have seen it perform excellently in some ‘real’ Scottish and Welsh weather. Unlike Paramo®, Cotton Analogy® offers the natural feel of cotton, without the irritating rustle of nylon — I was sold.

The Conival Trousers

While I was only looking for something to wear about the woods, with the Conival Trousers I got more than I bargained for — without exaggeration, these are the best outdoor trousers I have ever owned. Over the last six months I have spent somewhere in the region of thirty-five days wearing them, from sodden days in the woods to numerous big, full-on days in the hills, including multi-day camping trips in snow and temperatures dropping below -10C. In all of this they performed impeccably.

The Conivals have a no-nonsense cut, can be customised at the point of ordering, and if you have special requirements, all you need to do is to lift the phone (the great thing about dealing with small companies). There are two zipped pockets on the back, and two front hand pockets; cargo pockets can be ordered as an extra.

Unlike typical waterproof fabrics, the Analogy® lining is pleasant enough to wear next to skin, so these really are trousers rather than over-trousers, and they breathe very well. I tend to sweat fairly heavily, and so I normally avoid wearing waterproofs until it is really raining — these are the first waterproof trousers I have owned that don’t feel like being inside a banya and that I am happy to wear all the time.

The two-layer construction is quite warm. I have found them good down to a few degrees below zero on their own, and, with a pair of thin merino long johns, in temperatures down to -10C. On the upper end, I find them fine to about 12C; beyond that they are too warm for me (but then I don't usually wear waterproofs in those sorts of temperatures anyway, and I am so impressed I am saving up for the Single Ventile® version Hilltrek make).

I have heard it said of Paramo® trousers that if you kneel on wet ground the water gets through. I have knelt in the Conivals in mud and snow on numerous occasions, pitching a tent or resting calves on long, steep front-pointing stints, and I have not found that to be the case; perhaps that is the benefit of the Ventile® itself being shower proof (or perhaps it was just an evil rumour about Paramo®).

The Ventile® fabric is quite heavy compared to ‘modern’ ‘technical’ kit, but I am growing sick and tired of the current obsession with weight, which invariably translates into equipment that lasts a season or two. Indeed, the Conivals have shown themselves to be (I admit, surprisingly) hard wearing. I have done a fair bit of sliding about in them, sometimes on quite coarse icy ground, without noticeable surface wear. Some of the stitching where the front pockets merge into the side seam is starting to come undone, but that’s easily fixed.

The main wear-related issue with the Conivals is the Ventile® dye, which does not seem to penetrate deep into the fibre, so wherever the fabric creases regularly it starts reverting to the natural colour of cotton. This happens so easily that, somewhat disconcertingly, the trousers started showing these whitish marks from the very first short walk in them, and it gets progressively worse, though it does appear to be purely cosmetic.


The biggest drawback of Ventile® is that, according to the manufacturer’s recommendation, it is supposed to be dry cleaned. For a jacket this might be OK, but for outdoor trousers it is not practical. A closer look at the Ventile® site shows that the fabric can be washed with soap. I have been washing mine at 30C using Nikwax® Tech Wash, and can report no ill effects.

(It's worth noting that, as with all waterproof fabrics, the special care requirements have naught to do with the fabric per se, but with the DWR coating applied to it, which had largely worn off before the first wash. I have tried Nikwax® Cotton Proof per the manufacturer's recommendation; it does not produce the same sort of beading the original DWR did. It does seem to slow the water absorption a bit, but I am not entirely convinced it merits the expense.)

The Assynt Jacket

The Assynt jacket is billed as ‘ideal for field sports, nature watching and photography’. It has a corresponding cut, with a waist-level drawcord, two voluminous low-down front pockets with stud closures, two chest-level handwarming pockets, and a 5”-high collar with a stowaway hood.

In terms of size, based on the official size chart I am bang on for S, and indeed I have found the chest size to allow for adequate layering for winter use. But the sleeves are a different story. If anything, the nominal size suggests they should be too long for me, but in fact they are well on the short side (1-2” shorter than on any other jacket of a comparable size I own), which becomes very noticeable with more layers underneath.

The snug-fitting collar is the jacket’s best feature, keeping the dreich weather at bay. The stowing of the hood works better than is usual with such an arrangement, but unavoidably results in a hood of low volume. This is the jacket’s main limitation. I have used it on a couple of fairly full-on mountain days to see what it would be like, and the hood is not up to the task (this is not the intended use, and there are other jackets in the Hilltrek range that come with big-volume, helmet-compatible hoods).

The other minor drawback is that the handwarming pockets don’t have any closures, and, as they are not Ventile-lined, this makes them draughty in a moderately strong side-on wind. This feels like a bit of an oversight in an otherwise well thought out design.

All in all, I have found the jacket to be excellent within the parameters for which it was intended. I do wish the hood was bigger: I find I keep it out most of the time (simply because Scotland), and a bigger, non-stowable hood would make this a much more versatile garment.

None of the Hilltrek clothing is cheap, especially if you decide to do some customisation, but the prices are not out of line with those of some big-brand, mass-produced outdoor kit. On the other hand, I expect it to last longer. I own a very nice Gore-Tex jacket from a big brand name that cost a similar amount to the Assynt jacket. It’s my ‘special occasions’ jacket, for I know from past experience that in intensive use it wouldn’t last more than a season. I have no such quibbles with the Hilltrek clothing: there is a sturdy feel to it, and it is obvious that it was not only made in Scotland, but also for Scotland.

by tf at April 30, 2018 09:57 AM

April 16, 2018

Tomas Frydrych

Cooking with Alcohol


In the last couple of years I have become a great fan of alcohol stoves. For three reasons. On short trips they are very weight-efficient. Alcohol is a much more environmentally friendly fuel than gas. And alcohol stoves are cheap to run!

As I have mentioned before, through my childhood and teenage years outdoor cooking involved an open fire. My first real stove was an MSR WhisperLite™, purchased at the Mountain Equipment Co-op in Vancouver in ’96. I still have it, with the original seals and that, though I haven’t used it for some years. The truth is that petrol stoves really come into their own on long remote trips, and I don’t do those. And they take a bit of getting used to; the priming can easily get out of hand!

(On one particularly memorable occasion in Glen Brittle in the late ’90s, the WhisperLite™ got me invited to cook in the kitchen of a giant luxury mobile home by a kind German couple who thought my stove was broken when I misjudged the volume of the priming fuel, resulting in a flare worthy of Grangemouth. The trick, I learnt eventually, is to use a little cotton wool and meths, but by then I had also realised that this excellent stove was a poor match for my needs.)

And so, like everyone else, I switched to gas.

Gas stoves, without question, win on the convenience front. There is no risk of spilling stinky fuel, no priming. But they have their drawbacks, not least that the fuel is expensive and environmentally unfriendly — the LPG brings with it the whole oil-industry baggage, the cartridges are manufactured in the Far East and then shipped around the globe, and, being non-refillable, they end up in landfill (or left in a bothy); these things increasingly bother me.

Gas stoves are also rather weight inefficient. I didn’t fully appreciate this until I started thinking of multi day running trips, and was forced to rationalise the weight of my kit. My first move was, of course, a lighter gas stove, the 25g BRS-3000T. It only took a couple of trips to realise this was a dangerous piece of crap (mine flares uncontrollably sideways at any attempt to reduce the flame; sometimes we really get what we pay for).

In any case, if the objective is to reduce weight, even the lightest of gas stoves doesn’t help much, for the fundamental problem lies with the canister: on the one hand, I have very little control over how much fuel I take, and on the other the canister is far too heavy. So if my requirement is, say, for 60g of gas, I have to take 110g, plus the 120g of the canister; if I need 120g, I have to take 220g of it, plus 180g of the canister, etc.

Just as petrol/kerosene stoves beat gas in the weight game for long trips, alcohol stoves do so for short trips. Alcohol has two big advantages: it is very easy to store and transport, with minimal weight overheads, and it is very easy to burn, making it possible to create simple, light stoves.

Of course, burning alcohol produces only about half the amount of energy per unit weight as gas. But for short trips this is more than offset by the weight of the canister: if you need 60g of gas you have to pack 230g of fuel + canister, whereas for an alcohol stove the equivalent comes to ~140g. Broadly speaking, the weight game works out in favour of alcohol, or at least level pegging, until you need enough fuel to warrant the big 460g gas canister. How long a trip that is will depend on your cooking style, but in my case it is 3+ solo nights when melting snow, and something like 10+ nights in the summer.
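
To make the arithmetic concrete, here is a minimal sketch of the packed-weight comparison (a Python toy, purely illustrative): the canister figures are the ones quoted above, while the 2x fuel-weight ratio, the ~30g bottle overhead and the empty weight of the big canister are my own rough assumptions.

    # Rough packed-weight comparison: gas canister vs alcohol in a small bottle.
    # Canister net/empty weights as quoted in the post; everything else is a guess.

    GAS_CANISTERS = [        # (net fuel in g, empty canister in g), smallest first
        (110, 120),
        (220, 180),
        (460, 210),          # the big canister; its empty weight is an assumption
    ]

    ALCOHOL_PER_GAS = 2.0    # ~half the energy per gram, so roughly twice the fuel weight
    BOTTLE_OVERHEAD = 30     # g, a small plastic bottle; also an assumption

    def packed_gas(gas_needed_g):
        """Full weight of the smallest canister that holds the required gas."""
        for net, empty in GAS_CANISTERS:
            if net >= gas_needed_g:
                return net + empty
        raise ValueError("more fuel than the biggest canister holds")

    def packed_alcohol(gas_needed_g):
        """Equivalent alcohol load: take only what is needed, plus the bottle."""
        return gas_needed_g * ALCOHOL_PER_GAS + BOTTLE_OVERHEAD

    for g in (60, 120, 230):
        print(f"for {g}g of gas: canister {packed_gas(g)}g vs alcohol {packed_alcohol(g):.0f}g")

On these assumptions it prints roughly 230g vs 150g for a 60g-of-gas job, 400g vs 270g for 120g, and 670g vs 490g for 230g; only once the fuel required approaches the capacity of the big canister does gas pull ahead.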

And alcohol is cheap, and its environmental footprint is much smaller. There are, of course, downsides, most notably that cooking with alcohol takes longer; how much longer depends on the actual stove, so let’s talk about the stoves.

Alcohol burners come in two basic types: pressurised and unpressurised. An unpressurised stove is really just a small bowl holding the fuel, burning the vapours as they rise from the surface. While this works perfectly fine, such an open-bath stove is potentially quite dangerous because of the risk of spilling the burning fuel; this is easily remedied by filling the bowl with some kind of fireproof porous material. The simplicity means unpressurised stoves are usually home- or cottage-made.

In the case of a pressurised stove, the fuel vapour is expelled under pressure from an enclosed fuel reservoir through a series of small holes, resulting in discrete jets of flame. Unlike with petrol/kerosene, this pressure is created simply by heating up the fuel in the reservoir, and is not very high. It does, however, mean that the stove has to have some way of priming; most often this comes in the form of an open bath in the centre of the burner. The best known pressurised alcohol stove is undoubtedly the Trangia, but this type of burner can also be made fairly easily at home from a beer can, e.g., the famous Penny Stove — beer can stoves are neat and really fun to experiment with (but they are also quite large and fragile).

(Before going any further, it is worth saying that alcohol stoves always need a windshield; the flame is just too feeble to cope with even a slight breeze. The cheapest, and also the most lightweight, option is to make one from a double layer of kitchen foil. If you look after it, it will last quite a while, but it is too light for use in real wind, though perfectly fine for in-tent use. Of course, alternatives, commercial or otherwise, exist.)

Back to stoves. So, which one is better, pressurised or not? The clued-up reader, who undoubtedly now expects a detailed discussion of the fuel efficiency of the different designs, is going to be most disappointed in me. As exciting as carefully measuring the fuel burnt by different models to find the Ultimate Stove is, when it comes to alcohol such comparisons are of very limited value.

The fuel efficiency of any stove really comes down to a single thing: is the vapourised fuel mixed with enough oxygen to allow complete combustion? In all alcohol stoves the mixing happens above the burner, and so is determined more by the size of the pot, its distance from the burner, and the airflow provided by the windshield than by the design of the burner itself. Consequently any comparison is only valid for the one specific testing configuration, and you will almost certainly be able to come up with a different setup that produces quite different results.

OK, but which is better? They both have advantages and disadvantages. The great thing about unpressurised burners is that you can put in as little or as much fuel as you want, and if there is any left, you screw the lid on and it will keep till the next time. Also, the variety with the absorbent material is the safest alcohol stove there is (and one shouldn’t underestimate the danger of spilling the burning fuel, as the flames are nigh invisible).

The main advantage of a pressurised stove is a higher rate of burn, i.e., it cooks faster. But it is quite difficult to make a really tiny one, because below a certain size the priming/gasification doesn’t work very well. Also, the usual method of priming from an open bath on the top of the burner is super inefficient, and for there to be enough fuel in the bath the stove generally needs to be filled near to capacity. For the smaller stoves this will be around 30ml — if, like me, you only use 50ml per day and less than 15ml at a time, this is a nuisance, as there is always a significant, unavoidable loss to continued evaporation while the stove cools down before you can pour the excess out, and there is always spillage when draining it.

If the burner doesn’t sit directly on the ground, it is possible to prime from below, using a small vessel, e.g., a bottle cap. This is much more efficient and needs just a few drops of fuel. But it is quite tricky to get right and requires practice — if you use too much fuel, you get a flare-up, not ideal in a tent!

An unpressurised stove is great for solo summer use. Mine is of the makeup case variety; if you look carefully, you can see it among my other cooking paraphernalia in the title picture (taken on an unexpectedly cold autumn morning in the Cairngorms; the -4C meant I had to boil an extra pot that morning to pour into the running shoes to soften them up).

The stove came from redspeedster on eBay (you could easily make your own, but it’s not worth sourcing the materials for just one). It has a 30ml capacity and, with the nice pot supports he also makes, comes to 24g. In my setup, using a 1/2l pot, it will bring 400ml from 8C to a rolling boil in about 12min, using 11g of fuel — yes, it’s slow, but then I rarely need a rolling boil, so my actual ‘boil’ time is ~8min; and really, I have all the time in the world, after all I am escaping the time-obsessed rat race.

But once you start looking at cooking for more than one person, the unpressurised stove becomes impractical. I still want something small, i.e., not the family-sized Trangia, but nevertheless something faster.

The Vargo Triad fits the bill. It’s a nicely made little gadget, and has about double the burn rate of my makeup stove, bringing 0.8l of water from 8C to a rolling boil in just under 13min, using 23g of fuel. This will do nicely for our next summer trip, I reckon. It’s a pity the burner does not have a cap, but I have cut a circle from a silicone baking sheet to cover it, which reduces fuel evaporation while the stove cools down after being extinguished.
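
For what it is worth, the ‘double the burn rate’ claim falls straight out of the boil figures quoted above (a rough check, nothing more):

$$\frac{11\ \mathrm{g}}{12\ \mathrm{min}} \approx 0.9\ \mathrm{g/min}\ \text{(makeup stove)}, \qquad \frac{23\ \mathrm{g}}{13\ \mathrm{min}} \approx 1.8\ \mathrm{g/min}\ \text{(Triad)}$$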

[Updated 14/8/18 -- I have run into significant issues in real use of this stove, see this post]


My current quest is to find an alcohol stove I’d be happy with for winter use. During winter I tend to heat up about twice as much water as I do during the summer (~3l), while at the same time snow melting roughly doubles the energy requirements (I am sure I could cut this down by manning up, but TBH, the winter brings enough misery as it is). The Triad at a near-full fill of 35g of fuel will just about bring 1/2 litre of water from snow to the boil in 20min — for winter solo use I think this is borderline; I’d prefer something that would do about 0.8l at a time, and a bit faster.
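
The ‘roughly doubles’ figure checks out against textbook values (latent heat of fusion of ice ~334 kJ/kg, specific heat of water ~4.2 kJ/kg·K); the exact factor depends on how cold the snow is and on how much heat the wind steals along the way:

$$E_{\text{heat}} \approx 4.2 \times 100 \approx 420\ \mathrm{kJ/kg}, \qquad E_{\text{snow}} \approx 334 + 4.2 \times 100 \approx 754\ \mathrm{kJ/kg} \approx 1.8 \times E_{\text{heat}}$$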

The Vargo Decagon looks a possible option. The 60ml capacity should be enough to melt 0.8l from snow, and it appears to have a considerably higher burn rate than the Triad. But by all accounts the Decagon is very slow to prime, and unlike the Triad the pot can’t go on until the priming is finished (the pot sits directly on top of the burner, so it conducts heat away from it); it also doesn’t lend itself as well to bottom priming as the Triad, nor can it be so easily capped. Nevertheless, I am keen to give it a try, preferably while there is still some snow in the local hills.

[Updated 14/8/18 -- I have run into significant issues in real use of this stove, see this post]

by tf at April 16, 2018 12:46 PM