Planet Closed Fist

July 26, 2017

Tomas Frydrych

The Unfinished Business of Stob Coir an Albannaich

I have a confession to make: I find great, some might think perverse, pleasure at times in bypassing Munro summits. It is the source of profound liberation -- once the need to 'bag' is overcome, a whole new world opens up in the hills, endless possibilities for exploring, leading to all kinds of interesting and unexpected places. Plans laid out in advance become mere sketches, to be refined and adjusted on the go and on a whim.

But I prefer to make such impromptu changes because I can rather than because my planning was poor, or because of factors beyond my control (of course, the latter often is just a euphemism for the former!); my ego does not relish that. Which is why today I have a firmer objective in mind than usual, namely Stob Coir an Albannaich.

Let me rewind. Some time back, while poring over the maps, the satisfying line of Aonach Mor (the north spur of Stob Ghabhar) caught my eye. And so just over a week ago, I set off from Alltchaorunn in Glen Etive with the intention to take Aonach Mor onto Stob Ghabhar, and then follow the natural ridge line over Stob a' Bhruaich Leith to Meall Odhar, Meall nan Eun, Meall Tarsuinn and onto Stob Coir an Albannaich, and then back over Beinn Ceitlein. About 27km with 2,100m vertical, so I am thinking 6 hours.

But for the first half of the day there is thick low cloud hanging about; above 500m visibility is very poor and the wind quite strong. I don't mind being out in such conditions. That is often when the hills are at their most magical, the brief glimpses of the hidden world down below more memorable than endless blue sky.

Also, it's not just me who can't see, and I generally find I have many more close encounters with wildlife in conditions such as these; today is no exception, and I get to see quite a few small waders, and even a curlew. But I end up moving by the needle all the way to Meall Odhar, making slow progress.

The cloud clears just as I am having my lunch on the boundary wall below Meall Odhar. The second half of the day is a cracker. I quickly pick up the walkers' path and jog to the Meall nan Eun summit where four seniors are enjoying the sunshine. They tell me not to stop, that the sight of me is too demoralising; I am thinking to myself that I hope I'll still be able to get up the hills when I reach their age, they must have thirty years on me. The usual Scottish banter; an inherent part of the hill experience, as much as the rain, the bog and the midges.

From here onwards the running is good and flowing. As I am about to start the climb onto Albannaich, I do some mental arithmetic. I am five hours in, the planned descent from Albannaich looks precarious from here, and I have no suntan lotion. I decide to cut my losses, run down the glorious granite slabs below Meall Tarsuinn, and return via the Allt a' Chaorainn glen.

It turns out to be a good call: it still takes me two hours to get back, and I pick up a touch of sun on my neck along the way. Nevertheless, it leaves me with a sense of unfinished business.

And so this week I am back. Not the same route, obviously. Rather, I set off from Victoria Bridge, gain the natural ridge line via Beinn Toaig's south west spur, planning to descend Albannaich either over Cuil Ghlas, or Sron na h-Iolaire. It's a wee bit bigger outing than last week (I am expecting eight hours), but there is nothing like responding to a 'failure' with a little bit more ambition!

The weather is glorious, if anything just a bit too hot, views all around. Yet, looking over Rannoch Moor it's impossible not to reflect on how denuded of trees this landscape is. Just a small woodland around the Victoria Bridge houses, a couple of small Sitka plantations, and an endless sea of bright green grass. During the autumn and winter months there is a little bit more colour, but this time of the year the monotonous green drives home to me how unvaried the vegetation here is.

I make good progress along the ridge (no navigation required), skip (with the aforementioned degree of satisfaction) Meall nan Eun summit and arrive on Stob Coir an Albannaich in exactly five hours. The views are breathtaking; in spite of the heat there is no haze, and Ben Nevis can be seen clearly to the north.

After a brief chat with a couple of fellow hillgoers I decide to descend the Cuil Ghlas spur, where slabby granite promises fun.

The heat is beginning to get to me, and I can't wait to take a dip in the river below. The high tussocky grass that abounds on these hills makes the descent from the ridge to Allt Coire Chaorach awkward, and the stream is not deep enough for a dip, but I at least get some fresh water and soak my cap. Not much farther down the river bed becomes a long section of granite slab; the water level is low, and so a lot of it is dry to run on.

As an unexpected bonus, at the top of the slabby section is a beautiful pool: waist deep, with a smooth granite bottom, and even a set of steps in. I sit in for a while cooling down, then, refreshed, jog down the slabs. When they run out, I stay in the riverbed; hopping from boulder to boulder is a lot more fun than the grassy bank, even if it's not any faster.

The floor of the glen is boggy and covered in stumps of ancient Caledonian pine; on a day like this, it is hard not to pine for their shade. There is some new birch planted on the opposite side of the glen; perhaps one day there will be more.

34km / 2,300m ascent / 8h

by tf at July 26, 2017 05:41 PM

July 10, 2017

Tomas Frydrych

Eastern Mamores and the Grey Corries

The Mamores offer some exceptionally good running. The landscape is stunning, the natural lines are first rate, and the surface is generally runner-friendly. The famed (and now even raced) Ring of Steall provides an obvious half day outing, but I dare say the Mamores have a lot more to offer! On the western end it is well worth venturing all the way to Meall a'Chaorain for the remarkable change in geology and the unique views of Ben Nevis, but it is the dramatic 'loch and mountain' type of scenery (of a quality rare this far south) of the eastern end that is the Mamores' true crown jewel.

I have two days to play with, and rather than spending them both in the Mamores, I decide to combine the eastern Mamores with the Grey Corries that frame the other side of Glen Nevis. The forecast is not entirely ideal: it's looking reasonable for Saturday, but the forecasters cannot agree on Sunday, most predicting heavy rain and some no rain at all. Unfortunately, the eastern Mamores are subject to stalking, and this is probably my last chance of the summer to visit them -- I decide (as you do) to bet on the optimistic forecast.

Day 1 -- Eastern Mamores (20km, 2,200m ascent, 6h)

As I set off from Kinlochleven, it's looking promising. It is not raining, I can see all the way down Loch Leven, and (at the same time!) the tops of the hills above me. The sun even breaks through occasionally, reminding me I neither put on, nor brought with me, suntan lotion. I follow the path up Coire na Ba, enjoying the views. My solitude is interrupted by a single mountain hare. This is red deer country, and after recent trips to Assynt and the Cairngorms, it is impossible not to notice the relative paucity of life in these hills.

As I climb higher, the wind starts picking up, but it is not cold, and once I put on gloves I am comfortable enough in shorts and a shirt. The two summits of Na Gruagaichean are busy, the views braw. I carry on toward Binnein Mor, enjoying the cracking ridge line.

At Binnein Mor summit the well-defined path comes to an end, the Munroists venture no further. The short section beyond is the preserve of independent minds (and the Ramsay Round challengers). I take a faint path descending precariously directly off the summit, but this turns out to be a mistake. It is steep, there is poor grip, and slipping is not an option. I expect a better (and much faster) way would have been to take the north ridge, then cut down into the coire.

I end up traversing out of the precarious ground into the coire. There is another runner coming up the other way. We stop for a brief chat; he was planning to attempt the Ramsay Round solo this weekend, but decided to postpone in view of the overnight forecast. I have a lot of time for anyone taking on Ramsay solo, that is one very serious undertaking; I hope the guy gets his weather window soon.

I stop at the two small lochans to get some water and to have a couple of oatcakes and a chunk of Comte for lunch (my staple long run food), then carry on past the bigger lochan up Binnein Beg. The wind has picked up considerably, and near the summit it is probably exceeding 40mph. I do not linger.

The gently sloping old stalkers' path leading eventually into Coire an Lochain is delightful, my eyes feasting on the scenery on offer -- I cannot imagine anywhere else I would rather be just now. I follow the narrow track down to Allt Coire a'Bhinnein.

The large coire formed by the two Binneins and the two Sgurrs is a classic example of glacial landscape. The retreating glacier left behind huge moraine deposits, forming spectacular steep ridges and dykes, the true nature of which is exposed at the couple of places where the vegetation has eroded, and the very ancient record of history is laid bare for all to see. The gravelly river itself too reminds me more of the untidy watercourses of the Alps than your typical Scottish mountain stream.

Rather than following the zigzag path, I take one of the moraine ridges into Coire an Lochain, then head up Sgurr Eilde Mor. The geology changes, the red tones of the hill due to a significant presence of red granite. I find a small hollow sheltered from the worst of the wind and have another couple of oatcakes and Comte, then brace myself for the wind on the summit.

The gently sloping north east ridge of Sgurr Eilde Mor again lies outwith the Munroist lands, and there is no path on it to speak of. It's a glorious afternoon, the sun is out, the sky is blue, and as I descend the sodden and slimy slopes of Meall Doire na h-Achlais toward the river, I spot a couple of walkers sunning themselves on the beach below near the river junction. We say hello as I wade across and follow the watercourse up north.

It's time to look for a suitable campsite. My original plan was to camp a bit further on in the bealach between Meall a'Bhurich and Stob Ban, but it's far too windy for a high level camp. The opportunities down here are limited, the ground is saturated with water and the grass is very tussocky, but I find a good spot on a large flat sandy deposit inside the crook of the river. It's about 18 inches above the river level, and from the vegetation it would appear it does not flood too often, perhaps only during the snow melt.

The rain arrives not much later, and I have an early night. After getting an updated forecast on the InReach SE (heavy rain throughout the night and tomorrow), I make a slight revision to tomorrow's plans (deciding to leave out the two Sgurr Choinnichs), then listen to Michael Palin narrating Ian Rankin's Knots and Crosses; the Irish-sounding fake Scottish accents are mildly amusing, and I wonder why they could not get a native speaker to do the reading, but the story is too good to be spoilt by that.

It rains steadily all night and the sound of the river gradually changes. I eventually poke my head out to check on the water level, but there are no signs of it spilling out yet, and so I continue to sleep soundly till half seven.

Day 2 -- Grey Corries (27km, 1,750m ascent, 8h)

I start the day by knocking over the first pot of boiling water, but eventually get my coffee and porridge. The river rose by about six inches during the night. The rain has turned into light drizzle, cloud base is at no more than 600m. It is completely still, and as I faff about the midges are spurring me on.

I soon head into the clouds, up Meall a'Bhuirich, where I come across a family of ptarmigan -- apart from the numerous frogs, the first real wildlife since the hare I saw yesterday morning. The chicks are quite grown up by now, and they seem unfazed by my presence. At the summit of the roundish hill I have to get the compass out; the visibility is no more than 30 yards and it would be easy to descend in a completely wrong direction.

More ptarmigan on Stob Ban, but no views from the summit. I was planning to descend directly into the bealach above Coire Rath, but this side of the hill is very steep and in the poor visibility I am unable to judge if this is, in fact, possible. I decide to take the well defined path heading east instead, and then traverse the north side of the hill at around the 800m contour line.

This proves to be a good choice and I even pick up a faint, runnable, path traversing at around 780m; perhaps an old stalkers' path. The cloud has lifted a bit and I am now just around its bottom edge; the views down into the coire are absolutely magical. While there is much to be said for sunshine, I have found over the years that some of the most precious moments in the hills come when there is none.

I stop at the wee lochan for a bite to eat, and to have a drink, as this is the last water source until after the Grey Corries.

The Grey Corries quartzite ridge line is very fine and dramatic. The low cloud makes it even more so. While it would be hard to get lost here, I realise this is a great opportunity to practice some micro navigation, and so carefully track my progress along the ridge. The running is hard and very technical, and the wet quartzite is rather slippery. I make a mental note that if I ever decide to attempt the Ramsay solo, I should do so in a clockwise direction, so as to get the most serious and committing part of it out of the way on fresh legs (sod the aesthetics of finishing with the Ben!).

My plan is to go as far as the bealach before Sgurr Choinnich More and then descend through Coire Easain into Glen Nevis. The final part of the Stob Coire Easain south west ridge proves quite tricky. Pure, white, blocky quartzite abounds, and in the present conditions it is as slippery as a Zamboni-smoothed ice rink -- I am taking my time not to inadvertently reach the coire head first. An unkindness of six or so ravens is circling noisily around me, and somehow in these eerie conditions, that group name seems more than appropriate.

Near the very end of the ridge, the quartzite seems to turn pink, almost red. On close inspection, the colour is due to a fine layer of lichen (my uninformed guess is Belonia nidarosiensis, but I could be wrong, not least since the alternative name Clathroporina calcarea suggests a somewhat different habitat). Under one of these large blocks there is a tiny spruce seedling, trying to carve a life for itself. I wonder where it came from, perhaps the seed arrived on the wind, perhaps attached to the sole of someone's boots; either way, it would have come some distance. I wish it good luck, for it will need it.

A quick stop for water and the last of my oatcakes and cheese, and then down Coire Easain. Up close and personal this turns out to be a real gem of a place. Whereas from above it has the appearance of a simple bowl, it is in fact a series of wet, bright green, terraces (sphagnum moss and frogs abound), with a myriad of small pools from which numerous quickly growing streams start their life, eventually joining up into the mighty Allt Coire Easain. There are a couple of large quartzite escarpments, the lower of which is clearly visible when looking up from Glen Nevis. With a little care, wet feet notwithstanding, this can be descended through the grassy break next to the east bank of the main stream.

I cross the stream, and head south on a gently descending traverse, more or less aiming for Tom an Eite. The ground runs well, until it becomes more tussocky near the floor of the glen. I negotiate the edge of the peat hags by Tom an Eite, cross the beginnings of the Water of Nevis and start following Allt Coire a'Bhinnein along its western bank.

Initially, before a faint deer track appears, this is a bit of a trot through bog grass and peat hags strewn with numerous stumps of ancient Caledonian pine. I try to picture what it would have looked like before man came with an axe and a saw. It would have been quite a different, better, more vibrant, landscape. I am, inevitably, thinking of the previous weekend with Linda, the lovely running among the pines in upper Glen Quoich, and the encouraging progress of regeneration on Mar Lodge Estate. Perhaps one day we will see the pines returning here as well.

Nevertheless, this is yet another gem of a landscape. Midway up into the coire, for two, maybe three hundred yards, the river, at this point at least five yards wide already, is squeezed into a channel between two vertical plates of quartzite only a couple of feet apart. The effect is dramatic; I am peeking over the edge with a degree of unease, falling in would not bode well. I spot a ringed plover, and a short while later a ring ouzel -- it's good to see some other life than the red deer which abounds.

Just above this constriction in the river, Allt a' Gharbh Coire comes down on the right in a spectacular waterfall. I am taken aback by the sight, thinking this is, by far, the most dramatic waterfall I have seen in Scotland -- it is only a few weeks since I watched the famous Eas a' Chual Aluinn from the summit of the Stack of Glencoul, but the waterfall I am looking at just now is in a different league. (I am so gobsmacked by the sight it never occurs to me to take a photograph!)

Soon I am climbing up to Coire an Lochain for the second time this trip -- this time I take the zigzags, my legs beginning to feel tired. But soon I am on the path that skirts Sgorr Eilde Beag, which provides a most enjoyable run back down to Kinlochleven. As I come around the 'corner' formed by the south ridge, I bump into a group of youngsters heading up to the coire on a (DoE?) expedition.

'Does it get flatter soon?!'

They are the first people I have seen, never mind met, since yesterday afternoon -- on reflection there are benefits to weather less than perfect after all!

by tf at July 10, 2017 09:13 PM

June 30, 2017

Chris Lord

Hello world!

Welcome to WordPress. This is your first post. Edit or delete it, then start writing!

by admin at June 30, 2017 12:02 AM

June 29, 2017

Tomas Frydrych

The Debt of Magic

My gran married young, and was widowed young, my current age. I have very few regrets in life, but not getting to know grandpa is one of them. He was a great lover of nature, a working man with little spare time, escaping into the woods with binoculars and a camera whenever he could. A passion borne out by the countless strips of film he left behind. As I am getting older I too am drawn into the woods, increasingly not for 'adventure', but for the tranquility and the sense of awe it invariably brings. I sense we were kindred spirits, but I can only imagine: he died before my third birthday and I have no memories of him at all.

That in itself I find strange, for I have some very early memories. Hiding a hairbrush in the oven, not older than two, the commotion as it was 'found' (providing extra aroma for the Sunday roast). Sitting on a potty on the balcony of our new flat, not yet three, scaffolding all around, watching a cheery brickie in a red and black striped shirt at work (gone now, a scaffolding collapse some years later). Only just turned three, standing in front of the maternity hospital with my dad, waving, my sister just born.

These are all genuine enough, but at the same time mere fragments, without context and continuity. My first real memories come from the summer after my fifth birthday: it's August and my gran and I are going on a holiday in the Krkonoše mountains. A train, a bus, then a two hour hike to Petrova Bouda mountain refuge, our home for the next two weeks, wandering the hills, armed with a penknife and walking sticks gran fashioned out of some dead wood.

For me this was the time of many firsts. I learned my first (and only) German, 'Nein, das ist meine!', as my gran ripped our sticks out of the hands of a lederhosen-wearing laddie a few years older than me; he made the mistake of laying claim to them while we were browsing the inside of a souvenir shop (I often think somewhere in Germany is a middle-aged man still having night terrors). It was the only time I ever heard my gran speaking German; she was fluent, but marked by the War (just as I am marked by my own history).

Up there in the hills I had my first encounter with the police state, squatting among the blaeberries, attending to sudden and necessary business, unfortunately, in the search for a modicum of privacy, on the wrong side of the border. The man in uniform, in spite of sporting a Scorpion submachine gun (the image of the large leather holster forever seared into my memory), stood no chance, and retreated hastily as gran rushed to the rescue. She was a formidable woman, and I was her oldest grandchild. Funnily enough, I was to have a similar experience a decade later in the Tatras, bivvying on the wrong side of the border only to wake up staring up the barrel of an AK-47, but that was all in the future then, though perhaps that future was already being shaped, little by little.

It was also the first time I drank from a mountain stream. Crystal clear water springing out of a miniature cave surrounded by bright green moss, right by the side of the path. Not a well known beauty spot sought after by many, but a barely noticeable trickle of water on the way to somewhere 'more memorable'. We sat there having our lunch. Gran hollowed out the end bit of a bread stick to make a cup and told me a story about elves coming to drink there at night. We passed that spring several times during those days, and I was always hoping to catch a glimpse of that magical world. I still do, perhaps now more than ever.

Gran's ventures into nature were unpretentious and uncomplicated: she came, usually on her old bicycle, she ate her sandwiches, and she saw. And I mean, really saw. Not just the superficially obvious, but the intricate interconnections of life, the true magic. One of her favourite places was a disused sand pit in a pine wood a few miles from where she lived. We spent many a summer day there picking cranberries and mushrooms, and then, while eating our pieces, watched the bees drinking from a tiny pool in the sand. Years later, newly married, I took Linda there, and I recall how, for the first time, I was struck by the sheer ordinariness of the place. Where did the magic go?

The magic is in the eye of the beholder. It is always there, it requires no superhuman abilities, no heroic deeds, no overpriced equipment. But seeing is an art, and a choice. It takes time to develop, and determination to practise. Gran was a seer, and she set me on the path of becoming one; I am, finally, beginning to make some progress.

In the years to come there were to be many more mountain streams. Times when the magic once only imagined by my younger self became real, tangible, perhaps even character-building. Times more intense, more memorable in the moment, piling on like cards, each on top of the other, leaving just a corner here and a corner there to be glimpsed beneath. The present consuming the past with the inevitability we call growing up.

Yet, every so often it is worth pausing to browse through that card deck. There are moments when we stand at crossroads we are not seeing, embarking on a path that only comes into focus as time passes. Like a faint trace of a track on a hillside, hard to see up close but clearly visible from afar in the afternoon light, I can see now that my passion for the hills and my lifelong quest for the magic go back to those two weeks of a summer long gone, in a place I have long since stopped calling home.

Gran passed away earlier this year. Among her papers was an A6 card, a hiking log from those two summer weeks. A laconic record of the first 64km of a lifelong journey she took me on; an IOU that shall remain outstanding.

by tf at June 29, 2017 03:27 PM

June 28, 2017

Chris Lord

Goodbye Mozilla

Today is effectively my last day at Mozilla, before I start at Impossible on Monday. I’ve been here for 6 years and a bit and it’s been quite an experience. I think it’s worth reflecting on, so here we go. Fair warning: if you have no interest in me or Mozilla, this is going to make pretty boring reading.

I started on June 6th 2011, several months before the (then new, since moved) London office opened. Although my skills lay (lie?) in user interface implementation, I was hired mainly for my graphics and systems knowledge. Mozilla was in the region of 500 or so employees then I think, and it was an interesting time. I’d been working on the code-base for several years prior at Intel, on a headless backend that we used to build a Clutter-based browser for Moblin netbooks. I wasn’t completely unfamiliar with the code-base, but it still took a long time to get to grips with. We’re talking several million lines of code with several years of legacy, in a language I still consider myself to be pretty novice at (C++).

I started on the mobile platform team, and I would consider this to be my most enjoyable time at the company. The mobile platform team was a multi-discipline team that did general low-level platform work for the mobile (Android and Meego) browser. When we started, the browser was based on XUL and was multi-process. Mobile was often the breeding ground for new technologies that would later go on to desktop. It wasn’t long before we started developing a new browser based on a native Android UI, removing XUL and relegating Gecko to page rendering. At the time this felt like a disappointing move. The reason the XUL-based browser wasn’t quite satisfactory was mainly due to performance issues, and as a platform guy, I wanted to see those issues fixed, rather than worked around. In retrospect, this was absolutely the right decision and led to what I’d still consider to be one of Android’s best browsers.

Despite performance issues being one of the major driving forces for making this move, we did a lot of platform work at the time too. As well as being multi-process, the XUL browser had a compositor system for rendering the page, but this wasn’t easily portable. We ended up rewriting this, first almost entirely in Java (which was interesting), then with the rendering part of the compositor in native code. The input handling remained in Java for several years (pretty much until FirefoxOS, where we rewrote that part in native code, then later, switched Android over).

Most of my work during this period was based around improving performance (both perceived and real) and fluidity of the browser. Benoit Girard had written an excellent tiled rendering framework that I polished and got working with mobile. On top of that, I worked on progressive rendering and low precision rendering, which combined are probably the largest body of original work I’ve contributed to the Mozilla code-base. Neither of them are really active in the code-base at the moment, which shows how good a job I didn’t do maintaining them, I suppose.

Although most of my work was graphics-focused on the platform team, I also got to do some layout work. I worked on some over-invalidation issues before Matt Woodrow’s DLBI work landed (which nullified that, but I think that work existed in at least one release). I also worked a lot on fixed position elements staying fixed to the correct positions during scrolling and zooming, another piece of work I was quite proud of (and probably my second-biggest contribution). There was also the opportunity for some UI work, when it intersected with platform. I implemented Firefox for Android’s dynamic toolbar, and made sure it interacted well with fixed position elements (some of this work has unfortunately been undone with the move from the partially Java-based input manager to the native one). During this period, I was also regularly attending and presenting at FOSDEM.

I would consider my time on the mobile platform team a pretty happy and productive time. Unfortunately for me, those of us with graphics specialities on the mobile platform team were taken off that team and put on the graphics team. I think this was the start of a steady decline in my engagement with the company. At the time this move was made, Mozilla was apparently trying to consolidate teams around products, and this was the exact opposite happening. The move was never really explained to me and I know I wasn’t the only one that wasn’t happy about it. The graphics team was very different to the mobile platform team and I didn’t feel I fitted in as well. It felt more boisterous and less democratic than the mobile platform team, and as someone that generally shies away from arguments and just wants to get work done, it was hard not to feel sidelined slightly. I was also quite disappointed that people didn’t seem particularly familiar with the graphics work I had already been doing and that I was tasked, at least initially, with working on some very different (and very boring) desktop Linux work, rather than my speciality of mobile.

I think my time on the graphics team was pretty unproductive, with the exception of the work I did on b2g, improving tiled rendering and getting graphics memory-mapped tiles working. This was particularly hard as the interface was basically undocumented, and its implementation details could vary wildly depending on the graphics driver. Though I made a huge contribution to this work, you won’t see me credited in the tree unfortunately. I’m still a little bit sore about that. It wasn’t long after this that I requested to move to the FirefoxOS systems front-end team. I’d been doing some work there already and I’d long wanted to go back to doing UI. It felt like I either needed a dramatic change or I needed to leave. I’m glad I didn’t leave at this point.

Working on FirefoxOS was a blast. We had lots of new, very talented people, a clear and worthwhile mission, and a new code-base to work with. I worked mainly on the home-screen, first with performance improvements, then with added features (app-grouping being the major one), then with a hugely controversial and probably mismanaged (on my part, not my manager’s; my manager was excellent) rewrite. The rewrite was good and fixed many of the performance problems of what it was replacing, but unfortunately also removed features, at least initially. Turns out people really liked the app-grouping feature.

I really enjoyed my time working on FirefoxOS, and getting a nice clean break from platform work, but it was always bitter-sweet. Everyone working on the project was very enthusiastic to see it through and do a good job, but it never felt like upper management’s focus was in the correct place. We spent far too much time kowtowing to the desires of phone carriers and trying to copy Android and not nearly enough time on basic features and polish. Up until around v2.0 and maybe even 2.2, the experience of using FirefoxOS was very rough. Unfortunately, as soon as it started to show some promise and as soon as we had freedom from carriers to actually do what we set out to do in the first place, the project was cancelled, in favour of the whole Connected Devices IoT debacle.

If there was anything that killed morale for me more than my unfortunate time on the graphics team, and more than having FirefoxOS prematurely cancelled, it would have to be the Connected Devices experience. I appreciate it as an opportunity to work on random semi-interesting things for a year or so, and to get some entrepreneurship training, but the mismanagement of that whole situation was pretty epic. To take a group of hundreds of UI-focused engineers and tell them that, with very little help, they should organise themselves into small teams and create IoT products still strikes me as an idea so crazy that it definitely won’t work. Certainly not the way we did it anyway. The idea, I think, was that we’d be running several internal start-ups and we’d hopefully get some marketable products out of it. What business a not-for-profit company, based primarily on doing open-source, web-based engineering, has making physical, commercial products is questionable, but it failed long before that could be considered.

The process involved coming up with an idea, presenting it and getting approval to run with it. You would then repeat this approval process at various stages during development. It was, however, very hard to get approval for enough resources (both time and people) to finesse an idea long enough to make it obviously a good or bad idea. That aside, I found it very demoralising to not have the opportunity to write code that people could use. I did manage it a few times, in spite of what was happening, but none of this work I would consider myself particularly proud of. Lots of very talented people left during this period, and then at the end of it, everyone else was laid off. Not a good time.

Luckily for me and the team I was on, we were moved under the umbrella of Emerging Technologies before the lay-offs happened, and this also allowed us to refocus away from trying to make an under-featured and pointless shopping-list assistant and back onto the underlying speech-recognition technology. This brings us almost to present day now.

The DeepSpeech speech recognition project is an extremely worthwhile project, with a clear mission, great promise and interesting underlying technology. So why would I leave? Well, I’ve practically ended up on this team by a series of accidents and random happenstance. It’s been very interesting so far, I’ve learnt a lot and I think I’ve made a reasonable contribution to the code-base. I also rewrote python_speech_features in C for a pretty large performance boost, which I’m pretty pleased with. But at the end of the day, it doesn’t feel like this team will miss me. I too often spend my time finding work to do, and to be honest, I’m just not interested enough in the subject matter to make that work long-term. Most of my time on this project has been spent pushing to open it up and make it more transparent to people outside of the company. I’ve added model exporting, better default behaviour, a client library, a native client, Python bindings (+ example client) and most recently, Node.js bindings (+ example client). We’re starting to get noticed and starting to get external contributions, but I worry that we still aren’t transparent enough and still aren’t truly treating this as the open-source project it is and should be. I hope the team can push further towards this direction without me. I think it’ll be one to watch.

Next week, I start working at a new job doing a new thing. It’s odd to say goodbye to Mozilla after 6 years. It’s not easy, but many of my peers and colleagues have already made the jump, so it feels like the right time. One of the big reasons I’m moving, and moving to Impossible specifically, is that I want to get back to doing impressive work again. This is the largest regret I have about my time at Mozilla. I used to blog regularly when I worked at OpenedHand and Intel, because I was excited about the work we were doing and I thought it was impressive. This wasn’t just youthful exuberance (he says, realising how ridiculous that sounds at 32), I still consider much of the work we did to be impressive, even now. I want to be doing things like that again, and it feels like Impossible is a great opportunity to make that happen. Wish me luck!

by Chris Lord at June 28, 2017 11:16 AM

June 20, 2017

Ross Burton

Identifying concurrent tasks in Bitbake logs

One fun problem in massively parallel OpenEmbedded builds is when tasks have bad dependencies or just bugs, and you end up with failures due to races on disk.

One example of this happened last week when an integration branch was being tested and one of the builds failed with the tar error "file changed as we read it" whilst it was generating the images. This means that the root filesystem was being altered whilst tar was reading it, so we've a parallelism problem. There's only a limited number of tasks that could be having this effect here, so searching the log isn't too difficult, but as they say: why do something by hand when you can write a script to do it for you?

findfails is a script that will parse a Bitbake log and maintain the set of currently active tasks, so when it finds a task that fails it can tell you what other tasks are also running:

$ findfails log
Task core-image-sato-dev-1.0-r0:do_image_tar failed
Active tasks are:
 core-image-sato-sdk-ptest-1.0-r0:do_rootfs
 core-image-sato-dev-1.0-r0:do_image_wic
 core-image-sato-dev-1.0-r0:do_image_jffs2
 core-image-sato-dev-1.0-r0:do_image_tar
 core-image-sato-sdk-1.0-r0:do_rootfs
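
A stripped-down sketch of the idea (not the real findfails, and assuming knotty-style "NOTE: recipe <name>: task <task>: Started/Succeeded/Failed" lines in the log -- adjust the regular expression to match whatever your log actually contains) looks something like this:

#!/usr/bin/env python3
# Minimal findfails-style sketch: track Started/Succeeded/Failed task
# lines and print the set of active tasks when a failure is seen.
import re
import sys

TASK_RE = re.compile(r"recipe (\S+): task (\S+): (Started|Succeeded|Failed)")

active = set()
with open(sys.argv[1]) as log:
    for line in log:
        match = TASK_RE.search(line)
        if not match:
            continue
        task = "%s:%s" % (match.group(1), match.group(2))
        state = match.group(3)
        if state == "Started":
            active.add(task)
        elif state == "Succeeded":
            active.discard(task)
        else:
            print("Task %s failed" % task)
            print("Active tasks are:")
            for name in sorted(active):
                print(" " + name)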

We knew that there were changes to do_image_wic in that branch, so it was easy to identify and drop the patch that was incorrectly writing to the rootfs source directory. Sorted!

by Ross Burton at June 20, 2017 02:24 PM

June 13, 2017

Ross Burton

Dynamic source checksums in OpenEmbedded

Today we were cleaning up some old bugs in the Yocto Project bugzilla and came across a bug which was asking for the ability to specify a remote URL for the source tarball checksums (SRC_URI[md5sum] and/or SRC_URI[sha256sum]). We require a checksum for tarballs for two reasons:

  1. Download integrity. We want to be sure that the download wasn't corrupted in some way, such as truncation or bad encoding.
  2. Security. We want to be sure that the tarball hasn't changed over time, be it the maintainer regenerating the tarball for an old release but with different content (this happens more than you'd expect, with non-trivial changes too), or alternatively a malicious attack on the file which now contains malware (such as the Handbrake hack in May).

The rationale for reading remote URLs for checksums was that for files that are changing frequently it would be easier to upgrade the recipe if the checksums didn't need to be altered too. For some situations I can see this argument, but I don't want to encourage practices that nullify the security checksums. For this reason I rejected the bug but thanks to the power of Bitbake I did provide a working example of how to do this in your recipe.

The trick is to observe that the only time the SRC_URI[md5sum] is read is during do_fetch. By adding a new function to do_fetch[prefuncs] (the list of functions that will be executed before do_fetch is executed) we can download the checksums and write the variable just before the fetcher needs it. Here is a partial example that works for GNOME-style checksums, where each upload generates foo-1.2.tar.bz2, foo-1.2.tar.xz, foo-1.2.sha256sum, and foo-1.2.md5sum. To keep it interesting the checksum files contain the sums for both compression types, so we need to iterate through the file to find the right line:

SRC_URI = "https://download.gnome.org/sources/glib/2.52/glib-2.52.2.tar.xz"
SHASUM_URI = "https://download.gnome.org/sources/glib/2.52/glib-2.52.2.sha256sum"

do_fetch[prefuncs] += "fetch_checksums"
python fetch_checksums() {
    import urllib.request

    # The GNOME-style .sha256sum file lists sums for both compression
    # types, so find the line for the tarball we are actually fetching.
    for line in urllib.request.urlopen(d.getVar("SHASUM_URI")):
        (sha, filename) = line.decode("ascii").strip().split()
        if filename == "glib-2.52.2.tar.xz":
            # Write SRC_URI[sha256sum] just before do_fetch reads it
            d.setVarFlag("SRC_URI", "sha256sum", sha)
            return
    bb.error("Could not find remote checksum")
}

Note that as fetch_checksums is a pre-function for do_fetch it is only executed just before do_fetch and not at any other time, so this doesn't impose any delays on builds that don't need to fetch.

If I were taking this beyond a proof of concept and making it into a general-purpose class, there are a number of changes I would want to make:

  1. Use the proxies when calling urlopen()
  2. Extract the filename to search for from the SRC_URI
  3. Generate the checksum URL from the SRC_URI

I'll leave those as an exercise for the reader though. Patches welcome!
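
For illustration only, here is a rough, untested sketch of how those three points might be tackled. It assumes a GNOME-style layout where the checksum file sits next to the tarball and shares its basename, and it leans on bb.utils.export_proxies() for the proxy point -- treat it as a starting point rather than working class code:

python fetch_checksums() {
    import os.path
    import urllib.request

    # 1. Export http_proxy and friends from the datastore so urlopen()
    #    can use them (assumes bb.utils.export_proxies is available).
    bb.utils.export_proxies(d)

    # 2. Take the first SRC_URI entry and extract the tarball filename.
    tarball_uri = d.getVar("SRC_URI").split()[0].split(";")[0]
    tarball = os.path.basename(tarball_uri)

    # 3. Derive the checksum URL, assuming GNOME-style naming where
    #    foo-1.2.tar.xz is accompanied by foo-1.2.sha256sum.
    checksum_uri = tarball_uri.rsplit(".tar", 1)[0] + ".sha256sum"

    for line in urllib.request.urlopen(checksum_uri):
        (sha, filename) = line.decode("ascii").strip().split()
        if filename == tarball:
            d.setVarFlag("SRC_URI", "sha256sum", sha)
            return
    bb.error("Could not find remote checksum for %s" % tarball)
}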

by Ross Burton at June 13, 2017 01:43 PM

Tomas Frydrych

The Case for 'Make No Fire'

I agree with David Lintern that we (urgently) need a debate about the making of fires in our wild spaces, and I am grateful that he took the plunge and voiced that need. But while I think David's is, by far, the most sensible take on the matter among the advice dished out recently, I want to argue that we, the anonymous multitude of outdoor folk, need to go a step further and make the use of open fire in UK wild places socially unacceptable. Not making a fire is the only responsible option available to us. Not convinced? Here is my case.

There are three key issues that need to be addressed when it comes to the responsible use of fire: 1. the risk of starting a wildfire, 2. the immediate damage a controlled fire causes, and, 3. the paucity of fuel and the damage caused by foraging for it.

Wildfire risk

There are two main ways in which a controlled fire can start a wildfire: an underground burn and an overground spark. The former happens when a fire is located on top of material that is itself combustible. Such material can smoulder underground for days, and travel some distance before flaring up. The most obvious risk here comes from peat, which happens to be the second largest carbon store on the planet (after the Amazonian rain forest), i.e., it burns extremely well; much of it is in Scotland, and the majority of our wild places are covered in it -- it might seem obvious to some not to start a fire on peat, but I suspect many don't know what peat actually looks like, particularly when dry, or don't realise how ubiquitous it is.

Peat is not the only problem: underground roots can smoulder away for ages thanks to their high resin content, and are very hard to put out -- a couple of guys pissing into the fire pit is nowhere near enough. (I once spent over an hour extinguishing a fire someone had built on top of an old stump; it wouldn't stop sizzling in spite of copious amounts of water repeatedly poured onto it, and it scared the hell out of me.)

Then there is the flying spark igniting stuff outside your controlled fire pit. Open fire always generates sparks; even wood burning stoves do when the pot is not on. In the right conditions it takes a very tiny spark to get things going. Sparks can fly considerable distances, and might well jump any perceived safe buffer zone around your fire. The amount and size of sparks generated grows with the size of the fire; plus, the bigger the fire, the bigger the updraft and the less control you have over where your sparks land.

In practice, the risk of wildfire can be reduced, but is hard to eliminate. It's ultimately a numbers game. If the individual chance of unwittingly starting a wildfire from a small controlled fire is 1 in 1,000, then 1,000 people each making one fire will, on average, start one wildfire. Whatever the actual numbers, the growth in participation works against us. Let's be under no illusion: an outdoor culture that accepts fire in wild spaces as a part of the game will start wildfires. It's not a question of whether, just of how often. Is that something we are happy to accept as a price worth paying? How often is OK? Once a year, once a decade? Once a month?

Immediate Damage

Fire is the process of rapid release of energy, and that energy has to go somewhere; in our wild places it goes somewhere it should not, where it is not expected, and in doing so it effects an irreversible change. Fire kills critters in the soil. The rising and radiating heat damages vegetation in the vicinity (it takes surprisingly little heat to cause lasting damage to trees; I reckon irreversible damage happens out to at least three times the distance at which a human face can comfortably bear the radiation). Such damage is not necessarily immediately obvious, but it is there, and it adds up with repeated use. A single fire under the Inchriach pines might seem to do no harm, but tomorrow there's someone else's fire in its place. (Next time you pass Inchriach, look up directly above the ignominious fire pit and compare the two sides of the pine.)

There are other, more subtle issues. Ash is a fertiliser; it is also alkaline, affecting soil acidity. The repeated dumping of ash around a given locus will inevitably change that ecosystem, particularly if it's naturally nutrient poor and/or acidic. Dumping ashes into water suffers from the same problem. Individually, these might be minute, seemingly insignificant changes, but they are never isolated. We might feel like it, but we are not lonely travellers exploring vast swathes of wilderness previously untouched by human foot. I am but one of many, and increasingly more, passing through any given one of the UK's wild places. The numbers, again, work against us.

A lot of folk seem to think that if they dig a fire pit, then replace the turf the next day, they are 'leaving no trace' -- that's not no trace, that's a scar with a little superficial make-up applied to it. It does not work even on the cosmetic level; it might look good when you are leaving, but it doesn't last long. The digging damages the turf, particularly around the edges, as does the fire. The fire bakes the ground in the pit, making it hard for the replaced turf to repair its roots, and it will suffer partial or even complete dieback as a result. Even if the turf catches eventually, it takes weeks for the damaged border to repair -- the digging of single-use fire pits, particularly at places that are used repeatedly by many, is far more damaging than leaving a single, tidy fire ring to be reused. Oh, the sight of it offends you? The issues surrounding fire go a lot deeper than the cosmetics.

Paucity of fuel

In the UK we have (tragically) few trees. It takes a surprisingly large quantity of wood to feed even a small fire just to make a cup of coffee. It is possible to argue that the use of stoves, gas or otherwise, also has a considerable environmental impact, just less obvious and less localised, and that the burning of local wood is more environmentally friendly. It's a good argument, worth reflecting upon, but it only works for small numbers; it doesn't scale. Once participation gets to a certain level, wood burns a lot quicker than it grows, and in the UK we have long crossed that line.

There are many of us heading into the same locations, and it is always possible to spot straight away the places where people make fires by how denuded of dead wood they are (been to a bothy recently?). This is not merely cosmetic: the removal of dead wood reduces biodiversity. Fewer critters on the floor mean fewer birds in the trees, and so on. Our 'wild' places suffer from a lack of biodiversity as it is; no change for the worse is insignificant. If you have not brought the fuel with you, your fire is not locally sustainable, it's as simple as that. If it's not locally sustainable, it has no place in our wild locations.

The Fair Share

It comes down to the numbers. As more of us head 'out there', the chances of us collectively starting a wildfire grow, as does the damage we cause locally by having our fires. We can't beat the odds; indeed, as this spring has shown, we are not beating them. There is only one course of action left to us, and that's to completely abstain from open fires in our wild places. I use the word abstain deliberately. The making of fire is not a necessity, not even a convenience. It's about a brief individual gratification that comes at a considerable collective price in the long run.

As our numbers grow we need the personal discipline not to claim more than our fair share of the limited and fragile resource our wild places are. The aspirations of 20 or 30 years ago are no longer enough; what once might have been acceptable no longer can be. We must move beyond individual definitions of impact and start thinking in combined, collective, terms -- sustainable behaviour is not one which individually leaves no obvious visual trace, but one which can be repeated over and over again by all without fundamentally changing the locus of our activity. I believe the concept of fair share is the key to a sustainable future. Any definition of responsible behaviour that does not consciously and deliberately take numbers into account is delusory. And fire doesn't scale.

by tf at June 13, 2017 10:05 AM

June 12, 2017

Tomas Frydrych

Eagle Rock and Ben More Assynt

The south ridge of Ben More Assynt has been on my mind for a while, ever since I laid eyes on it a few years back from the summit. It's a fine line. Today is perhaps not the ideal day for it, it's fairly windy and likely to rain for a bit, but at least for now the cloud base is, just, above the Conival summit. I dither over whether to take the waterproof jacket; it will definitely rain, but it's not looking very threatening just now, and it's not so cold. In the end common sense prevails and I add it to the bag, then set off from Inchnadamph up along the Traligill river.

The rain starts within a couple of minutes, and by the time I reach the footbridge below the caves it is sustained enough for the jacket to come out of the bag. In a moment of fortuitous foresight I also take the camera out of its shower resistant shoulder pouch and put it in a stuff sack, before I carry on, past the caves, following Allt a'Bhealaich.

This is familiar ground I keep returning to, fascinated by the stream which on the plateau above Cnoc nan Uamh runs mostly underground, yet leaves a clearly defined riverbed on the surface; a good running surface. Not far after the Cnoc there is a rather large sinkhole, new since the last time I was here. It's perhaps five meters in diameter, and about as deep, the grass around its edges beginning to subside further. I wonder where it leads, how big the underground space might be; I would love to have seen it forming.

Another strange thing, strewn along the grassy river bed are large balls of peat, about the size of a medicine ball, and really quite round. I don't remember seeing these before either, and wonder how they formed and where they came from, presumably they were shaped, and brought down, by a torrent of water from higher up; the dry riverbed is a witness to significant amounts of water at least occasionally running through here on the surface.

As Allt a' Bhealaich turns south east into the steep-sided re-entrant below Bealach Trallgil, I start climbing up the eastern slopes of Conival to pick up the faint path that passes through the bealach, watching the cloud oozing out of it. The wind has picked up considerably as it channels through the narrow gap between Conival and Braebag; it doesn't bode well for the high ground above.

The bealach provides an entry into a large round natural cauldron, River Oykel the only break in its walls. On a good clear day its circumnavigation would provide a fine outing. Today it's filled with dense cloud, there is no chance of even catching sight of the dramatic cliffs of Breabag, though I can see briefly that the Conival summit is above the clouds, and I wonder if perhaps I might be lucky enough to climb above them later.

The rain hasn't let up, and as I descend toward Dubh Loch Mor I chastise myself (not for the first time) for not reproofing my jacket. But there is no time to dwell on such trivialities as being wet. The southwest bank of the loch is made of curious dunes, high and rounded just like sand dunes, but covered in short grass. I have seen nothing like this before in Scotland's hills. I have an inkling: this entire cauldron shows classic signs of glaciation, and I expect under the thin layer of peat and vegetation of these dunes is the moraine the retreating glacier left behind.

I reset my altimeter, there is some navigation to be done to reach the bealach north of Eagle Rock, and I am about to enter the cloud. As I climb, the visibility quickly drops to about fifteen yards. Suddenly I catch sight of a white rump, then another. Thinking these are sheep I carry on ... about ten yards upwind from me is a herd of a dozen or so deer, all facing away from me. I am spotted after a few seconds and we all stand very still looking at each other for what seems like ages. I have the distinct sense I am being studied as a strange curiosity, if not being outright mocked for being out in this weather.

I break the stalemate carrying on up the hill to the 600m line, then start contouring. The weather is truly miserable now, the wind has picked up some more, and I wish I had brought the Buffalo gloves; they are made for days like these. The compass and map come out so I can track my progress on the traverse until the slope aspect reaches the 140 degrees I am looking for. I consider giving Eagle Rock a miss today, but decide to man up.

Perhaps it's the name, but I am rather surprised, dare I say disappointed, by the tame character of this hill. It can be fairly accurately described as a rounded heap of coarse aggregate, with little soil and some vegetation filling up the cracks; I suspect it's a bigger brother of the smaller dunes below, a large moraine deposit from a long time ago. It is quite unpleasant to run on in the fell shoes: there is no give in it, and on every step multiple sharp edges make themselves felt through the soles.

I reach the trig point, take a back bearing (if anything, the visibility is even worse) and start heading down. Suddenly a female ptarmigan shoots out from a field of slightly bigger stones just to the side of me, and starts running tight circles around me, on no more than a three foot radius. The photographer in me has a brief urge to get the camera out, but it seems unfair. I admire her pluckiness -- no regard for her own safety; she repeatedly tries to sidestep me and launch herself at me from behind, and I expect, had I let her, I'd have been in for some proper pecking. But she will not take me head on, for which I am grateful as we dance together.

I have no idea in which way to retreat, for I haven't caught sight of her young; I assume they are somewhere to my right where she came from, so I head away from there. She continues to circle around frantically for some twenty yards or so, and I begin to wonder whether I might in fact be heading toward her hatchlings. Then her tactic changes. She runs for about five yards ahead of me in the direction I am moving in, then crouches down watching until I get within three feet or so, then runs on another five yards, and so on. We travel this way for some three hundred yards, then she flies off some ten yards to the side; for the first time she stands upright, tall and proud, wings slightly stretched, watching me carry on down the hill. Her job is done. A small flock of golden plovers applaud; her textbook performance deserves nothing less.

This brief encounter made me forget all about the miserable weather and my cold hands, a moment like this well outweighs hours of discomfort, and is perhaps even unachievable without them. The rain has finally eased off, but it's clear now the whole ridge will be in this thick cloud.

The line is fine indeed, narrow, for prolonged sections a knife edge. Surprisingly, above 750m or so the wind is relatively light, nothing like in the cauldron below, which is just as well. On a different day this would be an exhilarating outing. But I am taken aback by how slippery the wet gneiss is, even in the normally so grippy fell shoes. Along the ridge are a number of tricky points; they are not scrambles in the full sense of the word, just short awkward steps and traverses that in the dry would present little difficulty, but are very exposed. Slipping on any of these is not just a question of getting hurt; a fall either side of the ridge would be measured in hundreds rather than tens of meters. I have done a fair amount of 'real' climbing in the past, but I am struggling to recall the last time I have felt this much out of my comfort zone. There are no escape routes, and after negotiating a couple of particularly awkward bits, I realise I am fully committed to having to reach Ben More.

I am glad when the loose quartzite eventually signals I am nearly there. Normally quite lethal in the wet, today it feels positively grippy compared to the gneiss back there. I still can't get my head around it. As I start descending from the summit, a person, in what from a distance looks like a bright orange onesie, emerges from the fog. I expect we are both surprised to meet anyone else on the hill. We exchange a few sentences; I mention how slippery the ridge is, thinking today I'd definitely not want to be on it in heavy boots.

I jog over Conival without stopping, down the usual tourist route. I was planning to head to Loch nan Cuaran, and descend from there, but am out of time; Linda will already be waiting for me at Inchnadamph. As I drop to 650m or so I finally emerge from the cloud into the sunlit glen below, shortly reaching the beautiful path in Gleann Dubh; I will it to go on longer. The rain is forgotten, my clothes are rapidly drying off; this is as perfect as life gets.

I stop briefly at the River Traligill; I am about to reenter that other, 'normal', world and my legs are covered in peat up to my thighs -- best not to frighten the tourists having a picnic in the carpark.

PS: I think it's high time we got rid of the term 'game birds', it befits neither us nor them.

26km / 1,700m ascent / 5h

by tf at June 12, 2017 10:47 AM

June 07, 2017

Tomas Frydrych

Assynt Ashes

Assynt Ashes

Today I walked through one of my favourite Assynt places, off the path well trodden, just me, birds, deer ... and ash from a recent wild fire. I couldn't but think of MacCaig's frogs and toads, always abundant around here, yet today conspicuous by their absence.

A flashback to earlier this year: I am just the other side of this little rise, watching a pair of soaring eagles, beyond the reach of my telephoto lens. A brief conversation with a passing local. I mention the delight of walking in the young birch woodland, the pleasure of seeing it burst into life after winter. He worries about it being destroyed by wild fire, had seen a few around here. I think him somewhat paranoid, I can't imagine it happening, not here.

Now I am weeping among the ashes. Over a tree, of which this landscape could bear many more, up in a rock face, years of carving out life away from human intrusion brought to an abrupt end, for what? Over this invasive, all destroying, parasitic species that we call human, that has long outlived its usefulness. Different tears, of sadness, of frustration.

I pity such emotional poverty that needs fire to find fulfilment in the midst of the wonders of nature. I curse those who encourage it, those who feed, and feed on, this neediness. The neo-romantic evangelists preaching Salvation through Adventure to electronic pews of awestruck followers. I loathe what 'adventure' has come to represent in recent years, the endless, selfie-powered quest for publicity, for likes, the k-tching sound likes make, the distortion of reality they inflict, the mutual ego stroking.

Out of the ashes, from a distance at least, life is slowly being reborn. Yet this is not the rebirth of a landscape that has adapted to being regularly swept by fire. On closer inspection, the new greenery is just couch grass and bracken, the latter rapidly colonising the space where heather once was. This is a landscape yet again reshaped by man, and yet again for the worse, not the better. For what? For the delusion of primeval 'authenticity' (carefully documented by a smartphone for the 'benefit' of those less authentic)?

Can we please stop looking into the pond to see how adventurous we are, and maybe, just once, look for what is there instead?

by tf at June 07, 2017 08:45 PM

June 04, 2017

Damien Lespiau

Building and using coverage-instrumented programs with Go


tl;dr We can create coverage-instrumented binaries, run them and aggregate the coverage data from running both the program and the unit tests.

In the Go world, unit testing is tightly integrated with the go tool chain. Write some unit tests, run go test and tell anyone that will listen that you really hope to never have to deal with a build system for the rest of your life.

Since Go 1.2 (Dec. 2013), go test has supported test coverage analysis: with the ‑cover option it will tell you how much of the code is being exercised by the unit tests.

So far, so good.

I've been wanting to do something slightly different for some time, though. Imagine you have a command line tool. I'd like to be able to run that tool with different options and inputs, check that everything is OK (using something like bats) and gather coverage data from those runs. Even better, wouldn't it be neat to merge the coverage from the unit tests with the coverage from those program runs and have an aggregated view of the code paths exercised by both kinds of testing?

A word about coverage in Go

Coverage instrumentation in Go is done by rewriting the source of an application. The cover tool inserts code to increment a counter at the start of each basic block, a different counter for each basic block of course. Some metadata is kept alongside each of the counters: the location of the basic block (source file, start/end line & columns) and the size of the basic block (number of statements).

This rewriting is done automatically by go test when coverage information has been requested by the user (run go test -x to see what's happening under the hood). go test then generates an instrumented test binary and runs it.

A more detailed explanation of the cover story can be found on the Go blog.

Another interesting thing is that it's possible to ask go test to write out a file containing the coverage information with the ‑coverprofile option. This file starts with the coverage mode, which is how the coverage counters are incremented. This is one of set, count or atomic (see blog post for details). The rest of the file is the list of basic blocks of the program with their metadata, one block per line:

github.com/clearcontainers/runtime/oci.go:241.29,244.9 3 4

This describes one piece of code from oci.go, composed of 3 statements without branches, starting at line 241, column 29 and finishing at line 244, column 9. This block has been reached 4 times during the execution of the test binary.
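
Putting the two together, a minimal profile consisting of just that one block would look like this, with the coverage mode recorded on the first line:

mode: count
github.com/clearcontainers/runtime/oci.go:241.29,244.9 3 4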

Generating coverage instrumented programs

Now, what I really want to do is to compile my program with the coverage instrumentation, not just the test binary. I also want to get the coverage data written to disk when the program finishes.

And that's when we have to start being creative.

We're going to use go test to generate that instrumented program. It's possible to define a custom TestMain function, an entry point of a kind, for the test package. TestMain is often used to set up the test environment before running the list of unit tests. We can hack it a bit to call our main function and jump to running our normal program instead of the tests! I ended up with something like this:
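
A rough sketch of that kind of TestMain (not the exact code; it simply treats any arguments left over after the -test.* flags as a signal to behave like the real program):

package main

import (
    "flag"
    "os"
    "testing"
)

// TestMain hijacks the test binary: given program arguments, it runs
// the real main() instead of the unit tests.
func TestMain(m *testing.M) {
    // Let the testing package parse its -test.* flags first...
    flag.Parse()

    if args := flag.Args(); len(args) > 0 {
        // ...then hide them from main()'s own argument handling
        // and behave like the real program.
        os.Args = append([]string{os.Args[0]}, args...)
        main()
        return
    }

    // No program arguments: run the unit tests as usual.
    os.Exit(m.Run())
}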


The current project I'm working on is called cc-runtime, an OCI runtime spawning virtual machines. It definitely deserves its own blog post, but for now, knowing the binary name is enough. Generating a coverage instrumented cc-runtime binary is just a matter of invoking go test:

$ go test -o cc-runtime -covermode count

I haven't used atomic as this binary is really a thin wrapper around a library and doesn't use many goroutines. I'm also assuming that the cost of using atomic operations in every branch is quite a bit higher than that of the non-atomic addition. I don't care too much if the counter is off by a bit, as long as it's strictly positive.

We can run this binary just as if it were built with go build, except it's really a test binary and we have access to the same command line arguments as we would otherwise. In particular, we can ask it to output the coverage profile.

$ ./cc-runtime -test.coverprofile=list.cov list
[ outputs the list of containers ]

And let's have a look at list.cov. Hang on... there's a problem, nothing was generated: we didn't get the usual "coverage: xx.x% of statements" at the end of a go test run, and there's no list.cov in the current directory. What's going on?

The testing package flushes the various profiles to disk after running all the tests. The problem is that we don't run any tests here, we just call main. Fortunately enough, the API to trigger a test run is semi-public: it's not covered by the go1 API guarantee and has "internal only" warnings. Not. Even. Scared. Hacking up a dummy test suite and running it is easy enough:


There is still one little detail left. We need to call this FlushProfiles function at the end of the program, and that program could very well be using os.Exit anywhere. I couldn't find anything better than having a tiny exit package implementing the equivalent of the libc atexit() function and forbidding direct use of os.Exit in favour of exit.Exit(). It's even testable.
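
A minimal sketch of such an exit package (the names here are illustrative, not necessarily the actual API):

package exit

import "os"

var handlers []func()

// Register queues f to be run when Exit is called.
func Register(f func()) {
    handlers = append(handlers, f)
}

// Exit runs the registered handlers in reverse order of registration,
// then terminates the process with the given status code.
func Exit(code int) {
    for i := len(handlers) - 1; i >= 0; i-- {
        handlers[i]()
    }
    os.Exit(code)
}

The program then calls exit.Exit() wherever it would normally call os.Exit, and the profile-flushing function is registered as one of the handlers, so coverage data gets written out even on error paths.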

Putting everything together

It's now time for a full example. I have a small calc program that can compute additions and subtractions.

$ calc add 4 8
12

The code isn't exactly challenging:
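
It boils down to something like this (a sketch rather than the exact code; the error messages mirror the sample runs further down, and the real example routes its error exits through the exit package described earlier):

package main

import (
    "fmt"
    "os"
    "strconv"
)

func add(a, b int) int { return a + b }
func sub(a, b int) int { return a - b }

func main() {
    // Plain os.Exit is used here to keep the sketch short; the real
    // example uses the exit package so coverage still gets flushed.
    args := os.Args[1:]
    if len(args) != 3 {
        fmt.Printf("expected 3 arguments, got %d\n", len(args))
        os.Exit(1)
    }

    a, _ := strconv.Atoi(args[1])
    b, _ := strconv.Atoi(args[2])

    switch args[0] {
    case "add":
        fmt.Println(add(a, b))
    case "sub":
        fmt.Println(sub(a, b))
    default:
        fmt.Printf("unknown operation: %s\n", args[0])
        os.Exit(1)
    }
}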


I've written some unit-tests for the add function only. We're going to run calc itself to cover the remaining statements. But first, let's see the unit tests code with both TestAdd and our hacked up TestMain function. I've swept the hacky bits away in a cover package.
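
In sketch form it looks roughly like this (the cover package's import path and exact API are assumptions; FlushProfiles is the helper discussed earlier):

package main

import (
    "flag"
    "os"
    "testing"

    // Import path assumed for illustration.
    "github.com/dlespiau/covertool/pkg/cover"
)

func TestAdd(t *testing.T) {
    if got := add(2, 3); got != 5 {
        t.Errorf("add(2, 3) = %d, want 5", got)
    }
}

// TestMain runs the unit tests when no program arguments are given,
// and otherwise runs calc itself, flushing the coverage profiles on
// the way out.
func TestMain(m *testing.M) {
    flag.Parse()

    if args := flag.Args(); len(args) > 0 {
        os.Args = append([]string{os.Args[0]}, args...)
        main()
        cover.FlushProfiles()
        return
    }

    os.Exit(m.Run())
}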


Let's run the unit-tests, asking to save a unit-tests.cov profile.

$ go test -covermode count -coverprofile unit-tests.cov
PASS
coverage: 7.1% of statements
ok github.com/dlespiau/covertool/examples/calc 0.003s

Huh. 7.1%. Well, we're only testing the 1 statement of the add function after all. It's time for the magic. Let's compile an instrumented calc:

$ go test -o calc -covermode count

And run calc a few times to exercise more code paths. For each run, we'll produce a coverage profile.

$ ./calc -test.coverprofile=sub.cov sub 1 2
-1
$ covertool report sub.cov
coverage: 57.1% of statements

$ ./calc -test.coverprofile=error1.cov foo
expected 3 arguments, got 1
$ covertool report error1.cov
coverage: 21.4% of statements

$ ./calc -test.coverprofile=error2.cov mul 3 4
unknown operation: mul
$ covertool report error2.cov
coverage: 50.0% of statements

We want to aggregate those profiles into one single super-profile. While there are some hints that people are interested in merging profiles from several runs (that commit is in go 1.8), the cover tool doesn't seem to support this kind of thing easily, so I wrote a little utility to do it: covertool.

$ covertool merge -o all.cov unit-tests.cov sub.cov error1.cov error2.cov

Unfortunately, I also discovered a bug in Go's cover tool, so we need covertool to tell us the coverage of the aggregated profile:

$ covertool report all.cov
coverage: 92.9% of statements

Not Bad!

Still not 100% though. Let's fire up the HTML coverage viewer to see what we are missing:

$ go tool cover -html=all.cov


Oh, indeed, we're missing 1 statement. We never call add from the command line so that switch case is never covered. Good. Seems like everything is working as intended.

Here be dragons

As fun as this is, it definitely feels like very few people are building this kind of instrumented binary. Everything is a bit rough around the edges. I may have missed something obvious, of course, but I'm sure the Internet will tell me if that's the case!

It'd be awesome if we could have something nicely integrated in the future.

by Damien Lespiau (noreply@blogger.com) at June 04, 2017 04:36 PM

May 29, 2017

Damien Lespiau

Testing for pending migrations in Django

DB migration support was added in Django 1.7, superseding South. More specifically, it's possible to automatically generate migration steps when one or more changes in the application models are detected. Definitely a nice feature!

I've written a small generic unit test that one should be able to drop into the tests directory of any Django project and that checks there are no pending migrations, i.e. that the models are correctly in sync with the migrations declared in the application. Handy to check that nobody has forgotten to git add a migration file, or that an innocent-looking change in models.py doesn't need a migration step generated. Enjoy!
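
A rough sketch of the same idea (not the actual snippet linked below, and assuming Django 1.10 or later, where makemigrations gained a --check flag) could look like this:

from django.core.management import call_command
from django.test import TestCase


class PendingMigrationsTests(TestCase):
    def test_no_pending_migrations(self):
        # makemigrations --check exits with a non-zero status (which
        # surfaces as SystemExit here) when the models are out of sync
        # with the declared migrations; --dry-run ensures nothing is
        # written to disk.
        call_command('makemigrations', '--check', '--dry-run', verbosity=0)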

See the code on djangosnippets or as a github gist!

by Damien Lespiau (noreply@blogger.com) at May 29, 2017 04:15 PM

May 28, 2017

Tomas Frydrych

Fraochaidh and Glen Creran Woods

Fraochaidh and Glen Creran Woods

The hills on the west side of Glen Creran will be particularly appreciated by those searching for some peace and quiet. None of them reach the magic 3,000ft mark, and so are of no interest to the Munroist, while the relatively small numbers of Corbettistas follow the advice of the SMC guidebook and approach their target from Ballachuilish. Yet, the lower part of Glen Creran, with its lovely deciduous woodland, deserves a visit, and the east ridge of Fraochaidh offers excellent running.

Start from the large carpark at the end of the public road (NN 0357 4886). From here, you have two options. The first is to follow the marked pine marten trail to its most westerly point (NN 0290 4867). From here a path leads off in a SW direction; take this to an old stone foot bridge over Eas an Diblidh (NN 0273 4846; marked on OS 25k map).

Alternatively, set off back along the road until it crosses Eas an Diblidh, then immediately pick up the path heading up the hill (see the 25k map) to the aforementioned bridge; this is my preferred option, the surrounding woodland is beautiful, and the Eas Diblidh stream rather dramatic -- more than adequate compensation for the brief time spent on the road.

Whichever way you get to the bridge, take the level path heading SW; after just a few meters a faint track heads directly up the hill following the stream. In the spring the floor in this upper part of the woods is covered in a mix of bluebells and wild garlic, providing an unusual sensory experience.

The path eventually peters out and the woodland comes to an end. Above the woodland is a typical Scottish overgrazed hillside, and as you emerge from the woods buzzing with life, it's impossible not to be struck by the apparent lack of it. Follow the direction of the stream up to the bealach below Beinn Mhic na Ceisich (391m point on 25k map).

From the bealach head up N to the 627m summit and from here follow the old fence line onto the summit of Fraochaidh (879m). As indicated on the 25k map, this section is damp underfoot, the fence line follows the best ground. The final push onto Fraochaidh is steep, but without difficulties.

Once you have taken in the views from Fraochaidh summit, follow the faint path along its east ridge. The running and scenery are first class, with Sgorr Deargh forming the main backdrop; the path worn out to its summit, so obvious even from this distance, perhaps a cause for reflection on our impact on the hills we love.

Fraochaidh exhibits some interesting geology. The upper part of the mountain is made of slate, which, as you approach Bealach Dearg, briefly changes (to my untrained eye at least) to gneiss, promptly followed by a band of quartzite forming the knoll on its other side. Then, as the ridge turns NE, it changes to orange coloured limestone, covered in alpine flora, with excellent view back at Fraochaidh.

Follow the ridge all the way to Mam Uchdaich bealach where it is crossed by the Ballachuilish path. This has been impacted by recent forestry operations on the Glen Creran side, and a new, broad hard surface path zigzags toward the forestry track. As of the time of writing, it is still possible to pick up the original path near the first sharp turn, descending through a grassy fire break in the woods -- this much to be preferred.

The forestry track initially has little to commend it, other than being gently downhill, but for the last couple of kilometres it re-enters the lovely deciduous woodland for a pleasant final jog to the finish.

20km / 1600m ascent / ~4h

by tf at May 28, 2017 08:08 AM

May 27, 2017

Chris Lord

Free Ideas for UI Frameworks, or How To Achieve Polished UI

Ever since the original iPhone came out, I’ve had several ideas about how they managed to achieve such fluidity with relatively mediocre hardware. I mean, it was good at the time, but Android still struggles on hardware that makes that look like a 486… It’s absolutely my fault that none of these have been implemented in any open-source framework I’m aware of, so instead of sitting on these ideas and trotting them out at the pub every few months as we reminisce over what could have been, I’m writing about them here. I’m hoping that either someone takes them and runs with them, or that they get thoroughly debunked and I’m made to look like an idiot. The third option is of course that they’re ignored, which I think would be a shame, but given I’ve not managed to get the opportunity to implement them over the last decade, that would hardly be surprising. I feel I should clarify that these aren’t all my ideas, but include a mix of observation of and conjecture about contemporary software. This somewhat follows on from the post I made 6 years ago(!) So let’s begin.

1. No main-thread UI

The UI should always be able to start drawing when necessary. As careful as you may be, it’s practically impossible to write software that will remain perfectly fluid when the UI can be blocked by arbitrary processing. This seems like an obvious one to me, but I suppose the problem is that legacy makes it very difficult to adopt this at a later date. That said, difficult but not impossible. All the major web browsers have adopted this policy, with caveats here and there. The trick is to switch from the idea of ‘painting’ to the idea of ‘assembling’ and then using a compositor to do the painting. Easier said than done of course, most frameworks include the ability to extend painting in a way that would make it impossible to switch to a different thread without breaking things. But as long as it’s possible to block UI, it will inevitably happen.

2. Contextually-aware compositor

This follows on from the first point; what’s the use of having non-blocking UI if it can’t respond? Input needs to be handled away from the main thread also, and the compositor (or whatever you want to call the thread that is handling painting) needs to have enough context available that the first response to user input doesn’t need to travel to the main thread. Things like hover states, active states, animations, pinch-to-zoom and scrolling all need to be initiated without interaction on the main thread. Of course, main thread interaction will likely eventually be required to update the view, but that initial response needs to be able to happen without it. This is another seemingly obvious one – how can you guarantee a response rate unless you have a thread dedicated to responding within that time? Most browsers are doing this, but not going far enough in my opinion. Scrolling and zooming are often catered for, but not hover/active states, or initialising animations (note; initialising animations. Once they’ve been initialised, they are indeed run on the compositor, usually).

3. Memory bandwidth budget

This is one of the less obvious ideas and something I’ve really wanted to have a go at implementing, but never had the opportunity. A problem I saw a lot while working on the platform for both Firefox for Android and FirefoxOS is that given the work-load of a web browser (which is not entirely dissimilar to the work-load of any information-heavy UI), it was very easy to saturate memory bandwidth. And once you saturate memory bandwidth, you end up having to block somewhere, and painting gets delayed. We’re assuming UI updates are asynchronous (because of course – otherwise we’re blocking on the main thread). I suggest that it’s worth tracking frame time, and only allowing large asynchronous transfers (e.g. texture upload, scaling, format transforms) to take a certain amount of time. After that time has expired, it should wait on the next frame to be composited before resuming (assuming there is a composite scheduled). If the composited frame was delayed to the point that it skipped a frame compared to the last unladen composite, the amount of time dedicated to transfers should be reduced, or the transfer should be delayed until some arbitrary time (i.e. it should only be considered ok to skip a frame every X ms).

It’s interesting that you can see something very similar to this happening in early versions of iOS (I don’t know if it still happens or not) – when scrolling long lists with images that load in dynamically, none of the images will load while the list is animating. The user response was paramount, to the point that it was considered more important to present consistent response than it was to present complete UI. This priority, I think, is a lot of the reason the iPhone feels ‘magic’ and Android phones felt like junk up until around 4.0 (where it’s better, but still not as good as iOS).

4. Level-of-detail

This is something that I did get to partially implement while working on Firefox for Android, though I didn’t do such a great job of it so its current implementation is heavily compromised from how I wanted it to work. This is another idea stolen from game development. There will be times, during certain interactions, where processing time will be necessarily limited. Quite often though, during these times, a user’s view of the UI will be compromised in some fashion. It’s important to understand that you don’t always need to present the full-detail view of a UI. In Firefox for Android, this took the form that when scrolling fast enough that rendering couldn’t keep up, we would render at half the resolution. This let us render more, and faster, giving the impression of a consistent UI even when the hardware wasn’t quite capable of it. I notice Microsoft doing similar things since Windows 8; notice how the quality of image scaling reduces markedly while scrolling or animations are in progress. This idea is very implementation-specific. What can be dropped and what you want to drop will differ between platforms, form-factors, hardware, etc. Generally though, some things you can consider dropping: Sub-pixel anti-aliasing, high-quality image scaling, render resolution, colour-depth, animations. You may also want to consider showing partial UI if you know that it will very quickly be updated. The Android web-browser during the Honeycomb years did this, and I attempted (with limited success, because it’s hard…) to do this with Firefox for Android many years ago.

Pitfalls

I think it’s easy to read ideas like this and think it boils down to “do everything asynchronously”. Unfortunately, if you take a naïve approach to that, you just end up with something that can be inexplicably slow sometimes and the only way to fix it is via profiling and micro-optimisations. It’s very hard to guarantee a consistent experience if you don’t manage when things happen. Yes, do everything asynchronously, but make sure you do your book-keeping and you manage when it’s done. It’s not only about splitting work up, it’s about making sure it’s done when it’s smart to do so.

You also need to be careful about how you measure these improvements, and to be aware that sometimes results in synthetic tests will even correlate to the opposite of the experience you want. A great example of this, in my opinion, is page-load speed on desktop browsers. All the major desktop browsers concentrate on prioritising the I/O and computation required to get the page to 100%. For heavy desktop sites, however, this means the browser is often very clunky to use while pages are loading (yes, even with out-of-process tabs – see the point about bandwidth above). I highlight this specifically on desktop, because you’re quite likely to not only be browsing much heavier sites that trigger this behaviour, but also to have multiple tabs open. So as soon as you load a couple of heavy sites, your entire browsing experience is compromised. I wouldn’t mind the site taking a little longer to load if it didn’t make the whole browser chug while doing so.

Don’t lose sight of your goals. Don’t compromise. Things might take longer to complete, deadlines might be missed… But polish can’t be overrated. Polish is what people feel and what they remember, and the lack of it can have a devastating effect on someone’s perception. It’s not always conscious or obvious either, even when you’re the developer. Ask yourself “Am I fully satisfied with this” before marking something as complete. You might still be able to ship if the answer is “No”, but make sure you don’t lose sight of that and make sure it gets the priority it deserves.

One last point I’ll make; I think to really execute on all of this, it requires buy-in from everyone. Not just engineers, not just engineers and managers, but visual designers, user experience, leadership… Everyone. It’s too easy to do a job that’s good enough and it’s too much responsibility to put it all on one person’s shoulders. You really need to be on the ball to produce the kind of software that Apple does almost routinely, but as much as they’d say otherwise, it isn’t magic.

by Chris Lord at May 27, 2017 12:00 PM

May 19, 2017

Emmanuele Bassi

Further experiments in Meson

Meson is definitely getting more traction in GNOME (and other projects), with many components adding support for it in parallel to autotools, or outright switching to it. There are still bugs, here and there, and we definitely need to improve build environments — like Continuous — to support Meson out of the box, but all in all I’m really happy about not having to deal with autotools any more, as well as being able to build the G* stack much more quickly when doing continuous integration.

Now that GTK+ has added Meson support, though, it’s time to go through the dependency chain in order to clean up and speed up the build in the lower bits of our stack. After an aborted attempt at porting GdkPixbuf, I decided to port Pango.

All in all, Pango proved to be an easy win; it took me about one day to port from Autotools to Meson, and most of it was mechanical translation from weird autoconf/automake incantations that should have been removed years ago1. Most of the remaining bits were:

  • ensuring that both Autotools and Meson would build the same DSOs, with the same symbols
  • generating the same introspection data and documentation
  • installing tests and data in the appropriate locations

Thanks to the ever vigilant eye of Nirbheek Chauhan, and thanks to the new Meson reference, I was also able to make the Meson build slightly more idiomatic than a straight, 1:1 port would have done.

The results are a full Meson build that takes about the same time as ./autogen.sh to run:

* autogen.sh:                         * meson
  real        0m11.149s                 real          0m2.525s
  user        0m8.153s                  user          0m1.609s
  sys         0m2.363s                  sys           0m1.206s

* make -j$(($(nproc) + 2))            * ninja
  real        0m9.186s                  real          0m3.387s
  user        0m16.295s                 user          0m6.887s
  sys         0m5.337s                  sys           0m1.318s

--------------------------------------------------------------

* autotools                           * meson + ninja
  real        0m27.669s                 real          0m5.772s
  user        0m45.622s                 user          0m8.465s
  sys         0m10.698s                 sys           0m2.357s

Not bad for a day’s worth of work.

My plan would be to merge this in the master branch pretty soon; I also have a branch that drops Autotools entirely but that can wait a cycle, as far as I’m concerned.

Now comes the hard part: porting libraries like GdkPixbuf, ATK, gobject-introspection, and GLib to Meson. There’s already a GLib port, courtesy of Centricular, but it needs further testing; GdkPixbuf is pretty terrible, since it’s a really old library; I don’t expect ATK and GObject introspection to be complicated, but the latter has a non-recursive Make layout that is full of bees.

It would be nice to get to GUADEC and have the whole G* stack build with Meson and Ninja. If you want to help out, reach out in #gtk+, on IRC or on Matrix.


  1. The Windows support still checks for GCC 2.x or 3.x flags, for instance. 

by ebassi at May 19, 2017 05:20 PM

February 23, 2017

Chris Lord

Machine Learning Speech Recognition

Keeping up my yearly blogging cadence, it’s about time I wrote to let people know what I’ve been up to for the last year or so at Mozilla. People keeping up would have heard of the sad news regarding the Connected Devices team here. While I’m sad for my colleagues and quite disappointed in how this transition period has been handled as a whole, thankfully this hasn’t adversely affected the Vaani project. We recently moved to the Emerging Technologies team and have refocused on the technical side of things, a side that I think most would agree is far more interesting, and also far more suited to Mozilla and our core competence.

Project DeepSpeech

So, out with Project Vaani, and in with Project DeepSpeech (name will likely change…) – Project DeepSpeech is a machine learning speech-to-text engine based on the Baidu Deep Speech research paper. We use a particular layer configuration and initial parameters to train a neural network to translate from processed audio data to English text. You can see roughly how we’re progressing with that here. We’re aiming for a 10% Word Error Rate (WER) on English speech at the moment.

You may ask, why bother? Google and others provide state-of-the-art speech-to-text in multiple languages, and in many cases you can use it for free. There are multiple problems with existing solutions, however. First and foremost, most are not open-source/free software (at least none that could rival the error rate of Google). Secondly, you cannot use these solutions offline. Third, you cannot use these solutions for free in a commercial product. The reason a viable free software alternative hasn’t arisen is mostly down to the cost and restrictions around training data. This makes the project a great fit for Mozilla as not only can we use some of our resources to overcome those costs, but we can also use the power of our community and our expertise in open source to provide access to training data that can be used openly. We’re tackling this issue from multiple sides, some of which you should start hearing about Real Soon Now™.

The whole team has made contributions to the main code. In particular, I’ve been concentrating on exporting our models and writing clients so that the trained model can be used in a generic fashion. This lets us test and demo the project more easily, and also provides a lower barrier for entry for people that want to try out the project and perhaps make contributions. One of the great advantages of using TensorFlow is how relatively easy it makes it to both understand and change the make-up of the network. On the other hand, one of the great disadvantages of TensorFlow is that it’s an absolute beast to build and integrates very poorly with other open-source software projects. I’ve been trying to overcome this by writing straight-forward documentation, and hopefully in the future we’ll be able to distribute binaries and trained models for multiple platforms.

Getting Involved

We’re still at a fairly early stage at the moment, which means there are many ways to get involved if you feel so inclined. The first thing to do, in any case, is to just check out the project and get it working. There are instructions provided in READMEs to get it going, and fairly extensive instructions on the TensorFlow site on installing TensorFlow. It can take a while to install all the dependencies correctly, but at least you only have to do it once! Once you have it installed, there are a number of scripts for training different models. You’ll need a powerful GPU(s) with CUDA support (think GTX 1080 or Titan X), a lot of disk space and a lot of time to train with the larger datasets. You can, however, limit the number of samples, or use the single-sample dataset (LDC93S1) to test simple code changes or behaviour.

One of the fairly intractable problems about machine learning speech recognition (and machine learning in general) is that you need lots of CPU/GPU time to do training. This becomes a problem when there are so many initial variables to tweak that can have dramatic effects on the outcome. If you have the resources, this is an area that you can very easily help with. What kind of results do you get when you tweak dropout slightly? Or layer sizes? Or distributions? What about when you add or remove layers? We have fairly powerful hardware at our disposal, and we still don’t have conclusive results about the effects of many of the initial variables. Any testing is appreciated! The Deep Speech 2 paper is a great place to start for ideas if you’re already experienced in this field. Note that we already have a work-in-progress branch implementing some of these ideas.

Let’s say you don’t have those resources (and very few do), what else can you do? Well, you can still test changes on the LDC93S1 dataset, which consists of a single sample. You won’t be able to effectively tweak initial parameters (as unsurprisingly, a dataset of a single sample does not represent the behaviour of a dataset with many thousands of samples), but you will be able to test optimisations. For example, we’re experimenting with model quantisation, which will likely be one of multiple optimisations necessary to make trained models usable on mobile platforms. It doesn’t particularly matter how effective the model is, as long as it produces consistent results before and after quantisation. Any optimisation that can be made to reduce the size or the processor requirement of training and using the model is very valuable. Even small optimisations can save lots of time when you start talking about days worth of training.

Our clients are also in a fairly early state, and this is another place where contribution doesn’t require expensive hardware. We have two clients at the moment. One written in Python that takes advantage of TensorFlow serving, and a second that uses TensorFlow’s native C++ API. This second client is the beginnings of what we hope to be able to run on embedded hardware, but it’s very early days right now.

And Finally

Imagine a future where state-of-the-art speech-to-text is available, for free (in cost and liberty), on even low-powered devices. It’s already looking like speech is going to be the next frontier of human-computer interaction, and currently it’s a space completely tied up by entities like Google, Amazon, Microsoft and IBM. Putting this power into everyone’s hands could be hugely transformative, and it’s great to be working towards this goal, even in a relatively modest capacity. This is the vision, and I look forward to helping make it a reality.

by Chris Lord at February 23, 2017 04:55 PM

February 13, 2017

Emmanuele Bassi

On Vala

It seems I raised a bit of a stink on Twitter last week:

Of course, and with reason, I’ve been called out on this by various people. Luckily, it was on Twitter, so we haven’t seen articles on Slashdot and Phoronix and LWN with headlines like “GNOME developer says Vala is dead and will be removed from all servers for all eternity and you all suck”. At least, I’ve only seen a bunch of comments on Reddit about this, but nobody cares about that particular cesspool of humanity.

Sadly, 140 characters do not leave any room for nuance, so maybe I should probably clarify what I wrote on a venue with no character limit.

First of all, I’d like to apologise to people that felt I was attacking them or their technical choices: it was not my intention, but see above, re: character count. I may have only about 1000 followers on Twitter, but it seems that the network effect is still a bit greater than that, so I should be careful when wording opinions. I’d like to point out that it’s my private Twitter account, and you can only get to what it says if you follow me, or if you follow people who follow me and decide to retweet what I write.

My PSA was intended as a reflection on the state of Vala, and its impact on the GNOME ecosystem in terms of newcomers, from the perspective of a person that used Vala for his own personal projects; recommended Vala to newcomers; and has to deal with the various build issues that arise in GNOME because something broke in Vala or in projects using Vala. If you’re using Vala outside of GNOME, you have two options: either ignore all I’m saying, as it does not really apply to your case; or do a bit of soul searching, and see if what I wrote does indeed apply to you.

First of all, I’d like to qualify my assertion that Vala is a “dead language”. Of course people see activity in the Git repository, see the recent commits and think “the project is still alive”. Recent commits do not tell a complete story.

Let’s look at the project history for the past 10 cycles (roughly 2.5 years). These are the commits for every cycle, broken up in two values: one for the full repository, the other one for the whole repository except the vapi directory, which contains the VAPI files for language bindings:

Commits

Aside from the latest cycle, Vala has seen very little activity; the project itself, if we exclude binding updates, has seen less than 100 commits for every cycle — sometimes even far less. The latest cycle is a bit of an outlier, but we can notice a pattern of very little work for two/three cycles, followed by a spike. If we look at the currently in progress cycle, we can already see that the number of commits has decreased back to 55/42, as of this morning.

Commits

Number of commits is just a metric, though; more important is the number of contributors. After all, small, incremental changes may be a good thing in a language — though, spoiler alert: they are usually an indication of a series of larger issues, and we’ll come to that point later.

These are the number of developers over the same range of cycles, again split between committers to the full repository and to the full repository minus the vapi directory:

Developers

As you can see, the number of authors of changes is mostly stable, but still low. If we have few people that actively commit to the repository it means we have few people that can review a patch. It means patches linger longer and longer, while reviewers go through their queues; it means that contributors get discouraged; and, since nobody is paid to work full time on Vala, it means that any interruption caused by paid jobs will be a bottleneck on the project itself.

These concerns are not unique to a programming language: they exist for every volunteer-driven free and open source project. Programming languages, though, like core libraries, are problematic because any bottleneck causes ripple effects. You can take any stalled project you depend on, and vendor it into your own, but if that happens to the programming language you’re using, then you’re pretty much screwed.

For these reasons, we should also look at how well-distributed the workload in Vala is, i.e. what percentage of the work is done by the authors of those commits; the results are not encouraging. Over that range of cycles, only two developers routinely crossed 5% of the commits:

  • Rico Tzschichholz
  • Jürg Billeter

And Rico has been the only one to consistently author >50% of the commits. This means there’s only one person dealing with the project on a day to day basis.

As the maintainer of a project who basically had to do all the work, I cannot even begin to tell you how soul-crushing that can become. You get burned out, and you feel responsible for everyone using your code, and then you get burned out some more. I honestly don’t want Rico to burn out, and you shouldn’t, either.

So, let’s go into unfair territory. These are the commits for Rust — the compiler and standard library:

Rust

These are the commits for Go — the compiler and base library:

Go

These are the commits for Vala — both compiler and bindings:

Vala

These are the number of commits over the past year. Both languages are younger than Vala, have more tools than Vala, and are more used than Vala. Of course, it’s completely unfair to compare them, but those numbers should give you a sense of scale, of what is the current high bar for a successful programming language these days. Vala is a niche language, after all; it’s heavily piggy-backing on the GNOME community because it transpiles to C and needs a standard library and an ecosystem like the one GNOME provides. I never expected Vala to rise to the level of mindshare that Go and Rust currently occupy.

Nevertheless, we need to draw some conclusions about the current state of Vala — starting from this thread, perhaps, as it best encapsulates the issues the project is facing.

Vala, as a project, is limping along. There aren’t enough developers to actively effect change on the project; there aren’t enough developers to work on ancillary tooling — like build system integration, debugging and profiling tools, documentation. Saying that “Vala compiles to C so you can use tools meant for C” is comically missing the point, and it’s effectively like saying that “C compiles to binary code, so you can disassemble a program if you want to debug it”. Being able to inspect the language using tools native to the language is a powerful thing; if you have to do the name mangling in your head in order to set a breakpoint in GDB you are elevating the barrier of contributions way above the head of many newcomers.

Being able to effect change means also being able to introduce change effectively and without fear. This means things like continuous integration and a full test suite heavily geared towards regression testing. The test suite in Vala is made of 210 units, for a total of 5000 lines of code; the code base of Vala (vala AST, codegen, C code emitter, and the compiler) is nearly 75 thousand lines of code. There is no continuous integration, outside of the one that GNOME Continuous performs when building Vala, or the one GNOME developers perform when using jhbuild. Regressions are found after days or weeks, because developers of projects using Vala update their compiler and suddenly their projects cease to build.

I don’t want to minimise the enormous amount of work that every Vala contributor brought to the project; they are heroes, all of them, and they deserve as much credit and praise as we can give. The idea of a project-oriented, community-oriented programming language has been vindicated many times over, in the past 5 years.

If I scared you, or incensed you, then you can still blame me, and my lack of tact. You can still call me an asshole, and you can think that I’m completely uncool. What I do hope, though, is that this blog post pushes you into action. Either to contribute to Vala, or to re-new your commitment to it, so that we can look at my words in 5 years and say “boy, was Emmanuele wrong”; or to look at alternatives, and explore new venues in order to make GNOME (and the larger free software ecosystem) better.

by ebassi at February 13, 2017 01:12 PM

February 11, 2017

Emmanuele Bassi

Epoxy

Epoxy is a small library that GTK+, and other projects, use in order to access the OpenGL API in somewhat sane fashion, hiding all the awful bits of craziness that actually need to happen because apparently somebody dosed the water supply at SGI with large quantities of LSD in the mid-‘90s, or something.

As an added advantage, Epoxy is also portable on different platforms, which is a plus for GTK+.

Since I’ve started using Meson for my personal (and some work-related) projects as well, I’ve been on the lookout for adding Meson build rules to other free and open source software projects, in order to improve both their build time and portability, and to improve Meson itself.

As a small, portable project, Epoxy sounded like a good candidate for the port of its build system from autotools to Meson.

To the Bat Build Machine!

tl;dr

Since you may be interested just in the numbers, building Epoxy with Meson on my four-core Kaby Lake Core i7 and NVMe SSD takes about 45% less time than building it with autotools.

A fairly good fraction of the autotools time is spent going through the autogen and configure phases, because they both aren’t parallelised, and create a ton of shell invocations.

Conversely, Meson’s configuration phase is incredibly fast; the whole Meson build of Epoxy fits in the same time the autogen.sh and configure scripts complete their run.

Administrivia

Epoxy is a simple library, which means it does not need a hugely complicated build system set up; it does have some interesting deviations, though, which made the porting an interesting challenge.

For instance, on Linux and similar operating systems Epoxy uses pkg-config to find things like the EGL availability and the X11 headers and libraries; on Windows, though, it relies on finding the opengl32 shared or static library object itself. This means that we get something straightforward in the former case, like:

# Optional dependencies
gl_dep = dependency('gl', required: false)
egl_dep = dependency('egl', required: false)

and something slightly less straightforward in the latter case:

if host_system == 'windows'
  # Required dependencies on Windows
  opengl32_dep = cc.find_library('opengl32', required: true)
  gdi32_dep = cc.find_library('gdi32', required: true)
endif

And, still, this is miles better than what you have to deal with when using autotools.

Let’s take a messy thing in autotools, like checking whether or not the compiler supports a set of arguments; usually, this involves some m4 macro that’s either part of autoconf-archive or some additional repository, like the xorg macros. Meson handles this in a much better way, out of the box:

# Use different flags depending on the compiler
if cc.get_id() == 'msvc'
  test_cflags = [
    '-W3',
    ...,
  ]
elif cc.get_id() == 'gcc'
  test_cflags = [
    '-Wpointer-arith',
    ...,
  ]
else
  test_cflags = [ ]
endif

common_cflags = []
foreach cflag: test_cflags
  if cc.has_argument(cflag)
    common_cflags += [ cflag ]
  endif
endforeach

In terms of speed, the configuration step could be made even faster by parallelising the compiler argument checks; right now, Meson has to do them all in a series, but nothing except some additional parsing effort would prevent Meson from running the whole set of checks in parallel, and gather the results at the end.

Generating code

In order to use the GL entry points without linking against libGL or libGLES* Epoxy takes the XML description of the API from the Khronos repository and generates the code that ends up being compiled by using a Python script to parse the XML and generating header and source files.

Additionally, and unlike most libraries in the G* stack, Epoxy stores its public headers inside a separate directory from its sources:

libepoxy
├── cross
├── doc
├── include
│   └── epoxy
├── registry
├── src
└── test

The autotools build has the src/gen_dispatch.py script create both the source and the header file for each XML at the same time using a rule processed when recursing inside the src directory, and proceeds to put the generated header under $(top_builddir)/include/epoxy, and the generated source under $(top_builddir)/src. Each code generation rule in the Makefile manually creates the include/epoxy directory under the build root to make up for parallel dispatch of each rule.

Meson makes it harder to do this kind of spooky-action-at-a-distance build, so we need to generate the headers in one pass, and the source in another. This is a bit of a let down, to be honest, and yet a build that invokes the generator script twice for each API description file is still faster under Ninja than a build with the single invocation under Make.

There are still issues in this step that are being addressed by the Meson developers; for instance, right now we have to use a custom target for each generated header and source separately instead of declaring a generator and calling it multiple times. Hopefully, this will be fixed fairly soon.

Documentation

Epoxy has a very small footprint, in terms of API, but it still benefits from having some documentation on its use. I decided to generate the API reference using Doxygen, as it’s not a G* library and does not need the additional features of gtk-doc. Sadly, Doxygen’s default style is absolutely terrible; it would be great if somebody could fix it to make it look half as good as the look gtk-doc gets out of the box.

Cross-compilation and native builds

Now we get into “interesting” territory.

Epoxy is portable; it works on Linux and *BSD systems; on macOS; and on Windows. Epoxy also works on both Intel and ARM architectures.

Making it run on Unix-like systems is not at all complicated. When it comes to Windows, though, things get weird fast.

Meson uses cross files to determine the environment and toolchain of the host machine, i.e. the machine where the result of the build will eventually run. These are simple text files with key/value pairs that you can either keep in a separate repository, in case you want to share among projects; or you can keep them in your own project’s repository, especially if you want to easily set up continuous integration of cross-compilation builds.

Each toolchain has its own; for instance, this is the description of a cross compilation done on Fedora with MingW:

[binaries]
c = '/usr/bin/x86_64-w64-mingw32-gcc'
cpp = '/usr/bin/x86_64-w64-mingw32-cpp'
ar = '/usr/bin/x86_64-w64-mingw32-ar'
strip = '/usr/bin/x86_64-w64-mingw32-strip'
pkgconfig = '/usr/bin/x86_64-w64-mingw32-pkg-config'
exe_wrapper = 'wine'

This section tells Meson where the binaries of the MingW toolchain are; the exe_wrapper key is useful to run the tests under Wine, in this case.

The cross file also has an additional section for things like special compiler and linker flags:

[properties]
root = '/usr/x86_64-w64-mingw32/sys-root/mingw'
c_args = [ '-pipe', '-Wp,-D_FORTIFY_SOURCE=2', '-fexceptions', '--param=ssp-buffer-size=4', '-I/usr/x86_64-w64-mingw32/sys-root/mingw/include' ]
c_link_args = [ '-L/usr/x86_64-w64-mingw32/sys-root/mingw/lib' ]

These values are taken from the equivalent bits that Fedora provides in their MingW RPMs.

Luckily, the tool that generates the headers and source files is written in Python, so we don’t need an additional layer of complexity, with a tool built and run on a different platform and architecture in order to generate files to be built and run on a different platform.

Continuous Integration

Of course, any decent process of porting, these days, should deal with continuous integration. CI gives us confidence as to whether or not any change whatsoever we make actually works — and not just on our own computer, and our own environment.

Since Epoxy is hosted on GitHub, the quickest way to deal with continuous integration is to use TravisCI, for Linux and macOS; and Appveyor for Windows.

The requirements for Meson are just Python3 and Ninja; Epoxy also requires Python 2.7, for the dispatch generation script, and the shared libraries for GL and the native API needed to create a GL context (GLX, EGL, or WGL); it also optionally needs the X11 libraries and headers and Xvfb for running the test suite.

Since Travis offers an older version of Ubuntu LTS as its base system, we cannot build Epoxy with Meson; additionally, running the test suite is a crapshoot because the Mesa version is hopelessly out of date and will either cause most of the tests to be skipped or, worse, make them segfault. To sidestep this particular issue, I’ve prepared a Docker image with its own harness, and I use it as the containerised environment for Travis.

On Appveyor, thanks to the contribution of Thomas Marrinan we just need to download Python3, Python2, and Ninja, and build everything inside its own root; as an added bonus, Appveyor allows us to take the build artefacts when building from a tag, and shoving them into a zip file that gets deployed to the release page on GitHub.

Conclusion

Most of this work has been done off and on over a couple of months; the rough Meson build conversion was done last December, with the cross-compilation and native builds taking up the last bit of work.

Since Eric does not have any more spare time to devote to Epoxy, he was kind enough to give me access to the original repository, and I’ve tried to reduce the amount of open pull requests and issues there.

I’ve also released version 1.4.0 and I plan to do a 1.4.1 release soon-ish, now that I’m positive Epoxy works on Windows.

I’d like to thank:

  • Eric Anholt, for writing Epoxy and helping out when I needed a hand with it
  • Jussi Pakkanen and Nirbheek Chauhan, for writing Meson and for helping me out with my dumb questions on #mesonbuild
  • Thomas Marrinan, for working on the Appveyor integration and testing Epoxy builds on Windows
  • Yaron Cohen-Tal, for maintaining Epoxy in the interim

by ebassi at February 11, 2017 01:34 AM

January 11, 2017

Emmanuele Bassi

Constraints editing

Last year I talked about the newly added support for Apple’s Visual Format Language in Emeus, which allows you to quickly describe layouts using a cross between ASCII art and predicates. For instance, I can use:

H:|-[icon(==256)]-[name_label]-|
H:[surname_label]-|
H:[email_label]-|
H:|-[button(<=icon)]
V:|-[icon(==256)]
V:|-[name_label]-[surname_label]-[email_label]-|
V:[button]-|

and obtain a layout like this one:

Boxes approximate widgets

Thanks to the contribution of my colleague Martin Abente Lahaye, now Emeus supports extensions to the VFL, namely:

  • arithmetic operators for constant and multiplication factors inside predicates, like [button1(button2 * 2 + 16)]
  • explicit attribute references, like [button1(button1.height / 2)]

This allows more expressive layout descriptions, like keeping aspect ratios between UI elements, without having to touch the code base.
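
For instance, a made-up description using the extended syntax, keeping the icon twice as wide as it is tall and tying the button's width to the icon's:

H:|-[icon(icon.height * 2)]-|
H:[button(icon / 2 + 8)]-|
V:|-[icon]-[button]-|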

Of course, editing VFL descriptions blindly is not what I consider a fun activity, so I took some time to write a simple, primitive editing tool that lets you visualize a layout expressed through VFL constraints:

I warned you that it was primitive and simple

Here’s a couple of videos showing it in action:

At some point, this could lead to a new UI tool to lay out widgets inside Builder and/or Glade.

As of now, I consider Emeus in a stable enough state for other people to experiment with it — I’ll probably make a release soon-ish. The Emeus website is up to date, as is the API reference, and I’m happy to review pull requests and feature requests.

by ebassi at January 11, 2017 02:30 PM

December 25, 2016

Tomas Frydrych

A Year in the Hills

A Year in the Hills

TL;DR: ~440 hours of running, 3,000km travelled, 118km ascended, an FKT set on the Assynt Traverse. Yet, the numbers don't even begin to tell the story ...

It's been a good year, with some satisfying longer days in the hills: an enjoyable two day round of Glen Affric & Kintail in April (still in full-on winter conditions), a two day loop around Glen Lyon in May (taking in the Lawers and Carn Mairg ridges), a round of the seven Crianlarich Munros (East to West in May, West to East in June), a two day trot through the Mamores in September, a three day run through the Cairngorms in October (with some of the most amazing light I have ever seen, and shoes turning into solid blocks of ice overnight). There have also been many great shorter days; the Carn Eighe Horse Shoe and the Coigach Horse Shoe come to mind. But the highlight of my year, without any question, was the July Assynt Traverse: at 74km of largely off track running and some 6,400m of vertical ascent, by far the most physically challenging thing I have ever attempted, with setting a new FKT (23h 54min) the icing on the cake.

The Assynt Traverse had been haunting me since the summer of 2013. During those three years I had gone through a random mixture of great enthusiasm, physical setbacks (from too much enthusiasm!), and self doubt (as the scale of the challenge had become clear). I came very close to not attempting it (again), thinking failure was inevitable. Fortunately, a brief, incidental, conversation with a friend helped me to refocus -- at this scale DNF is never a failure, just an attempt, the only real possibility of a failure is a DNS, a failure of the mind. From that point on the rest was just logistics, and some running in the most beautiful landscape I know!

The real significance of the Traverse for me, however, was neither in completing it, nor in setting the FKT. Rather, the Traverse turned out to be a condensed essence of the totality of my running experiences, neatly packaged into a single day. As such it brought much clarity into my understanding of why I run, and, in particular, what drives me into the hills.

Obviously, there are the views, at times but brief glimpses, at times sustained (and, far too often, none at all). There are the brief encounters with wildlife: the sense of awe over a golden eagle you nearly run into, the envy of a raven playing with the wind that to me, a supposedly superior species, is proving such a nuisance (and in Assynt, the ever present frogs and toads).

Then there is the whole mind over matter thing, like when merely three and a half hours into your twenty four hour venture the body declares it can't go any further, but your mind knows it's nothing but a load of BS, and you somehow manage to carry on. There is the simple enjoyment of running, six continuous hours of it negotiating the ridges of the Ben More Assynt massif, hopping from boulder to boulder under blue skies. There is that sense of complete physical and mental liberation as the dopamine high goes through the roof after fifteen hours of hard graft. There is the need to hold it together, sleep deprived in the wee hours on Quinag, simply because there is no alternative, it's just you and the hills.

All of the above are reasons why I run hills. But the reason that exceeds all of the above is the time to think it affords me, time to reflect in the peace and quiet, senses sharpened by physical exertion -- that is the real reason why I run, and why I unashamedly enjoy running in my own company.

In a place like Assynt, in the midst of the seemingly immutable, aeons old landscape, it is impossible to escape the sense of one's own transience and insignificance. The knowledge that these hills have been around long before me, and will remain long after my brief intrusion somehow puts everything into perspective. The hills ask not merely 'what are you doing here?' but also 'what do you do when you are not here?', and 'why?'. They question our priorities, our commitments, or the lack thereof. They encourage us to look forward beyond the immediate horizon of tomorrow, of the next pay check.

There is much thinking to be done in twenty four hours, and on the back of that some decisions have been made in the weeks that followed, some plans laid; there are some changes on the horizon for the coming year. It's early days, too early to say more for now; maybe in a couple of months.

As for my running, the Ramsay Round has been in my thoughts since the morning after Assynt -- I am toying with the unsupported solo option (I don't think I have it in me to meet the 24h limit anyway, so might just as well, and it simplifies the logistics), but I expect a realistic timetable for that is 2018. I am hoping for some more multiday runs; there is so much exploring still to be done in this wee country of ours, so little time.

Happy 2017!

by tf at December 25, 2016 11:16 AM

December 17, 2016

Emmanuele Bassi

Laptop review

Dell XPS 13 (Developer Edition 2016)

After three and a half years with my trusty mid-2013 MacBook Air, I decided to get a new personal laptop. To be fair, my Air could have probably lasted another 12-18 months, even though its 8GB of RAM and Haswell Core i7 were starting to get pretty old for system development. The reason why I couldn’t keep using it reliably was that the SSD had already started showing SMART errors in January, and I already had to reset it and re-install from scratch once. Refurbishing the SSD out of warranty is still an option, if I decided to fork over a fair chunk of money and could live without a laptop for about a month1.

After getting recommendations for the previous XPS iterations by various other free software developers and Linux users, I waited until the new, Kaby Lake based model was available in the EU and ordered one. After struggling a bit with Dell’s website, I managed to get an XPS 13 with a US keyboard layout2 — which took about two weeks from order to delivery.

The hardware out of the box experience is pretty neat, with a nice, clean box; very Apple-like. The software’s first boot experience could be better, to say the least. Since I chose the Developer Edition, I got Ubuntu as the main OS instead of Windows, and I have been thoroughly underwhelmed by the effort spent by Dell and Canonical in polishing the software side of things. As soon as you boot the laptop, you’re greeted with an abstract video playing while the system does something. The video playback is not skippable, and does not have volume controls, so I got to “experience” it at full blast out of the speakers.

Ubuntu’s first boot experience UI to configure the machine is rudimentary, at best, and not really polished; it’s the installer UI without the actual installation bits, but it clearly hasn’t been refined for the HiDPI screen. The color scheme has progressively gotten worse over the years; while all other OSes are trying to convey a theme of lightness using soft tones, the dark grey, purple, and dark orange tones used by Ubuntu make the whole UI seem heavier and more oppressive.

After that, you get into Unity, and no matter how many times I try it, I still cannot enjoy using it. I also realized why various people coming from Ubuntu complain about the GNOME theme being too heavy on the whitespace: the Ubuntu default theme is super-compressed, with controls hugging together so closely that they almost seem to overlap. There is barely any affordance for the pointer, let alone for interacting through the touchscreen.

All in all, I lasted half a day on it, mostly to see what the state of stock Ubuntu was after many years of Fedora3. After that, I downloaded a Fedora 25 USB image and re-installed from scratch.

Sadly, I still have to report that Anaconda doesn’t shine at all. Luckily, I didn’t have to deal with dual booting, so I only needed to interact with the installer just enough to tell it to use the stock on disk layout and create the root user. Nevertheless, figuring out how to tell it to split my /home volume and encrypt it required me to go through the partitioning step three times because I couldn’t for the life of me understand how to commit to the layout I wanted.

After that, I was greeted by GNOME’s first boot experience — which is definitely more polished than Ubuntu’s, but it’s still a bit too “functional” and plain.

Fedora recognised the whole hardware platform out of the box: wifi, bluetooth, webcam, HiDPI screen. On the power management side, I was able to wring out about 8 hours of work (compilation, editing, web browsing, and a couple of Google hangouts) while on wifi, without having to plug in the AC.

Coming from years of Apple laptops, I was especially skeptical of the quality of the touchpad, but I have to say I was pleasantly surprised by its accuracy and feedback. It’s not MacBook-level, but it’s definitely the closest anyone has ever been to that slice of fried gold.

The only letdowns I can find are the webcam, which sits at the bottom left of the panel, making for very dramatic angles when doing video calls and requiring that you never type if you don’t want your fingers to be in the way; and the power brick, which has its own proprietary connector. There’s a USB-C port, though, so there may be provisions for powering the laptop through it.

The good

  • Fully supported hardware (Fedora 25)
  • Excellent battery life
  • Nice keyboard
  • Very good touchpad

The bad

  • The position of the webcam
  • Yet another power brick with custom connector I have to lug around

Lenovo Yoga

Thanks to my employer I now have a work laptop as well, in the shape of a Lenovo Yoga 900. I honestly crossed off Lenovo as a vendor after the vast amounts of stupidity they imposed on their clients — and that was after I decided to stop buying ThinkPad-branded laptops, given their declining build quality and bad technical choices. Nevertheless, you don’t look a gift horse in the mouth.

The out of the box experience of the Yoga is very much on par with the one I had with the XPS, which is to say: fairly Apple-like.

The Yoga 900 is a fairly well made machine. It’s an Intel Sky Lake platform, with a nice screen and good components. The screen can fold and turn the whole thing into a “tablet”, except that the keyboard faces downward, so it’s weird to handle in that mode. Plus, a 13” tablet is a pretty big thing to carry around. On the other hand, folding the laptop into a “tent” and using an external keyboard and pointer device is a nice twist on the whole “home office” approach. The webcam is, thankfully, centered and placed at the top of the panel — something that Lenovo has apparently changed in the 910 model, when they realised that folding the laptop would put the webcam at the bottom of the panel.

On the software side, the first boot experience into Windows 10 was definitely less than stellar. The Lenovo FBE software was not HiDPI-aware, which posed interesting challenges to the user interaction. This is something that a simple bit of QA would have found out, but apparently QA is too much to ask when dealing with a £1000 laptop. Luckily, I had to deal with that only inasmuch as I needed to get and install the latest firmware updates before installing Linux on the machine. Again, I went for Fedora.

As in the case of the Dell XPS, Fedora recognised all components of the hardware platform out of the box. Even the screen rotation and folding work out of the box — though it can still get into inconsistent states when you move the laptop around, so I kind of recommend you keep the screen rotation locked until you actually need it.

On the power management side, I was impressed by how well the sleep states conserve battery power; I’m able to leave the Yoga suspended for a week and still have power on resume. The power brick has a weird USB-like connector to the laptop which makes me wonder what on earth were Lenovo engineers thinking; on the other hand, the adapter has a USB port which means you can charge it from a battery pack or from a USB adapter as well. There’s also a USB-C port, but I still haven’t tested if I can put power through it.

The keyboard is probably the biggest let down; the travel distance and feel of the keys is definitely not up to par with the Dell XPS, or with the Apple keyboards. The 900 has an additional column of navigation keys on the right edge that invariably messes up my finger memory — though it seems that the 910 has moved them to Function key combinations.5 The power button is on the right side of the laptop, which makes for unintended suspend/resume cycles when trying to plug in the headphones, or when moving the laptop. The touchpad is, sadly, very much lacking, with ghost tap events that forced me to disable the middle-click emulation everywhere4.

The good

  • Fully supported hardware (Fedora 25)
  • Solid build
  • Nice flip action
  • Excellent power management

The bad

  • Keyboard is a toy
  • Touchpad is a pale imitation of a good pointing device

  1. Which may still happen, all things considered; I really like the Air as a travel laptop. 

  2. After almost a decade with US layouts I find the UK layout inferior to the point of inconvenience. 

  3. On my desktop machine/gaming rig I dual boot between Windows 10 and Ubuntu GNOME, mostly because of the nVidia GPU and Steam. 

  4. That also increased my hatred of the middle-click-to-paste-selection easter egg a thousandfold, and I already hated the damned thing so much that my rage burned with the intensity of a million suns. 

  5. Additionally, the keyboard layout is UK — see note 2 above. 

by ebassi at December 17, 2016 12:00 AM

December 02, 2016

Tomas Frydrych

Winter's upon us

Winter's upon us

It's that time of the year again when the white stuff is covering the hills. This year it's come early and without warning, one day still running in shorts, the next day rummaging for the winter gear (and, typically, by the time I have finished writing this, much of the snow is gone again). Winter hill running is a bit of an acquired taste, but taking on the extra challenges is, often, worth it.

The key to having an enjoyable time in the Scottish hills throughout the winter can, I think, be summed up in one word: respect. The winter hills are a serious place. Whereas getting benighted in split shorts and a string vest during the summer will earn one a (perhaps very) uncomfortable night and a bruised ego, the same scenario during the winter would quite likely end up with one's mates taking the piss over sausage rolls after the funeral (editor's note: mate quality varies). As hill runners, we tend to operate with smaller margins of comfort and safety, and in winter time it is critical to maintain those margins when things don't go to plan (which, among other things, means running the winter hills is not a sensible way to be learning the rudiments of mountain craft, one should serve that apprenticeship in some other way first).

However, with that 'participation statement' out of the way, the winter does bring exciting running opportunities for those minded to take on the challenges, and one need not be called Kilian or Finlay to venture into the snow. So here are some of my thoughts on the matter, things I have learnt, at times through bad mistakes; it's stuff that works for me running, it might not work for you, but perhaps it might help someone avoid some of my mistakes.

Planning

In the winter months careful planning is doubly important for two reasons: the limited daylight hours mean longer runs are always a tight squeeze, and the rate of progress along snow covered ground is impossible to predict -- a firm neve is grin-inducing, foot deep powder will make you work hard for a good pace, and a breakable crust on top of a foot of soft snow will turn the air blue and reduce one's pace to a crawl; I have been venturing into the mountains for some four decades, but I still know of no way to reliably predict which you will find up there from down below. As such, it is important to have realistic expectations. My rule of thumb is to expect to need around 30% extra time compared to the summer, and to plan for 50% more. Your adjustments might be quite different, but it pays to be conservative, and, always, to plan for the dark.

On the longer runs, it is also important to have a bailout plan. What do I do if the ground conditions make it impossible to complete the run? (In the winter, this is a perfectly normal scenario.) Is there a point of no return? What are the early exit lines beyond this? Looking for alternative options when the sh!t is about to hit the fan is a recipe for turning minor difficulties into an emergency, and (this should not need saying, but does) expecting to be lifted out by an MRT simply because 'too much snow' / 'getting tired' / etc., is not a responsible plan. Sometimes there is no convenient bailout possible; that too is worth knowing in advance.

It is also important to understand how winter conditions impact on the technical difficulty of terrain -- the basic rule of thumb is that even the easiest of scrambles turn into full blown technical climbs that cannot be safely tackled with the sort of equipment we as runners might carry. Corries get corniced, and pose avalanche risk. Consequently, some excellent summer routes might require variations in the winter, and some are simply not suitable -- it is important to assess this beforehand to avoid getting caught out, and, yes, to have a bailout plan.

Finally, one should pay attention to the weather forecast, and in particular wind speed and direction at the target altitude (the MeteoEarth app is a great resource for this). Even a moderately strong headwind has a big impact on a runner's rate of progress; in the summer, one might wave that off as MTFU, but for the winter runner wind is a significant risk factor beyond the obvious windchill.

Emergency Kit

Anyone venturing into the winter hills should be equipped well enough to last safely for a few stationary hours on the open hill. This is nothing more than common sense. Say I find myself in a genuine emergency necessitating an MRT call out. And say I am lucky enough to be somewhere with sufficient mobile phone coverage to raise the alarm. It takes time for the call to go out, the team to mobilise, and to get to me. Even if I am literally in the MRT's back yard, I can expect to spend a couple of hours stationary on my own before help can reach me, and possibly a lot more if I am in a remote area, or the weather conditions are poor. While in the summer MTFU can be a perfectly valid backup plan, the winter hills don't stand for such hubris.

As runners, we rely on our high energy expenditure to maintain our body temperature. In order to do that, we have to maintain a good pace, which in turn requires that we travel light. This works great, until it doesn't. While 'safe' is not the same as 'comfortable', it is fair to say, I think, that during the winter, running kit falls well below even the most optimistic view of a safety margin. In the summer I generally aim to only carry the stuff I intend to use; in the winter I always carry a few things I hope not to have to use (yes, it's a nuisance, but I just think of it as weight training):

  • Some sort of a shelter (for a solo runner a crisp-bag type of a bivvy bag might be a reasonable option, for bigger parties a bothy bag is a much better alternative),
  • A properly warm layer well above what I anticipate to need for the run itself (in my case usually North Face Thermaball hooded jacket),
  • Very thick woollen socks (wet running shoes and socks facilitate rapid heat loss when not moving; if I had to use that crisp bag, I would want them off my feet),
  • Buffalo mitts -- these are the best weight to warmth gloves I know of, good to well below zero and working when wet; everyone should have a pair of these (they are a bit slippery and clumsy, which doesn't matter when running, and in any case they are perfect as a spare).
  • Extra emergency food (chocolate is a good carbohydrate / fat mix, with decent calorie density).

It is also worth keeping in mind, particularly when running solo in remote, quiet areas, that mobile phone coverage in the Scottish hills is still very patchy. There are technological answers to this, such as PLBs or satellite messengers, and if you spend a lot of time in the hills on your own, this might be worth considering (I wrote a bit about the Delorme InReach SE gadget previously). And, of course, there is the old fashioned option of letting someone know your route and expected return time; technology is great, but sometimes the old fashioned ways are even better.

Feet

One of the big challenges during the winter months is keeping the feet warm. Ironically, the mild Scottish weather tends to work against us, with runs often starting below the freezing line in a bog, then moving well above the freezing line higher up -- wet shoes are the inescapable reality of Scottish hill running, and a real nuisance in subzero temperatures (anyone who has had their running shoes freeze solid on their feet during a brief stop to have a bite to eat knows exactly what I mean).

These days specialist waterproof winter shoes with integral gaiters exist, but, being aimed at the European market, they seem to be impossible to get hold of in the UK, and, who knows, they might not be that great in our peculiar conditions. The basic approach to this problem that works quite well for me is:

  • Thickish woolen socks (I tend to wear two pairs of the inov-8 mudsock),
  • Waterproof socks; they are not really waterproof (the elastic membrane in these cannot take the sort of pressures running exerts in the toe box), and not too warm, but they prevent water from circulating freely, effectively creating a sort of wetsuit effect,
  • A 1/2 size bigger shoe than I use in the summer to accommodate the socks.

Traction

Fell shoes provide surprisingly good traction on snow covered ground, in fact considerably better than any mountaineering boots I have ever owned, but they have their limits: they don't work on iced up ground, or beyond a certain, not very steep, gradient. These days there are specialist winter shoes with tiny carbide spikes, but these are tiny indeed and don't work if the ice is dusted with snow -- the gain over a fell shoe is too marginal to make it worth the expense, IMO.

For a lot of my winter needs Kahtoola Microspikes provide the answer (other traction devices exist, but unless you plan to accessorise with a shopping bag on wheels, my advice would be to go with Kahtoola). The microspikes are easy to put on and take off, they don't impede running, and, in a limited range of conditions, the 9mm spikes provide excellent traction -- they work brilliantly on ice and neve (well worth the looks you get hammering it down an iced up hill) but they have a gradient limit, roughly an angle on which you can consistently maintain the entire forefoot on the slope, reduced further if a hard surface is covered with some fresh snow; how much depends on the type of snow, etc.

However, as much as I love the microspikes, it must be said emphatically, microspikes are not a crampon substitute, and they become positively lethal when the ground steepens enough to automatically switch into 'front pointing' mode. My simple rule is, if the terrain does not call for packing an ice axe, I take the microspikes.

In fact, most of the Scottish hills in winter do require bringing an ice axe (and knowing how to use it), and crampons. Normal crampons are, for good reasons, designed for stiff-soled boots, and can't be used with running shoes. The Kahtoola KTS Crampon can, as its linkage bar is made of a flexible leaf spring. The 23mm spikes are a bit shorter than on a typical walking crampon, but more than adequate for the sort of conditions I run in. The front points are quite steep, which makes them less of a trip hazard, and with a bit of practice it is possible to run in these quite well. By the same token, the steep front points make them unsuitable for very steep ground.

The most important thing to be aware of with these is that wearing flexible shoes means steep front pointing is difficult, and very strenuous on the calves; they are quite capable, but the experience is nothing like a normal crampon, that's for sure.

As for axes, there are many lightweight models on the market. Personally, I have great misgivings about any axe that doesn't have a solid steel head, as experiments have shown that aluminium picks are less effective in emergency self arrest, and I don't trust a bit of steel riveted to an alloy head either. My axe of choice is the BD Raven Ultra, though I have to say, the spike design is poor, making it hard work driving the shaft into snow.

Have fun!

by tf at December 02, 2016 09:55 PM

November 01, 2016

Emmanuele Bassi

Constraints (reprise)

After the first article on Emeus, various people expressed interest in the internals of the library, so I decided to talk a bit about what makes it work.

Generally, you can think about constraints as linear equations:

view1.attr1 = view2.attr2 × multiplier + constant

You take the value of attr2 on the widget view2, multiply it by a multiplier, add a constant, and apply the value to the attribute attr1 on the widget view1. You don’t even need the view2.attr2 term; for instance:

view1.attr1 = constant

is a perfectly valid constraint.

You also don’t need to use an equality; these two constraints:

view1.width ≥ 180
view1.width ≤ 250

specify that the width of view1 must be in the [ 180, 250 ] range, extremes included.
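
For reference, the same two inequalities can also be expressed programmatically with the GJS API shown in the first Emeus article (reproduced further down this page). This is only a sketch: layout stands for an EmeusConstraintLayout instance, view1 for a child already packed into it, and it assumes the GE and LE relations are available in the same way as the eq/ge relations used in the examples elsewhere on this page:

    // view1.width >= 180 and view1.width <= 250,
    // i.e. the width is constrained to the [ 180, 250 ] range
    layout.add_constraints([
        new Emeus.Constraint({ target_object: view1,
                               target_attribute: Emeus.ConstraintAttribute.WIDTH,
                               relation: Emeus.ConstraintRelation.GE,
                               constant: 180.0 }),
        new Emeus.Constraint({ target_object: view1,
                               target_attribute: Emeus.ConstraintAttribute.WIDTH,
                               relation: Emeus.ConstraintRelation.LE,
                               constant: 250.0 }),
    ]);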

Layout

A layout, then, is just a pile of linear equations that describe the relations between each element. So, if we have a simple grid:

+--------------------------------------------+
| super                                      |
|  +----------------+   +-----------------+  |
|  |     child1     |   |     child2      |  |
|  |                |   |                 |  |
|  +----------------+   +-----------------+  |
|                                            |
|  +--------------------------------------+  |
|  |               child3                 |  |
|  |                                      |  |
|  +--------------------------------------+  |
|                                            |
+--------------------------------------------+

We can describe each edge’s position and size using constraints. It’s important to note that there’s an implicit “reading” order that makes it easier to write constraints; in this case, we start from left to right, and from top to bottom. Generally speaking, it’s possible to describe constraints in any order, but the Cassowary solving algorithm is geared towards the “reading” order above.

Each layout has some implicit constraints already available. For instance, the “trailing” edge is equal to the leading edge plus the width; the bottom edge is equal to the top edge plus the height; and the center point along each axis is equal to the leading or top edge plus half the width or height. These constraints help solve the layout, as well as provide additional values to other constraints.
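
Written out in the notation used in the Representation section below, these implicit constraints for a generic view are:

  • view.end = view.start + view.width
  • view.bottom = view.top + view.height
  • view.center-x = view.start + view.width / 2
  • view.center-y = view.top + view.height / 2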

So, let’s start.

From the first row:

  • the leading edge of the super container is the same as the leading edge of child1, minus a padding
  • the trailing edge of child1 is the same as the leading edge of child2, minus a padding
  • the trailing edge of child2 is the same as the trailing edge of the super container, minus a padding
  • the width of child1 is the same as the width of child2

From the second row:

  • the leading edge of the super container is the same as the leading edge of child3, minus a padding
  • the trailing edge of child3 is the same as the trailing edge of the super container, minus a padding

From the first column:

  • the top edge of the super container is the same as the top edge of child1, minus a padding
  • the bottom edge of child1 is the same as the top edge of child3, minus a padding
  • the bottom edge of the super container is the same as the bottom edge of child3, minus a padding
  • the height of child3 is the same as the height of child1

From the second column:

  • the top edge of the super container is the same as the top edge of child2, minus a padding
  • the bottom edge of child2 is the same as the top edge of child3, minus a padding
  • the bottom edge of the super container is the same as the bottom edge of child3, minus a padding
  • the height of child3 is the same as the height of child2

As you can see, there are some redundancies; these are necessary to ensure that the layout is fully resolved, though obviously there are some properties of the elements of the layout that implicitly eliminate some results. For instance, if child3's height is the same as child1's, and child1 lies on the same row as child2 and is an axis-aligned rectangle, then it immediately follows that child3 must have the same height as child2 as well. It's important to note that, from a solver perspective, there are only values, not boxes, and you could use the solver with any kind of geometric shape; only the constraints give us the information on what those shapes should be. It's also easier to start from a fully constrained layout and then remove constraints, than to start from a loosely constrained layout and add constraints until it's stable.

Representation

From the text description we can now get into a system of equations:

  • super.start = child1.start - padding
  • child1.end = child2.start - padding
  • super.end = child2.end - padding
  • child1.width = child2.width
  • super.start = child3.start - padding
  • super.end = child3.end - padding
  • super.top = child1.top - padding
  • child1.bottom = child3.top - padding
  • super.bottom = child3.bottom - padding
  • child3.height = child1.height
  • super.top = child2.top - padding
  • child2.bottom = child3.top - padding
  • child3.height = child2.height

Apple, in its infinite wisdom and foresight, decided that this form is still too verbose. After looking at the Perl format page for far too long, Apple engineers came up with the Visual Format Language, or VFL for short.

Using VFL, the constraints above become:

H:|-(padding)-[child1(==child2)]-(padding)-[child2]-(padding)-|
H:|-(padding)-[child3]-(padding)-|
V:|-(padding)-[child1(==child3)]-(padding)-[child3]-(padding)-|
V:|-(padding)-[child2(==child3)]-(padding)-[child3]-(padding)-|

Emeus, incidentally, ships with a simple utility that can take a set of VFL format strings and generate GtkBuilder descriptions that you can embed into your templates.

Change

We’ve used a fair number of constraints, or four lines of fairly cryptic ASCII art, to basically describe a non-generic GtkGrid with two equally sized horizontal cells on the first row, and a single cell with a column span of two; compared to the common layout managers inside GTK+, this does not seem like a great trade off.

Except that we can describe any other layout without necessarily having to pack widgets inside boxes, with margins and spacing and alignment rules; we also don’t have to change the hierarchy of the boxes if we want to change the layout. For instance, let’s say that we want child3 to have a different horizontal padding, and a minimum and maximum width; we just need to change the constraints involved in that row:

H:|-(hpadding)-[child3(>=250,<=500)]-(hpadding)-|

Additionally, we now want to decouple child1 and child3 heights, and make child1 a fixed height item:

V:|-(padding)-[child1(==250)]-(padding)-[child3]-(padding)-|

And make the height of child3 move within a range of values:

V:|-(padding)-[child2]-(padding)-[child3(>=200,<=300)]-(padding)-|

For all these cases we’d have to add intermediate boxes in between our children and the parent container — with all the issues of theming and updating things like GtkBuilder XML descriptions that come with that.

Future

The truth is, though, that describing layouts in terms of constraints is another case of software engineering your way out of talking with designers; it’s great to start talking about incremental simplex solvers, and systems of linear equations, and ASCII art to describe your layouts, but it doesn’t make UI designers really happy. They can deal with it, and having a declarative language to describe constraints is more helpful than parachuting them into an IDE with a Swiss army knife and a can of beans, but I wouldn’t recommend it as a solid approach to developer experience.

Havoc wrote a great article on how layout management API doesn’t necessarily have to suck:

  • we can come up with a better, descriptive API that does not make engineers and designers cringe in different ways
  • we should have support from our tools, in order to manipulate constraints and UI elements
  • we should be able to combine boxes (which are easy to style) and constraints (which are easy to lay out) together in a natural and flexible way

Improving layout management should be a goal in the development of GTK+ 4.0, so feel free to jump in and help out.

by ebassi at November 01, 2016 05:07 PM

October 17, 2016

Emmanuele Bassi

Constraints

GUI toolkits have different ways to lay out the elements that compose an application’s UI. You can go from the fixed layout management — somewhat best represented by the old ‘90s Visual tools from Microsoft; to the “springs and struts” model employed by the Apple toolkits until recently; to the “boxes inside boxes inside boxes” model that GTK+ uses to this day. All of these layout policies have their own distinct pros and cons, and it’s not unreasonable to find that many toolkits provide support for more than one policy, in order to cater to more use cases.

For instance, while GTK+ user interfaces are mostly built using nested boxes to control margins, spacing, and alignment of widgets, there’s a sizeable portion of GTK+ developers that end up using GtkFixed or GtkLayout containers because they need fixed positioning of child widgets — until they regret it, because now they have to handle things like reflowing, flipping contents in right-to-left locales, or font size changes.

Additionally, most UI designers do not tend to “think with boxes”, unless it’s for Web pages, and even in that case CSS affords a certain freedom that cannot be replicated in a GUI toolkit. This usually results in engineers translating a UI specification made of ties and relations between UI elements into something that can be expressed with a pile of grids, boxes, bins, and stacks — with all the back and forth, validation, and resources that the translation entails.

It would certainly be easier if we could express a GUI layout in the same set of relationships that can be traced on a piece of paper, a UI design tool, or a design document:

  • this label is at 8px from the leading edge of the box
  • this entry is on the same horizontal line as the label, its leading edge at 12px from the trailing edge of the label
  • the entry has a minimum size of 250px, but can grow to fill the available space
  • there’s a 90px button that sits between the trailing edge of the entry and the trailing edge of the box, with 8px between either edge and itself

Sure, all of these constraints can be replaced by a couple of boxes; some packing properties; margins; and minimum preferred sizes. If the design changes, though, like it often does, reconstructing the UI can become arbitrarily hard. This, in turn, leads to pushback to design changes from engineers — and the cost of iterating over a GUI is compounded by technical inertia.

For my daily work at Endless I’ve been interacting with our design team for a while, and trying to get from design specs to applications more quickly, and with less inertia. Having CSS available allowed designers to be more involved in the iterative development process, but the CSS subset that GTK+ implements is not allowed — for eminently good reasons — to change the UI layout. We could go “full Web”, but that comes with a very large set of drawbacks — performance on low end desktop devices, distribution, interaction with system services being just the most glaring ones. A native toolkit is still the preferred target for our platform, so I started looking at ways to improve the lives of UI designers with the tools at our disposal.

Expressing layout through easier to understand relationships between its parts is not a new problem, and as such it does not have new solutions; other platforms, like the Apple operating systems, or Google’s Android, have started to provide this kind of functionality — mostly available through their own IDE and UI building tools, but also available programmatically. It’s even available for platforms like the Web.

What many of these solutions seem to have in common is using more or less the same solving algorithm — Cassowary.

Cassowary is:

an incremental constraint solving toolkit that efficiently solves systems of linear equalities and inequalities. Constraints may be either requirements or preferences. Client code specifies the constraints to be maintained, and the solver updates the constrained variables to have values that satisfy the constraints.

This makes it particularly suited for user interfaces.

The original implementation of Cassowary was written in 1998, in Java, C++, and Smalltalk; since then, various other re-implementations surfaced: Python, JavaScript, Haskell, slightly-more-modern-C++, etc.

To that collection, I’ve now added my own — written in C/GObject — called Emeus, which provides a GTK+ container and layout manager that uses the Cassowary constraint solving algorithm to compute the allocation of each child.

In spirit, the implementation is pretty simple: you create a new EmeusConstraintLayout widget instance, add a bunch of widgets to it, and then use EmeusConstraint objects to determine the relations between children of the layout:

simple-grid.js (lines 89-170):
        let button1 = new Gtk.Button({ label: 'Child 1' });
        this._layout.pack(button1, 'child1');
        button1.show();

        let button2 = new Gtk.Button({ label: 'Child 2' });
        this._layout.pack(button2, 'child2');
        button2.show();

        let button3 = new Gtk.Button({ label: 'Child 3' });
        this._layout.pack(button3, 'child3');
        button3.show();

        this._layout.add_constraints([
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.WIDTH,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.WIDTH }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.START,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.START,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.END,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.END,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_attribute: Emeus.ConstraintAttribute.TOP,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -8.0 }),
            new Emeus.Constraint({ target_object: button1,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button2,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button3,
                                   source_attribute: Emeus.ConstraintAttribute.TOP,
                                   constant: -12.0 }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button1,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.HEIGHT,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_object: button2,
                                   source_attribute: Emeus.ConstraintAttribute.HEIGHT }),
            new Emeus.Constraint({ target_object: button3,
                                   target_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   relation: Emeus.ConstraintRelation.EQ,
                                   source_attribute: Emeus.ConstraintAttribute.BOTTOM,
                                   constant: -8.0 }),
        ]);

A simple grid

This obviously looks like a ton of code, which is why I added the ability to describe constraints inside GtkBuilder XML:

centered.ui (lines 28-45):
            <constraints>
              <constraint target-object="button_child"
                          target-attr="center-x"
                          relation="eq"
                          source-object="super"
                          source-attr="center-x"
                          strength="required"/>
              <constraint target-object="button_child"
                          target-attr="EMEUS_CONSTRAINT_ATTRIBUTE_CENTER_Y"
                          relation="eq"
                          source-object="super"
                          source-attr="center-y"/>
              <constraint target-object="button_child"
                          target-attr="width"
                          relation="ge"
                          constant="200"
                          strength="EMEUS_CONSTRAINT_STRENGTH_STRONG"/>
            </constraints>

Additionally, I’m writing a small parser for the Visual Format Language used by Apple for their own auto layout implementation — even though it does look like ASCII art of Perl format strings, it’s easy to grasp.

The overall idea is to prototype UIs on top of this, and then take advantage of GTK+’s new development cycle to introduce something like this and see if we can get people to migrate from GtkFixed/GtkLayout.

by ebassi at October 17, 2016 05:30 PM

October 07, 2016

Tomas Frydrych

The MTB Impact Myth

The MTB Impact Myth

The mountain bike is a great iteration in the evolution of the bicycle, opening a whole new world of possibilities as well as challenges. There are places where this is undoubtedly more true than others, and Scotland is, unquestionably, such a place. Not simply because of our long standing tradition of access, but because much of our spectacular landscape lends itself well to what the mountain bike has to offer.

Folk who don't ride mountain bikes sometimes (often?) don't get it. I do. Over the years, I have done a fair bit of riding of all kinds: the trail centres and the local skooshes, the winter mud and darkness, the long days of multiday expeditions into some of the more remote parts of Scotland, sometimes in sunshine, sometimes in torrential rain, the alpine summer holidays in search of dust, the amazing MegaAvalanche extravaganza. I have even earned some titanium bits somewhere along the way.

And like many mountain bikers, I have for years believed that the environmental impact of a person on a mountain bike is more or less identical to that of a person on foot. Well, I have since lost faith in that entrenched dogma, it is not supported by my day to day experience, and as it happens, it is not supported by the research either.

But There are Studies!

The reason most mountain bikers claim the bike has a similar environmental impact to a pedestrian is that back in the prehistoric times of mountain biking a couple of studies were done that came to that conclusion, specifically a 1994 study from Montana by Wilson and Seney[1], and a 1997 Canadian study by Thurston and Reader[2]. The former of these has little relevance to the Scottish situation, or the issues as they stand today. It examines the impact of different users on pre-existing, hardened mountain trails -- in Scotland the equivalent might be recent hard packed path restoration work, but in this context it perhaps makes sense to talk about the cost of maintenance, but not of environmental impact. In addition to that, the study is seriously methodologically flawed (see the critique in Pickering, 2010[3]).

The study by Thurston and Reader[2] looks at the impact of hikers and bikes on the pristine floor of a deciduous forest. The attempt to quantify the impact in this study is rigorous (they essentially count individual blades of grass), but unfortunately the very desire to exactly quantify the impact leads to a setup of the load test that does not at all reflect how bikes are ridden. The test areas are just 4m long, with an extra 0.5m buffer at either end (i.e., about 2 bike lengths), 1m wide, with a marked centre line to follow, as they roll the bikes down:

Bikers traveled at a moderate speed, usually allowing bicycles to roll down lanes without pedaling where the slope would allow. Brakes were applied as needed to keep bicycles under control. Over rough terrain, some firm braking, occasional skidding, and some side-to-side movement of the front tire was required to maintain balance until a path developed. Once participants reached the bottom of a lane, they would turn and circle around the nearest end of the block back to the top of the lane to make a second pass.

Considering the detail with which the experiment setup is documented up to this point, the failure to set parameters for the riding is striking. The 'moderate speed' declaration needs to be interpreted in the light of 'side-to-side movement of the front tire was required to maintain balance', and the fact that the runs are just 5m long on not extremely steep slopes -- it is reasonable to conclude that by today's mountain biking standards these test runs are extremely slow.

The 4m length of the test area is entirely inadequate to capture bike behaviour -- there is an implicit assumption that the bike impact is homogeneous along its run, which is simply not the case: mountain bike riding is extremely dynamic. There are no corners, yet the area immediately before and in a corner is where the greatest impact of a bike is generally observed (i.e., braking bumps and skid marks). Similarly there are no uphill runs, for 'bikers could not make uphill passes, even in the lowest of 21 gears' -- this happens to be a normal part of mountain biking, and a realistic scenario would have had the biker make their best effort and then push. There is no defined braking protocol, and the 'firm braking' has to be seen in the context of what has been said about the speed above. The study has also been restricted to dry days only, which eliminates the situation when the ground is most vulnerable.

Not to beat about the bush, this is not how an average mountain biker rides their bike; bikes are invariably ridden fast, braked and cornered hard, and wheels dug in or spun on climbs -- the best that can be said about this study is that it represents the most optimistic case of a cautious rider on a dry day, and that in itself should raise a few alarm bells.

Pickering's 2010[3] survey shows that up to 2010 there had been only one other study on the comparative impact of pedestrian and bike traffic, done in Australia (Chiu and Kriwoken, 2003[4]) -- this study suffers from similar limitations to its two predecessors; it takes place on a hardened track (a fire road), and the bike load application is not adequately defined beyond 'recreational riding'.

There is a more recent Australian study by Pickering, 2011[5], that looks at the impact of mountain biking on subalpine grass. It uses a very similar experimental setup to that used by Thurston and Reader (in spite of the earlier Pickering survey[3] noting the flaws in the methodology), on an 8 degree slope. This study has also been limited to dry days. As such it again represents the most optimistic case. This study is, however, interesting for two reasons. The results show that mountain biking impact on vegetation scales worse than that of a pedestrian, as by 500 passes the bike lanes were significantly worse off. It further shows that riding up or down hill is considerably more damaging than riding on the flat; I shall return to this point later.

So here you go, those are the studies, and, no, the results are not too encouraging. They show that at best mountain bikes can hope for an equivalent impact to pedestrians when ridden with the greatest care in the dry. Nevertheless, this does not stop the mountain biking press from peddling the myth and worse. As recently as November of last year, the MBR claimed that 'Research reveals walkers do more damage to trails than mountain bikers'. This is complete BS. The basis for the claim is a 2009 USGS study[6], which, however, was not conducted in a way that would allow meaningful comparison between different user groups. This point is clearly made in the paper:

In contrast, mountain biking, at 3.5 m³/km, has the lowest estimated level of soil loss, about 30% as much as on hiking trails. This finding reflects a limited mileage of trails where mountain biking was the predominant use (3.1 km), and these trails received low to moderate levels of use. [emphasis mine]

This point is also reiterated in Pickering's 2010 review of research[3], but neither of Pickering's papers is mentioned -- one can only wonder whether this is because the MBR writer is completely inept at Googling, or whether it has anything to do with the fact that the reality does not fit his agenda (and headline).

Empirical Observations from the Real World

As I have noted above, the main issue with all of the rigorous studies is that they fail to create realistic riding conditions for their experiments. This is understandable. In order to get realistic data the research would have to be carried out on a real trail, considerably longer than the 4m test lanes, using real riders, preferably unaware of the study. This, however, makes quantitative assessment very hard, never mind comparison with other types of use. What we are left with is having to make judgements based on empirical observations in the real world.

So here are some of mine, mostly from the Dumyat hill above Stirling (for those interested, I also have 65 images that capture virtually all of the erosion on the west side of the hill, with comments, at Flickr). About four years ago I picked up an injury that kept me off the bike for the better part of six months. During this time I started hill running to keep the weight from piling on, and quickly got hooked. Much of my initial running was done in the exact same places around Stirling I used to ride, and in the Ochils, with Dumyat, the closest reasonably sized hill to where I live, becoming a regular destination for my midweek runs.

Compared to its neighbours in the Ochils, Dumyat is quite a peculiar hill: a lump of loosely bonded volcanic rocks, covered in a thin layer of soil (at places no more than a couple of inches thick), which in turn is held together by vegetation, primarily grass and some heather. In its natural form, the hill is pretty resilient to erosion. However, when the vegetation is damaged, water quickly washes off the top soil, and once the rock is exposed, it starts to disintegrate rapidly into gravel, forming ever bigger and deeper water channels. By this point the erosion is well beyond the hill's ability to self repair. (Just to be clear on this point, the bulk of the erosion on the hill, i.e., the actual moving of soil, is caused by water run off, not by boots or wheels. The character of the rock is such that once exposed it does not require any further mechanical disturbance to erode. This makes the pace and physical scale of the erosion rather large relative to the overall visitor numbers.)

The overall pattern of vegetation stripping followed by rapid water erosion is one of the main erosion patterns that can be observed on many of Scotland's more popular hills, here just in a somewhat accelerated form. Being heavily used by both walkers and bikes, Dumyat thus makes a possibly useful case study with a view toward the bigger picture.

The thing about running off road, particularly as a novice, is that you are very acutely aware of what is under your feet; far more than either walking or cycling. I did not really set out at any point to investigate erosion on Dumyat, I simply started noticing more clearly than before where the erosion is, and, over time, also how it forms. Over the next few months I made a couple of observations which forced me to change my view on the bike impact.

The first of these was noticing that there are significant differences in the way in which pedestrian and wheeled traffic impacts the vegetation. A pedestrian exerts a primarily downward crushing force. When this force is applied repeatedly in the same location, the area eventually gets de-vegetated, creating a foot shaped hollow, or rather a series of such hollows forming staggered steps. As the pressure continues to be applied, these hollows enlarge, until they form a continuous erosion scar. Notably, while the hollows remain in their discrete state, the area continues to be fairly resistant to water erosion, which only kicks in in earnest once the individual hollows merge.

In contrast, a bicycle wheel primarily exerts a tangential drag force, generated either by accelerating (i.e., pedalling), decelerating (i.e., braking), or cornering (i.e., centrifugal force). Pickering, 2011, as mentioned above, noted that bikes cause considerably more damage when going up or down than on the level. I believe this is the reason: the wheel does not simply crush vegetation, it pulls on it, tending to damage roots faster. In real off road riding, the bike nearly always generates drag, for freewheeling in a straight line without any braking is a fairly rare occurrence (on Dumyat there is only one longer section that allows for this, and it also happens to be one that shows very limited signs of erosion). The other important, if self-evident, difference from the pedestrian impact is that the wheel generates a single continuous trace. This, unlike the pedestrian indentations, tends to be subject to additional water run off immediately.

One of the obvious fall-outs from this is that at the start of, and well into, the erosion process we can reliably differentiate the erosion trigger. Over the last 3 years, I have only observed a handful of new areas of erosion that were clearly pedestrian triggered. At the same time, any erosion scar that is narrower than around 45cm can be unambiguously classed as bike-triggered. My conservative estimate is that bikes account for somewhere around 80% of the vegetation stripping on the hill that triggers subsequent water erosion, even though the pedestrian numbers seem considerably higher than the number of bike users.

(The actual numbers are hard to estimate without someone sitting there for a couple of weeks with a clicker. My mate Callum reckons bikes only account for around 11% of the hill users. I expect this number is somewhat understated. My own experience shows that the user make-up is very weather dependent, with the walkers being mainly fair weather users; in good weather conditions, a 10% estimate for the bikes might be in the right ballpark. However, on a clagged out autumn day, the pedestrians tend to be reduced mainly to the local hill runners, whose numbers are fairly limited even compared to the local mountain bikers. The biker numbers are less affected by weather: they come for the exceptional quality of the riding, not the views, and tend to ride all year round as well as after dark. I think it is reasonable to say that the pedestrians outnumber bikes by several times. This, however, is not good news for the bikes, in view of what was said in the previous paragraph.)

The other observation I made was regarding the pace with which the erosion develops to the 'beyond self repair' state. To my surprise, erosion initiated by pedestrian traffic develops relatively slowly -- because the discrete hollows tend to resist water erosion, it takes a large number of repetitions before a continuous scar forms. In some cases I have observed on Dumyat, as well as elsewhere, the process from the first de-vegetated hollows appearing to the forming of a continuous scar can take a year or more.

The situation with bike initiated erosion is very different. When the ground is saturated with water (which is 6-8 months of the year), it can take just a single, one-off bike track to remove most of the vegetation and kick-start the water erosion process. In one particular place I have observed how a single drag mark in the grass turned into a 'beyond self-repair' two inch wide, deep scar in a matter of weeks, and then rapidly progressed from there.

The empirical observations I have made over recent years make me believe that the actual bike impact in the typical wet and soggy Scottish conditions is considerably worse than the best case scenarios of the idealised studies. The relative damage is, of course, hard to quantify, but an educated guess can be made by noting the visual disturbance caused by repetitive use by each of the two user groups. Having observed the condition of the ground after both hill running races and my mates' bike races taking place on similar types of ground, I am inclined to believe that the impact of a single bike on surface vegetation might be as much as an order of magnitude bigger than that of a pedestrian, i.e., that a 100-strong field in a hill running race has about the same impact as 10 bikes in the same sort of environment.

As far as I am concerned, the relative comparison does not really matter. I have simply stated my observations this way because that is how the mountain biking community chooses to frame the issue. Also, this is not about Dumyat, a rather insignificant hill used for grazing, on the boundary between an urban environment on one side and heavily overgrazed hills (a chunk of which has recently been turned over to commercial forestry) on the other. This puts the erosion somewhat into perspective. This is about discernible patterns of damage, which I am noting with concern at other, more environmentally and culturally significant sites (e.g., Ben Lawers, the Cairngorms). Riding big hills on bikes has in recent years become very popular, and I understand why; but this makes it important that the mountain biking community understands the reality of bike impact on the mountain environment, and that it commits to the 'responsible' in Responsible Access. Sometimes we have to adjust the way in which we access the 'wild' places. At times a bike is simply not appropriate for a given environment and/or conditions. At times a runner is not appropriate either (FWIW, I now tend to stay away from Dumyat during the worst of the wet season when even the fell shoes are just too much).

PS: Well done for making it this far. :)

Literature Cited

[1] Wilson, J.P., Seney, J.P., 1994. Erosional impacts of hikers, horses, motorcycles, and off-road bicycles on mountain trails in Montana. Mountain Research and Development 14, 77–88.

[2] Thurston, E., Reader, R.J., 2001. Impacts of experimentally applied mountain biking and hiking on vegetation and soils of a deciduous forest. Environmental Management 27, 397–409.

[3] Pickering, C.M., Hill, W., Newsome, D., Leung, Y.-F., 2010. Comparing hiking, mountain biking and horse riding impacts on vegetation and soils in Australia and the United States of America. Journal of Environmental Management 91, 551–562.

[4] Chiu, L., Kriwoken, L., 2003. Managing recreational mountain biking in Wellington Park, Tasmania, Australia. Annals of Leisure Research 6, 339–361.

[5] Pickering, C.M., Rossi, S., Barros, A., 2011. Assessing the impacts of mountain biking and hiking on subalpine grassland in Australia using an experimental protocol. Journal of Environmental Management 92, 3049–3055.

[6] Olive, N.D., Marion, J.L., 2009. The influence of use-related, environmental, and managerial factors on soil loss from recreational trails. Journal of Environmental Management 90, 1483–1493.

by tf at October 07, 2016 12:58 PM

September 21, 2016

Emmanuele Bassi

Who wrote GTK+ and more

I’ve just posted on the GTK+ development blog the latest article in the “who writes GTK+” series. Now that we have a proper development blog, this is the kind of content that should be present there instead of my personal blog.

If you’re not following the GTK+ development blog, you really should do so now. Additionally, you should follow @GTKToolkit on Twitter and, if you’re into niche social platforms, a kernel developer, or a Linus groupie, there’s the GTK+ page on Google Plus.

Additionally, if you’re contributing to GTK+, or using it, you should consider writing an article for the developers blog; just contact me on IRC (I’m ebassi on irc.gnome.org) or send me an email if you have an idea about something you worked on, a new feature, some new API, or the internals of the platform.

by ebassi at September 21, 2016 10:19 AM

September 08, 2016

Tomas Frydrych

Glen Affric: Carn Eighe Horseshoe

Glen Affric: Carn Eighe Horseshoe

The Loch Ness Marathon is approaching fast, and with it my turn to be the support crew. Because of the race logistics there is a fair bit of hanging around ... but Glen Affric being just down the road, I know the perfect way to 'kill' the time -- the Carn Eighe loop is just the right length to be back at the finish line in a good time! A run along a great natural line, without any significant technical or navigational challenges, yet offering stunning views, on the edge of one of the more remote feeling parts of Scotland.

From the carpark at NH 216 242 head up the track following the river, then just after crossing Allt Toll Easa take the path following its W bank. After about half a km a path not on the map heads up the Creag na h-Inghinn ridge -- take this to the Tom a Choinnich summit (NB: Allt Toll Easa is the last available water source for the whole of the horseshoe).

Glen Affric: Carn Eighe Horseshoe

Follow the obvious ridge line onto An Leath-chreag, Carn Eighe and Mam Sodhail. From here continue along the E ridge onto Sgurr na Lapaich, and descend along the SE ridge into the glen to pick up the path toward Affric Lodge, and from here along the track/road back to the start.

25km / 1,700m ascent

Creag Chaorainn Variation

The Sgurr na Lapaich descent is not the only option on this run, as there are two other inviting, even irresistible, ridges just little farther to the SW of Mam Sodhail.

Glen Affric: Carn Eighe Horseshoe

When time (and legs) permit, it is well worth following the SW ridge over the 1108m, 1068m, and 1055m points as it curves around the spectacular Coire Coulavie, for it offers the single best view of Loch Affric that there is to be had. The ground is a little bit more technical, as there is no path through the rocky, but generally enjoyable, terrain.

To reach the floor of Glen Affric, negotiate your way down Creag a'Chaorainn as the ridge curves NE onto the flat area at around the 750m contour line. From here a direct descent S is possible; however, the grassy slope is very steep, the ground is pocketed and sodden, and a lot of care is required. I suspect a better option might be to descend N into the coire and then pick up the stalker's path (marked on the OS map) below An Tudair Beag.

The path on the floor of Glen Affric provides enjoyable running through a lovely Caledonian pine woodland, worth a visit in its own right. Follow this toward Affric Lodge and from there back to start along the road.

29km / 1,950m ascent

Glen Affric: Carn Eighe Horseshoe

[Updated 27/09/2016 -- I made it back with whole 5 minutes to spare!]

by tf at September 08, 2016 07:36 PM

September 03, 2016

Tomas Frydrych

Round of Camban Bothy

Round of Camban Bothy

Bothies are, in my mind at least, a national treasure, capturing something of the very essence of the Scottish character and generous attitude to strangers. They are more than shelters: they provide for chance encounters of like-minded folk, to share stories (and drams) the old-fashioned way, their logbooks a testimony to whatever it is that drives us out of our sterile urban existences. And for the runner, they are an excellent resource for multi-day trips, cutting down on the amount of kit required, as well as extending the window of opportunity beyond the summer months.

The Scottish north west is one of the most spectacular places I know, the Kintail area is no exception, and the Camban bothy is bang in the heart of it. The hills and glens around it offer a genuine sense of remoteness, rare in that it persists even on the summits, for obvious signs of human activity in this area are few. It is only when one registers the names on the map -- Kintail Forest, Inverinate Forest, Glenaffric Forest, West Benula Forest -- that the full extent to which our species has left its mark on this landscape dawns. Nevertheless, this area is a real paradise for the walker and runner alike, offering some longer ventures to both.

The run described here is a two day outing, taking in some of the iconic hills of this area, the awe-inspiring Falls of Glomach, and spending a night in the aforementioned Camban bothy. I should say at the outset that this is not a run for an outdoor novice; ability to navigate reliably regardless of weather is a necessary pre-requisite, as is being adequately equipped for two days and a night in these hills. At the same time multiple manageable single day outings in this area are perfectly feasible.

The starting point for this trip is the tiny hamlet of Camas-Luinie. My recommendation would be to book a place in the Hiker's Bunk House for the night before your run. The bunk house is small (IIRC sleeps 6), basic but cosy, with a wood burner, well equipped kitchen, and most friendly owners. Using the bunk house solves two big problems -- there is no public space where you could leave a car, and the Camas-Luinie area is completely unsuitable for wild camping.

Day 1

From the end of the public road take the path heading east through Glen Elchaig, crossing the Elchaig river via a bridge at a cluster of houses. Continue east along the undulating landy track past Loch na Leitreach to Carnach. From here take the path heading SE. After about a kilometre the path splits (NH 0311 279), only to rejoin later (NH 040 276) -- you can take either fork, the W one being the more scenic option, then follow the path to where it ends near a small lochan at the foot of Creag Ghlas. (NB: if you do take the W fork, it is easy to get carried away and miss the NE turn, as the path heading S to Loch Lon Mhurchaidh runs very well!)

Round of Camban Bothy

Gain the ridge, and follow it in a southerly direction over Stuc Fraoch Coire, Stuc Mor and Stuc Beag to Sgurr nan Ceathramhnan; if Munros are your thing, you might want to make the detour to bag Mullach na Dheiragain, otherwise follow the ridge east onto Stob Coire na Cloiche. (However, please note that in poor visibility the descent from Sgurr nan Ceathramhnan requires care, as it is easy to pick up the NE ridge by mistake. Also, when the ridge is covered in snow, the section between the west and main summits, as well as the start of the descent, is fairly serious, with some steep front pointing down and exposed narrow ridge traversing -- once on this hill, your options are very limited; the significant altitude also means snow stays on these hills well into the spring.)

Round of Camban Bothy

Again, you might want to nip up An Socach, otherwise head directly down the excellent path to Alltbeithe (Youth Hostel), and from here take the path SW to Camban.

~28km / 1,700m ascent

Round of Camban Bothy

Day 2

Head directly up the hill behind the bothy to gain the ridge, and follow it over Sgurr a'Dubh Doire onto the summit of Beinn Fhada, enjoying the view of the Five Sisters of Kintail. From the summit descend along the N ridge to the bealach below (S of) Meall a' Bheallaich. From here the obvious, pure, line is to continue over it, down to Bealach an Sgairne and up A Ghlas-bheinn, then follow the ridge over Meall Dubh to pick up the Bealach na Sroine path -- this is undoubtedly possible, but I have no idea what the ground is like ...

Round of Camban Bothy

While on this section in mid April this year, the weather deteriorated badly and I was forced to retreat from the high ground, taking a path W from the bealach and contouring the E side of Coire an Sgairne -- this path is excellent, and the coire is very scenic, so this might be a worthwhile alternative for getting into Bealach an Sgairne. I can also say with some authority that the short gap between the two most westerly forestry roads on the slopes of A' Mhac is not to be recommended: the ground is very steep, and there are some rusty fences and Sitka bashing involved -- it's not worth the 120m altitude saved.

Whichever way you reach the Bealach na Sroine path, take it past the Falls of Glomach back onto the track alongside the River Elchaig, and to Camas-Luinie.

~22km / 1800m ascent

by tf at September 03, 2016 08:50 PM

August 27, 2016

Emmanuele Bassi

GSK Demystified (III) — Interlude

See the the tag for the GSK demystified series for the other articles in the series.

There have been multiple reports after GUADEC about the state of GSK, so let’s recap a bit by upholding the long-standing tradition of using a FAQ format as a rhetorical device.


Q: Is GSK going to be merged in time for 3.22?

A: Short answer: no.

Long-ish answer: landing a rewrite of how GTK renders its widgets near the end of a stable API cycle, when a bunch of applications that tend to eschew GTK+ itself for rendering their content — like Firefox or LibreOffice — finally, after many years, ported to GTK+ 3, seemed a bit sadistic on our part.

Additionally, GSK still has some performance issues when it comes to large or constantly updating UIs; try running, for instance gtk3-widget-factory on HiDPI using the wip/ebassi/gsk-renderer branch and marvel at the 10 fps we achieve currently.

Q: Aside from performance, are there any other concerns?

A: Performance is pretty much the biggest concern we found. We need to reduce the number of rasterizations we perform with Cairo, and we need better ways to cache and reuse those rasterizations across frames; we really want all buttons with the same CSS state and size to be rasterized once, for instance, and just drawn multiple times in their right place. The same applies to things like icons. Caching text runs and glyphs would also be a nice win.

The nice bit is that, with a fully retained render tree, now we can actually do this.
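To make the caching idea a little more concrete, here is a minimal sketch of the kind of cross-frame cache described above, keyed by CSS state and size; the structure and all the names are purely illustrative, none of this is actual GSK API, and it only uses plain GLib and Cairo:

  /* Illustrative only: a hypothetical cross-frame cache of rasterized
   * surfaces, keyed by CSS state and size. Not real GSK API; it merely
   * sketches the idea of rasterizing once and reusing across frames. */
  #include <cairo.h>
  #include <glib.h>

  static GHashTable *surface_cache = NULL;

  static cairo_surface_t *
  lookup_or_rasterize (const char        *css_state,
                       int                width,
                       int                height,
                       cairo_surface_t * (* rasterize) (int w, int h))
  {
    char *key;
    cairo_surface_t *surface;

    if (surface_cache == NULL)
      surface_cache = g_hash_table_new_full (g_str_hash, g_str_equal,
                                             g_free,
                                             (GDestroyNotify) cairo_surface_destroy);

    /* same CSS state and size: reuse the rasterization from a previous frame */
    key = g_strdup_printf ("%s|%dx%d", css_state, width, height);
    surface = g_hash_table_lookup (surface_cache, key);
    if (surface != NULL)
      {
        g_free (key);
        return surface;
      }

    /* rasterize with Cairo once, and keep it for the frames that follow */
    surface = rasterize (width, height);
    g_hash_table_insert (surface_cache, key, surface);

    return surface;
  }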

The API seems to have survived contact with the widget drawing code inside GTK+, so it’s a matter of deciding how much we need to provide in terms of convenience API for out-of-tree widgets and containers. The fallback code is in place, right now, which means that porting widgets can proceed at its own pace.

There are a few bugs in the rendering code, like blend modes; and I still want to have filters like blur and color conversions in the GskRenderNode API.

Finally, there’s still the open question of the mid-level scene graph API, or GskLayer, that will replace Clutter and Clutter-GTK; the prototype is roughly done, but things like animations are not set in stone due to lack of users.

Q: Is there a plan for merging GSK?

A: Yes, we do have a plan.

The merge window mostly hinges on when we’re going to start with a new development cycle for the next API, but we decided that as soon as the window opens, GSK will land. Ideally we want to ensure that, by the time 4.0 rolls around, there won’t be any users of GtkWidget::draw left inside GNOME, so we’ll be able to deprecate its use, and applications targeting the new stable API will be able to port away from it.

Having a faster, more featureful, and more optimized rendering pipeline inside GTK+ is a pretty good new feature for the next API cycle, and we think that the port is not going to be problematic, given the amount of fallback code paths in place.

Additionally, by the time we release GTK+ 4.0, we’ll have a more battle-tested API to replace Clutter and Clutter-GTK, allowing applications to drop a dependency.

Q: How can I help?

A: If you’re a maintainer of a GTK+ library or application, or if you want to help out the development of GTK+ itself, then you can pick up my GSK development branch, fork it off, and look at porting widgets and containers. I’m particularly interested in widgets using complex drawing operations. See where the API is too bothersome, and look for patterns we can wrap into convenience API provided by GTK+ itself. For instance, the various gtk_render_* family of functions are a prime candidate for being replaced by equivalent functions that return a GskRenderNode instead.
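To illustrate the kind of convenience wrapper I have in mind, here is a rough sketch; gtk_render_background() is existing GTK+ API, while the node-returning variant and the gsk_* calls inside it are placeholder names invented for this example, not the API on the branch:

  /* Hypothetical sketch: a gtk_render_background() counterpart that
   * returns a render node instead of drawing immediately. The gsk_*
   * names are placeholders, not the actual API on the branch. */
  static GskRenderNode *
  example_render_background_node (GtkStyleContext *context,
                                  double           x,
                                  double           y,
                                  double           width,
                                  double           height)
  {
    GskRenderNode *node = gsk_render_node_new ();              /* placeholder */
    cairo_t *cr = gsk_render_node_get_draw_context (node);     /* placeholder */

    /* reuse the existing CSS machinery to rasterize the background
     * into the node's own buffer, once */
    gtk_render_background (context, cr, x, y, width, height);
    cairo_destroy (cr);

    return node;
  }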

Testing is also welcome; for instance, look at missing widgets or fragments of rendering.


Hopefully, the answers above should have provided enough context for the current state of GSK.

The next time, we’ll return to design and implementation notes of the API itself.

by ebassi at August 27, 2016 09:28 AM

August 26, 2016

Ross Burton

So long Wordpress, thanks for all the exploits

I've been meaning to move my incredibly occasional blog away from Wordpress for a long time, considering that I rarely use my blog and it's a massive attack surface. But there's always more important things to do, so I never did.

Then in the space of ten days I received two messages from my web host, one that they'd discovered a spam bot running on my account, and after that was cleared and passwords reset another that they discovered a password cracker.

Clearly I needed to do something. A little research led me to Pelican, which ticks my "programming language I can read" (Python), "maintained", and "actually works" boxes. A few evenings of fiddling later and I just deleted both Wordpress and Pyblosxom from my host, so hopefully that's the end of the exploits.

No doubt there's some links that are now dead, all the comments have disappeared, and the theme needs a little tweaking still, but that is all relatively minor stuff. I promise to blog more, too.

by Ross Burton at August 26, 2016 10:10 PM

August 18, 2016

Emmanuele Bassi

GUADEC/2

Writing this from my home office, while drinking the first coffee of the day.

Once again, GUADEC has come and gone.

Once again, it was impeccably organized by so many wonderful volunteers.

Once again, I feel my batteries recharged.

Once again, I’ve had so many productive conversations.

Once again, I’ve had many chances to laugh.

Once again, I’ve met both new and long since friends.

Once again, I’ve visited a new city, with interesting life, food, drinks, and locations.

Once again, thanks to everybody who worked very hard to make this GUADEC happen and be the success it was.

Once again, we return.

by ebassi at August 18, 2016 08:00 AM

August 10, 2016

Tomas Frydrych

On Running, Winning, and Losing

On Running, Winning, and Losing

I have a thing about hills. It goes back a long way. Aged five, my granny took me on a holiday in the mountains, and I have been drawn back ever since. Forty-plus years later, out there on the high ground, the inner child comes out just as wide-eyed as when during those two weeks I listened to tales of mountain creatures, real and mythical alike, and imagined the fairies and elves coming out after dark.

Over the years I have walked, climbed, skied and biked the hills. Now that I am wiser, I mostly run. A means to an end. Don't get me wrong, I enjoy running per se. I like the sense of rhythm and flow. But I run hills because of the convenience (I can travel farther in less time), and because of the sense of freedom (unencumbered by excess of kit, I can go pretty much where I like).

To scratch my itch, I try to sneak in a longer hill run most weekends. And so on this particular Saturday in October I find myself at the Ben Lawers Nature Reserve, a second weekend in a row, after a last minute change of plans late the previous night. I ran the Lawers main ridge the previous week, and wanting to do something different, it is time for the Tarmachan.

As far as I am concerned the Tarmachan is right up there with such Scottish classics as the Aonach Eagach; the setting is nothing short of stunning. But the ridge also provides steady, first class running. The usual Tarmachan loop is a bit on the short side, but this is easily remedied by taking in Beinn Ghlas, Ben Lawers and Meall Corranaich to start with, and then ascending Meall nan Tarmachan off track along its North East ridge rather than via the usual walkers' path from the South.

This proves an excellent choice, hard work rewarded by solitude on an otherwise always busy hill, the otherworldly magic of Lochan an Tairbh-uisge, amplified by a low rolling cloud, taking the sting out of the steep push up onto the main ridge.

It's funny how our mind often works on different levels, how while our conscious thoughts are pre-occupied with this or that, we still continue to function in a kind of semi-autonomous, semi-conscious way that gets us through much of life. As I finally emerge on the walkers' path heading for the summit, I am still thinking about the magic of the lochan below, and then, as I check my watch, about being half an hour behind schedule -- I will definitely not make my intended (wholly arbitrary) target time.

At the same time, in the background, I register that the summit is fairly busy (not surprising on a dry Saturday), people are grinning at me and getting out of my way (Scots are quite nice people, all in all), saying 'well done' without the usual Scottish self-deprecating, yet somehow still always slightly barbed banter (surprising), and even holding dogs on a short leash (definitely unusual!).

As I am approaching the summit cairn, I register a couple of unmistakable Mountain Rescue Team jackets, and having noticed a couple of fluorescent cagoules down on the walkers' path (looking very much like the Police) I am (not)thinking 'oh, a rescue in progress', and then seeing that two guys are very glad to see me, (not)thinking 'bummer, someone saw me plodding up the open hill and raised the alarm thinking me lost' (far-fetched but not wholly inconceivable). All that, while still pre-occupied with that half an hour behind schedule.

It is only when I am asked about my number for the second time, that my consciousness finally takes over and all the pieces fall in place -- there is a race on, and all these people think I am in the lead (impressive lead, I must add, I can see way down the path, and there is nobody else coming up; and yet, by sheer coincidence, I realise later, not an unfeasible lead in the light of the standing course record).

I am not endowed with any particular athletic abilities. I have never won any sporting event in my whole life. I do the occasional Scottish hill race, and I consider it a success if I place in the top half (I often don't, so I don't race that often either). But suddenly, for this brief moment, I get a glimpse of what it feels like to be in the lead. And I tell you what -- I could get used to this!

But then clarifications are made, and I am again just another ordinary, anonymous, runner, and carry on along the ridge, briefly confusing another pair of race marshals as I pass the point where the race course descends the ridge early, while I stay on it.

I rejoin the course an hour or so later on my way back. I can see a handful of runners on the hillside above me, but from the footprints on the ground, and the fact that I can't see anyone at all in front of me, I surmise I am now near the end of the field. I settle into a comfortable jogging pace, my legs enjoying the rhythm in the knowledge that there are no climbs and no descents left. I am happy, it's been a great day.

As I approach the car park, I start encountering runners jogging in the opposite direction, returning from the finish back to their cars. Of course, they assume I am in the race, and so I get another round of 'well dones'. Funny how nominally identical phrases can be so semantically different. The one 'awesome, how the f* did you get here so fast -- well done!', the other 'poor sod, still running, but, good on him, still running; well done!'.

As a self confessed child of Derrida and Mann, I know that meaning is a construct of my mind, and has often little to do with what might have been intended. I know these are genuine words of encouragement. I know that on some level the experience of a hill runner approaching the finish line is very similar whether we are at the front or back -- we have pushed ourselves, we are suffering and we are only running at this point because we make ourselves; that when the car door opens at the end of our journey home, we will fall out rather than step out, and that come Monday we will put immense effort into hobbling only when we think no one is looking. I know it is this shared experience that is behind those 'well dones'.

I know all of that. And if that was not enough, I am not even in the freaking race, I have no reason to feel irate, at myself or anyone else -- I have been on the go for five hours, covered over 25km, climbed 2000m and I am still going well and enjoying it. And yet, somehow, I do feel irate about being at the end of a field of a race that I am not taking part in!

There are some who believe that running itself has a sort of magical, life transforming quality; I don't know, perhaps it does, for some. What I do know is that at times, up there in the hills, I experience brief moments of extreme clarity and self understanding. Today was one of those days: I learnt that losing, more than anything, is a state of mind, that I can make myself lose even where there is nothing to be lost and everything to be won …

But then the moment passes and all I can think of is the burger I'll have in the Mhor-84 cafe on my way home -- all pretence aside, there lies the real reason I run!

P.S. The 2015 Tarmachan Hill Race (9.5km / 700m of ascent) was won by George Foster of fellicionado.com in a very respectable, though not earth shattering, time of 00:54:13. Well done!

by tf at August 10, 2016 07:43 PM

Emmanuele Bassi

GUADEC

Speaking at GUADEC 2016

I’m going to talk about the evolution of GTK+ rendering, from its humble origins of X11 graphics contexts, to Cairo, to GSK. If you are interested in this kind of stuff, you can either attend my presentation on Saturday at 11 in the Grace Room, or you can just find me and have a chat.

I’m also going to stick around during the BoF days — especially for the usual GTK+ team meeting, which will be on the 15th.

See you all in Karlsruhe.

by ebassi at August 10, 2016 02:40 PM

GSK Demystified (II) — Rendering

See the previous article for an introduction to GSK.


In order to render with GSK we need to get acquainted with two classes:

  • GskRenderNode, a single element in the rendering tree
  • GskRenderer, the object that effectively turns the rendering tree into rendering commands

GskRenderNode

The usual way to put things on the screen involves asking the windowing system to give us a memory region, filling it with something, and then asking the windowing system to present it to the graphics hardware, in the hope that everything ends up on the display. This is pretty much how every windowing system works. The only difference lies in that “filling it with something”.

With Cairo you get a surface that represents that memory region, and a (stateful) drawing context; every time you need to draw you set up your state and emit a series of commands. This happens on every frame, starting from the top level window down into every leaf object. At the end of the frame, the content of the window is swapped with the content of the buffer. Every frame is drawn while we’re traversing the widget tree, and we have no control over the rendering outside of the state of the drawing context.
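For reference, this is roughly what that immediate-mode model looks like in a GTK+ 3 application today; a minimal, self-contained draw handler, not code taken from GTK+ itself:

  /* A minimal GTK+ 3 "draw" handler: state is set up and commands are
   * emitted into the frame's Cairo context every time the widget is drawn. */
  #include <gtk/gtk.h>

  static gboolean
  on_draw (GtkWidget *widget,
           cairo_t   *cr,
           gpointer   user_data)
  {
    int width = gtk_widget_get_allocated_width (widget);
    int height = gtk_widget_get_allocated_height (widget);

    /* immediate mode: set the state, emit the commands, repeat every frame */
    cairo_set_source_rgb (cr, 0.2, 0.4, 0.8);
    cairo_rectangle (cr, 0, 0, width, height);
    cairo_fill (cr);

    return FALSE;
  }

  /* typically connected with:
   *   g_signal_connect (widget, "draw", G_CALLBACK (on_draw), NULL);
   */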

A tree of GTK widgets

With GSK we change this process with a small layer of indirection; every widget, from the top level to the leaves, creates a series of render nodes, small objects that each hold the drawing state for their contents. Each node is, at its simplest, a collection of:

  • a rectangle, representing the region used to draw the contents
  • a transformation matrix, representing the parent-relative set of transformations applied to the contents when drawing
  • the contents of the node

Every frame, thus, is composed of a tree of render nodes.

A tree of GTK widgets and GSK render nodes

The important thing is that the render tree does not draw anything; it describes what to draw (which can be a rasterization generated using Cairo) and how and where to draw it. The actual drawing is deferred to the GskRenderer instance, and will happen only once the tree has been built.

After the rendering is complete we can discard the render tree. Since the rendering is decoupled from the widget state, the widgets will hold all the state across frames — as they already do. Each GskRenderNode instance is, thus, a very simple instance type instead of a full GObject, whose lifetime is determined by the renderer.
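As a sketch of what assembling such a tree might look like in code (the constructors and setters below are placeholder names for whatever the development branch actually exposes; the point is the shape of the data, not the spelling of the API):

  /* Sketch only: building a tiny render tree for one frame. The
   * gsk_render_node_* calls are placeholders; what matters is that a
   * node carries bounds, a transform, contents, and children. */
  #include <graphene.h>

  static GskRenderNode *
  build_frame_tree (void)
  {
    GskRenderNode *window_node = gsk_render_node_new ();          /* placeholder */
    GskRenderNode *button_node = gsk_render_node_new ();          /* placeholder */
    graphene_rect_t bounds = GRAPHENE_RECT_INIT (10, 10, 80, 30);

    /* the region used to draw the contents ... */
    gsk_render_node_set_bounds (button_node, &bounds);            /* placeholder */
    /* ... the parent-relative transformation (NULL meaning identity) ... */
    gsk_render_node_set_transform (button_node, NULL);            /* placeholder */
    /* ... and the contents themselves, e.g. a Cairo rasterization. */

    gsk_render_node_append_child (window_node, button_node);      /* placeholder */

    return window_node;
  }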

GskRenderer

The renderer is the object that turns a render tree into the actual draw commands. At its most basic, it’s a simple compositor, taking the content of each node and its state and blending it on a rendering surface, which then gets pushed to the windowing system. In practice, it’s a tad more complicated than that.

Each top-level has its own renderer instance, as it requires access to windowing system resources, like a GL context. When the frame is started, the renderer will take a render tree and a drawing context, and will proceed to traverse the render tree in order to translate it into actual render commands.

As we want to offload the rendering and blending to the GPU, the GskRenderer instance you’ll most likely get is one that uses OpenGL to perform the rendering. The GL renderer will take the render tree and convert it into a (mostly flat) list of data structures that represent the state to be pushed on the state machine — the blending mode, the shading program, the textures to sample, and the vertex buffer objects and attributes that describe the rendering. This “translation” stage allows the renderer to decide which render nodes should be used and which should be discarded; it also allows us to create, or recycle, all the needed resources when the frame starts, and minimize the state transitions when doing the actual rendering.
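Purely as an illustration of what “a mostly flat list of data structures” could mean (this is not the renderer’s actual internal representation), think of each entry as something along these lines:

  /* Illustration only: the kind of flattened, per-node state a GL
   * renderer could queue up before issuing the actual draw calls.
   * Not GSK's real internal representation. */
  typedef struct
  {
    unsigned int program;      /* shading program to bind */
    unsigned int texture;      /* texture to sample, if any */
    unsigned int vao;          /* vertex buffer objects and attributes */
    int          blend_mode;   /* how the result is blended */
    float        mvp[16];      /* the node's accumulated transformation */
  } RenderOp;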

Going from here to there

Widgets provided by GTK will automatically start using render nodes instead of rendering directly to a Cairo context.

There are various fallback code paths in place in the existing code, which means that, luckily, we don’t have to break any existing out of tree widget: they will simply draw themselves (and their children) on an implicit render node. If you want to port your custom widgets or containers, on the other hand, you’ll have to remove the GtkWidget::draw virtual function implementation or signal handler you use, and override the GtkWidget::get_render_node() virtual function instead.
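As a rough illustration of the direction (the vfunc name comes from the paragraph above, but its exact signature here is a guess, and the gsk_* call is a placeholder), a custom widget would go from the one to the other:

  /* Rough porting sketch for a custom widget. GtkWidgetClass.draw is the
   * existing GTK+ 3 vfunc; the get_render_node signature is assumed, and
   * gsk_render_node_new() is a placeholder, so treat this as pseudo-code. */

  /* before: immediate-mode drawing */
  static gboolean
  my_widget_draw (GtkWidget *widget,
                  cairo_t   *cr)
  {
    /* emit Cairo commands directly into the frame */
    return FALSE;
  }

  /* after: describe what to draw, and let the renderer do the drawing */
  static GskRenderNode *
  my_widget_get_render_node (GtkWidget   *widget,
                             GskRenderer *renderer)                /* assumed signature */
  {
    GskRenderNode *node = gsk_render_node_new ();                  /* placeholder */

    /* attach bounds, transform, and contents to the node, then hand it
     * back; the renderer decides when and how it ends up on screen */
    return node;
  }

  static void
  my_widget_class_init (MyWidgetClass *klass)
  {
    GtkWidgetClass *widget_class = GTK_WIDGET_CLASS (klass);

    widget_class->get_render_node = my_widget_get_render_node;     /* assumed vfunc */
  }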

Containers simply need to create a render node for their own background, border, or custom drawing; then they will have to retrieve the render node for each of their children. We’ll provide convenience API for that, so the chances of getting something wrong will be, hopefully, reduced to zero.

Leaf widgets can remain unported a bit longer, unless they are composed of multiple rendering elements, in which case they simply need to create a new render node for each element.

I’ll provide more examples of porting widgets in a later article, as soon as the API has stabilized.

by ebassi at August 10, 2016 02:20 PM

July 24, 2016

Tomas Frydrych

Coigach Horseshoe

Coigach Horseshoe

The Coigach hills provide perhaps the single best short run in the entire Coigach / Assynt area. The running is easy on excellent ground (if at places exposed -- not recommended on a windy day!), the views are magnificent in all directions, and the cafe in the Achiltibuie Piping School provides excellent post-run cakes!

Start from the small parking area just before the road dips down toward Culnacraig. Where the public road ends, take the small footpath behind the houses as it contours the hill, aiming to cross Allt nan Coisiche at the apex of the E to S bend below the waterfall (NC 071 037; NB: there are no other suitable crossing points), then follow a steep faint path on its S side until reaching easier angled ground in the coire.

From the coire it is possible to either head directly for the edge of the W ridge of Garbh Choireachan (more hands on), or slightly to the N, aiming for the right (S) edge of the obvious erosion on the NW facing side of the ridge, where a faint path heads up onto the ridge proper.

Coigach Horseshoe

The high ridge is well defined and narrow. The drops are steep on both sides, but the S side in particular is at places vertigo inducing. However, no scrambling is required, and the more rocky crest beyond the 738m high point can be avoided by staying on a path running along its N side.

After reaching the summit of Ben More Coigach return back onto the, now broader, ridge and continue along it in an E direction to where the Speicein Coinnich ridge begins, but instead of heading onto Speicein Coinnich, descend down the broad N facing coire into the bealach below Sgurr an Fhidhleir (around NC 097 049), enjoying the dramatic views over the north facing cliffs. From here continue W for a bit to ascend up onto the Fhidhleir from the SW.

Coigach Horseshoe

The Fiddler summit once again offers excellent views (because Beinn an Eoin and the Fiddler ridge are of similar heights, the view changes quite dramatically along the different points of this outing). Once you have enjoyed the views from the summit, descend initially SW, then contour into the bealach that separates the Fiddler from Beinn nan Caorach; from here you get an excellent view of the Fiddler's north cliffs.

Coigach Horseshoe

Next head up near the N cliffs onto the 648m summit for more views and then SW onto the main summit of Beinn an Caorach. From here the easiest descent is to return back to the bealach below the Fiddler, then contour along the 600m line to pick up the well-trodden walkers' path that descends along the Fiddler's SW ridge; take this back to the road.

14km / 1100m ascent

by tf at July 24, 2016 05:57 PM

July 17, 2016

Tomas Frydrych

The Assynt Traverse: Blow by blow

The Assynt Traverse: Blow by blow

It's 2:55am and the day, which is supposed to be a culmination of a three year long dream, is being (un)ceremonially drummed in by rain on a skylight window in Inchnadamph Lodge Hostel.

Over the last couple of weeks I have dithered much over this 'alpine' start. I expect to be on my feet for over 24 hours and the lack of a good night's sleep doesn't seem conducive to success. But the middle leg of the Assynt Traverse is over difficult technical ground, and there is a lot to be said for covering as much of it as possible in daylight. This consideration (together with Linda, my long suffering wife, pointing out I am unlikely to sleep well anyway) wins the argument. This time of the year there are around 18 hours of daylight -- almost, but not entirely, enough for the first two legs on an overall 24 hour schedule. Hence a sunrise (4:34am) start.

A lot of planning and careful preparation has preceded this morning, but the date, 14 July 2016, is a complete gamble. That wasn't the plan. The plan was to run the twelve peak traverse a month earlier during our two week holiday up here. It was the perfect plan. The weather was exceptionally good. Two weeks of blue sky and sunshine on the back of an already prolonged dry spell. The peat bogs were bone dry. Instead of returning from our runs caked in mud as usual, we were covered in dust stuck to suntan lotion. But I wasn't ready.

It wasn't because of not training hard enough. After summarily failing to reach the requisite fitness in 2014, and again in 2015, I finally did the unthinkable and adopted an eight month training plan. But the weather in January and February meant I was stuck doing hill reps on a road, and you can't get ready for Assynt on a road. Of the 74km or so, at least 60 is not just off road, but off track. Plus they don't make roads steep enough anyway (mine is 25%, in Assynt terms, a pleasant uphill jog). But the training plan had, nevertheless, paid off. Unlike in the previous years, come June it is beginning to come together nicely, it's just that I am a month behind the schedule, and the attempt has to wait.

Of course, Linda and I can't afford to come back for another two week stay just a month later. Linda, who is my support team without whom this can't happen, can perhaps get a couple of days off work -- this has to be a precision operation. And naturally, the weather has turned in late June. From early July I follow the forecast anxiously, hoping for a 'window', but it is consistently awful; heavy rain and high winds. There is also the fact that I need a period of taper, but too long a taper and I will start losing fitness. And there is the approaching deer stalking season, which closes the window of opportunity for the year -- I am running out of time.

And then last Saturday the forecast suggests there will be a window this Thursday; it is not perfect, light rain is expected till 6am or so, and then more rain on Friday, but it is not going to be too windy. And so here we are, five days later, at first light in Stac Pollaidh car park. The forecast has changed a bit since. The morning rain is not so light, and the front coming on Friday is quite a bad one, but there is still supposed to be a wind-free window in between. And just now there is a pause in the rain, enough for a 'before' picture. We are a few minutes early, but there is no point hanging around -- 4:26 seems as good a time to start as any.

The Assynt Traverse: Blow by blow

With the exception of the safety-critical descent from Cul Mor, the first leg (Stac Pollaidh, Cul Beag, Cul Mor, Suilven and Canisp) is best described as straightforward but absolutely brutal -- the climbs and descents are very steep, and the 'easy' ground in between is bog and knee-high grass. Stac Pollaidh is the easiest bit there will be, and just now on fresh legs I have to apply a lot of self control to conserve energy and not run.

I take the classic route, reaching the western summit at 5:06, 20 minutes ahead of schedule. For a fleeting moment the thought that this might be a lot easier than expected enters my mind, but I promptly banish it -- I have spent three years exploring the route, and know what lies ahead. My mind is spared further idle thoughts by a torrential shower that soaks me to the skin in seconds. The showers will keep coming for the next four or five hours, but fortunately none again as heavy.

The Assynt Traverse: Blow by blow

I take it easy on the next segment. Too easy as it turns out; 6:27 on the summit of Cul Beag means I am 4 minutes off pace for this segment. Having investigated the direct descent via the north east coire previously, and found it unexpectedly unpleasant, I head back down the way I came and pick up the stalker's path through Doire Dubh, which allows for a good pace and minimal effort.

The Assynt Traverse: Blow by blow
But on the climb up Cul Mor things are not going well. My legs are hurting badly, and in spite of my best efforts I am moving very slowly. I am only a bit over three hours into this outing and this should not be happening -- I have serious doubts about being able to finish the first leg, never mind the whole run. I summit 7 minutes behind schedule at 8:36, which means I have lost 23 minutes on this segment. The cloud base is down to 600m, visibility about five yards.

The descent from Cul Mor is the most safety-critical point on the route. The north face is made of vertical cliffs scarred by narrow bottomless gullies which the OS map doesn't do justice to. The only safe descent to my knowledge can be initiated around the NE edge of Coire Gorm, but locating the exit point in low visibility is tricky, as a number of the bottomless gullies have inviting grassy openings looking just like it but leading to oblivion. I am glad I have recced this descent thoroughly, but even then this is not just another hill run.

Safely descending the upper mountain is only the first part of the challenge. The lower section of Cul Mor is made up of terraces separated by 10-15m vertical sandstone cliffs, difficult to find a way through even when heading up. The easiest way down the first terrace is a hundred yards or so east of the stream that flows out of the coire, the rest can then be descended near the stream.

The Assynt Traverse: Blow by blow

I am glad to be out of the clouds now. The views are beautiful, all the lochs around here have reddish sandy beaches, and I stop on the one at the head of Loch na Claise for a quick bite to eat, before heading down to the sandbar that separates Loch a' Mhadail from Loch Veyatie. Someone built a cairn here from a couple of shoebox sized blocks of quartzite -- very minimalist, very arty. I can't resist the temptation to haul another block on the top. It's very wobbly, and, like my footprints in the sand, it will be gone as soon as the wind picks up tomorrow -- in this aeons old landscape it is impossible to be unaware of one's own transience.

Another heavy shower, then the wading of Uidh Fhearna. In spite of the recent rain, it is only knee deep, though the current is a lot stronger than usual. The sun comes out and I am moving well, the earlier crisis but a distant memory -- I am making up the lost time. Then on the steep path up to Suilven's Bealach Mor I start getting a sharp pain in my right knee. This is new, I have never before experienced knee pain going uphill, nor a knee pain this intense. But as the ground eases off at the bealach the pain fades away.

The Assynt Traverse: Blow by blow

I am at the summit cairn at 11:17, a whole 9 minutes ahead of my schedule. As I descend, I meet a few walkers not far above the Glen Canisp landy track; they are the first people I have run into, and as it happens they will be the last people I will meet on the open hill until I finish. This suits me fine; this is about me and my hills.

Canisp is my favourite Assynt hill. It lacks the ostentatiousness of Suilven, its drama queen little brother, but its western ridge is very runnable (not that I am running just now), and it is rainbow like -- the gneiss at the bottom is replaced by a band of pink/orange pipe stone, which in turn is replaced by dark red Torridonian sandstone, while the summit is topped by blocky grey and white quartzite. I seem to run out of energy just below that last band, have a gel on the summit (13:13, 11 minutes ahead of schedule), then head down. Weather is improving and I can see the Bone Caves car park where I am supposed to meet Linda.

The Assynt Traverse: Blow by blow

The Canisp descent is one of the few bits of this run I couldn't be bothered to investigate; there seemed no need, there is a path down to the road, then a bit on the road. In a moment of poetic justice for this blasé attitude I somehow manage to pick up the wrong path on the summit, which peters out soon enough, but too late to go back. I make the most of the stunning landscape. The quartzite here is pure white, contrasting sharply with the green grass. Not more than twenty yards away a golden eagle rises off the ground, and flaps lazily to settle down just a little bit farther on. I imagine it feels like I felt when bumping into the walkers on Suilven -- 'Is there nowhere one can have some me-time?'

Further down a fox is sniffing around. Foxes are not that common around here, and this one is not like the urban foxes of Glasgow either. Its fur is fluffy and shiny. It doesn't see me coming, so I give it a courteous whistle, nobody likes to get a fright. I am assessing my physical condition. I am tired, my quads are very tight and sore. I am certain now I don't have it in me to pull this off. But I am not spent yet, and on the next leg there are four points where I can bail out, descending directly to the Inchnadamph Lodge where we are staying. I make the decision to carry on, for I need to find out how far I can manage so I am better prepared next year. I hit the road a couple of hundred yards south of the carpark just as Gita and David are driving in from the other direction. I am 2 minutes ahead of schedule.

Gita and David are Inverkirkaig crofters. We first stayed with them in their bespoke Lazybed Cabin almost four years ago, and have done so ever since. They are also both ultra runners; they understand. During our stay in June David successfully completed the South Downs Way 100 miler and talking to him afterwards about the experience brought some badly needed clarity into my own thinking. I expect they don't fully appreciate the importance of their being here at this moment, both for me and Linda, but their infectious enthusiasm raises the mood at the pit stop.

As is typical of Assynt these days, the small carpark is hogged by the camper van menace, but Linda has decisively made the most of the little space there was to be found. I have a pint of full-fat milk (my secret weapon), some home made black-eyed bean and bacon soup with a bit of bread, a chunk of gingerbread and sweet coffee. After 10 hours of sodden feet it is nice to put on dry socks and a dry pair of Inov-8 TrailRock shoes, while Linda's already stuffing newspapers into the soaked Mudclaws. I try to loosen the quads with a foam roller, but it's not working and my 30 minutes are up.

The Assynt Traverse: Blow by blow

The stop has made much more difference than I would have thought and I jog comfortably past the Bone Caves. The second leg of this run has a more amenable profile, the 20km or so long ridge forming an aesthetically pleasing line. The price for this is much more technical ground: the whole ridge is essentially just one long heap of stones (hence the TrailRocks rather than Mudclaws). Navigation on the Breabag section in particular is difficult in poor visibility, and the quartzite becomes lethal when wet -- this has been weighing on my mind since the morning rain. But it is a beautiful afternoon, blue sky, sunshine, and no wind to speak of.

From afar Breabag appears to be an impenetrable, intimidating, wall of shiny quartzite. As I am approaching it, I start feeling a twinge in my right hamstring -- I have been expecting it, I have always been very cramp prone, the only surprise is it took so long; I have got some salt tablets. Except during the pit stop I made the mistake of just replenishing my supplies rather than simply swapping out the ready made bags for each leg; in doing so I managed to leave both the tablets for the previous leg and this leg behind. Then I remember there should be four tablets in my minimalist emergency kit; I always carry these, and never use them ... I chew one hoping the salty taste will fool my brain, in the knowledge this is probably no more than a placebo. But it works.

On the way up Breabag there are some lovely pitted quartzite slabs to negotiate. In my climbing days I used to have a thing for slabs, and so I make the most of it. The knowledge I am not going to finish the Traverse this time round has taken some of the pressure off, and this is real fun. I reach the summit at 16:15, 9 minutes ahead of schedule. On the previous occasions I have been up here it was either clagged out or damp. But today running Breabag is pure pleasure. The dry quartzite is grippy, the blocks are generally firmly locked together, rarely moving under my feet, and I am able to link numerous rising slabs together into satisfying lines. The views are stunning in all directions -- it has been a while since I enjoyed a run this much (this is why I hate training schedules, they make running about something other than the fun).

I descend into the narrow bealach below Conival along the east ridge of Breabag Tarsainn. Above the bealach the ridge changes from quartzite to limestone, and I pass two small water holes, each no more than four or five feet across, full of tadpoles and black as night. The water itself is crystal clear, I imagine that under my feet there is likely to be a deep cavern filled with water -- I make a note to come back one day with a string to plumb it out.

On my previous visits I have concluded that the fastest, as well as most pleasing, way up Conival is to scramble up its SSE ridge. The scramble is not particularly difficult, but there is no obvious line lower down, so one has to be prepared to head 'into the unknown', while the upper section is quite exposed. I review this briefly in the light of being on my feet for 14 hours, and carry on. I am not moving very fast. The pain in my right knee returns, and then, perhaps because I am overcompensating, I feel a few twitches in my left calf. I chew on another salt tablet, and again, it seems to do the trick.

I summit at 18:39, a whopping 26 minutes ahead of schedule, have a bite to eat, and then, as I am setting off for Ben More, I realise I made a mistake in my schedule calculations -- I had only allowed myself 11 minutes to get to Ben More. I have done this on much fresher legs before and it took me 20 minutes. I feel scunnered and angry with myself for not noticing it at home, but there is nothing for it; it takes me 25 minutes. But then as I am running back to Conival I suddenly realise that there is no pain in my legs, those sore quads and twitching hamstrings and calves, it's all gone. I am not moving very fast, but I am moving very comfortably. And it is at this point, 15 hours into the run, all thoughts of quitting are gone. I know with absolute, irrational, certainty that I am going to finish; suddenly another 9 hours feel like just another long run.

By the time I am back on Conival I am 7 minutes behind schedule. It's time to pull my finger out and start running. I am struck by how crystal clear the water in the unnamed lochan just below the Beinn an Fhurain bealach is, then up the little rise into the bealach. I recced the section between here and Loch nan Cuaran in April when there was still a fair bit of snow, and got the impression it is going to be a proper bog fest. I am in for a pleasant surprise. Sure, the ground here is completely waterlogged (and for inexplicable reasons the water is ice cold), but the surface is covered in short grass, and is most pleasant to run on. And so I run, surrounded by deer on all sides. Down below me are remnants of a WWII RAF aircraft, and the story of the six airmen buried somewhere here below my feet plays out vividly in my imagination.

The Assynt Traverse: Blow by blow

At the north west end of Loch nan Cuaran I pick up the ancient pony track, which is not on maps, but which I stumbled on a few years back on what has become my favourite Assynt run; today it just provides an easy way onto the Beinn Uidhe ridge. This is another part of the run I did not bother to recce. The line here is obvious, and equally obviously unpleasant. It is amazing how the nature of the 'heap of stones' changes along this long ridge -- on Breabag, the blocks tend to be quite large and firmly interlocked, on Conival and Ben More they are smaller and looser, making metallic clanking sounds, requiring more care, but still fun to run; on Beinn Uidhe, the stones are smaller, more round, and invariably moving about! I pick my way carefully, twice nearly falling when a large block that should not be able to move does. I arrive at Glas Beinn at sunset at 22:11, 25min ahead of schedule.

The Assynt Traverse: Blow by blow

I manage most of the descent without a head torch, but I am not doing that well going down; I lose 11 minutes by the time I reach the second pit stop car park below Quinag, where Linda is waiting for me, at 23:05. I have some milk, a big chunk of gingerbread, a banana and some coffee. I put on extra clothes, dry socks and the Mudclaws (which Linda somehow managed to get nearly dry!), and leave at 23:27. By now it is pitch dark, there is no light pollution here, and the sky is clouded -- the next weather front is on its way. I am 22 minutes ahead of schedule, but I am very much aware that at this stage in the game that means very little.

The third leg consists of the three Quinag tops, and then a final descent toward Unapool. The hill itself poses no technical difficulties per se, but there is some significant exposure on the descent from Spidean Coinich, the first summit. The knowledge that if it comes to it I will be tackling it in the dark, sleep deprived and at the end of an already long day has weighed heavily on my mind in the preceding weeks -- if something was to go seriously wrong on this run, this is most likely where it is going to happen. There is also the final descent, which I investigated only partially -- as will become clear shortly, this turns out to be the biggest strategic mistake of my entire 'campaign'.

The Assynt Traverse: Blow by blow

The Spidean Coinich ascent is straightforward, and dry-footed. Lower down I am making the most of joining together the various slabs, only to realise that I am moving too far away from the north edge of the ridge. I make a correction. For some reason, I can't get the six airmen buried on Caorach out of my head. Their story drives home not only the unforgiving nature of these hills, but also the insignificance and frivolity of my own undertaking.

As the hill steepens, the sharp pain in my right knee comes back. Also, because the head torch has a constant depth of field, it feels like I am making no progress, there is always just seventy-five metres or so to go. The wind has picked up significantly and the cloud base just touches the 764m summit. I arrive at 00:36, now only 18 minutes ahead of schedule. My original plan was to stop for a double espresso Clif Shot gel to wake me up for the descent, but I do not want to hang about, and eating while moving is not an option just now.

If heading up with a head torch is frustrating, heading down is outright spooky, as if beyond the reach of the beam there was nothing there, just an impenetrable black abyss. Worse though, as I start descending it is as if all proprioception had disappeared from my legs, and I am relying heavily on my hands to stay upright -- this is pretty much what I was afraid of. I only manage to drop about 15m before deciding to stop and have that gel. I wait for the caffeine to kick in, which it does surprisingly quickly, and with it some of my sense of balance comes back, enough to safely descend into Bealach a'Chornaidh. From here I know the rest of the mountain is straightforward.

Visibility on Sail Gharbh is not the best and there is an unwelcome wind chill. My right knee now hurts even on easy uphill inclines. I summit at 1:37, my gain has dropped to 17 minutes. As I start heading toward Sail Ghorm there is a spit of rain, which makes me push that little bit harder, but at least now I have the wind at my back. Sail Ghorm is an easy ascent, though there is, again, that head torch effect, and the knee is now continuously hurting. I reach the cairn at 2:32, my lead has shrunk again, I am now only 11 minutes ahead of schedule. But I just need to drop down to the road, and I have a whole hour and fifty-four minutes to get it wrapped up in 'a day' -- I relax a bit.

The Assynt Traverse: Blow by blow

My chosen descent is along the eastern ridge of Sail Ghorm, into the raised coire of Allt a' Bhathanaich and from there down onto the moor, then the 3km or so passing just east of Loch Airigh na Beinne onto B869 just where it 'touches' Loch Unapool, and finally a 1km jog on the road to the finish. I had done a partial descent of the eastern ridge to make sure this would work, but I had not bothered with the moor below -- it looks like a typical, complex, Assynt landscape, but neither from the OS map, nor the visual inspection from a distance is there any indication of significant difficulties.

The descent down the ridge, though rather steep, is indeed fairly uncomplicated and I am down at Allt a' Bhathanaich at around the 400m contour line by 3:08, only to discover that the stream turns into a cascading waterfall as it drops to the moor floor over a series of grassy, but nevertheless big vertical steps. There is no obvious line down. As I am scanning the ground with my head torch, trying to decide on which side of the stream I have a better chance, I spot a deer track near the edge of the terrace I am on -- it's not great, but it's something. The step below the next terrace is even bigger. I immediately start looking for a deer track, it takes me a while to find it, but it's there. In this way I negotiate a couple more steps, until I am finally on the moor. I am surprised it is only 3:29, those 21 minutes have felt like hours.

I see Loch Airigh na Beinne somewhat to my right, and it feels like getting there is going to be unpleasant. It is dawn now and I can clearly see the rise behind which is the road. I decide to make a small route change. I don't bother with the map, I have plenty of time, I am at most 20 minutes from the road, then maybe 5 minutes on the road, and I have 56 minutes to do that. I simply follow the stream until it meets the one which flows out from Loch Airigh na Beinne at its north west end, then, at leisurely pace, onto the little rise beyond which is the road.

Except, when I get there, there is no road. Instead there is a deep-cut, narrow glen. This is a disaster. The road must be beyond the steep rise at the other side, but I cannot face another steep climb. I can see that the glen is not very long and is shallow at its lower end -- that's where I will get out. I start ploughing through the waist-deep heather and grass ... and immediately fall into a hole well above my knees. I carry on and shortly I catch sight of the tarmac across the foot of the glen. I picture the map in my mind -- that's the B869 just off the junction with the A894, which will be coming down the rise on my right. Indeed, I can see the tarmac ... then my brain registers, and processes the significance of, the passing place sign above me -- the A894 is a normal two-lane road. I scramble onto the tarmac. I am suddenly fully awake. I am somewhere on the B869, but I have no idea how far from the main road; the fact that I am at the bottom of a hill is not a good sign at all.

It is 4:06, and this is the last time I will look at the watch; there is no time to mess about with a map, there is only one thing left now ... to run. And so I run, up this hill, hoping beyond it is the junction. It is not, just more passing place signs, then a bend. Surely, beyond the bend. It is not, there are more passing place signs and then a loch. I keep running. Then I see headlights ... they are not very close, but neither impossibly far ... when the car does not appear, I know that's the main road. I finally reach it. I see a car park sign, and for a moment think I have arrived, then realise it's on the wrong side of the road. I see an access track, but I can't see the car park, and for the world I can't remember how far down this road it is. Finally I see Linda and the car, another couple of hundred metres. I arrive, completely convinced that through sheer stupidity I missed the chance to complete the Traverse in 'a day'. I stop my watch, but dare not look. There is a lot of swearing; Linda, clearly confused as to what is going on, keeps saying 'you have done it'. I assume she means completing the 74km and 6,400m of ascent, but when I finally look, the watch says 23:54. I can't believe my eyes. (I work out later that over the last 14 minutes I averaged 5min:23s/km while climbing over 60m; my typical road running pace these days is only about 6min/km ... I still have no idea how.)

The End

So, I have done it, though I realise that I no longer know what 'it' is. The last 24 hours have been a real journey of self-discovery. I had always expected that the Traverse was going to be a classic example of type-2 fun, and that if I ever got to finish it, it would take a long time to see it as 'fun'. I am somewhat embarrassed to admit that I found this not to be the case, and that, perhaps with the exception of the last half hour, I really enjoyed myself (and that as soon as I woke up after 3.5h of sleep on Friday, I started wondering what I was going to do next year ...).

But there is more to it than fun. I have experienced moments of insight in the hills many times before; there is something about physical exertion in the midst of peace and quiet that clears the mind, sorts out the important from the drivel. On this occasion the Caorach air crash site in particular spoke rather powerfully to me on a number of different levels; I tried to express some of those thoughts here the day after, while they were still fresh in my sleep-deprived mind.

I have also come to appreciate the value of companionship and friendship afresh. Those brief car park encounters were of a significance that is difficult to put into words. I'll leave it at that.

But above all, if there is one all important thing I learned from the Trans Assynt Run, it is this:

I now understand MacCaig's obsession with frogs and toads, for my eyes have seen what he saw.

by tf at July 17, 2016 11:00 AM

July 15, 2016

Tomas Frydrych

Assynt Reflection

Assynt Reflection

Every time I pass through the grassy bowl north of Beinn an Fhurain, a shiver runs down my spine. Here a temporal singularity is created by the intersection of the merciless nature of these 'wee hills' of ours with the brokenness of the world we have created for ourselves on the one hand, and the cruelty of fate on the other.

It's 13 April 1941. An RAF Avro Anson on a training flight suffers a second engine failure over the quartzite ridge above Inchnadamph. As I run down the gentle grassy slope sixteen hours into my Trans Assynt attempt, the scene plays out vividly in my imagination. The hopelessness of the situation as the pilot struggles to control the plane, trying to land it on the only flat bit of ground around. He has no chance, for, perhaps unknown to him, the 'meadow' is scarred by peat hags.

Two are seriously injured, but the crew survives -- testimony to immense skill, nerves of steel, courage against overwhelming odds. If this was a Hollywood movie, they would start a fire, munch on rations, make cups of tea, the rescuers arriving a short time later. They would live happily ever after.

But this is not a Hollywood movie, the mountain has other ideas, and the mountain knows no mercy, not even for heroes. 13 April 1941 brings a sudden, unseasonal cold snap, and three local shepherds, who know this mountain and its fickle nature like no one else, perish. And our crew finds itself in the midst of the mountain's sudden rage.

If this was a Hollywood movie, they would huddle together in what is left of the fuselage, and, fingers too cold to strike a match, tell stories of the green and pleasant land, of families, of comrades fallen. They would send one of the uninjured to bring help. And they would live happily ever after, telling the story of their ordeal to their grandchildren.

But this is not a Hollywood movie. The chap setting out into the blizzard to bring help has to make a choice we make hundreds of times, but which today becomes about life and death. Left or Right? Fate is not on his side. He chooses east, heading out into the vast uninhabited wilderness, while help is less than an hour away to the west.

This, indeed, is no Hollywood movie, there is no happy ending; the entire crew of six perishes of exposure. The 'rescuers' arrive six weeks later; they scatter the remains of the plane, and they bury the crew up here on the mountain. As I pass the two engine blocks, propellers mangled, pieces of wings and undercarriage, I cannot but wonder, am I treading on your grave? I expect they, the men who flew these aircraft, wouldn't mind. The mountain, I am less sure of, for she has no favourites and takes no prisoners, indifferent to our love.

by tf at July 15, 2016 11:00 AM

July 05, 2016

Emmanuele Bassi

GSK Demystified (I) — A GSK primer

Last month I published an article on how GTK+ draws widgets on the toolkit development blog. The article should give you some background on the current state of what GTK does when something asks it to draw what you see on the screen — so it’s probably a good idea to read that first, and then come back here. Don’t worry, I’ll wait…


Welcome back! Now that we’re on the same page… What I didn’t say in that article is that most of it happens on your CPU, rather than on your GPU — except the very last step, when the compositor takes the contents of each window and pushes them to the GPU, likely via the 3D pipeline provided by your windowing system, to composite them into what you see on your screen.

The goal for GUI toolkits, for the past few years, has been to take advantage of the GPU programmable pipeline as much as possible, as it allows us to use the right hardware for the job, while keeping your CPU free for working on the application logic, or simply powered down, so the polar bears don’t have to squeeze onto an ever-shrinking sheet of arctic ice. It also allows us to improve the separation of jobs internally to the toolkit, with the potential of splitting up the work across multiple CPU cores.

As toolkit developers, we currently have only one major API for talking to the GPU, programming it, and using it to put the contents of a window on the screen, and that’s OpenGL.

You may think: well, we use Cairo; Cairo has support for an OpenGL device. Just enable that, and we’re good to go, right? And you wouldn’t be entirely wrong — except that you really don’t want to use the OpenGL Cairo device in production, as it’s both a poor fit for the Cairo drawing model and basically unmaintained. Also, Cairo is pretty much 2D only, and while you can fake some 3D transformations with it, it’s definitely not up to the task of implementing the full CSS transformation specification.


Using OpenGL to generate pixel-perfect results is complicated, and in some cases it just goes against the expectations of the GPU itself: reading back data; minuscule fragments and tessellations; tons of state changes — those are all pretty much no-go areas when dealing with a GPU.

On the other hand, we really want to stop relying so much on the CPU for drawing; leaving your cores idle allows them to go into low power states, preserving them and improving your battery life; additionally, any cycle that is not spent inside the toolkit is a cycle available to your application logic.

As you may know from the past few years, I’ve been working on writing a new API that lets GTK offload to the GPU what currently happens on the CPU; it’s called GSK — short for GTK Scene Kit — and it’s meant to achieve two things:

  • render the contents of a GTK application more efficiently
  • provide a scene graph API to both the toolkit and applications

With these two goals in mind, I want to give a quick overview on how GSK works, and at which point we are in the development.


As GSK is meant to serve two purposes it makes sense to have two separate layers of API. This is a design decision that solidified after various discussions at GUADEC 2015. As such, it required a fair amount of rework of the existing code base, but very much for the better.

At the lowest level we have:

  • GskRenderNode, which is used to describe a tree of textures, blend modes, filters, and transformations; this tree is easily converted into render operations for graphics APIs like Cairo and OpenGL (and Vulkan, in the near future).
  • GskRenderer, an object that takes a tree of GskRenderNode instances that describes the contents of a frame, and renders it on a given GdkDrawingContext.

Every time you wish to render something, you build a tree of render nodes; specify their content; set up their transformations, opacity, and blending; and, finally, you pass the tree to the renderer. After that, the renderer owns the render nodes tree, so you can safely discard it after each frame.
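
To make that flow a little more concrete, here is a minimal C sketch of what building and submitting a frame could look like with this lower level API. The gsk_* names used below are illustrative only, modelled on the description above and on the ClutterPaintNode prior art mentioned further down; the actual API on the in-progress branch may well differ. The graphene_* calls are the existing Graphene API.

/* A minimal sketch, not the final GSK API: the gsk_* calls are
 * illustrative names based on the description in this post and on
 * the ClutterPaintNode design; only the graphene_* calls are the
 * real Graphene API. */
static void
render_frame (GskRenderer *renderer, GdkDrawingContext *context)
{
  graphene_point3d_t offset = GRAPHENE_POINT3D_INIT (20.f, 20.f, 0.f);
  graphene_matrix_t transform;
  GskRenderNode *root, *child;

  /* build a small tree of render nodes describing the frame */
  root = gsk_render_node_new ();
  child = gsk_render_node_new ();

  /* set up the transformation and opacity; content and blend modes
   * would be specified on the nodes in the same way */
  graphene_matrix_init_translate (&transform, &offset);
  gsk_render_node_set_transform (child, &transform);
  gsk_render_node_set_opacity (child, 0.5);
  gsk_render_node_append_child (root, child);

  /* hand the tree over to the renderer; the renderer now owns it,
   * so we can safely drop our references after the frame is done */
  gsk_renderer_render (renderer, root, context);

  gsk_render_node_unref (child);
  gsk_render_node_unref (root);
}

Conceptually this is the same shape as assembling a ClutterPaintNode tree inside a Clutter actor’s paint_node() implementation, which is why the Clutter API is a useful reference point.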

On top of this lower level API we can implement both the higher level scene graph API based on GskLayer that I presented at GUADEC; and GTK+ itself, which allows us to avoid reimplementing GTK+ widgets in terms of GSK layers.

I’m going to talk about GskRenderer and GskRenderNode more in depth in a future blog post, but if you’re looking for some form of prior art, you can check the ClutterPaintNode API in Clutter.

Widgets in GTK+ would not really be required to use render nodes: ideally, we want to get to a future where widgets are small, composable units whose appearance can be described using CSS; while we build towards that future, though, we can incrementally transition from the current immediate mode rendering model to a more structured tree of rendering operations that can be reordered and optimized for the target graphics layer.

Additionally, by sharing the same rendering model between the more complex widget API and the more freeform layers one, we only have to care about optimizing a single set of operations.

You can check the current progress of my work in the gsk-renderer branch of the GTK+ repository.

by ebassi at July 05, 2016 10:11 AM

June 19, 2016

Tomas Frydrych

Delorme InReach SE

Delorme InReach SE

The InReach SE is a location tracker and two-way (SMS-like) messaging device utilising the Iridium satellite network (which means it has genuinely 100% global coverage). I have been using it for about 2.5 years, so I thought it might be worth saying something about it.

When I first took up hill running, I quickly developed a taste for ground less trodden – it might seem hard to believe, but within half an hour’s drive of Stirling it is perfectly possible to do a 5+ hour hill run without bumping into another person, even at the weekend. And as the mobile phone coverage in Scotland is still patchy at the best of times, the phone rarely works in the hills. There are good reasons for having some means of communication on the longer outings at least, not least to avoid MRT call outs when what was supposed to be a 4.5-5h run turns into a 7h one because ‘reasons’ (as has actually happened to me). Plus it makes my wife less anxious, I think!

When I started looking into the options, it really came down to two: the Spot, and the InReach SE – the InReach won, because of (a) Iridium having truly global coverage, and (b) it being a lot more flexible.

How It Works

The InReach SE has three basic functions: tracking, messaging and SOS. When the device is put into tracking mode, it uploads tracking points at an interval that can be adjusted from 10min to 4h. This tracking info can be accessed live via a web portal – this is quite useful, for example, if you need to be picked up at the end of your run. The messaging is like SMS, i.e., you can send and receive short text messages, which can be delivered via either an SMS gateway or email. The messages are automatically tagged with location info. The SOS function provides one-button access to a 24/7 emergency response centre. There are a few other functions, for example, it is possible to get the coordinates of your current position, and there is some integration with Twitter and Facebook.

In order to use the Iridium network, you need a ‘plan’, not unlike a mobile phone contract; Delorme provide a number of these that match different use cases and provide different allowances for tracking point upload and messaging. I use a plan that provides unlimited tracking points and 40 messages per month for ~$20; there are both cheaper and more expensive options.

The Gadget

The gadget itself is very rugged. It conforms to some military specifications for impact, and is waterproof to IP67, essential to make it usable in Scottish summer months – to put it simply, it is the only gadget I own which I don’t feel needs some extra protective case when traveling or when in use; it lives in a mesh pocket on the outside of my backpack (it needs to be upright for most efficient function).

The device has a small, fairly low resolution, colour screen. It is not a touch screen, which makes the unit feel somewhat dated, but that really is the only sensible option for a device of this type (a resistive touch screen would mean stylus, which would be a disaster in the outdoors, and capacitive screens don’t work when it rains!).

The user interface, including the on-screen keyboard, is navigated using a four-way rocker, and the buttons are big enough to work in mid-weight gloves. The keyboard is, unavoidably, awkward to use, though I feel this is not helped by the keys being alphabetically organised; there is an autocomplete, but I find the suggestions rarely useful. You can preprogram three messages in advance to be dispatched with a dedicated button. All in all, the messaging works well, and the messages dispatch fast. By default the device only checks for messages every 20min, but you can make it check manually, and while this is not a device for swapping silly messages with your friends to and fro, it is perfectly possible to have a fairly fluent ‘conversation’ with it.

Power comes from a rechargeable battery, charged from USB; Delorme claim that with the 10min tracking interval the battery provides 100h of use. I have never had a need to run it for so long, so I can’t confirm it, but based on my experience I have no reason to doubt it. The power management on the unit seems to be implemented very well, and the battery holds charge when powered off; in my use, I only need to charge it once every couple of months.

The thing that perhaps impresses me most is that in the two and a half years I have used the InReach I have not run into any real bugs on the device; the update software had some issues a couple of times, but the device itself has been rock solid. As an embedded software engineer I tend to be unhappy with most gadgets I buy; the InReach is one of the rare exceptions. The whole unit is designed in a very utilitarian manner, and does exactly what it should do very well – my one hope is that now that Delorme is part of Garmin, they will be able to stick to this pragmatic approach.

I only have one feature request: could we please have support for coordinates using the UK national grid? It would make the position readout more useful over here.

Final Thoughts

I tend to be fairly obsessive about kit weight. At 196g the InReach is not light, but on the longer runs it’s rarely what ends up left behind; I think that sums it up better than anything else I could say.

PS: Delorme now have a posher model, the InReach Explorer. It seems to be basically the SE with a conventional GPS functionality thrown in.

by tf at June 19, 2016 11:00 AM

June 18, 2016

Tomas Frydrych

On the Importance of Being Lost

On the Importance of Being Lost

It seems that the GPS is now considered to be a part of the essential outdoor kit, and most of the people I meet ‘out there’ seem to have one (the other day I saw someone ‘navigating’ the towpath along the Forth and Clyde Canal using one). The experts even assure me that it is possible to program in hazards to keep me safe!

With all this technology keeping us on the straight and narrow, I wonder, have we stopped being explorers and become followers of (other people’s) tracks? When does adventure stop being adventure and become haulage? And is it really keeping me safer?

I learnt the rudimentary outdoor 'craft’ in summer camps on the 'evil’ side of the Czech - Austrian border. For many years each July, together with some thirty other boys, four weeks of camping in the woods, no running water, no electricity, doing everything for ourselves, parental visits discouraged and resented by those whose parents dared (few did), the 'seniors’ of an age that these days I barely think of as 'adult’. These were some of the happiest times of my childhood, like when we … I am getting sidetracked, and that would take for ever.

We had this game we played, sort of a rite of passage into teenagehood. It had a legend that I can’t recall too well, something to do with British commandos airdropped into occupied France (these were not the camps the Party leaders would have approved of, and now that I am old enough to understand, I am grateful to those who took such personal risk to make these things happen for us, but I am, again, digressing). The game went like this: you got woken up in the middle of the night, blindfolded, frisked to ensure you had nothing on you other than an ID card, bundled into a car … and dropped off in the middle of nowhere. Your task was to be back in camp in time for the daily conditional at 7am, avoiding any human contact.

This part of Southern Bohemia was covered by beautiful deep forests, where deer and wild boar roamed. To say it was underpopulated would be to imply that somebody lived there. With just a few kilometres separating us from the 'depraved’ West, the only vehicles on the roads at night were the military and the police, and there were plenty of them, for those were tense times when the Wicked Witch and the Evil Thespian plotted bad things. To be delivered to the camp in the back of a police car would have been the ultimate failure and shame (not to mention a lot of awkward explaining).

Once the sound of the engine died down and you took off the blindfold (for those were the rules), you could be sure there wouldn’t be the slightest glimmer of artificial light, and you would have no idea where you were; the lads who dropped you off made sure of that with great dedication and pride. But you knew you were a long way away from the camp, and you would count yourself lucky if you ended up walking less than 20k by the morning.

The game generated its own lore passed down the 'generations’, stories of boys climbing trees like Hansel of Hansel and Gretel, boys walking for hours at brisk pace in the wrong direction, boys hiding in stinky ditches from passing cars, boys taking till lunch to return. Lore, nostalgia, and H&S aside, it occurred to me recently that after that first time of finding my way back through the spooky forest, I never again felt threatened by being lost. The game taught me that in these micro-wildernesses of ours, the 'problem’ of being lost can always be solved by walking for a few hours, and if worst comes to worst, for a few hours more.

It seems to me that nowadays we put a lot of emphasis on location accuracy, while the biggest threats in the outdoors come, I think, from not being prepared (physically, mentally, equipment-wise) for the environment we venture into. Sure, walking off a cliff is a problem, but it is not one that the GPS magically solves; the accuracy is not that good, and neither is the accuracy of the maps we use to program our routes.

Over the years I have found the most important navigation skill to be, for my needs at least, the ability to read terrain, to be able to judge how far things are, how tall, how fast I am likely to be able to move forward, upward, downward, backward. Different plants like different things, and it is often possible to make an educated guess about what ground lies ahead from the colour of the vegetation; suddenly venturing off a beaten track is less of a gamble. In Scotland being able to identify Sitka at a distance can save hours of time and a ton of effort; not all rock is made equal, especially when wet, etc.

When the ability to read terrain is combined with an ability to read a map, much of day to day navigation can be done without any other tools, at least in good visibility. That is not to say I don’t need a compass – compass is my lifeline, especially in a place like Scotland, where poor visibility is common, dare I say, even the norm. (If it was up to me, I would make proficiency with map and compass part of the Scottish school curriculum!) But what I think makes the biggest difference to my safety is situational awareness, which becomes progressively more critical as conditions grow more 'awkward’ – being spot on the planned route will do me f*** all good if that spot is a loaded snow slope about to avalanche – and situational awareness comes from paying attention to my surroundings, here and now, and over days, weeks, years.

A long time ago the philosopher Michael Polanyi coined the expression personal knowledge to describe a type of knowledge that can only be acquired first hand. A simple example is 'pain’; it is not possible to teach someone what pain is, that knowledge can only be acquired by undergoing pain first hand. I think situational awareness is also a type of personal knowledge: we can teach people about terrain and hazards, and we can point out specific hazards, and techniques for dealing with them, but we cannot make them aware on an ongoing basis in a constantly changing environment; that requires first hand personal knowledge which builds up over time from first hand exposure to the 'out there’.

And here lies my problem with the GPS. Being a little lost all the time is a way to learn to 'find’ myself; it forces me to keep working out where I am, and, through the process of trial and error, I get better at it, and, while I am doing so, I become more acutely aware of my surroundings. Navigation becomes an instinctive, ongoing, nearly subconscious state, rather than an occasional, discrete, disjointed activity – when I rely on GPS for day to day navigation, I am depriving myself of learning opportunities, not developing my more rudimentary navigation skills, which, nevertheless, remain critical, because gadgets get broken or lost, batteries run out, signals get disrupted, etc.

It is tempting to think that I might, in fact, use that accurate information at my fingertips to learn inversely from it to achieve the same, i.e., that I can test my independent judgement against that referential knowledge. But in my personal experience at least this simply doesn’t work. If I tell my brain 'you are here, where did you think you were?’ it will invariably answer, 'I knew that, of course’, but if I make it commit itself beforehand, it will often be wrong, at which point I start analysing why it was wrong, so I can be right the next time.

So, does it make me safer? I think that is, in fact, the wrong question to ask. There are many things that, viewed on their own, make me safer; if I were to bring with me all the things that can make me safer, I’d need a caravan of Sherpas to accompany me each time I head for the hills. The questions that need to be asked instead are 'by what margin does it improve my safety?’ and 'how good an investment in improving the safety margin is it?’.

I have an inkling that when it comes to the GPS, the answers might well be 'small’ and 'not the best’ respectively. As long as I still have to be able to navigate competently when the batteries run out, then the gain is convenience, rather than safety margin. And in that case, would the same money spent on better waterproofs / boots / sleeping bag, or a course on using map and compass, not, in fact, provide a greater improvement to my safety margin?

So, I can’t help but see the GPS as a convenience rather than a significant safety investment. But convenience is a funny thing. The old fashioned compass is a very simple device: it has no buttons, no menus, and it is easy to operate in thick gloves. And in high stress situations simplicity greatly improves safety margins. I was given a poignant lesson on this a few years back, on an ugly winter day, needing to navigate off the Ben Nevis summit, in zero visibility, fast. I had my GPS, with the 'escape route’ programmed in, but when it came to it, the speed and accuracy with which my mate Bob was able to do this using a compass made it no contest. (Shortly after that the GPS got relegated to a shelf, and it’s been there ever since; every so often I come across some trade-in offer, but then I think, what’s the point?)

I am also under no illusion that there are significant commercial pressures at play. Safety sells; safety products have naturally bigger profit margins. It is in the interest of the industry to make the GPS about safety. But when it comes to gadgets, I expect there are ones that score higher on the safety margin improvement scale. Why, for example, are we, in Scotland, not talking more about avalanche transceivers? There is a gadget with a proven track record (but with a smaller market …).

And so, I am back where I started; to me the GPS is little more than a way of sharing and following tracks, and at the end of the day, I don’t want to turn into a haulier, I like being lost, a little bit at a time.

by tf at June 18, 2016 11:00 AM

June 15, 2016

Emmanuele Bassi

Long term support for GTK+

Dear Morten,

A belief that achieving stability can be done after most of the paid contributors have run off to play with new toys is delusional. The record does not support it.

The record (in terms of commit history) seems to not support your position — as much as you think everyone else is “delusional” about it, the commit log does not really lie.

The 2.24.0 release was cut in January, 2011 — five and a half years ago. No new features, no new API. Precisely what would happen with the new release plan, except that the new plan would also give a much better cadence to this behaviour.

Since then, the 2.24 branch (i.e. the “feature frozen” branch) has seen 873 commits (as of this afternoon, London time), and 30 additional releases.

Turns out that people are being paid to maintain feature-frozen branches because that’s where the “boring” bits are — security issues, stability bugs, etc. Volunteers are much more interested in getting the latest and greatest feature that probably does not interest you now, but may be requested by your users in two years.

Isn’t this what you have asked for multiple times? A “long term support” release that gives you time to port your application to a stable API that has seen most of the bugs and uncertainty already squashed?

by ebassi at June 15, 2016 05:24 PM

June 08, 2016

Emmanuele Bassi

Experiments in Meson

Last GUADEC I attended Jussi Pakkanen’s talk about his build system, Meson; if you weren’t there, I strongly recommend you watch the recording. I left the talk impressed, and I wanted to give Meson a try. Cue 9 months later, and a really nice blog post from Nirbheek on how Centricular is porting GStreamer from autotools to Meson, and I decided to spend some evening/weekend time on learning Meson.

I decided to use the simplest project I maintain, the one with the minimal amount of dependencies and with a fairly clean autotools set up — i.e. Graphene.

Graphene has very little overhead in terms of build system by itself; all it needs are:

  • a way to check for compiler flags
  • a way to check for the existence of headers and types
  • a way to check for platform-specific extensions, like SSE or NEON

Additionally, it needs a way to generate documentation and introspection data, but those are mostly hidden in weird incantations provided by other projects, like gtk-doc and gobject-introspection, so most of the complexity is hidden from the maintainer (and user) point of view.

Armed with little more than the Meson documentation wiki and the GStreamer port as an example, I set off towards the shining new future of a small, sane, fast build system.

The Good

Meson uses additional files, so I didn’t have to drop the autotools set up while working on the Meson one. Once I’m sure that the results are the same, I’ll be able to remove the various configure.ac, Makefile.am, and friends, and leave just the Meson file.


Graphene generates two header files during its configuration process:

  • a config.h header file, for internal use; we use this file to check if a specific feature or header is available while building Graphene itself
  • a graphene-config.h header file, for public use; we expose this file to Graphene users for build time detection of platform features

While the autotools code that generates config.h is pretty much hidden from the developer perspective, with autoconf creating a template file for you by pre-parsing the build files, the part of the build system that generates the graphene-config.h one is pretty much a mess of shell script, cacheable variables for cross-compilation, and random m4 escaping rules. Meson, on the other hand, treats both files exactly the same way: generate a configuration object, set variables on it, then take the appropriate configuration object and generate the header file — with or without a template file as an input:

# Internal configuration header
configure_file(input: 'config.h.meson',
               output: 'config.h',
               configuration: conf)

# External configuration header
configure_file(input: 'graphene-config.h.meson',
               output: 'graphene-config.h',
               configuration: graphene_conf,
               install: true,
               install_dir: 'lib/graphene-1.0/include')

While explicit is better than implicit, at least most of the time, having things taken care of for you avoids the boring bits and, more importantly, avoids getting the boring bits wrong. If I had a quid for every broken invocation of the introspection scanner I’ve ever seen or had to fix, I’d probably retire on a very small island. In Meson, this is taken care of by a function in the gnome module:

    import('gnome')

    # Build introspection only if we enabled building GObject types
    build_gir = build_gobject
    if build_gobject and get_option('enable-introspection')
      gir = find_program('g-ir-scanner', required: false)
      build_gir = gir.found() and not meson.is_cross_build()
    endif

    if build_gir
      gir_extra_args = [
        '--identifier-filter-cmd=' + meson.source_root() + '/src/identfilter.py',
        '--c-include=graphene-gobject.h',
        '--accept-unprefixed',
        '-DGRAPHENE_COMPILATION',
        '--cflags-begin',
        '-I' + meson.source_root() + '/src',
        '-I' + meson.build_root() + '/src',
        '--cflags-end'
      ]
      gnome.generate_gir(libgraphene,
                         sources: headers + sources,
                         namespace: 'Graphene',
                         nsversion: graphene_api_version,
                         identifier_prefix: 'Graphene',
                         symbol_prefix: 'graphene',
                         export_packages: 'graphene-gobject-1.0',
                         includes: [ 'GObject-2.0' ],
                         install: true,
                         extra_args: gir_extra_args)
    endif

Meson generates Ninja rules by default, and it’s really fast at that. I can get a fully configured Graphene build set up in less than a couple of seconds. On top of that, Ninja is incredibly fast. The whole build of Graphene takes less than 5 seconds — and I’m counting building the tests and benchmarks, something that I had to move to be on demand for the autotools set up because they added a noticeable delay to the build. Now I always know if I’ve just screwed up the build, and not just when I run make check.


Jussi is a very good maintainer, helpful and attentive at issues reported to his project, and quick at reviewing patches. The terms for contributing to Meson are fairly standard, and the barrier for entry is very low. For a project like a build system, which interacts and enables other projects, this is a very important thing.

The Ugly

As I said, Meson has some interesting automagic handling of the boring bits of building software, like the introspection data. But there are other boring bits that do not have convenience wrappers, and thus you get into overly verbose sections of your meson.build — and while it’s definitely harder to get those wrong, compared to autoconf or automake, it can still happen.

Even in the case of automagic handling, though, there are cases when you have to deal with some of the magic escaping from under the rug. Generally it’s not hard to understand what’s missing or what’s necessary, but it can be a bit daunting when you’re just staring at a Python exception barfed on your terminal.


The documentation is kept in a wiki, which is generally fine for keeping it up to date; but it’s hard to search — as all wikis are — and hard to visually scan. I’ve lost count of the times I had to search for all the methods on the meson built-in object, and I never remember which page I have to search for, or in.

The inheritance chain for some objects is mentioned in passing, but it’s hard to track; which methods does the test object have? What kind of arguments does the compiler.compiles() method have? Are they positional or named?

The syntax and API reference documentation should probably be generated from the code base, and look more like an API reference than a wiki.


Examples are hard to come by. I looked at the GStreamer port, but I also had to start looking at Meson’s own test suite.


Modules are all in tree, at least for the time being. This means that if I want to add an ad hoc module for a whole complex project like, say, GNOME, I’d have to submit it to upstream. Yeah, I know: bad example, Meson already has a GNOME module; but the concept still applies.


Meson does not do dist tarballs. I’ve already heard people being skeptical about this point, but I personally don’t care that much. I can generate a tarball from a Git tag, and while it won’t be self-hosting, it’s already enough to get a distro going. Seriously, though: building from a Git tag is a better option than building from a tarball, in 2016.

The Bad

The shocking twist is that nothing stands out as “bad”. Mostly, it’s just ugly stuff — caused either by missing convenience functionality that will by necessity appear once people start using Meson more; or by the mere fact that all build systems are inherently ugly.

On the other hand, there’s some badness in the tooling around project building. For instance, Travis-CI does not support it, mostly because they use an ancient version of Ubuntu LTS as the base environment. Jhbuild does not have a Meson/Ninja build module, so we’ll have to write that one; same thing for GNOME Builder. While we wait, having a dummy configure script or a dummy Makefile would probably help.

These are not bad things per se, but they definitely block further adoption.

tl;dr

I think Meson has great potential, and I’d love to start using it more for my projects. If you’re looking for a better, faster, and more understandable build system then you should grab Meson and explore it.

by ebassi at June 08, 2016 11:00 PM

June 01, 2016

Chris Lord

Open Source Speech Recognition

I’m currently working on the Vaani project at Mozilla, and part of my work on that allows me to do some exploration around the topic of speech recognition and speech assistants. After looking at some of the commercial offerings available, I thought that if we were going to do some kind of add-on API, we’d be best off aping the Amazon Alexa skills JS API. Amazon Echo appears to be doing quite well and people have written a number of skills with their API. There isn’t really any alternative right now, but I actually happen to think their API is quite well thought out and concise, and maps well to the sort of data structures you need to do reliable speech recognition.

So skipping forward a bit, I decided to prototype with Node.js and some existing open source projects to implement an offline version of the Alexa skills JS API. Today it’s gotten to the point where it’s actually usable (for certain values of usable) and I’ve just spent the last 5 minutes asking it to tell me Knock-Knock jokes, so rather than waste any more time on that, I thought I’d write this about it instead. If you want to try it out, check out this repository and run npm install in the usual way. You’ll need pocketsphinx installed for that to succeed (install sphinxbase and pocketsphinx from github), and you’ll need espeak installed and some skills for it to do anything interesting, so check out the Alexa sample skills and sym-link the ‘samples‘ directory as a directory called ‘skills‘ in your ferris checkout directory. After that, just run the included example file with node and talk to it via your default recording device (hint: say ‘launch wise guy‘).

Hopefully someone else finds this useful – I’ll be using this as a base to prototype further voice experiments, and I’ll likely be extending the Alexa API further in non-standard ways. What was quite neat about all this was just how easy it all was. The Alexa API is extremely well documented, Node.js is also extremely well documented and just as easy to use, and there are tons of libraries (of varying quality…) to do what you need to do. The only real stumbling block was pocketsphinx’s lack of documentation (there’s no documentation at all for the Node bindings and the C API documentation is pretty sparse, to say the least), but thankfully other members of my team are much more familiar with this codebase than I am and I could lean on them for support.

I’m reasonably impressed with the state of lightweight open source voice recognition. This is easily good enough to be useful if you can limit the scope of what you need to recognise, and I find the Alexa API is a great way of doing that. I’d be interested to know how close the internal implementation is to how I’ve gone about it if anyone has that insider knowledge.

by Chris Lord at June 01, 2016 04:54 PM

May 16, 2016

Emmanuele Bassi

Reviving the GTK development blog

The GTK+ project has a development blog.

I know it may come as a shock to many of you, and you’d be completely justified in thinking that I just made that link up — but the truth is, the GTK+ project has had a development blog for a long while.

Sadly, the blog hasn’t been updated in five years — mostly around the time 3.0 was released, and the GTK+ website was revamped; even before that, the blog was mostly used for release announcements, which do not make for very interesting content.

Like many free and open source software projects, GTK+ has various venues of interaction between its contributors and its users; mailing lists, personal blogs, IRC, Stack Overflow, reddit, and many, many other channels. In this continuum of discussions it’s both easy to get lost and to lose the sense of having said things before — after all, if I repeat something at least three times a week on three different websites for three years, how can people still not know about it? Some users will always look at catching up after three years, because their projects live on very different schedules than the GTK releases; others will try to look for official channels, even if the free and open source software landscape has fragmented to such a degree that any venue can be made “official” by the simple fact of having a contributor on it; others again will look at the API reference for any source of truth, forgetting, possibly, that if everything went into the API reference then it would cease to be useful as a reference.

The GTK+ development blog is not meant to be the only source for truth, or the only “official” channel; it’s meant to be a place for interesting content regarding the project, for developers using GTK+ or considering to use it; a place that acts as a hub to let interested people discover what’s up with GTK+ itself but that don’t want to subscribe to the commits list or join IRC.

From an editorial standpoint, I’d like the GTK+ development blog to be open to contribution from people contributing to GTK+; using GTK+; and newcomers to the GTK+ code base and their experiences. What’s a cool GTK+ feature that you worked on? How did GTK+ help you in writing your application or environment? How did you find contributing to GTK+ for the first time? If you want to write an article for the GTK+ blog talking about this, then feel free to reach out to me with an outline, and I’ll be happy to help you.

In the meantime, the first post in the This Week in GTK+ series has gone up; you’ll get a new post about it every Monday, and if you want to raise awareness on something that happened during a week, feel free to point it out on the wiki.

by ebassi at May 16, 2016 06:38 PM

May 05, 2016

Emmanuele Bassi

Who wrote GTK+ 3.20

Last time I tried to dispel the notion that GTK+ is dead or dying. Others have also chimed in, and it seems that we’re picking up the pace into making GTK a more modern, more useful driving force into the Linux desktop ecosystem.

Let’s see how much has changed in the six months of the 3.20 development cycle.

Once again, to gather the data, I’ve used the most excellent git-dm tool that Jonathan Corbet wrote for the “Who wrote the Linux kernel” columns for LWN. As usual, I’ve purposefully skipped the commits dealing with translations, to avoid messing up the statistics.

You should look at my previous article as a comparison point.

Activity

For the 3.20 cycle, the numbers are:

Version Lines added Lines removed Delta Contributors
GLib 2.48 20597 7544 13053 55
GTK+ 3.20 158427 117823 40604 81

More or less stable in terms of contributors, but as you can see the number of lines added and removed has doubled. This is definitely the result of the changes in the CSS machinery that have (finally) brought it to a stable as well as more featureful state.

Contributors

GLib

Of the 55 developers that contributed the 271 changesets of GLib during the 3.20 development cycle, the most active are:

Name Per changeset Name Per changed lines
Ignacio Casal Quinteiro 56 (20.7%) Ignacio Casal Quinteiro 8530 (39.7%)
Philip Withnall 42 (15.5%) Philip Withnall 5402 (25.1%)
Allison Ryan Lortie 27 (10.0%) Matthias Clasen 3228 (15.0%)
Chun-wei Fan 22 (8.1%) Chun-wei Fan 1440 (6.7%)
Matthias Clasen 18 (6.6%) Allison Ryan Lortie 1338 (6.2%)
Dan Winship 9 (3.3%) Javier Jardón 565 (2.6%)
Mikhail Zabaluev 7 (2.6%) Iain Lane 149 (0.7%)
Marc-André Lureau 6 (2.2%) Ruslan Izhbulatov 147 (0.7%)
Ruslan Izhbulatov 6 (2.2%) Dan Winship 95 (0.4%)
Rico Tzschichholz 6 (2.2%) Lars Uebernickel 79 (0.4%)
Xavier Claessens 6 (2.2%) Xavier Claessens 74 (0.3%)
Emmanuele Bassi 5 (1.8%) Christian Hergert 71 (0.3%)
Iain Lane 4 (1.5%) Mikhail Zabaluev 48 (0.2%)
Lars Uebernickel 3 (1.1%) Rico Tzschichholz 45 (0.2%)
Sébastien Wilmet 3 (1.1%) Daiki Ueno 42 (0.2%)
Simon McVittie 3 (1.1%) Simon McVittie 27 (0.1%)
Javier Jardón 3 (1.1%) Emmanuele Bassi 25 (0.1%)
Christian Hergert 3 (1.1%) Robert Ancell 23 (0.1%)
coypu 2 (0.7%) Marc-André Lureau 14 (0.1%)
Sebastian Geiger 2 (0.7%) Jan de Groot 14 (0.1%)

Ignacio has been hard at work, helped by Ruslan and Fan, in making 2.48 the best GLib release ever in terms of supporting Windows — both for cross and native compilation, using autotools and the Microsoft Visual C compiler suite. If you can build an application for Windows as reliably as you can on Linux, it’s because of their work.

GTK+

For GTK+, on the other hand, the most active of the 81 contributors are:

Name Per changeset Name Per changed lines
Matthias Clasen 1220 (43.7%) Matthias Clasen 78960 (41.1%)
Benjamin Otte 472 (16.9%) Benjamin Otte 35975 (18.7%)
Lapo Calamandrei 203 (7.3%) Lapo Calamandrei 35352 (18.4%)
Cosimo Cecchi 167 (6.0%) Cosimo Cecchi 10408 (5.4%)
Carlos Garnacho 147 (5.3%) Jakub Steiner 6927 (3.6%)
Timm Bäder 107 (3.8%) Carlos Garnacho 5334 (2.8%)
Emmanuele Bassi 41 (1.5%) Alexander Larsson 3128 (1.6%)
Paolo Borelli 39 (1.4%) Chun-wei Fan 2394 (1.2%)
Ruslan Izhbulatov 29 (1.0%) Paolo Borelli 1771 (0.9%)
Carlos Soriano 28 (1.0%) Ruslan Izhbulatov 1635 (0.9%)
Jakub Steiner 26 (0.9%) Timm Bäder 1326 (0.7%)
Olivier Fourdan 26 (0.9%) Takao Fujiwara 1269 (0.7%)
Jonas Ådahl 23 (0.8%) Jonas Ådahl 1243 (0.6%)
Chun-wei Fan 22 (0.8%) Emmanuele Bassi 885 (0.5%)
Piotr Drąg 18 (0.6%) Olivier Fourdan 646 (0.3%)
Ray Strode 18 (0.6%) Ray Strode 570 (0.3%)
Ignacio Casal Quinteiro 16 (0.6%) Sébastien Wilmet 494 (0.3%)
William Hua 16 (0.6%) Carlos Soriano 427 (0.2%)
Alexander Larsson 14 (0.5%) Ignacio Casal Quinteiro 333 (0.2%)
Christoph Reiter 10 (0.4%) William Hua 321 (0.2%)

Benjamin has worked on the new CSS gadget internal API; Matthias, Cosimo, and Timm have worked on porting existing widgets to it, in order to validate the API. Lapo and Jakub have worked on updating Adwaita and the other in tree themes to the new style declarations.

Carlos Soriano has worked on the widgets shared between the file chooser dialog and Nautilus.

Carlos Garnacho has worked on the input layer in GDK, in order to make it behave correctly under the new world order of Wayland; and speaking of Wayland, Carlos, Jonas, and Olivier have worked really hard to implement all the missing features in the Wayland backend, as well as the fallout of the Wayland switch when it comes to window sizing and positioning.

Affiliation

GLib

Affiliation Per changeset Affiliation Per lines Affiliation Per contributor (total 55)
(Unknown) 136 (50.2%) (Unknown) 10942 (50.9%) (Unknown) 35 (60.3%)
Collabora 49 (18.1%) Collabora 5491 (25.6%) Red Hat 9 (15.5%)
Canonical 41 (15.1%) Red Hat 3398 (15.8%) Canonical 5 (8.6%)
Red Hat 36 (13.3%) Canonical 1612 (7.5%) Collabora 4 (6.9%)
Endless 6 (2.2%) Endless 34 (0.2%) Endless 2 (3.4%)
Centricular 1 (0.4%) Centricular 4 (0.0%) Centricular 1 (1.7%)
Intel 1 (0.4%) Intel 2 (0.0%) Intel 1 (1.7%)
Novell 1 (0.4%) Novell 1 (0.0%) Novell 1 (1.7%)

As usual, GLib is a little bit more diverse, in terms of employers, because of its versatility and use in various platforms.

GTK+

Affiliation Per changeset Affiliation Per lines Affiliation Per contributor (total 81)
Red Hat 1940 (69.5%) Red Hat 131833 (68.7%) (Unknown) 63 (75.9%)
(Unknown) 796 (28.5%) (Unknown) 59204 (30.8%) Red Hat 15 (18.1%)
Endless 41 (1.5%) Endless 885 (0.5%) Canonical 4 (4.8%)
Canonical 13 (0.5%) Canonical 104 (0.1%) Endless 1 (1.2%)

Not many changes in these tables, but if your company uses the GNOME core platform and you wish to have a voice in where the platform goes, you should really consider contributing employee time to work upstream.

It is also very important to note that, while Red Hat still retains the majority of commits, the vast majority of committers are unaffiliated.

Methodology

The command line I used for gitdm is:

git log \
 --numstat \
 -M $START..$END | \
 gitdm -r '.*(?<!po)$' -l 20 -u -n

For GLib, I started from commit 37fcab17 which contains the version bump to 2.47, and ended on the 2.48.0 tag.

For GTK+, I started from commit 2f0d4b68 which contains the first new API of the 3.19 cycle and precedes the version bump, and ended on the 3.20.0 tag.

The only changes to the gitdm stock configuration are the addition of a couple of email/name/employer associations; I can publish them on request.

by ebassi at May 05, 2016 11:00 PM

March 08, 2016

Chris Lord

State of Embedding in Gecko

Following up from my last post, I’ve had some time to research and assess the current state of embedding Gecko. This post will serve as a (likely incomplete) assessment of where we are today, and what I think the sensible path forward would be. Please note that these are my personal opinions and not those of Mozilla. Mozilla are gracious enough to employ me, but I don’t yet get to decide on our direction 😉

The TL;DR: there are no first-class Gecko embedding solutions as of writing.

EmbedLite (aka IPCLite)

EmbedLite is an interesting solution for embedding Gecko that relies on e10s (Electrolysis, Gecko’s out-of-process feature code-name) and OMTC (Off-Main-Thread Compositing). From what I can tell, the embedding app creates a new platform-specific compositor object that attaches to a window, and with e10s, a separate process is spawned to handle the brunt of the work (rendering the site, running JS, handling events, etc.). The existing widget API is exposed via IPC, which allows you to synthesise events, handle navigation, etc. This builds using the xulrunner application target, which unfortunately no longer exists. This project was last synced with Gecko on April 2nd 2015 (the day before my birthday!).

The most interesting thing about this project is how much code it reuses in the tree, and how little modification is required to support it (almost none – most of the changes are entirely reasonable, even outside of an embedding context). That we haven’t supported this effort seems insane to me, especially as it’s been shipping for a while as the basis for the browser in the (now defunct?) Jolla smartphone.

Building this was a pain; on Fedora 22 I was not able to get the desktop Qt build to compile, even after some effort, but I was able to compile the desktop Gtk build (trivial patches required). Unfortunately, there’s no support code provided for the Gtk version and I don’t think it’s worth my time implementing that, given that this is essentially a dead project. A huge shame that we missed this opportunity; this would have been a good base for a lightweight, relatively easily maintained embedding solution. The quality of the work done on this seems quite high to me, after a brief examination.

Spidernode

Spidernode is a port of Node.js that uses Gecko’s ‘spidermonkey’ JavaScript engine instead of Chrome’s V8. Not really a Gecko embedding solution, but certainly something worth exploring as a way to enable more people to use Mozilla technology. Being a much smaller project, of much more limited scope, I had no issues building and testing this.

Node.js using spidermonkey ought to provide some interesting advantages over a V8-based Node. Namely, modern language features, asm.js (though I suppose this will soon be supplanted by WebAssembly) and speed. Spidernode has unfortunately been unmaintained since early 2012, but I thought it would be interesting to do a simple performance test. Using the (very flawed) technique detailed here, I ran a few quick tests to compare an old copy of Node I had installed (~0.12), current stable Node (4.3.2) and this very old (~0.5) Spidermonkey-based Node. Spidermonkey-based Node was consistently over 3x faster than both old and current (which varied very little in performance). I don’t think you can really draw any conclusions from this, other than that it’s an avenue worth exploring.

Many new projects are prototyped (and indeed, fully developed) in Node.js these days; particularly Internet-Of-Things projects. If there’s the potential for these projects to run faster, unchanged, this seems like a worthy project to me. Even forgetting about the advantages of better language support. It’s sad to me that we’re experimenting with IoT projects here at Mozilla and so many of these experiments don’t promote our technology at all. This may be an irrational response, however.

GeckoView

GeckoView is the only currently maintained embedding solution for Gecko, and is Android-only. GeckoView is an Android project, split out of Firefox for Android and using the same interfaces with Gecko. It provides an embeddable widget that can be used instead of the system-provided WebView. This is not a first-class project from what I can tell, there are many bugs and many missing features, as its use outside of Firefox for Android is not considered a priority. Due to this dependency, however, one would assume that at least GeckoView will see updates for the foreseeable future.

I’d experimented with this in the past, specifically with this project that uses GeckoView with Cordova. I found then that the experience wasn’t great, due to the huge size of the GeckoView library and the numerous bugs, but this was a while ago and YMMV. Some of those bugs were down to GeckoView not using the shared APZC, a bug which has since been fixed, at least for Nightly builds. The situation may be better now than it was then.

The Future

This post is built on the premise that embedding Gecko is a worthwhile pursuit. Others may disagree about this. I’ll point to my previous post to list some of the numerous opportunities we missed, partly because we don’t have an embedding story, but I’m going to conjecture as to what some of our next missed opportunities might be.

IoT is generating a lot of buzz at the moment. I’m dubious that there’s much decent consumer use of IoT, at least that people will get excited about as opposed to property developers, but if I could predict trends, I’d have likely retired rich already. Let’s assume that consumer IoT will take off, beyond internet-connected thermostats (which are actually pretty great) and metered utility boxes (which I would quite like). These devices are mostly bespoke hardware running random bits and bobs, but an emerging trend seems to be Node.js usage. It might be important for Mozilla to provide an easily deployed out-of-the-box solution here. As our market share diminishes, so does our test-bed and contribution base for our (currently rather excellent) JavaScript engine. While we don’t have an issue here at the moment, if we find that a huge influx of diverse, resource-constrained devices starts running V8 and only V8, we may eventually find it hard to compete. It could easily be argued that it isn’t important for our solution to be based on our technology, but I would argue that if we have to start employing a considerable amount of people with no knowledge of our platform, our platform will suffer. By providing a licensed out-of-the-box solution, we could also enforce that any client-side interface remain network-accessible and cross-browser compatible.

A less tenuous example, let’s talk about VR. VR is also looking like it might finally break out into the mid/high-end consumer realm this year, with heavy investment from Facebook (via Oculus), Valve/HTC (SteamVR/Vive), Sony (Playstation VR), Microsoft (HoloLens), Samsung (GearVR) and others. Mozilla are rightly investing in WebVR, but I think the real end-goal for VR is an integrated device with no tether (certainly Microsoft and Samsung seem to agree with me here). So there may well be a new class of device on the horizon, with new kinds of browsers and ways of experiencing and integrating the web. Can we afford to not let people experiment with our technology here? I love Mozilla, but I have serious doubts that the next big thing in VR is going to come from us. That there’s no supported way of embedding Gecko worries me for future classes of device like this.

In-vehicle information/entertainment systems are possibly something that will become more of the norm, now that similar devices have become such a commodity. Interestingly, the current big desktop and mobile players have very little presence here, and (mostly awful) bespoke solutions are rife. Again, can we afford to make our technology inaccessible to the people that are experimenting in this area? Is having just a good desktop browser enough? Can we really say that’s going to remain how people access the internet for the next 10 years? Probably, but I wouldn’t want to bet everything on that.

A plan

If we want an embedding solution, I think the best way to go about it is to start from Firefox for Android. Due to the way Android used to require its applications to interface with native code, Firefox for Android is already organised in such a way that it is basically an embedding API (thus GeckoView). From this point, I think we should make some of the interfaces slightly more generic and remove the JNI dependency from the Gecko-side of the code. Firefox for Android would be the main consumer of this API and would guarantee that it’s maintained. We should allow for it to be built on Linux, Mac and Windows and provide the absolute minimum harness necessary to allow for it to be tested. We would make no guarantees about API or ABI. Externally to the Gecko tree, I would suggest that we start, and that the community maintain, a CEF-compatible library, at least at the API level, that would be a Tier-3 project, much like Firefox OS now is. This, to me, seems like the minimal-effort and most useful way of allowing embeddable Gecko.

In addition, I think we should spend some effort in maintaining a fork of Node.js LTS that uses SpiderMonkey. If we can promise modern language features and better performance, I expect there’s a user-base that would be interested in this. If there isn’t, fair enough, but I don’t think current experiments have had enough backing to ascertain this.

I think that both of these projects are important, so that we can enable people outside of Mozilla to innovate using our technology, and by osmosis, become educated about our mission and hopefully spread our ideals. Other organisations will do their utmost to establish a monopoly in any new emerging market, and I think it’s a shame that we have such a powerful and comprehensive technology platform and we aren’t enabling other people to use it in more diverse situations.

This post is some insightful further reading on roughly the same topic.

by Chris Lord at March 08, 2016 05:22 PM

February 24, 2016

Chris Lord

The case for an embeddable Gecko

Strap yourself in, this is a long post. It should be easy to skim, but the history may be interesting to some. I would like to make the point that, for a web rendering engine, being embeddable is a huge opportunity, show how Gecko not being easily embeddable has meant we’ve missed several opportunities over the last few years, and argue that it would still be advantageous to make Gecko embeddable.

What?

Embedding Gecko means making it easy to use Gecko as a rendering engine in an arbitrary 3rd party application on any supported platform, and maintaining that support. An embeddable Gecko should impose very few constraints on the embedding application and should not include unnecessary resources.

Examples

  • A 3rd party browser with a native UI
  • A game’s embedded user manual
  • OAuth authentication UI
  • A web application
  • ???

Why?

It’s hard to predict what the next technology trend will be, but there is a strong likelihood it’ll involve the web, and there’s a possibility it may not come from a company/group/individual with an existing web rendering engine or particular allegiance. It’s important for the health of the web and for Mozilla’s continued existence that there be multiple implementations of web standards, and that there be real competition and a balanced share of users of the various available engines.

Many technologies have emerged over the last decade or so that have incorporated web rendering or web technologies and that could have leveraged Gecko:

(2007) iPhone: Instead of using an existing engine, Apple forked KHTML in 2002 and eventually created WebKit. They did investigate Gecko as an alternative, but forking another engine with a cleaner code-base ended up being a more viable route. Several rival companies were also interested in and investing in embeddable Gecko (primarily Nokia and Intel). WebKit would go on to be one of the core pieces of the first iPhone release, which included a better mobile browser than had ever been seen previously.

(2008) Chrome: Google released a WebKit-based browser that would eventually go on to eat a large part of Firefox’s user base. Chrome was initially praised for its speed and light weight, but much of that was down to its multi-process architecture, something made possible by WebKit having a well thought-out embedding capability and API.

(2008) Android: Android used WebKit for its built-in browser and later for its built-in web-view. In recent times, it has switched to Chromium, showing they aren’t averse to switching the platform to a different/better technology, and that a better embedding story can benefit a platform (Android’s built-in web view can now be updated outside of the main OS, and this may well be partly thanks to Chromium’s embedding architecture). Given the quality of Android’s initial WebKit browser and WebView (which was, frankly, awful until later revisions of Android Honeycomb, and arguably remained awful until they switched to Chromium), it’s not much of a leap to think they may have considered Gecko were it easily available.

(2009) WebOS: Nothing came of this in the end, but it perhaps signalled the direction of things to come. WebOS survived and went on to be the core of LG’s Smart TV, one of the very few real competitors in that market. Perhaps if Gecko had been readily available at that point, we would have had a large head start on FirefoxOS?

(2009) Samsung Smart TV: Also available in various other guises since 2007, Samsung’s Smart TV is certainly the most popular smart TV platform currently available. It appears Samsung built this from scratch in-house, but it includes many open-source projects. It’s highly likely that they would have considered a Gecko-based browser if it were possible and available.

(2011) PhantomJS: PhantomJS is a headless, scriptable browser, useful for testing site behaviour and performance. It’s used by several large companies, including Twitter, LinkedIn and Netflix. Had Gecko been more easily embeddable, such a product may well have been based on it, and the benefit would have been that many sites using PhantomJS for testing might have had better rendering and performance characteristics in Gecko-based browsers. The demand for a Gecko-based alternative is high enough that a similar project, SlimerJS, based on Gecko was developed and released in 2013. Due to Gecko’s embedding deficiencies though, SlimerJS is not truly headless.

(2011) WIMM One: The first truly capable smart-watch, which generated a large buzz when initially released. WIMM was based on a highly-customised version of Android, and ran software that was compatible with Android, iOS and BlackBerryOS. Although it never progressed past the development kit stage, WIMM was bought by Google in 2012. It is highly likely that WIMM’s work forms the base of the Android Wear platform, released in 2014. Had something like WebOS been open, available and based on Gecko, it’s not outside the realm of possibility that this could have been Gecko based.

(2013) Blink: Google decide to fork WebKit to better suit their own uses. Blink/Chromium quickly becomes the favoured rendering engine for embedding. Google were not afraid to introduce possible incompatibility with WebKit, but also realised that embedding is an important feature to maintain.

(2014) Android Wear: Android specialised to run on watch hardware. Smart watches have yet to take off, and possibly never will (though Pebble seem to be doing alright, and every major consumer tech product company has launched one), but this is yet another area where Gecko/Mozilla have no presence. FirefoxOS might have led us to an easy presence in this area, but has now been largely discontinued.

(2014) Atom/Electron: GitHub open-sources its web-based text editor, built on a home-grown platform of Node.js and Chromium that it later called Electron. Since then, several large and very successful projects have been built on top of it, including Slack and Visual Studio Code. It’s highly likely that such diverse use of Chromium feeds back into its testing and development, making it a more robust and performant engine, and importantly, more widely used.

(2016) Brave: A former Mozilla co-founder and CTO heads a company that makes a new browser with the selling point of blocking ads and tracking by default, and doing as much as possible to protect user privacy and agency without breaking the web. Said browser is based on Chromium, and on iOS it is a fork of Mozilla’s own WebKit-based Firefox browser. Brendan says they started off based on Gecko, but switched because it wasn’t capable of doing what they needed (due to an immature embedding API).

Current state of affairs

Chromium and V8 represent the state of the art in embeddable web rendering and JavaScript engines, and have wide and varied use across many platforms. This helps reinforce Chrome’s behaviour as the de-facto standard and gradually eats away at the market share of competing engines.

WebKit is the only viable alternative for an embeddable web rendering engine and is still quite commonly used, but is generally viewed as a less up-to-date and less performant engine vs. Chromium/Blink.

SpiderMonkey is generally considered to be a very nice JavaScript engine with great support for new ECMAScript features and generally great performance, but due to a rapidly changing API/ABI, it doesn’t challenge V8 in terms of use in embedded environments. Node.js is likely the largest user of embeddable V8, and is favoured even by Mozilla employees for JavaScript-based systems development.

Gecko has limited embedding capability that is not well-documented, not well-maintained and not heavily invested in. I say this with the utmost respect for those who are working on it; this is an observation and a criticism of Mozilla’s priorities as an organisation. We have at various points in history had embedding APIs/capabilities, but we have either dropped them (gtkmozembed) or let them bit-rot (IPCLite). We do currently have an embedding widget for Android that is very limited in capability when compared to the default system WebView.

Plea

It’s not too late. It’s incredibly hard to predict where technology is going, year-to-year. It was hard to predict, prior to the iPhone, that Nokia would so spectacularly fall from the top of the market. It was hard to predict when Android was released that it would ever overtake iOS, or even more surprisingly, rival it in quality (hard, but not impossible). It was hard to predict that WebOS would form the basis of a major competing Smart TV several years later. I think the examples of our missed opportunities are also good evidence that opening yourself up to as much opportunity as possible is a good indicator of future success.

If we want to form the basis of the next big thing, it’s not enough to be experimenting in new areas. We need to enable other people to experiment in new areas using our technology. Even the largest of companies have difficulty predicting the future, or taking charge of it. This is why it’s important that we make easily-embeddable Gecko a reality, and I plead with the powers that be to make this a higher priority than it has been in the past.

by Chris Lord at February 24, 2016 06:10 PM

February 15, 2016

Damien Lespiau

Augmenting mailing-lists with Patchwork - Another try

The mailing-list problem


Many software projects use mailing-lists, which usually means mailman, not only for discussions around that project, but also for code contributions. A lot of open source projects work that way, including the one I interact with the most, the Linux kernel. A contributor sends patches to a mailing list, these days using git send-email, and waits for feedback or for his/her patches to be picked up for inclusion if fortunate enough.

Problem is, mailing-lists are awful for code contribution.

A few of the issues at hand:
  • Dealing with patches and emails can be daunting for new contributors,
  • There's no feedback that someone will look into the patch at some point,
  • There's no tracking of which patch has been processed (e.g. included into the tree). A shocking number of patches are just dropped as a direct consequence,
  • There's no way to add metadata to a submission. For instance, we can't assign a reviewer from a pool of people working on the project. As a result, review only happens thanks to the good will of people. That's not necessarily a bad thing, but it doesn't work in a corporate environment with deadlines,
  • Mailing-lists are all or nothing: one subscribes to the activity of the full project, but may only care about following the progress of a couple of patches,
  • There's no structure at all actually, it's all just emails,
  • No easy way to hook continuous integration testing,
  • The tools are really bad any time they need to interact with the mailing-list: try sending a patch as a reply to a review comment, addressing it. It starts with digging through the headers of the review email to copy/paste its Message-ID, followed by an arcane incantation:
    $ git send-email --to=<mailing-list> --cc=<reviewer> \
    --in-reply-to=<reviewer-mail-message-id> \
    --reroll-count 2 -1 HEAD~2

Alternative to mailing-lists


Before mentioning Patchwork, it's worth saying that a project can simply decide to switch to something other than a mailing-list to handle code contributions. To name a few: Gerrit, Phabricator, GitHub, GitLab, Crucible.

However, there can be some friction preventing the adoption of those tools. People have built their own workflows around mailing-lists for years and it's somewhat difficult to adopt anything else overnight. Projects can be big with no clear way to make decisions, so sticking to mailing-lists can just be the result of inertia.

The alternatives also have problems of their own and there's no clear winner, nothing like how git took over the world.


Patchwork


So, the path of least resistance is to keep mailing-lists. Jeremy Kerr had the idea to augment mailing-lists with a tool that would track the activity there and build a database of patches and their status (new, reviewed, merged, dropped, ...). Patchwork was born.

Here are some Patchwork instances in the wild:

The KMS and DRI Linux subsystems are using freedesktop.org to host their mailing-lists, which includes the i915 Intel driver, a project I've been contributing to since 2012. We have an instance of Patchwork there, and, while somewhat useful, the tool fell short of what we really wanted to do with our code contribution process.

Patches are welcome!


So, it was time to do something about the situation and I started improving Patchwork to answer some of the problems outlined above. Given enough time, it's possible to help on all fronts.

The code can be found on github, along with the current list of issues and enhancements we have thought about. I also maintain freedesktop.org's instance, for the graphics team at Intel but also for any other freedesktop.org project that would like to give it a try.


Design, Design, Design


First things first, we improved how Patchwork looks and feels. Belén, of OpenEmbedded/Yocto fame, has very graciously spent some of her time to rethink how the interaction should behave.

Before, ...

... and after!

There is still a lot of work remaining to roll out the new design and the new interaction model on all of Patchwork. A glimpse of what that interaction looks like so far:



Series


One thing was clear from the start: I didn't want to have Patches as the main object tracked, but Series, a collection of patches. Typically, developing a  new feature requires more than one patch, especially with the kernel where it's customary to write a lot of orthogonal smaller commits rather than a big (and often all over the place) one. Single isolated commits, like a small bug fix, are treated as a series of one patch.

But that's not all. Series actually evolve over time as the developer answers review comments and the patch-set matures. Patchwork also tracks that evolution, creating several Revisions for the same series. This colour management series from Lionel shows that history tracking (beware, this is not the final design!).

I have started documenting what Patchwork can understand. Two ways can be used to trigger the creation of a new revision: sending a revised patch as a reply to the reviewer email or resending the full series with a similar cover letter subject.

There are many ambiguous cases and some other cases not really handled yet, one of them being sending a series as a reply to another series. That can be quite confusing for the patch submitter, but the documented flows should work.

REST API


Next came the API. I wanted to replace Patchwork's ageing XML-RPC interface with a REST API that could be used from both the web pages and git-pw, a command line client.

This new API is close to being complete enough to replace the XML-RPC one and already offers a few more features (e.g. testing integration). I've also been carefully documenting it.
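
To give a flavour of what this looks like from a client's point of view, here is a minimal sketch of listing series over the REST API with Python's requests library. The base URL, endpoint path, project name and pagination fields are assumptions made up for illustration; the API documentation of your instance is the reference.

import requests

# Base URL of a Patchwork instance's REST API -- an assumption for this sketch.
API = "https://patchwork.freedesktop.org/api/1.0"

# List the series submitted to a project (project name and endpoint layout are
# assumptions; this also assumes DRF-style pagination with a "results" key).
resp = requests.get(API + "/projects/intel-gfx/series/")
resp.raise_for_status()

for series in resp.json().get("results", []):
    print(series.get("id"), series.get("name"))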

git-pw


Rob Clark had been asking for years for better integration with git from Patchwork's command line tool, especially sharing its configuration file. There are also a number of git "plugins" that have appeared to bridge git with various tools, like git-bz or git-phab.

Patchwork now has its own git-pw, using the REST API. Here again, more work is needed to get it into an acceptable shape, but it can already be quite handy, for instance to apply a full series in one go:

$ git pw apply -s 122
Applying series: DP refactoring v2 (rev 1)
Applying: drm/i915: Don't pass *DP around to link training functions
Applying: drm/i915: Split write of pattern to DP reg from intel_dp_set_link_train
Applying: drm/i915 Call get_adjust_train() from clock recovery and channel eq
Applying: drm/i915: Move register write into intel_dp_set_signal_levels()
Applying: drm/i915: Move generic link training code to a separate file
...

Testing Integration



This is what kept me busy over the last couple of months: how to integrate patches sent to a mailing-list with Continuous Integration systems. The flow I came up with is not very complicated, but a picture always helps:

Hooking tests to Patchwork


Patchwork exposes an API, so mailing-lists are completely abstracted away from the systems using it. Both retrieving the series/patches to test and sending back test results are done through HTTP. That makes testing systems fairly easy to write.
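
To illustrate how small such a testing system can be, here is a rough sketch in Python. The endpoint paths, field names, repository location and test command are all assumptions made up for this example; this is not the actual intel-gfx setup.

import subprocess
import requests

API = "https://patchwork.example.org/api/1.0"  # hypothetical instance

def test_revision(series_id, version):
    # Fetch the series revision as an mbox and apply it to a local tree
    # (the mbox URL layout is an assumption).
    url = "{}/series/{}/revisions/{}/mbox/".format(API, series_id, version)
    mbox = requests.get(url).text
    applied = subprocess.run(["git", "am"], input=mbox, text=True,
                             cwd="kernel").returncode == 0

    # Run the test-suite; a placeholder script stands in for the real one.
    passed = applied and subprocess.run(["./run-tests.sh"],
                                        cwd="kernel").returncode == 0

    # Post the result back; Patchwork records it and can mail the submitter.
    requests.post("{}/series/{}/revisions/{}/test-results/".format(
                      API, series_id, version),
                  data={"test_name": "example-ci",
                        "state": "success" if passed else "failure"})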

Tomi Sarvela hooked our test-suite, intel-gpu-tools, to patches sent to intel-gfx and we're now gating patch acceptance to the kernel driver with the result of that testing.

Of course, it's not that easy. In our case, we've accumulated some technical debt in both the driver and the test suite, which means it will take time to beat both into being a fully reliable go/no-go signal. People have been actively looking at improving the situation though (thanks!) and I have hope we can reach that reliability sooner rather than later.

As a few words of caution about the above, I'd like to remind everyone that the devil is always in the details:
  • We've restricted the automated testing to a subset of the tests we have (Basic Acceptance Tests aka BATs) to provide a quick answer to developers, but also because some of our tests aren't well bounded,
  • We have no idea how much code coverage that subset really exercises, playing with the kernel gcov support would be interesting for sure,
  • We definitely don't deal with the variety of display sinks (panels and monitors) that are present in the wild.
This means we won't catch all the i915 regressions. Time will definitely improve things as we connect more devices to the testing system and fix our tests and driver.

Anyway, let's leave the i915-specific details for another time. One last thing about this testing integration is that Patchwork can be configured to send emails back to the submitter/mailing-list with some test results. As an example, I've written a checkpatch.pl integration that will tell people to fix their patches without needing a reviewer to do it. I know, living in the future.
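
For the curious, the gist of such an integration fits in a few lines. The sketch below only covers the checkpatch.pl half and makes up the paths and the reporting side; it is not the code running against intel-gfx.

import subprocess
import sys

def run_checkpatch(patch_file, kernel_tree="linux"):
    # checkpatch.pl lives in the kernel tree; --no-tree lets it check a
    # stand-alone patch file. Its exit code doubles as a pass/fail signal.
    result = subprocess.run(
        [kernel_tree + "/scripts/checkpatch.pl", "--no-tree", patch_file],
        capture_output=True, text=True)
    return result.returncode == 0, result.stdout

if __name__ == "__main__":
    ok, report = run_checkpatch(sys.argv[1])
    print(report)
    sys.exit(0 if ok else 1)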

For more in-depth documentation about continuous testing with Patchwork, see the testing section of the manual.

What's next?


This blog post is long enough as is, so let's finish with the list of things I'd like to see in an acceptable state before I happily tag a first version:
  • Series support without known bugs
  • REST API and git pw able to replace XML-RPC and pwclient
  • Series, Patches and Bundles web pages ported to the REST API and the new filter/action interaction.
  • CI integration
  • Patch and Series life cycle redesigned with more automatic state changes (i.e. when someone gives a Reviewed-by tag, the patch state should change to reviewed)
There are plenty of other exciting ideas captured in the github issues for when this is done.

Links




by Damien Lespiau (noreply@blogger.com) at February 15, 2016 06:12 PM

Continuous Testing with Patchwork

As promised in the post introducing my recent work on Patchwork, I've written some more in-depth documentation to explain how to hook testing to Patchwork. I've also realized that a blog post might not be the best place to put that documentation and opted to put it in the proper manual:


Happy reading!

by Damien Lespiau (noreply@blogger.com) at February 15, 2016 06:01 PM

February 13, 2016

Hylke Bons

Film developing setup that fits your backpack

It’s lots of fun developing your own black & white film. Here’s the setup I’ve been using. My goals were to keep costs down and to have a simple, compact setup that’s easy to use.

Developing tank and reel ~ £ 22

This is the main cost and you want to make it a good one. You can shop around for a second-hand one for much less.

Thermometer ~ £ 4

To make sure the solutions are at the right temperature. A glass spirit thermometer also provides a means of stirring.

Developer ~ £ 5

A 120 mL bottle of Rodinal develops about 20 rolls of film at a 1+25 dilution. You can double the dilution to 1+50 for 40 rolls; that’s just 12 pence per roll! This stuff lasts forever if you store it in the dark and airtight. Rodinal is a "one shot" developer, so you toss out your dilution after use.

Fixer ~ £ 3

Fixer dilution can be reused many times, so store it after use. One liter of a 1+5 dilution fixes 17 rolls of film.

To check if your fixer dilution is still good: take a piece of cut-off film leader and put it in a small cup filled with fixer. If the film becomes transparent after a few minutes, the fixer is still good to use.

Measuring jug ~ £ 3

To mix chemicals in. Get one with a spout for easy pouring.

Spout bags ~ £ 2

These keep air out compared to using bottles, so your chemicals will last longer. They save space too. Label them well, you don’t want to mess up!

Funnel ~ £ 1

One with a small mouth, so it fits the spout bags easily when you need to pour chemicals back.

Syringe ~ £ 1

To measure the amount of developer. Around 10 to 20 mL volume will do. Make sure to get one with 1 mL marks for more accurate measuring, and a blunt needle to easily extract from the spout bag.

Common household items

You probably already have these: a clothes peg, for hanging your developed film to dry, and a pair of scissors, to remove the film from the canister and to cut the film into strips after drying.

Developed Ilford HP5+ film

Total ~ £ 41

As you can see, it’s only a small investment. After developing a few rolls the equipment has paid for itself, compared to sending your rolls off for processing. There’s something special about seeing your images appear on a film for the first time that’s well worth it. Like magic. :)

by Hylke Bons at February 13, 2016 07:34 PM

Lockee to the rescue

Using public computers can be a huge privacy and security risk. There’s no way you can tell who may be spying on you using key loggers or other evil software.

Some friends and family don’t see the problem at all, and use any computer to log in to personal accounts. I actually found myself not being able to recommend an easy solution here. So I decided to build a service that I hope will help remove the need to sign in to sensitive services in some cases at least.

Example

You want to use the printer at your local library to print an e-ticket. As you’re on a public computer, you really don’t want to log in to your personal email account to fetch the document, for security reasons. You’re not too bothered about your personal information on the ticket, but typing in your login details on a public computer is a cause for concern.

This is a use case I have every now and then, and I’m sure there are many other similar situations where you have to log in to a service to get some kind of file, but you don’t really want to.

Existing storage services

There are temporary file storage solutions on the internet, but most of them give out links that are long and hard to remember, ask for an email address to send the links to, are public, or have some combination of these problems. Also, you have no idea what will happen to your data.

USB drives can help sometimes, but you may not always have one handy, it might get infected, and it’s easy to forget once plugged in.

Lockee to the rescue

Lockee is a small service that temporarily hosts files for you. Seen those luggage lockers at the railway station? It’s like that, but for files.

A Lockee locker

It allows you to create temporary file lockers, with easy to remember URLs (you can name your locker anything you want). Lockers are protected using passphrases, so your file isn’t out in the open.

Files are encrypted and decrypted in the browser, there’s no record of their real content on the server side. There’s no tracking of anything either, and lockers are automatically emptied after 24 hours.

Give it a go

I’m hosting an instance of Lockee on lockee.me. The source is also available if you’d like to run your own instance or contribute.

by Hylke Bons at February 13, 2016 04:00 PM

Ways to improve download page flow

App stores on every platform are getting more popular, and take care of downloads in a consistent and predictable way. Sometimes stores aren’t an option, or you prefer not to use them, especially if you’re a Free and Open Source project and/or a Linux distribution.

Here are some tips to improve your project’s download page flow. They’re based on confusing things I frequently run into when trying to download a FOSS project, and that I think can be done a lot better.

This is in no way an exhaustive list, but is meant to help as a quick checklist to make sure people can try out your software without being confused or annoyed by the process. I hope it will be helpful.

Project name and purpose

The first thing people will (or should) see. Take advantage of this fact and pick a descriptive name. Avoid technical terms, jargon, and implementation details in the name. Common examples are “-gui”, “-qt”, “gtk-”, and “py-”; they just clutter up names with details that don’t matter.

Describe what your software does, what problem it solves, and why people should care. This sounds like stating the obvious, but this information is often buried in other, less important information, like which programming language and/or free software license is used. Make this section prominent on the website and go easy on the buzzwords.

The fact that the project is Free and Open Source, whilst important, is secondary. Oh, and recursive acronyms are not funny.

Platforms

Try to autodetect as much as possible. Is the visitor running Linux, Windows, or Mac? Which architecture? Make suggestions more prominent, but keep other options open in case someone wants to download a version for a platform other than the one they’re currently using.

Architecture names can be confusing as well: “amd64” and “x86” are labels often used to distinguish between 32-bit and 64-bit systems, but they do a bad job at this. AMD is not the only company making 64-bit processors anymore, and “x86” doesn’t even mention “32-bit”.

Timestamps

Timestamps are a good way to find out if a project is actively maintained; you can’t (usually) tell from a version number when the software was released. Use human-friendly date formatting that is unambiguous. For example, use “February 1, 2003” as opposed to “01-02-03”. If you keep a list of older versions, sort by time and clearly mark which is the latest version.
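
If it helps, producing a date like that is a one-liner in most languages; here’s a small Python illustration (not tied to any particular site generator):

from datetime import date

release = date(2003, 2, 1)
# Month name spelled out, so there is no ambiguity about day/month order.
print("{} {}, {}".format(release.strftime("%B"), release.day, release.year))
# -> February 1, 2003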

File sizes

Again, keep it human readable. I’ve seen instances where file sizes are reported in bytes (e.g. 209715200 bytes, instead of 200 MB). Sometimes you need to round numbers or use thousands separators when numbers are large to improve readability.

File sizes are mostly there to make rough guesses, and depending on context you don’t need to list them at all. Don’t spend too much time debating whether you should be using MB or MiB.
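
As a small illustration of the kind of rounding I mean, here’s a helper a download page generator might use (made up for this post, not taken from any particular project):

def human_size(num_bytes):
    # Walk up the units until the number is comfortable to read at a glance.
    for unit in ("bytes", "KB", "MB", "GB", "TB"):
        if num_bytes < 1024 or unit == "TB":
            return "{:.0f} {}".format(num_bytes, unit)
        num_bytes /= 1024.0

print(human_size(209715200))  # -> "200 MB", rather than "209715200 bytes"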

Integrity verification

Download pages are often littered with checksums and GPG signatures. Not everybody is going to be familiar with these concepts. I do think checking (source) integrity is important, but also think source and file integrity verification should be automated by the browser. There’s no reason for it to be done manually, but there doesn’t seem to be a common way to do this yet.

If you do offer ways to check file and source integrity, add explanations or links to documentation on how to perform these checks. Don’t just dump strange random character strings on pages. Educate, or get out of the way.
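
For example, the documentation you link to could boil down to something as small as this Python sketch; the file name and expected checksum are placeholders, not real values:

import hashlib

EXPECTED = "<sha256 checksum published on the download page>"

sha256 = hashlib.sha256()
with open("project-1.0.zip", "rb") as f:
    for chunk in iter(lambda: f.read(8192), b""):
        sha256.update(chunk)

print("OK" if sha256.hexdigest() == EXPECTED else "Checksum mismatch!")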

Keep in mind search engines may link to the insecure version of your page. Not serving pages over HTTPS at all makes providing signature checks rather pointless, and could even give a false sense of security.

Compression formats

Again, something that should be handled by the browser. Compressing downloads can save a lot of time and bandwidth. Often though, especially on Linux, we’re presented with a choice of compression formats that hardly matter in size (.tar.gz, .tar.bz2, .7z, .xz, .zip).

I’d say pick one. Every operating system supports the .zip format nowadays. The most important lesson here, though, is not to burden people with irrelevant choices and clutter up the page.

Mirrors

Detect the closest mirror if possible, instead of letting people pick from a long list. Don’t bother for small downloads, as the time required to pick one is probably going to outweigh the benefit of the increased download speed.

Starting the download

Finally, don’t hide the link in paragraphs of text. Make it a big and obvious button.

by Hylke Bons at February 13, 2016 04:00 PM

San Francisco impressions

Had the opportunity to visit San Francisco for two weeks in March, it was great. Hope to be back there soon.

by Hylke Bons at February 13, 2016 04:00 PM

London Zoo photos

Visited the London Zoo for the first time and took a few photos.

by Hylke Bons at February 13, 2016 04:00 PM

A bit about taking pictures

Though I like going out and taking pictures at the places I visit, I haven’t actually blogged about taking pictures before. I thought I should share some tips and experiences.

This is not a “What’s in my bag” kind of post. I won’t, and can’t, tell you what the best cameras or lenses are. I simply don’t know. These are some things I’ve learnt and that have worked for me and my style of taking pictures, and wish I knew earlier on.

Pack

Keep gear light and compact, and focus on what you have. You will often bring more than you need. If you get the basics sorted out, you don’t need much to take a good picture. Identify a couple of lenses you like using and get to know their qualities and limits.

Your big lenses aren’t going to do you any good if you’re reluctant to take them with you. Accept that your stuff is going to take a beating. I used to obsess over scratches on my gear, I don’t anymore.

I don’t keep a special bag. I wrap my camera in a hat or hoody and lenses in thick socks and toss them into my rucksack. (Actually, this is one tip you might want to ignore.)

Watch out for gear creep. It’s tempting to wait until that new lens comes out and get it. Ask yourself: will this make me go out and shoot more? The answer is usually “probably not”, and the money is often better spent on that trip to take those nice shots with the stuff you already have.

Learn

Try some old manual lenses to learn with. Not only are these cheap and able to produce excellent image quality, it’s a great way to learn how aperture, shutter speed, and sensitivity affect exposure. Essential for getting the results you want.

I only started understanding this after having inherited some old lenses and started playing around with them. The fact that they’re all manual makes you realise more quickly how things physically change inside the camera when you modify a setting, compared to looking at abstract numbers on the screen on the back. I find them much more engaging and fun to use compared to fully automatic lenses.

You can get M42 lens adapters for almost any camera type, but they work especially well with mirrorless cameras. Here’s a list of the Asahi Takumar (old Pentax) series of lenses, which has some gems. You can pick them up off eBay for just a few tenners.

My favourites are the SMC 55mm f/1.8 and SMC 50mm f/1.4. They produce lovely creamy bokeh and great in-focus sharpness at the same time.

See

A nice side effect of having a camera on you is that you look at the world differently. Crouch. Climb on things. Lean against walls. Get unique points of view (but be careful!). Annoy your friends because you need to take a bit more time photographing that beetle.

Some shots you take might be considered dumb luck. However, it’s up to you to increase your chances of “being lucky”. You might get lucky wandering around through that park, but you know you certainly won’t be when you just sit at home reading the web about camera performance.

Don’t worry about the execution too much. The important bit is that your picture conveys a feeling. Some things can be fixed in post-production. You can’t fix things like focus or motion blur afterwards, but even these are details and not getting them exactly right won’t mean your picture will be bad.

Don’t compare

Even professional photographers take bad pictures. You never see the shots that didn’t make it. Being a good photographer is as much about being a good editor. The very best still take crappy shots sometimes, and alright shots most of the time. You just don’t see the bad ones.

Ask people you think are great photographers to point out something they’re unhappy about in that amazing picture they took. Chances are they will point out several flaws that you weren’t even aware of.

Share

Don’t forget to actually have a place to post your images. Flickr or Instagram are fine for this. We want to see your work! Even if it’s not perfect in your eyes. Do your own thing. You have your own style.

Go

I hope that was helpful. Now stop reading and don’t worry too much. Get out there and have fun. Shoot!

by Hylke Bons at February 13, 2016 04:00 PM

February 07, 2016

Damien Lespiau

libpneu first import

Wow, it's definitely hard to keep a decent pace posting news on my blog. Nevertheless, a first import of libpneu has reached my public git repository. libpneu is an effort to make a tracing library that I can use in every single project I start. Basically, you put tracing points in your programs and libpneu prints them whenever you need to know what is happening. Different backends can be used to display traces and debug messages, from printing them to stdout to sending them over a UDP socket. More about libpneu in a few days/weeks!

A small screenshot to better understand what it does:


by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:12 PM

ADV: ADV is a Dependency Viewer

A few months ago I wrote a small script to draw a dependency graph between the object files of a library (the original idea is from Lionel Landwerlin). You'll need an archive of your library for the tool to be able to look for the needed pieces. Let's have a look at a sample of its output to understand what it does. I ran it against the HEAD of clutter.

A view of the clutter library


This graph was generated with the following (tred is part of graphviz to do transitive reductions on graphs):

$ adv.py clutter/.libs/libclutter-glx-0.9.a | tred | dot -Tsvg > clutter.svg

You can provide more than one library to the tool:

./adv.py ../clutter/clutter/.libs/libclutter-glx-0.9.a \
../glib-2.18.4/glib/.libs/libglib-2.0.a \
../glib-2.18.4/gobject/.libs/libgobject-2.0.a \
| tred | dot -Tsvg > clutter-glib-gobject-boxed.svg




What you can do with this:
  • trim down your library by removing the object files you don't need and that are leaves in the graph. This was actually the reason behind the script and it proved useful,
  • get an overview of a library,
  • make part of a library optional more easily.

To make the script work you'll need graphviz, python, ar and nm (you can provide a cross compiler prefix with --cross-prefix).

Interested? clone it! (or look at the code)

$ git clone git://git.lespiau.name/misc/adv

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:11 PM

shave: making the autotools output sane

updated: Automake 1.11 has been released with "silent rules" support, a feature that supersedes the hack that shave is. If you can depend on automake 1.11, please consider using its silent rules rather than shave.
updated: add some gtk-doc info
updated: CXX support thanks to Tommi Komulainen

shave


Fed up with endless screens of libtool/automake output? Fed up with having to resort to -Werror to see warnings in your code? Then shave might be for you. shave transforms the messy output of autotools into a pretty Kbuild-like one (Kbuild is the Linux kernel's build system). It's composed of an m4 macro and two small shell scripts, and it's available in a git repository.
git clone git://git.lespiau.name/shave
Hopefully, in a few minutes, you should be able to see your project compile like this:
$ make
Making all in foo
Making all in internal
CC internal-file0.o
LINK libinternal.la
CC lib-file0.o
CC lib-file1.o
LINK libfoo.la
Making all in tools
CC tool0-tool0.o
LINK tool0
Just like Kbuild, shave supports outputting the underlying commands using:
$ make V=1

Setup


  • Put the two shell scripts shave.in and shave-libtool.in in the directory of your choice (it can be at the root of your autotooled project).
  • add shave and shave-libtool to AC_CONFIG_FILES
  • add shave.m4 either in acinclude.m4 or your macro directory
  • add a call to SHAVE_INIT just before AC_CONFIG_FILES/AC_OUTPUT. SHAVE_INIT takes one argument, the directory where shave and shave-libtool are.

Custom rules


Sometimes you have custom Makefile rules, e.g. to generate a small header, run glib-mkenums or glib-genmarshal. It would be nice to output a pretty 'GEN' line. That's quite easy actually: just add a few (portable!) lines at the top of your Makefile.am:
V         = @
Q = $(V:1=)
QUIET_GEN = $(Q:@=@echo ' GEN '$@;)
and then it's just a matter of prepending $(QUIET_GEN) to the rule creating the file:
lib-file2.h: Makefile
$(QUIET_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h

gtk-doc + shave


gtk-doc + shave + libtool 1.x (2.x is fine) is known to have a small issue, a patch is available. Meanwhile I suggest adding a few lines to your autogen.sh script.
sed -e 's#) --mode=compile#) --tag=CC --mode=compile#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make
sed -e 's#) --mode=link#) --tag=CC --mode=link#' gtk-doc.make > gtk-doc.temp \
&& mv gtk-doc.temp gtk-doc.make

dolt + shave


It's possible to use dolt in conjunction with shave with a surprisingly small patch to dolt.

Real world example: Clutter


$ make
GEN   stamp-clutter-marshal.h
GEN   clutter-marshal.c
GEN   stamp-clutter-enum-types.h
Making all in cogl
Making all in common
CC    cogl-util.o
CC    cogl-bitmap.o
CC    cogl-bitmap-fallback.o
CC    cogl-primitives.o
CC    cogl-bitmap-pixbuf.o
CC    cogl-clip-stack.o
CC    cogl-fixed.o
CC    cogl-color.o
cogl-color.c: In function ‘cogl_set_source_color4ub’:
cogl-color.c:141: warning: implicit declaration of function ‘cogl_set_source_color’
CC    cogl-vertex-buffer.o
CC    cogl-matrix.o
CC    cogl-material.o
LINK  libclutter-cogl-common.la
[...]

Eh! now we can see a warning there!

TODO


This is a first release; shave has not been widely tested, aka it may not work for you!
  • test it with a wider range of automake/libtool versions
  • shave won't work without AC_CONFIG_HEADERS due to shell quoting problems
  • see what can be done for make install/dist (they are prettier thanks to make -s, but we probably miss a few actions)
  • there is a '-s' hardcoded in MAKEFLAGS; I have to find a way to make it more flexible

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:08 PM

After-shave

A few concerns have been raised by shave, namely not being able to debug build failures in an automated environment as easily as before, and users giving useless bug reports of failed builds.

One crucial thing to realize is that, even when compiling with make V=1, everything that was not echoed was not shown (MAKEFLAGS=-s).

Thus, I've made a few changes:
  • Add CXX support (yes, that's unrelated, but the question was raised, thanks to Tommi Komulainen for the initial patch),
  • add a --enable-shave option to the configure script,
  • make the Good Old Behaviour the default one,
  • as a side effect, the V and Q variables are now defined in the m4 macro, please remove them from your Makefile.am files.

The rationale for the last point can be summarized as follows:
  • the default behaviour is as portable as before (for non-GNU make, that is), which is not the case if shave is activated by default,
  • you can still add --enable-shave to your autogen.sh script; bootstrapping your project from an SCM will enable shave and that's cool!
  • don't break tools that were relying on automake's output.

Grab the latest version! (git://git.lespiau.name/shave)

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:06 PM

Still some hair left

I've been asked to give more input on make V=1 vs. --disable-shave, so here it is: once again, before shipping your package with shave enabled by default, there is something crucial to understand: make V=1 (when your package has been configured with --enable-shave) is NOT equivalent to no shave at all (i.e. --disable-shave). This is because the shave m4 macro sets MAKEFLAGS=-s in every single Makefile. This means that make won't print the commands as it used to, and that the only way to print something on the screen is to echo it. That's precisely what the shave wrappers do: they echo the CC/CXX and LIBTOOL commands when V=1. So, in short, custom rules and a few automake commands won't be displayed with make V=1.

That said, it's possible to craft a rule that would display the command with shaved enabled and make V=1. The following rule:
lib-file2.h: Makefile
$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h
would become:
lib-file2.h: Makefile
@cmd='echo "#define FOO_DEFINE 0xbabe" > lib-file2.h'; \
if test x"$$V" = x1; then echo $$cmd; fi
$(SHAVE_GEN)echo "#define FOO_DEFINE 0xbabe" > lib-file2.h
which is quite ugly, to say the least. (If you find a smarter way, please enlighten me!)

On the development side, shave is slowly becoming more mature:
  • Thanks to Jan Schmidt, shave works with non-GNU sed and with echo implementations that do not support -n. It now works on Solaris, and hopefully on BSDs and various Unixes as well (not tested though).
  • SHAVE_INIT has a new, optional, parameter which empowers the programmer to define shave's default behaviour (when ./configure is run without any shave-related option): either enable or disable. i.e. SHAVE_INIT([autotools], [enable]) will instruct shave to find its wrapper scripts in the autotools directory and that running ./configure will actually enable the beast. SHAVE_INIT without any parameters means that the wrapper scripts are in $top_builddir and that ./configure will not enable shave without the --enable-shave option.
  • however, shave has been reported to fail miserably with scratchbox.

by Damien Lespiau (noreply@blogger.com) at February 07, 2016 02:06 PM