We make predictions all the time. And, outside of some simple, deterministic systems – the motions of the planets being an obvious example – we fail miserably.
Consider a single, solitary wind turbine standing on top of a hill, generating electricity.
Wind speed is variable. Actually, for all intents and purposes, it’s random: it can be analysed statistically, but its value in the next time interval – whether that’s a second, an hour or a day – cannot be predicted precisely. It’s typically modelled using a Weibull probability distribution, which is both flexible and relatively straightforward to calculate, but is, in reality, an approximation and not always a particularly good one.
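For the statistically minded, here is a minimal sketch of that standard approach. The shape and scale parameters are assumed, purely illustrative values, not data from any real site:

```python
# A minimal sketch of the standard modelling approach: wind speeds drawn
# from a Weibull distribution. The shape (k) and scale (c) parameters
# here are assumed, illustrative values, not data from any real site.
import numpy as np

rng = np.random.default_rng(seed=42)

k, c = 2.0, 8.0                            # shape (dimensionless), scale (m/s)
speeds = c * rng.weibull(k, size=100_000)  # e.g. 100,000 hourly wind speeds, m/s

# The statistics are stable and well-behaved...
print(f"mean speed: {speeds.mean():.2f} m/s")

# ...but the next draw - the next hour's wind - remains anyone's guess.
print(f"next 'hour': {c * rng.weibull(k):.2f} m/s")
```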
The distribution of wind speeds matters. Wind turbines installed at two different sites with the same average wind speed may generate entirely different power outputs due to differences in the speed distribution.
The relationship between wind speed and the wind’s kinetic energy is non-linear – specifically, cubic. A small change in wind speed can result in a significant change in the amount of harvestable energy. For example, since (12/10)³ ≈ 1.73, there is roughly 73% more energy available in a 12 km/hour wind than in a 10 km/hour wind.
There is a physical limit – the Betz limit – to how much of this kinetic energy can be converted into mechanical energy: about 59%. In reality, considerably less is converted, depending upon the design of the turbine and the wind conditions. Suffice it to say, however, that the relationship between the wind’s kinetic energy and the power output of the turbine is also non-linear.
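A back-of-the-envelope sketch – using an assumed air density and invented Weibull parameters – makes the last few points concrete: the cubic law, the Betz cap, and why two sites with the same average wind speed can offer very different amounts of energy:

```python
# A back-of-the-envelope sketch of the two stacked non-linearities.
# Air density and the Weibull parameters are assumed, illustrative values.
import numpy as np

RHO = 1.225      # air density, kg/m^3 (sea level, ~15 degrees C)
BETZ = 16 / 27   # Betz limit: max extractable fraction of kinetic energy (~59.3%)

def wind_power_density(v):
    """Kinetic power carried by the wind, in W per m^2 of swept rotor area."""
    return 0.5 * RHO * v**3

# 1. The cubic law: a 12 km/h wind carries ~73% more energy than a 10 km/h wind.
v10, v12 = 10 / 3.6, 12 / 3.6  # km/h -> m/s
print(wind_power_density(v12) / wind_power_density(v10))  # 1.728

# 2. Why the distribution, not just the average, matters: E[v^3] > (E[v])^3,
#    and by how much depends on the shape of the distribution.
rng = np.random.default_rng(0)
gusty = 8.0 * rng.weibull(1.5, size=1_000_000)  # high-variance site
steady = rng.weibull(3.0, size=1_000_000)
steady *= gusty.mean() / steady.mean()          # rescaled to the SAME mean speed

print(gusty.mean(), steady.mean())              # identical average wind speed...
print(wind_power_density(gusty).mean()
      / wind_power_density(steady).mean())      # ...but ~1.9x the available energy

# 3. And no turbine, however well designed, can harvest more than BETZ of it.
print(BETZ * wind_power_density(v12))           # hard upper bound, W/m^2
```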
So, even in this simplest of scenarios, electricity generation is determined by one non-linear relationship built on top of another non-linear relationship built on top of an effectively random variable.
Now add in other real-world uncertainties: storm damage, servicing downtime or forced curtailment due to grid congestion, to name just a few of the more obvious ones.
And then add in the complexity associated with scaling up beyond a single turbine to an entire wind farm, when shadow effects and topography become pertinent issues.
And let’s be clear. All of this effort is devoted to predicting only the electricity generation.
To predict the associated greenhouse gas emission reductions, one has to develop a counterfactual baseline: an emissions scenario describing what would have happened in the absence of the wind farm. Only by comparing the emissions associated with the wind-generated electricity (effectively zero) with the baseline emissions (typically given by some flavour of grid emission factor) can one calculate the reduction in emissions.
Here we encounter a multitude of other problems. Some are practical. For instance, obtaining run-time, fuel-type and efficiency data for operational power plants in order to calculate the grid emission factor can be surprisingly difficult; in some countries, such data are considered state secrets.
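The arithmetic itself is simple enough. Here is a deliberately naive sketch – every number is invented, and real grid emission factors involve considerably more machinery (operating and build margins, dispatch data and so on) – of why that plant-level data matters:

```python
# A deliberately naive sketch of the baseline arithmetic. All numbers are
# invented; real grid emission factors are built with far more machinery
# (operating and build margins, dispatch data, and so on).

# Hypothetical plant-level data: (name, annual generation in MWh,
# emission factor in tCO2e per MWh generated).
plants = [
    ("coal_1",  3_000_000, 0.95),
    ("gas_1",   2_000_000, 0.45),
    ("hydro_1", 1_500_000, 0.00),
]

total_gen = sum(gen for _, gen, _ in plants)
grid_ef = sum(gen * ef for _, gen, ef in plants) / total_gen  # tCO2e/MWh

wind_gen = 250_000  # assumed annual output of the wind farm, MWh
project_ef = 0.0    # wind generation: effectively zero operational emissions

# Emission reductions = electricity displaced x (baseline EF - project EF)
reductions = wind_gen * (grid_ef - project_ef)
print(f"grid EF: {grid_ef:.3f} tCO2e/MWh; reductions: {reductions:,.0f} tCO2e/year")
```

Every input here is a moving target: the plant mix changes, the wind farm’s output is itself the uncertain quantity of the previous section, and the baseline can only ever be estimated, never observed. And that is just the practical side.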
Some are fundamental, verging on philosophical. How can one predict with any certainty what the baseline emissions will be? New power stations may come online, others may go offline, sometimes quite suddenly; the sun may shine or it may not, playing havoc with solar generation forecasts; electric vehicles may suddenly take off, favouring some power sources over others; governments and policies come and go; economies grow and contract, almost always confounding the confident predictions of economists.
This is not a call to throw our hands in the air and despair at our helplessness in the face of real-world complexity.
After all, there are well over 340,000 wind turbines operating in the world, all of them having overcome a diverse range of challenges and uncertainties to get built, connect to the grid and sell electricity. Project management may consist of a hefty dose of art as well as science, but projects are completed all of the time, some even on time and on budget.
(That said, a surprisingly large fraction of completed wind farms generate less electricity than predicted, hinting at some deeper underlying problem.)
It is, however, a call for humility.
There is a worrying tendency in the climate mitigation community to indulge in overly deterministic thinking, along the lines of: “if we implement Activity X, we will reduce this very specific amount of tCO2e by Year Y.” This rather Newtonian, and invariably optimistic, view of the world is evident all around – in NDCs, in corporate strategies, in the carbon markets (where ex ante emission reduction forecasts are de rigueur) and nowhere more so than in the theories of change that underlie project-based mitigation mechanisms, such as the GCF and the GEF.
There is much to dislike about theories of change. The very term conveys a sense of grandeur, of insight, that is all too often lacking. And it’s not being pedantic to observe that the term ‘theory’ is a misnomer: ‘untested hypothesis’ would be more accurate.
But the aim of a theory of change, to describe explicitly a set of actions and causal linkages that will lead to a particular outcome, is sensible, verging on common-sensical.
The problem is in the execution.
Theories of change tend, almost always implicitly, to describe two equilibrium states: the baseline state, flawed in some way (for instance, characterised by an over-reliance on fossil fuels for power generation), and the project state, in which the project has succeeded in addressing this flaw by, for example, introducing wind energy onto the grid.
If the theory of change is accompanied by a diagram, there are typically some boxes and arrows aligned to indicate a tidy, unilinear causal progression from one state to the other.
[Figure: a completely fictional but fairly typical theory of change diagram]
Very rarely is there any consideration of the intermediate states – the dynamics – of getting there. And yet it is the dynamics that are often the most interesting – and the most unpredictable.
Weather features, such as clouds, winds and storms, are transitory, disequilibrium phenomena. They are probabilistic, non-linear and often, in the scientific meaning of the term, chaotic. Any prediction invoking them must be heavily caveated with confidence intervals and a healthy dose of scepticism.
They are a manifestation of, and a means of achieving, the climate system’s desire for thermodynamic equilibrium. They are, in effect, the dynamics that give rise to climate.
But being short-lived and being uncertain and being ‘only’ means to an end does not imply that weather phenomena are uninteresting or unimportant. Quite the reverse, in fact: weather is more tangible than any long-term, equilibrium ‘climate’ and, without it, there would anyway be no such climate to speak of.
And so it is with theories of change. In focusing solely on the beginning and end states (the mitigation climate, if you like), and ignoring the intermediate steps taken to get there (the mitigation weather), much is overlooked, not least the fact that the end state is heavily contingent upon a succession of individual and institutional decisions, events, policies, investments and interactions – of varying degrees of likelihood – occurring in the right order, in the right combinations, in the right time-frame.
In short, there is effectively a ‘weather map’ that underlies any theory of change diagram.
A project’s theory of change might, for example, breezily indicate that a national wind power sector will be created and sustained through a set of interventions targeted at building technical capacities, putting in place an appropriate policy environment and introducing financial support to wind farm operators in the form of power auctions.
The logic appears impeccable, the causality – ‘if X, then Y’ – almost self-evident and the objective seemingly guaranteed to be achieved. The mitigation benefits are, it appears, a sure bet, as long as the project interventions are implemented as planned.
The underlying weather map is, however, a far more fluid, far more turbulent, far more uncertain place, constantly evolving and swirling and shifting as probabilities work themselves out, black swans appear out of nowhere and non-linearities take the project into uncharted territory.
[Figure: bad weather on the horizon – the feed-in tariff is going to be degressed. Credit: Royal Meteorological Society]
Perhaps, for example, ten years ago there was a relatively minor, seemingly inconsequential change of personnel at the Ministry of Energy. This opened up a new lobbying opportunity for the gas industry, which, after years of trying, eventually succeeded in diluting some planned pro-wind revisions to the grid code. This, in turn, served to weaken the financial case for wind power, prompting several major national banks to decide, over the following months, to raise their interest rates on loans to this nascent, and now slightly riskier, sector.
Or perhaps there is a series of storms through the year that severely damages thermal power plants, thereby strengthening the case for diversifying to wind power. Or perhaps the storms replenish depleted hydropower reservoirs instead, thereby reducing the need for wind power. Or perhaps the storms damage electricity transmission infrastructure, leading the government to reassess its entire energy strategy and reorient its attention to decentralised solar power.
Perhaps a quirk in the design of the power auctions leads to under-pricing of bids, resulting in wind farms that are built but which prove to be uneconomic and are quickly mothballed.
Each of these eventualities – and untold others – is possible, each with a probability that is unknown and, in many cases, unknowable.
As mitigation efforts become larger and more ambitious, exemplified by the gradual shift from CDM-like, single-site interventions (for instance, erecting a wind farm) to sector- or even economy-wide interventions (implementing an NDC roadmap or a coal phase-out strategy, for example), the mitigation weather is only going to become more changeable, more erratic and, at times, more severe.
Mitigation weather is rarely hinted at by theory of change developers – maybe because they are not aware of the underlying complexity themselves; maybe because they hope that ignoring the complexity will somehow magic it away; or maybe, less charitably, they do not want to jeopardise their funding prospects by highlighting the inherent riskiness of what it is they are proposing.
This is a shame. It lends climate mitigation an air of deterministic certainty that it doesn’t deserve. And it leaves policy-makers, investors and, yes, even funding bodies such as the GCF and GEF with a false sense of confidence that their interventions will translate seamlessly into predictable volumes of emission reductions.
The point is not that the world is a complex, uncertain place. We know that.
The point is that the world is a more complex, more uncertain place than most theories of change would have us believe.
Climate mitigation is not rocket science. As complicated as rocket science seems, it’s essentially a deterministic engineering discipline.
Climate mitigation is more akin to meteorology: a science where forecasting is educated guesswork (albeit very sophisticated guesswork) and where probability and non-linearity reign supreme.
So, mitigation practitioners, don’t forget to pack an umbrella: the forecast is for interesting weather ahead.