Category Archives: Climate Change

A Fraction for Your Thoughts

If you want an up-to-date monthly measurement of atmospheric CO2 concentration—my candidate for the most important leading indicator of welfare in the world—then I suggest you bookmark the home page of CO2now.org. The site gives you easy access to the longest-running monthly time series of atmospheric CO2 concentration, which goes all the way back to March 1958.

The American scientist Charles David Keeling pioneered the accurate measurement of CO2 after setting up a laboratory over 3,000 metres above sea level at Mauna Loa in Hawaii. The location was specifically selected to eliminate sample contamination by large-scale factory emissions: no such industry exists for thousands of miles around Mauna Loa.

As the years passed, Keeling’s observations showed a natural annual cyclical movement in CO2 concentrations: an upswing in the northern hemisphere autumn as vegetation died back and CO2 was released; and a downswing in the northern hemisphere spring as renewed plant growth removed CO2 from the air. The explanation for why the northern hemisphere growing season dominates the cycle is simple: 65% of the earth’s land mass resides in the north.

The second observation was even more critical: atmospheric CO2 concentration was on an upward trajectory, a trajectory that has been dubbed the ‘Keeling curve’. And as the years went by it became obvious that the source of this rising CO2 concentration was mankind. A few contrarians (most famously the Australian geologist Ian Plimer) have suggested that undiscovered volcanoes are responsible for rising CO2 levels, but no mainstream volcanologists support his views; the consensus is that volcano-sourced CO2 emissions are around 150 times smaller than those from anthropogenic (human) sources.

An even simpler rebuttal to the ‘it isn’t us’ claim is that any serious alternative must account for where the CO2 from burnt fossil fuels has gone. We know to a high degree of certainty how much oil, coal and gas we are burning, and you can even go onto BP’s web site and look it up here. So if there exists some ‘natural’ source of the CO2 build-up, there must also exist a ‘natural’ withdrawal of all the fossil fuel CO2 emissions. Thus for it not to be ‘us’, an unidentified natural process must be removing all the human-produced carbon, while at the same time naturally produced carbon from a separate unidentified source is magically replacing it. I hope the absurdity of this logic is obvious.

More interesting is the fact that annual fossil fuel plus land use change CO2 emissions don’t match the rise in atmospheric CO2 concentrations. There are certain numbers I encourage you to get comfortable with. One such number is the annual fossil fuel carbon emission as reported by the US Carbon Dioxide Information Analysis Center (CDIAC). The time series is reported here and the most recent advance estimates for the last two years are here. According to this source, 9.1 giga tonnes of fossil fuel carbon was emitted in 2010 (which is about 33 giga tonnes of CO2).

The next number to etch into your memory is that 1 part per million (ppm) of CO2 in the atmosphere is equivalent to 2.1 giga tonnes of carbon. Keeping this figure in mind, look at the next two tables. They are taken from the Carbon Budget 2010 report published on December 5th by the Global Carbon Project (a body set up to coordinate academic research on the carbon cycle).
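
If you want to see where that 2.1 figure comes from, here is a back-of-the-envelope sketch in Python. The atmospheric mass and molar masses are standard round numbers I am supplying, not figures from the Carbon Budget report:

```python
# Rough derivation of the "1 ppm of CO2 ~ 2.1 Gt of carbon" rule of thumb.
ATM_MASS_G = 5.15e18 * 1000  # mass of the atmosphere, grams (approx.)
AIR_MOLAR_MASS = 28.97       # g/mol, mean molar mass of dry air
CARBON_MOLAR_MASS = 12.0     # g/mol

moles_of_air = ATM_MASS_G / AIR_MOLAR_MASS  # total moles of air
moles_co2_per_ppm = moles_of_air * 1e-6     # 1 ppm by volume is 1 ppm by moles
grams_carbon_per_ppm = moles_co2_per_ppm * CARBON_MOLAR_MASS

print(f"1 ppm CO2 ~ {grams_carbon_per_ppm / 1e15:.2f} Gt carbon")  # ~2.13
```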

You will note that for the 2000-2010 period, CO2 concentration has been rising by an annual average of 1.9 parts per million (ppm), which is equivalent to around 4 giga tonnes of carbon. In 2010, CO2 rose by 2.36 ppm, which is roughly the same as 5 giga tonnes of carbon. You will also recall that fossil fuel emissions were estimated at 9.1 giga tonnes and land use change emissions at 0.9 giga tonnes in that year. So some simple maths tells us that only 50% of what is being discharged into the atmosphere is actually staying there. Ultimately, all the sources of CO2 must balance with all the sinks of CO2, and another chart from the Carbon Budget 2010 shows us what is going where (note that a peta gram, Pg, is the same thing as a giga tonne, Gt):

And similar analysis for 2010:

Over the longer term, the percentage of emissions that remain in the atmosphere, the so-called airborne fraction, has stayed around 43%. Thankfully, no change in trend for this fraction has yet been found (see here). Yet year to year there is significant flux. For example, in 2010 we saw the airborne fraction at 50%, significantly above the 43%-ish long-term trend. For the first 11 months of 2011, however, CO2 is up around 1.8 ppm year on year, which is equivalent to 3.7 giga tonnes of carbon. If we assume total emissions (fossil fuel and land use change) come in a little over 10 giga tonnes, the airborne fraction will have fallen from 50% in 2010 to somewhere around 35% in 2011. The main driver of this year-to-year shift is the El Niño Southern Oscillation (ENSO) cycle. Simplistically, cold water absorbs more CO2 than warm water, and during a La Niña phase you get more cold water and in an El Niño less.
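
The airborne fraction arithmetic is easy to reproduce; the short sketch below simply replays the numbers above (the 10.5 giga tonne figure for 2011 total emissions is my own stand-in for the ‘little over 10’ in the text):

```python
PPM_TO_GTC = 2.1  # giga tonnes of carbon per ppm of atmospheric CO2

def airborne_fraction(rise_ppm, emissions_gtc):
    """Share of emitted carbon that stayed in the atmosphere."""
    return rise_ppm * PPM_TO_GTC / emissions_gtc

# 2010: a 2.36 ppm rise against 9.1 (fossil fuel) + 0.9 (land use) Gt carbon
print(f"2010: {airborne_fraction(2.36, 9.1 + 0.9):.0%}")  # ~50%

# 2011: a ~1.8 ppm rise against total emissions a little over 10 Gt carbon
print(f"2011: {airborne_fraction(1.8, 10.5):.0%}")        # ~36%
```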

Unfortunately, just because the CO2 airborne fraction has been well-behaved to date does not mean it will always be so. The fear is that at some stage the land and ocean carbon sinks will become saturated, causing the airborne fraction to rise. Thus, more of the CO2 we pump into the atmosphere will actually stay there. Should that happen over the next few decades, then temperature rise could also accelerate. The UK Met Office has modelled one scenario which sees the Amazon rain forest dying off, causing a massive downshift in the CO2 land sink uptake. The result is a surge in CO2 concentrations.

Thankfully, the Met Office model’s horrible-looking carbon-cycle feedback is still an outlier among climate models globally. That said, I make a habit of comparing CO2 emissions with atmospheric CO2 concentrations. We already have one instance of climate models failing to forecast a major structural shift: the far earlier-than-expected collapse in Arctic sea ice extent. Let’s hope the airborne fraction does not follow a similar path; but keep an eye on it just in case.

The Emissions Stories We Tell

In the climate change causation chain—from emissions, to greenhouse gas concentrations, to temperature change, to environmental impacts—the impacts are often the most difficult part of the chain to grasp. A major report by the UK Met Office Hadley Centre released at the beginning of December is, therefore, especially welcome, particularly as it incorporates 24 individual country studies (here) on climate impacts.

For the UK, the Met Office makes the following projection:

The UK is projected to experience temperature increases of up to around 3°C in the south and 2.5°C further north. The agreement between models is moderate in the south of the UK and low further north.

To start putting this in context, the 3°C number above is for the year 2100 and is relative to the 30-year average for the 1960-1990 period (which is used as the baseline). For rainfall, we see this summary statement:

Europe shows a strong contrast in projected precipitation changes, with large decreases in the south and large increases in the north. The UK falls towards the northern region with generally increasing precipitation, with projected increases of up to 10%, though some southern parts of the UK may experience decreases of up to 5%. There is generally good agreement between ensemble members over the north of UK, but moderate agreement further south, indicating uncertainty in the position of the transition zone between increasing and decreasing precipitation over Europe.

The report then goes on to consider a number of sectors in turn: crop yields, food security, water stress and drought, pluvial flooding (rainfall saturation), fluvial flooding (river related), cyclones (extreme winds) and coastal impacts.

With such a broad spectrum of topics to choose from, a journalist covering the report has ample opportunity to push their own particular agenda. Take this headline from the Guardian:

Met Office warns of UK climate risks: Britain will experience water shortages and flooding by the end of the century if temperatures are left unchecked, analysis shows

And this from the Daily Mail:

Global warming would BOOST Britain’s farm crops by 10pc

While this blog occasionally focuses on the distorted press coverage of climate change, this is not a topic I want to pursue today. Overall, I am more interested in trying to understand the risks that climate change poses to individuals and their families. In this vein, not one newspaper deemed it necessary to mention the critical assumption the Met Office made: namely, the emission path underpinning their climate impact forecasts. Change this premise and you change the projection. Accordingly, the emission path used by the Met Office, which is clearly stated in the report summary, needs to be highlighted.

For the A1B emissions scenario the UK is projected to experience temperature increases of up to around 3°C in the south and 2.5°C further north.

So the projections given by the Met Office are premised on the world following the A1B emissions scenario; if the world doesn’t follow this scenario, the Met Office’s projections are invalid.

The A1B scenario came out of a report entitled the Special Report on Emissions Scenarios (SRES) published by the Intergovernmental Panel on Climate Change (IPCC) in 2000. These SRES scenarios formed the basis of the IPCC’s Third Assessment Report (TAR) published in 2001 and were also used for the Fourth Assessment Report (AR4) published in 2007. In reality, the scenarios are thought exercises. In the IPCC’s words:

Future greenhouse gas (GHG) emissions are the product of very complex dynamic systems, determined by driving forces such as demographic development, socio-economic development, and technological change. Their future evolution is highly uncertain. Scenarios are alternative images of how the future might unfold and are an appropriate tool with which to analyse how driving forces may influence future emission outcomes and to assess the associated uncertainties.

Indeed, the IPCC is so keen to disabuse us of the idea that the scenarios have any objective probability that they use the term ‘storylines’ (hence the title of this post):

Four different narrative storylines were developed to describe consistently the relationships between emission driving forces and their evolution and add context for the scenario quantification. Each storyline represents different demographic, social, economic, technological, and environmental developments, which may be viewed positively by some people and negatively by others.

The scenarios cover a wide range of the main demographic, economic, and technological driving forces of GHG and sulfur emissions and are representative of the literature. Each scenario represents a specific quantitative interpretation of one of four storylines. All the scenarios based on the same storyline constitute a scenario “family”.

If you are not familiar with the IPCC’s scenario categories, then the ‘A’ scenarios are broadly fast growth while ‘B’ ones sacrifice some growth for environmental sustainability. Likewise the ‘1’ refers to a converging world, while ‘2’ sees lots of different developmental paths. This gives a matrix of four major families (or ‘storylines’ to use the IPCC’s wording) as depicted by the image below:

Simplistically, A1 is close to what we have seen over the last decade: the triumph of the so-called Washington consensus model of rapid industrialisation, free markets and open borders; global capitalism reigns supreme, and citizen-consumers realise their personal dreams through shopping at the mall. B1, by contrast, would be something like a sustainable nirvana: perhaps E.F. Schumacher’s ‘Small is Beautiful’ but with rich countries making sacrifices to lift up the living standards of poorer countries in the name of equity and fairness.

The four storylines are then further subdivided into more scenarios. Branching out from A1 are the three scenarios A1FI, A1B and A1T. The first is a fossil fuel intensive growth scenario, the second a balanced fossil fuel/renewables scenario and the last a renewables-heavy scenario.

These divisions are then subdivided yet again to produce a grand total of 40 scenarios. Note that the designation HS means that all these particular scenarios share “harmonized” assumptions on global population, gross world product, and final energy. With the OS designation, such assumptions are relaxed within a particular model.

After that quick tour of the IPCC emissions scenario methodology, let’s return to the Met Office’s choice of the A1B scenario to be the basis of its climate impact analysis. Before we do, just note again that the IPCC bent over backwards not to emphasise one scenario when its report was issued back in 2000:

No judgment is offered in this Report as to the preference for any of the scenarios and they are not assigned probabilities of occurrence, neither must they be interpreted as policy recommendations.

In other words, you choose your scenario at your own peril. The report, however, does give us some metrics to assess how particular scenarios are faring as the years go by. The chart below shows the A1 (dash-for-global-growth) emission scenarios. Rather counterintuitively, A1B (the balanced technology scenario) is initially a higher CO2 emitting scenario than A1FI (the fossil fuel intensive scenario). This is purely because of the particular models chosen for each scenario.

More importantly, the A1FI (the top dotted line) and A1B (the thick line) scenarios part company around 2020, with CO2 emissions levelling off for the latter as wind, solar and suchlike come on stream en masse. Looking at the predicted CO2 emissions in 2050, we can see the divergence quite starkly in the table below (click for larger image).

In a similar manner, the table below (again click for larger image) shows an emerging difference between A1FI and A1B with respect to the reliance on coal and introduction of zero carbon renewables between 2020 and 2050:

So which path is more realistic? A1B or A1FI? Over the very near term, the two paths see nearly identical emissions, but it is worth just checking to see how actual emissions have been trending vis-a-vis the scenario paths projected by the IPCC back in 2000. The chart below is taken from a presentation by the Met Office’s Richard Betts at the ‘4 Degrees and Beyond‘ climate change conference held at Oxford University. If we update the chart with the advance estimate fossil fuel CO2 emission number for 2010, which is 9.1 giga tonnes of carbon, we can see that we are currently trending along the top of the IPCC’s overall band, but close to the A1B path.

Please note though that A1B in its early years is not really a ‘balanced technology’ path as the renewables build-out has yet to take place. And, as noted before, because of some idiosyncrasies between the A1B and A1FI models, we are currently above the fossil fuel intensive A1FI path.

Against this background, fossil fuel emissions to date don’t really help us much in deciding whether we will end up following the A1B balanced technology path or the A1FI fossil fuel intensive path.

The next question is whether we can calibrate the IPCC scenarios by using the highly detailed International Energy Agency (IEA) scenarios as presented each year in their flagship World Energy Outlook report (which I recently posted on here). The IEA has three main emission scenarios: the 450 Scenario, the New Policies Scenario and the Current Policies Scenario. The 450 Scenario is a thought exercise on what needs to be done to keep the atmospheric concentration of CO2 below 450 parts per million. Unfortunately, renewable infrastructure build is not even coming close to this scenario path, so we can put it to one side.

The Current Policies Scenario is defined as follows:

WEO-2011 also presents updated projections for the Current Policies Scenario to show how the future might look on the basis of the perpetuation, without change, of the government policies and measures that had been enacted or adopted by mid-2011.

Note this includes policies that have been adopted but not yet fully executed, for example China’s 12th Five-Year Plan, which includes energy intensity targets.

By contrast, the New Policies Scenario includes a broader wish list of policies that countries have flagged but not necessarily done anything about:

The New Policies Scenario incorporates the broad policy commitments and plans that have been announced by countries around the world to tackle energy insecurity, climate change and local pollution, and other pressing energy-related challenges, even where specific measures to implement these commitments have yet to be announced.

Keeping these definitions in mind, the IEA’s CO2 emissions projections under different scenarios go out as far as 2035. In that year, the IEA sees 36.4 giga tonnes (Gt) of CO2 being emitted under the New Policies Scenario and 43.3 Gt of CO2 under the Current Policies Scenario (under the 450 Scenario, emissions would be only 21.6 Gt, around a 25% fall from the 2009 level of 28.8 Gt of CO2). By contrast, the two IPCC scenarios, A1B and A1FI, have CO2 emissions moving above 40 Gt by 2020 (note that to convert carbon emissions to CO2 emissions we have to multiply by 3.67), so in the early decades they are both high growth.

By 2035, the A1FI scenario is accelerating away from the A1B scenario, with emissions that year around 66 Gt of CO2 for the former against 55 Gt for the latter. The worst case IEA Current Policies Scenario has 43.3 Gt for that year.
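
To put the 2035 numbers quoted above side by side, here is a small comparison script. The figures are simply those from the text, and the conversion helper reflects the 3.67 carbon-to-CO2 ratio noted earlier:

```python
def carbon_to_co2(gt_carbon):
    """Convert giga tonnes of carbon to giga tonnes of CO2 (ratio ~44/12)."""
    return gt_carbon * 3.67

BASELINE_2009 = 28.8  # Gt CO2, per the IEA

scenarios_2035 = {    # Gt CO2 projected for 2035
    "IEA 450": 21.6,
    "IEA New Policies": 36.4,
    "IEA Current Policies": 43.3,
    "IPCC A1B": 55.0,
    "IPCC A1FI": 66.0,
}
for name, gt_co2 in scenarios_2035.items():
    print(f"{name}: {gt_co2:.1f} Gt CO2 ({gt_co2 / BASELINE_2009 - 1:+.0%} vs 2009)")
```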

The conclusion we can draw thus far is that the selection of the A1B scenario by the Met Office for their climate impact study looks reasonable when we look out over the foreseeable time frame of 2011-2035 as covered by the IEA. However, as we go further into the future, it does become more of a ‘story’.

The IPCC sees emissions peaking under A1B at around 60 Gt somewhere around 2050, which is about double the current level of annual fossil fuel emissions. Others are less sanguine. Ross Garnaut, the well-known Australian economist, published a paper (here) that suggests emission growth rates will outstrip even the IPCC’s A1FI scenario.

Accordingly, A1B could be viewed as quite conservative near term (emission overshoots unlikely) but quite aggressive long term (emission overshoots likely). In short, we would need to base our climate impact studies on a higher-emission scenario if we concur with Garnaut’s observations that 1) world GDP growth rates have continued to surprise on the upside and are likely to keep doing so, and 2) key countries like China, India and Indonesia are being forced into an ever greater reliance on coal to meet their increasing energy needs.

At the same time, many of the trumpeted new-technology hydrocarbon sources such as shale gas, shale oil and tar sands are not carbon-emission friendly. They also suffer from a particular Peak Oil community concern: we are having to use ever-larger amounts of energy to extract a given amount of energy as the easily accessible sources become exhausted.

Personally, I think the Met Office would be wise to tell two stories: one based on the IEA’s Current Policies Scenario and one on the New Policies Scenario, with the latter extrapolated out for a world of high economic growth. Policy makers need an understanding of how bad things could get from a climate impacts perspective, and the use of A1B alone does not provide this. Stories are wonderful things, but they can sometimes be misleading—and ultimately dangerous—as well.

The Dystopian Dance

The issues of peak oil and climate change can both come across as having a certain millennial taint. Humanity, in its stupidity, is punished by nature. Or, as James Lovelock would put it, we are seeing the ‘Revenge of Gaia’. The millennialists, however, see a happy ending at the other end of the tumult—at least for the chosen, enlightened few—while those with an atheistic or agnostic view of the world are condemned to a permanent descent into dystopia. No escaping ‘the end of days’ for them in a society under collapse.

Nonetheless, the fact that dystopias have frequently been the province of cranks does not mean they are not worthy of closer inspection. Prose writers have traditionally been the first ‘unto the breach’ when it comes to contemplating what the man (or woman) on the street deems unmentionable. Wells, Huxley, Orwell and Burgess come immediately to mind when we think of technological or political dystopias. Who, having read Orwell’s ‘Animal Farm’ and ‘1984’, could not be a little more aware that a government (from any part of the spectrum) offering a political utopia may instead transform our lives into a permanent dystopia?

Back in the 1970s, I read Nevil Shute’s ‘On the Beach’, and the completely abstract concept of a nuclear exchange took on a little more meaning in one teenager’s mind.

With a novel, it is very difficult to throw the epithet ‘alarmist’; the writer is not telling us with certainty what will be but rather imagining what can be. And it is the description of a possibility that will alter our brain’s cognition of risk more than any number of reports from the Intergovernmental Panel on Climate Change (as I touched on in my last post here).

To date, Peak Oil has brought out better works of fiction than climate change (although climate change usually gets a walk-on part). James Howard Kunstler’s ‘World Made by Hand’ and ‘The Witch of Hebron’ both bring home in vivid colours the day-to-day struggles in a world with no easy access to cheap oil. Like many such works, though, there is a strong thread of the irrational. Religion (although not as we traditionally know it) and magic become a greater part of life’s mix in the author’s eyes, as a result of the failure of rationalism as embodied in science.

Kunstler is still something of a cult figure and has not acquired the literary fame of two other novelists who have dealt with dystopias head on: Cormac McCarthy and Margaret Atwood. The environmental campaigner and journalist George Monbiot even had these words to say about McCarthy’s ‘The Road’:

“It could be the most important environmental book ever. It is a thought experiment that imagines a world without a biosphere, and shows that everything we value depends on the ecosystem.”

Both McCarthy’s ‘The Road’ and Atwood’s ‘The Year of the Flood’, however, start their stories after some unknown cataclysmic event. The reader may be left with a sense of unease, as was my teenage mind with Shute’s ‘On the Beach’, but an unease with what (with Shute I knew exactly)? Genetic engineering, global capitalism, advanced technology, pandemics, climate change? Tellingly, the publicity shot below from the movie ‘The Road’ has recently been wheeled out to accompany press articles on the potential impact of a euro break-up; the movie has become a generic metaphor for collapse, and climate change has to get in the queue.

Personally, I believe climate change has yet to find its Tolstoy. We see such luminaries from the world of science as Martin Rees openly contemplating the catastrophic potential of climate change, but this has had little resonance in the arts—or at least art that has caught the public’s imagination. Bill McKibben, the founder of the campaigning organisation 350.org, contrasts the situation with HIV, which produced “a staggering outpouring of art that, in turn, has had real political effect” (here). McKibben’s frustration is palpable:

Here’s the paradox: if the scientists are right, we’re living through the biggest thing that’s happened since human civilization emerged. One species, ours, has by itself in the course of a couple of generations managed to powerfully raise the temperature of an entire planet, to knock its most basic systems out of kilter. But oddly, though we know about it, we don’t know about it. It hasn’t registered in our gut; it isn’t part of our culture. Where are the books? The poems? The plays? The goddamn operas?

McKibben goes on to list some of the reasons artists have not effectively engaged: diffuse perpetrators, dispersed victims, different time frames—in fact, a nightmare plot to narrate. But despite the difficulties, I believe that until we get the ‘goddamn operas’, communicators of climate change science will have an uphill battle in changing people’s minds.

Do the Kids Care?

Climate change, if nothing else, is a time horizon risk: the longer you live, the more you are exposed to climate change and its impacts. Thus, to follow the logic, the old (and especially childless) should be less sensitive to climate change risk than the young. (For the different question of “Should the kids care?” see ‘Odds of Cooking the Kids’ here, here and here.) But do the young care?

A survey last year suggests the young care a little less about climate change than anyone else. This seems rather strange, since the young adults involved would have had a high exposure to the topic from early adolescence, both through the media and school.

The first World Climate Conference took place in Geneva in 1979, a few years after a landmark paper by Wally Broecker in 1975 established a link between anthropogenic (human) CO2 emissions and temperature rise. The Intergovernmental Panel on Climate Change (IPCC) was established in 1988, but it probably took another decade before the topic spilled out of the academic community and into the public domain.

By around 2006 or 2007, few people would have remained unaware of the issue, even if they differed about the causes and severity of the problem. The documentary ‘An Inconvenient Truth’ showcased Al Gore’s campaign to educate citizens about the dangers of global warming and received extensive publicity. Meanwhile, the IPCC’s Fourth Assessment Report declared that human-caused factors were ‘very likely’ the cause of climate change and was widely reported. In retrospect, these years appear to have seen the high water mark for public awareness of the risks from climate change (partly because carbon-industry financed lobby groups had only just started to enter the debate on the skeptics’ side).

For a younger generation, the general media buzz over climate change was also supplemented by information they received via their school curricula.

In the UK’s case, a child in high school in the 1980s would only have come across climate change in school if introduced to the topic by an enthusiastic science teacher. In 1995, however, climate change was formally introduced into the National Curriculum, and nowadays a pupil has no choice but to bump up against it in a variety of contexts including science, geography and even, occasionally, religious education.

In the United States, federal, state and local involvement in education has made the delivery of climate change education a little more variable between schools. Nonetheless, there appears to be a consensus among teachers that climate change is taking place and that it should be taught. A position paper (here) from the US National Association of Geoscience Teachers (NAGT) is unequivocal:

The National Association of Geoscience Teachers (NAGT) recognizes: (1) that Earth’s climate is changing, (2) that present warming trends are largely the result of human activities, and (3) that teaching climate change science is a fundamental and integral part of earth science education.

The National Science Teachers Association (NSTA) is a little less forthright on the subject, but in a 2007 NSTA President’s report entitled ‘Teaching About Global Climate Change’ we see this:

Central to environmental literacy is students’ ability to master critical-thinking skills that will prepare them to evaluate issues and make informed decisions regarding stewardship of the planet. The environment also offers a relevant context for the learning and integration of core content knowledge, making it an essential component of a comprehensive science education program.

Two of the most reliable sources of information for classroom teachers are the National Oceanic and Atmospheric Administration and the United Nations Intergovernmental Panel on Climate Change, both offering materials that are scientifically based and bias-free.

No prizes for bravery here, but by endorsing two sources that document the risks related to human-induced climate change, the NSTA in effect is adopting a similar position to the NAGT—but at one remove. The NSTA’s reticence is obviously because science teachers who promote awareness of the problem are likely to receive a lot of push-back; an NSTA survey (here) gives a sense of this:

(Rather disappointingly for a science-based organisation, neither the number of educators who responded nor the climate change beliefs of the responding educators were reported, rendering any firm conclusions problematic).

Overall, however, for those students who had not already taken a firm position on the veracity of human-induced climate change from their parents, the senior school experience of the last 10 years or so would have taught most of them that the climate is changing and that anthropogenic carbon emissions are to blame (based on scientific evidence). For 1990s high school graduates, the school input on the topic would likely have been far more mixed. And anyone over 35 is unlikely to have come across climate change at school at all.

So back to the survey—conducted jointly by the American University, Yale University and George Mason University—titled ‘The Climate Change Generation?’ The generation in question, as per the survey definition, was a sample of 1,001 adults aged between 22 and 35 when the survey took place (between December 24, 2009 and January 3, 2010).

Given the educational backdrop of the ‘Climate Change Generation’ we get two immediate counter-intuitive findings from the survey. Younger people neither think about climate change more nor worry about it more (or at least no more than others):

And this being a risk blog, I am particularly interested in people’s perceptions of the personal harm they could incur. Again, the young don’t appear particularly concerned.

Moreover, despite the impression that climate change concern (and activism) is the province of the young (and almost a social norm these days), the data just don’t show this to be true:

Could it be that the ‘youth’ factor is not determining the direction of the survey responses (and, when it does, the sign is the opposite of what one would expect) because the ‘old young’, who had come of age in the 1990s when climate change was less reported, were diluting the signal in the data? The answer is ‘no’, since the survey also split the young adults into two cohorts: in effect, the ‘young young’ and the ‘old young’. Note the answer ‘not at all worried about global warming’ at the bottom of the chart sees the ‘young young’ the least concerned of all:

On reflection, it appears that education has had no impact on the brain’s perception of risk, which takes us into the realm of cognitive psychology. A traditional view of the risk appetite of adolescents has suggested that they have a feeling of invulnerability (and perhaps this extends to those in their twenties as well). However, more modern findings, such as a paper by Cohn et al entitled ‘Risk Perception: Differences Between Adolescents and Adults’, suggest this is not the case:

Adolescent involvement in health-threatening activities is frequently attributed to unique feelings of invulnerability and a willingness to take risks. The present findings do not support either proposition and instead suggest that many adolescents do not regard their behavior as extremely risky or unsafe. Compared with their parents, teenagers minimized the harm associated with periodic involvement in health-threatening activities. Ironically, it is periodic involvement in these activities that jeopardizes the health of most adolescents. Thus teenagers may be underestimating the risk associated with the very activities that they are most likely to pursue, such as occasional intoxication, drug use, and reckless driving.

So to get a better idea of what is going on, it is worth moving on to the field of heuristics and biases in the perception of risk, which has become a key area of study in economics and finance over the last 30 years. This area of investigation was kicked off by the pioneering work of Nobel Laureate Daniel Kahneman and his collaborator Amos Tversky; a good and accessible summary of the work can be found in Kahneman’s recent book ‘Thinking, Fast and Slow’.

One critical finding was the distinction between ‘choice from experience’ and ‘choice from description’. Experimental data show that rare outcomes are overweighted when they are vividly described but are frequently underweighted if they are abstract. By extension, an otherwise abstract threat, like harm from radiation, may be overweighted as a risk if it calls forth rich associations that provide a vivid description: for example, images from Chernobyl, a scene from the movie ‘The China Syndrome’ or a picture of a child atom bomb victim suffering from radiation sickness.

Keeping this in mind, climate change risk is rather difficult to grasp in terms of the potential impact on oneself: no photos of dying babies to give us a descriptive representation—or at least only abstract theoretical ones.

Furthermore, risks are underweighted if we have no experience of them. That experience can also extend beyond our own to encompass that of others. A particular teen or adult may not have experienced a car crash through reckless driving, but it is almost certain that they will know someone personally, either family or friend, who has suffered from one. They thus get an experience boost by proxy.

Thankfully, few of us have yet experienced severely negative effects from climate change. However, an elderly person is more likely to have experienced, or known someone who has experienced, a rare event that gives them a proxy association of climate risk. Through having touched on the experience of war, flood and other natural disasters (and possibly even famine, for immigrants from low income countries), older people are better aware that ‘really bad stuff’ happens.

In all this, sets of statistical tables showing objective probabilities have far less impact on people’s perceptions of risk than one would expect if humans were no more than purely rational calculating machines. Presenting a person with a dry set of stats will barely move the risk perception needle—whether the subject is vulnerability to HIV infection or the destruction of the planet. We are just not built that way (even if we did do some stats at school).

Critically, though, the old perceive only a little more climate change risk than the young. Humans, as a whole, look like a teenager engaging in unprotected sex when it comes to global warming. Whether this poor risk perception can be changed is something I want to return to in a future post.

Back to that Big Number

In my post “A Big Number Gets Tweaked” I focused on ‘climate sensitivity’, aka the global mean surface temperature response to a doubling of CO2. It is an important number, and a grasp of what it means is a core part of what I would call ‘climate change literacy’.

Going back to the Intergovernmental Panel on Climate Change (IPCC)’s Assessment Report 4 (AR4) published in 2007, a definition of climate sensitivity can be found on page 12 of the Summary for Policy Makers here.

The equilibrium climate sensitivity is a measure of the climate system response to sustained radiative forcing. It is not a projection but is defined as the global average surface warming following a doubling of carbon dioxide concentrations. It is likely to be in the range 2°C to 4.5°C with a best estimate of about 3°C, and is very unlikely to be less than 1.5°C. Values substantially higher than 4.5°C cannot be excluded, but agreement of models with observations is not as good for those values. Water vapour changes represent the largest feedback affecting climate sensitivity and are now better understood than in the TAR. Cloud feedbacks remain the largest source of uncertainty.

The chart below gives a sense of the different sensitivity estimates that provided the background to the IPCC’s final number:

This definition of climate sensitivity dates back to a landmark paper by Jule Charney et al in 1979 (here). In fact, to avoid confusion, we could call it Charney sensitivity. Now what Charney sensitivity isn’t (surprisingly) is the real-world sensitivity of surface temperatures to a doubling of CO2. This is because Charney sensitivity was a blending of the results of two climate models that held a number of the variables constant. Of course, the Charney sensitivity in its modern version is now backed up by a multitude of models of far greater sophistication, but interestingly the sensitivity number that came out of the 30-year-old Charney report has held up pretty well. Nonetheless, the Charney sensitivity has a somewhat narrow definition. The excellent climate-scientist-run blog RealClimate (www.realclimate.org) explains this in more detail here:

The standard definition of climate sensitivity comes from the Charney Report in 1979, where the response was defined as that of an atmospheric model with fixed boundary conditions (ice sheets, vegetation, atmospheric composition) but variable ocean temperatures, to 2xCO2. This has become a standard model metric (because it is relatively easy to calculate). It is not however the same thing as what would really happen to the climate with 2xCO2, because of course, those ‘fixed’ factors would not stay fixed.

A wider definition, usually termed Earth System sensitivity, allows all the fixed boundary conditions in the Charney definition to vary. As such, ice sheets, vegetation changes and atmospheric composition can provide feedbacks to temperature and thus cause a greater temperature response over the longer term. The Earth System sensitivity is in theory closer to the real world, as it tells us at what temperature the system will ultimately get back to equilibrium.

The most influential calculation of Earth System sensitivity has been that made by NASA’s Jim Hansen, since it forms the scientific foundation for the 350.org climate change campaigning organisation. As the name suggests, 350.org urges humanity to strive toward a target of 350 parts per million (ppm) of CO2. The rationale for the target can be found here and rests heavily on a paper by Jim Hansen and his coauthors entitled “Target atmospheric CO2: Where should humanity aim?“.

In the abstract of the Hansen article, we immediately see a differentiation between a sensitivity that includes only fast feedback processes (a Charney sensitivity) and an equilibrium sensitivity that includes slower feedbacks (an Earth System sensitivity):

Paleoclimate data show that climate sensitivity is ~3°C for doubled CO2, including only fast feedback processes. Equilibrium sensitivity, including slower surface albedo feedbacks, is ~6°C for doubled CO2 for the range of climate states between glacial conditions and ice-free Antarctica. Decreasing CO2 was the main cause of a cooling trend that began 50 million years ago, the planet being nearly ice-free until CO2 fell to 450 ± 100 ppm; barring prompt policy changes, that critical level will be passed, in the opposite direction, within decades.

The paper then goes on to make a pretty forceful policy recommendation:

If humanity wishes to preserve a planet similar to that on which civilization developed and to which life on Earth is adapted, paleoclimate evidence and ongoing climate change suggest that CO2 will need to be reduced from its current 385 ppm to at most 350 ppm, but likely less than that.

Note that the article does contain a number of caveats over climate variability, climate models and other uncertainties. Further, as is the usual process in science, it has received various critiques, many suggesting that a figure of 6 degrees Celsius is too high for long-term sensitivity. What is not in dispute, however, is that an Earth System sensitivity incorporating long-term feedbacks will be higher than a Charney sensitivity with only short-term feedbacks (almost by definition).
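
To get a feel for what the two flavours of sensitivity imply, here is a minimal sketch assuming the standard logarithmic relationship between CO2 concentration and equilibrium warming (one ‘sensitivity’ of warming per doubling, with a 280 ppm pre-industrial baseline; both assumptions are mine, not figures from the papers discussed):

```python
import math

def equilibrium_warming(co2_ppm, sensitivity_per_doubling, preindustrial=280.0):
    """Eventual warming for a given CO2 level: 'sensitivity' degrees per doubling."""
    return sensitivity_per_doubling * math.log2(co2_ppm / preindustrial)

for sensitivity, label in [(3.0, "Charney (fast feedbacks only)"),
                           (6.0, "Hansen Earth System (slow feedbacks)")]:
    print(f"{label}: {equilibrium_warming(560, sensitivity):.1f} C at 2x CO2, "
          f"{equilibrium_warming(450, sensitivity):.1f} C at 450 ppm")
```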

Despite this fact, we see numerous media reports getting tangled up between the two types of sensitivity following the publication of the new Schmittner et al paper I talked about in a previous post. This from the Houston Chronicle:

To me, the real effect of this paper will be to really impair the credibility of the more extreme environmentalists who have been saying the planet faces certain doom from climate change.

I am thinking about such efforts as Bill McKibben’s 350 campaign, in which he asserts that 350 ppm is the most important number in the world. Such environmentalists assert that the planet will warm as much as 6 Celsius degrees with a doubling of atmospheric carbon dioxide levels.

That’s a big number and doubtless would have catastrophic consequences for the planet. This is not in dispute. But scientists are now telling us this is not going to happen.

Well ‘no’ actually. Since we are comparing apples and pears, scientists are not now telling us that catastrophic outcomes are not going to happen.

Getting back to the topic of risk, we can now see how an understanding of the different sensitivity concepts gives ordinary people a better idea of the climate risk they and their families face.

To reiterate, we are going from CO2, to temperature (via sensitivity), to impacts. To get a good idea of overall risk we need a sense of how carbon emissions are trending; then we need a feeling for how sensitive temperature is to CO2; and lastly an understanding of how much the earth changes (and the impact on us of those changes) once the world warms.

The Charney sensitivity is very useful since it gives a floor to the kind of temperature changes we will experience. If the best estimate of this sensitivity number is found in the future to be smaller than the current consensus of 3 degrees, then that—other things being equal—is a positive thing. However, we are not yet in a position to reduce the consensus based on the Schmittner paper.

The Hansen 6 degrees Celsius number is probably a little too high, but if we get anywhere close to it, we are still in the badlands of catastrophic climate change. Nonetheless, the time horizon for the full warming stretches generations into the future; thus, it is probably not the risk metric you would use if your concern goes out only as far as your grandchildren. But I think Jim Hansen receives a lot of undeserved ridicule in certain parts of the blogosphere and American press for his championing of a number that implies the yet unborn have rights too.

Putting this question of human ethics to one side, those alive today are really interested in a Charney sensitivity plus alpha from a climate risk perspective. The components that make up that ‘plus alpha’ are a topic for another post.

Enough Good Energy?

Last week, I attended a presentation entitled ‘The Future of Energy’ given by the company Good Energy and hosted by the Pangbourne and Whitchurch Sustainability Group (PAWS). The firm was established to help promote sustainable (i.e. non-fossil-fuel) energy production and thus help tackle the problem of climate change, which it sees as predominantly an energy problem.

Good Energy offers a service allowing householders to purchase electricity that is 100% generated by sustainables. In addition, the company has been at the vanguard of microgeneration initiatives, allowing small electricity producers to sell their surpluses back onto the grid. Overall, Juliet Davenport, the CEO of Good Energy, appears to relish her role of providing a genuine alternative to the incumbent Big Six UK energy suppliers.

I was intrigued to see how a company with an established track record of bottom-up, micro-economic initiatives would paint a macro-economic big picture outlook of what energy provision in the UK could look like in the year 2050.

The first thing that struck me was how heavily their renewables scenario relied on the work of David MacKay, a professor in the Physics Department of Cambridge University and Chief Scientific Advisor to the Department of Energy and Climate Change. MacKay has done an invaluable service to all things sustainable by providing a no-nonsense toolkit of what would work (and what would not work) in the form of a resource web site at www.withouthotair.com. I own the book, ‘Sustainable Energy - Without the Hot Air’, which is an offspring of this site, and think it belongs on every bookshelf.

The starting point for Good Energy’s analysis is current UK per-capita energy consumption. They put this at 125 kilowatt-hours per day (kWh/d) of energy broken down as follows:

Current UK Consumption (kWh/d per person)

What struck me most about this chart was the relatively modest role that electricity was currently playing in the mix, with only a 14% share.

A new, post-carbon world was then presented for 2050. Note first that the average consumption has shrunk rapidly from 125 kWh/d to 68 kWh/d, or by about 46%. Further, for the transport sector, you can see that electricity has almost completely replaced fossil fuel.

Future UK Supply (kWh/d per person)

To achieve this end, electrical energy production has undergone a multi-decade boom, rising from 18 kWh/d per person to 38 kWh/d, a jump of around 110%. Fossil fuels are out of the mix by 2035, by which time we have also said goodbye to nuclear. Note that, if we believe the demographers, total energy production would jump even more in percentage terms, owing to projected UK population growth.

And to the question, “what renewable is doing the heavy lifting?”, the answer from the chart below is plainly “wind”, with a combined onshore and offshore contribution of 25 kWh/d per person, almost 40% higher than total current per-capita electricity consumption.
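
The scenario arithmetic quoted above is easy to check; the sketch below simply replays the numbers from the presentation:

```python
# The Good Energy figures quoted in the text, kWh/d per person.
CURRENT_TOTAL = 125.0
FUTURE_TOTAL = 68.0
CURRENT_ELECTRICITY = 18.0
FUTURE_ELECTRICITY = 38.0
WIND_2050 = 25.0  # combined onshore and offshore

print(f"Electricity share today: {CURRENT_ELECTRICITY / CURRENT_TOTAL:.0%}")   # ~14%
print(f"Fall in total consumption: {1 - FUTURE_TOTAL / CURRENT_TOTAL:.0%}")    # ~46%
print(f"Rise in electricity output: "
      f"{FUTURE_ELECTRICITY / CURRENT_ELECTRICITY - 1:.0%}")                   # ~110%
print(f"Wind vs current electricity: "
      f"{WIND_2050 / CURRENT_ELECTRICITY - 1:+.0%}")                           # ~+40%
```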

Now I think that Good Energy would be the first to agree that the projection is more of a thought exercise than a serious stab at what the UK energy world will look like in 2050: a big ‘what if’. The presenter admitted that the underlying numbers were the work of a summer intern who had been let loose on David MacKay’s database. I think most of us have tortured the numbers to make them speak at some time in our careers, and I suspect this may have happened in this particular case.

More interesting, though, is how marginal the micro initiatives appear when compared with the overall picture 40 years out. And this was from a presentation that lauded the empowerment that would arise from micro-generation and decentralisation. Mega wind projects, and especially mega offshore wind projects, are not within the financial capacity of individuals or local communities. As such, my initial suspicion would have been that the micro-generation figures had been tortured most to produce the largest jumps, which does not appear to be the case.

A few tentative conclusions. To dispense with fossil fuels altogether, each individual’s energy consumption has to take a pretty big hit. That said, through rejigging the ways in which we use energy (that is, greater efficiency), we would be able to maintain standards of living pretty close to those of today (and new technology in fields such as medicine would likely take them higher). But to do that, the government has to become the major player in sustainables. A grass roots sustainable energy generation revolution just won’t cut it, according to the Good Energy numbers.

There is a darker future though. Through debt, demographics and peak resource constraints, the government could become severely impaired in its ability to set the energy agenda. This is the kind of scenario that could come out of the pages of Joseph Tainter’s ‘The Collapse of Complex Societies’. In such a situation, individuals and communities would be forced back onto their own energy generating abilities. Accordingly, the energy descent could be from 100 kWh/d plus per person to 10 kWh/d or less. Now that would be a very different society from the one today.

The IEA's Closing Window

One small mercy in the global warming debate is that the International Energy Agency (IEA) has not become a closet supporter of the carbon lobby under the mantra “drill, baby, drill”. The IEA, basically a rich country club that focuses on energy security and research, has come under criticism over the years for downplaying the potential for renewables and for underplaying the threat from peak oil, but it can’t be accused of neglecting the issue of climate change (or at least not recently).

Every year, around November, the IEA issues its flagship publication, the World Energy Outlook (WEO). Jump back a decade to see the WEO’s view of the world in 2001 here. That publication mentions both renewables and carbon capture and sequestration (CCS) technology—but there is no explicit mention of climate change. For renewables, an economic case exists for their introduction irrespective of global warming (if you can get down to the right price point). But the WEO was being rather coy with CCS; it is a pretty pointless technology if you don’t already buy into the threat posed by climate change. So at the time, climate change could be characterised as the energy issue ‘that dares not speak its name’.

As the years rolled by, the WEO became rather less reserved about the issue, culminating in a sea change in its approach in the 2006 report, in which the Executive Summary kicked off with this sentence:

The world is facing twin energy-related threats: that of not having adequate and secure supplies of energy at affordable prices and that of environmental harm caused by consuming too much of it.

And, at last, we are out of the closet:

Safeguarding energy supplies is once again at the top of the international policy agenda. Yet the current pattern of energy supply carries the threat of severe and irreversible environmental damage – including changes in global climate.

The report then went on to note that its mandate had been revised to “advise on alternative energy scenarios and strategies aimed at a clean, clever and competitive energy future” and that “greenhouse-gas emissions would follow their current unsustainable paths through to 2030 in the absence of new government action”.

The following year, the 2007 report (here) explained the scale of the challenge in more detail:

According to the best estimates of the Intergovernmental Panel on Climate Change, this concentration would correspond to an increase in average temperature of around 3°C above pre-industrial levels. In order to limit the average increase in global temperatures to a maximum of 2.4°C, the smallest increase in any of the IPCC scenarios, the concentration of greenhouse gases in the atmosphere would need to be stabilised at around 450 ppm.

To achieve this, CO2 emissions would need to peak by 2015 at the latest and to fall between 50% and 85% below 2000 levels by 2050.

We estimate that this would require energy-related CO2 emissions to be cut to around 23 Gt in 2030 – 19 Gt less than in the Reference Scenario and 11 Gt less than in the Alternative Policy Scenario. In a “450 Stabilisation Case”, which describes a notional pathway to achieving this outcome, global emissions peak in 2012 at around 30 Gt.

In the subsequent WEO in 2008, an entire section of the report (the full report can be found here) was devoted to setting out the energy production paths required to hit both 450 ppm and 550 ppm CO2-equivalent targets. Through providing such depth of analysis, the IEA had thus positioned itself as a vital source of information for anyone trying to understand the challenge of climate change.

Fast forward to November 2011, and we have another 4 years of data compared to what the IEA had in hand back in 2007. So how well are we doing in terms of achieving their “450 Stabilisation Case” (stabilisation at 450 parts per million of CO2 equivalent)?

As I stressed in my ‘Odds of Cooking the Kids’ posts, it is possible for any person on the planet to answer this kind of question by periodically checking the fossil fuel carbon emission data releases found here. The most up-to-date data we have is the advance estimate for fossil fuel carbon emissions in 2010, which is 9.1 giga tonnes. Translate this into CO2 (remembering to multiply by 3.67) and we get around 33.4 giga tonnes of CO2 emitted in 2010. So the 2007 report’s global emissions peak of 30 giga tonnes in 2012 looks a bit of a stretch goal!
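
That self-check is simple enough to script. A minimal sketch, using only the numbers just quoted:

```python
# Compare the 2010 advance emissions estimate with the WEO-2007
# "450 Stabilisation Case" peak of ~30 Gt of CO2 in 2012.
fossil_carbon_2010 = 9.1              # Gt carbon, CDIAC advance estimate
co2_2010 = fossil_carbon_2010 * 3.67  # convert to Gt CO2, ~33.4
target_peak_2012 = 30.0               # Gt CO2

print(f"2010 emissions: {co2_2010:.1f} Gt CO2, "
      f"{co2_2010 / target_peak_2012 - 1:+.0%} vs the projected 2012 peak")
```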

But if we go back to my post here, we should note that such peaks are not cast in stone: we have a trade-off between early CO2 emission peaks and slow reductions, and late peaks and aggressive reductions.

This brings us to the current World Energy Outlook for 2011, in which some new knowledge is brought to the table in the form of energy production inertia and the price of production. (Note that for past issues of the WEO it is possible to access the full reports for free, but for the most recent issue only the Executive Summary (here) is available without payment.)

Let’s just accept for the moment that 450 ppm CO2-equivalent is our atmospheric goal, and that this will likely restrict the rise in global mean temperature to below 2 degrees Celsius above pre-industrial levels—the level deemed dangerous. Then, ignoring cost, we might be indifferent between the three pathways in the chart above. But, of course, cost does matter.

Now it could be argued that the cost of renewables will come down as technology advances, so suggesting we hang back in their deployment. However, the IEA pokes holes in this line of reasoning through emphasising the large upfront sunk costs of energy production infrastructure and the long life of such plant once it is built. In short, once you have installed a coal-fired power station, it makes little sense to decommission it before the end of its useful life.

Imagine that you buy a brand-new SUV with the intention of going green at some stage in the future by buying an all-electric Nissan Leaf. Assuming there is no second-hand market (and there isn’t for power stations), the time to switch cars from an economic perspective is when the useful life of your SUV has come to an end. Now you could make the trade three years after buying your SUV, but—remembering there is no second-hand market—that would mean trashing a vehicle with many years of useful life and considerable economic value left. This is the logic the IEA follows. In their words:

Four-fifths of the total energy-related CO2 emissions permissible by 2035 in the 450 Scenario are already “locked-in” by our existing capital stock (power plants, buildings, factories, etc.). If stringent new action is not forthcoming by 2017, the energy-related infrastructure then in place will generate all the CO2 emissions allowed in the 450 Scenario up to 2035, leaving no room for additional power plants, factories and other infrastructure unless they are zero-carbon, which would be extremely costly.
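
To make the SUV analogy concrete, here is a toy calculation of the lock-in logic (every number below is invented purely for illustration):

```python
# Hypothetical costs: once the SUV is bought, its purchase price is sunk,
# so only its running cost competes against the full cost of the Leaf.
SUV_RUNNING_PER_YEAR = 2000   # fuel and maintenance
LEAF_PURCHASE = 28000
LEAF_RUNNING_PER_YEAR = 500
YEARS_LEFT = 7                # remaining useful life of the SUV

keep_suv = SUV_RUNNING_PER_YEAR * YEARS_LEFT
switch_now = LEAF_PURCHASE + LEAF_RUNNING_PER_YEAR * YEARS_LEFT

print(f"Run the SUV to end of life: {keep_suv}")   # 14000
print(f"Scrap it and switch today: {switch_now}")  # 31500
# Early replacement only pays if the running-cost saving beats the new
# capital outlay, which it rarely does for freshly built power stations.
```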

In sum, to hit the 450 CO2-equivalent target, we would have to stop building new fossil fuel based energy capacity completely after 2017. And if we maintain the current rate of fossil fuel based energy production installation, we will head for far higher degrees of warming as can be seen in the chart below.

The major culprit is, of course, coal, which has been taking the lion’s share of new capacity installation. Indeed, over the last decade new coal plant has been almost equivalent to all other types of energy generation capacity put together.

The gloomy conclusion is that the 450 ppm CO2-equivalent target, and thus the 2 degrees of warming limit, is already almost out of reach. The key question then is “how far could we overshoot, looking at the cost dynamics of energy installation alone?” This is a topic I will return to in future posts.

Richard Betts and the 2 Degree Threshold

The climate skeptic and denier blogs have been awash with comments left by Richard Betts on the skeptic site Bishop Hill. Betts is Head of Climate Impacts at the Met Office and one of the key contributors to the ground-breaking ‘4 Degrees and Beyond’ conference held in Oxford in 2009 and the subsequent set of formal academic papers published by the Royal Society (see here).

I only occasionally frequent the skeptic blogs, because they rarely manage to convey both sides of the argument. An intellectual prerequisite of any successful hedge fund manager is the ability to assimilate and assess an investment thesis from all angles: if your initial judgement is that a particular stock or bond is a ‘buy’, the first thing you do is read every piece of research that suggests it is a ‘sell’. To put it bluntly, if you invest your money with a portfolio manager who does not have this discipline as part of their investment process, you are a fool and deserve to lose your money.

Keeping this in mind, note that the blogosphere is rife with analysis by people who not only make no attempt to understand the merits of the opposing argument, but frequently don’t even bother to read it.

So what had me trawling through the comments on Bishop Hill (the original post can be found here, and Betts is active in the comments section as well)? This attack on the concept of ‘dangerous’ climate change:

Most climate scientists do not subscribe to the 2 degrees “Dangerous Climate Change” meme (I know I don’t). “Dangerous” is a value judgement, and the relationship between any particular level of global mean temperature rise and impacts on society are fraught with uncertainties, including the nature of regional climate responses and the vulnerability/resilience of society. The most solid evidence for something with serious global implications that might happen at 2 degrees is the possible passing of a key threshold for the Greenland ice sheet, but even then that’s the lower limit and also would probably take centuries to take full effect.

Andrew Revkin, who writes on all things environment and climate change in his Dot Earth column for The New York Times, asked Betts to amplify his thoughts. Betts’ reply is worth reading in its entirety and can be found here. However, here are a couple of the most important passages:

The science suggests that, by and large, the risk of major negative impacts of climate change increases with higher levels of global warming. However, this in itself is not enough to define what level of warming is “dangerous,” especially since the projections of actual impacts for any level of warming are highly uncertain, and depend on further factors such as how quickly these levels are reached…, and what other changes are associated with them.

And then critically,

With such uncertainties, it’s all down to attitude to risk — “dangerous climate change” should be defined in the context of the level of risk that is considered acceptable. It’s a judgment call.

A deep dive into the comments on the original Bishop Hill post throws further light on what Betts actually means—and what he doesn’t mean. To another commentator’s claim that Betts has admitted that the 2 degree meme is ‘in no way dangerous’, Betts has this to say:

You are wrong, I didn’t say that. What I actually said was:

“While really bad things may happen at 2 degrees, they may very well not happen either – especially in the short term (there may be a committment to longer-term consequences such as ongoing sea level rise that future generations have to deal with, but imminent catastrophe affecting the current generation is far less certain than people make out. We just don’t know.”

He follows this with a comment that comes close to encapsulating my own view of how climate change should be viewed: that is, through a prism of probabilities and outcomes. In Betts’ words:

As I think I’ve said before, it’s all down to attitude to risk. Uncertainty works both ways – large uncertainties mean there are risks at the bad end, including the possibility of low-probability, high-impact outcomes. Given that we don’t know what the risk is, but think it is non-zero, do we take action to reduce that risk? That’s a political decision not a scientific one, and will rely on judgement calls (like most complex decisions, political or otherwise). I’m just saying that having made that judgement call, the science should not then be skewed to support it, as this could then influence other important decisions in undesireable ways. The uncertainties are large and we have to recognise that, formulate policy accordingly and be sure that scientific evidence appropriate informs several different policy areas that may rely on it.

Where I think I disagree with him is in the idea that climate scientists can remain honest, neutral and detached brokers of impartial information, with politicians making all the judgment calls. The politicians are floundering with the complexities of climate change as things stand. But that is a topic for another post.

The Onion, the FT and Climate Change Coverage

The Onion continues to mine climate change news for its brand of biting satire. Here is a little piece they did in response to the sharp jump in carbon emissions in 2010 to a new record, which I flagged in my last post.

And here is one on the disconnect between contemplating the implications of climate change and actually acting on them.

My all-time favourite, though, is the article they published in November 2010 under the headline: Global Warming Issue From 2 or 3 Years Ago May Still Be Problem.

The article beautifully captured the inability of the public and politicians to stay with the topic in the face of the numerous other issues that have jockeyed for their time:

“Global warming, if you remember correctly, was the single greatest problem of our lifetime back in 2007 and the early part of 2008,” CGD president Nancy Birdsall said. “But then the debates over Social Security reform and the World Trade Center mosque came up, and the government had to shift its focus away from the dramatic rise in sea levels, the rapid spread of deadly infectious diseases, and the imminent destruction of our entire planet.”

Compare and contrast with an article by Simon Kuper titled “Climate Change: Who Cares Anymore” in the Financial Times Magazine.

Kuper highlights how the world has, in effect, grown bored with climate change and moved on despite the risks. As he puts it:

We journalists are dropping the topic too. It’s been a thrilling year for news, but the great absence on the news sites is climate change… The environment bores readers.

He then moves on to look at the implications of neglecting climate change:

The question then becomes: what will happen? Nobody is sure. Almost all climate scientists think the outcome will be bad, perhaps catastrophic. They foresee more storms, droughts, floods and crop failures around the world, as Obama said in 2009 when he was still talking about these things.

But then we have an absolutely shocking passage confusing uncertainty with risk.

However, climate is far too complex a system to permit exact predictions. Nobody knows whether global temperatures will rise two degrees centigrade this century, nor whether that is the tipping point for catastrophe. When climate scientists make exact predictions, says Pielke, it’s usually a bid to focus the minds of politicians and voters. It hasn’t worked.

Well, no actually: climate isn’t so complex that we can’t get a sense of the risk (see my last three posts here, here and here). You can’t make an ‘exact prediction’ of whether a 60-a-day smoker will die from lung cancer thirty years hence. In fact, you will only know with certainty once he or she is dead—from lung cancer or other causes. But you do have a sense of the probabilities and outcomes that come with a 60-a-day habit, and so it is with CO2.

Accordingly, the strategy that Kuper implicitly endorses makes no sense whatsoever.

Rich countries now have a semi-conscious plan: whatever happens, we’ll have the money to cope.

For we know what a world of 4 or 5 degrees Celsius of warming will bring, since the earth has been there before. And we know the rough probabilities of reaching 4 or 5 degrees of warming if we continue on our current emissions path.

Thus it is really The Onion, with its spoof climatologist Helen Marcus, that gets the reporting right, with a final paragraph in their piece that should make you laugh—if it doesn’t make you cry:

“Climate change is real, and we are killing our planet more every day,” said climatologist Helen Marcus, who has made similar statements in interviews in 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, and 2010. “We need to make a serious effort to stop it, or, you know, we’ll all die. There really isn’t much else to say.”

Odds of Cooking the Kids: Part 3

This is the final post of this particular series. In my previous post, we set out a framework that allowed us to get a sense of whether our kids’ lives would be warmed up to the degree that they would be very different from ours. And when I say ‘different’, I don’t mean ‘different’ as in access to different technology, but ‘different’ as in living in a different economic and political world.

In many decisions in life, we use simple rules of thumb as starting points for decision-making. In the same manner, I have suggested that the Intergovernmental Panel on Climate Change (IPCC)’s benchmark of 2 degrees Celsius of warming (from pre-industrial levels) provides one such rule of thumb: it tells us what degree of warming is likely to have a material impact on food production and economic growth. This has been termed ‘dangerous’ climate change.

In a similar vein, ‘extremely dangerous’ climate change has been associated with 3 or 4 degrees Celsius of warming—a degree of warming that would suggest socio-economic instability; that is, the failure of sovereign states, and possibly a reworking of the post-war international political order.

Rules of thumb are by definition approximations. We don’t know exactly how resilient world food production will be in a 2-degree warmer world; we don’t know whether economies can grow enough to easily adapt to a 2-degree world—or, indeed, whether it is actually possible to inoculate ourselves against climate change through economic growth, as economists such as William Nordhaus suggest (I think not, others would beg to differ, we shall see).

What we do know, however, is that a 2-degree world poses an appreciable risk to our welfare. We could blindly go about our everyday lives (as most of us are currently doing) and get lucky: the world economy grows, technology advances, we prove able to cope with climate change, and 2 degrees of warming proves the peak. But then again, maybe not.

But will we be able to get a better handle on how bad things could get as data comes in over the next decade? The answer is ‘yes’—up to a point; we will never escape uncertainty entirely. Yet the human condition is one of decision-making under uncertainty. This is why the climate skeptic refrain that we don’t have enough certainty to make any decisions over fossil fuel emissions is so absurd. How much certainty do we have of future outcomes when choosing a college, selecting a spouse or applying for a job? The answer is that we take a stab at future outcomes based on the information currently available. As more information comes in, we revise our view of what the future will bring—but we never achieve certainty.

So in the spirit of empowerment, I suggested in my last post that you periodically check whether our annual fossil fuel carbon emissions are consistent with a world that stays within 2 degrees of warming, or point to a world where we overshoot this target. The Carbon Dioxide Information Analysis Center’s advance estimate for 2010 fossil fuel carbon emissions was 9.1 giga tonnes of carbon, a 5.9% rise from 2009. Quite simplistically, if this number does not start declining by 2020, then it is extremely unlikely that warming will be kept below 2 degrees. Further, if this number continues to grow as the current decade progresses, then we will move into the territory of ‘extremely dangerous’ climate change of 3 or 4 degrees Celsius—or more.

Let’s dig a little and see if we can get a better idea of how the CO2 emission number is likely to evolve over the coming decade. In my last post I introduced the idea of a carbon budget, following the work of Meinshausen and other scientists, and suggested that we had already used up 28% of that budget since 2000. Note, though, that the precise number should not be given too much weight; rather, it is a best estimate of where we are in terms of emissions. Moreover, even if we heroically manage to hold emissions constant from now on, we would run out of our budget by 2030, assuming a) we want a 75% chance of keeping within 2 degrees of warming and b) emissions fall to zero in the year after 2030.
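For those who want to replicate the budget arithmetic, here is a minimal sketch. The 1,000 giga tonne CO2 budget for 2000–2049 (for a 75% chance of staying below 2 degrees) is my assumed input, following Meinshausen and colleagues; the other figures are those quoted in this post:

```python
# Sketch of the carbon budget arithmetic under flat emissions. The budget
# figure is an assumed input following Meinshausen et al.; the emissions
# figure is the CDIAC estimate quoted in this post.

BUDGET_GTCO2 = 1000.0         # assumed cumulative 2000-2049 CO2 budget
SHARE_USED_SINCE_2000 = 0.28  # fraction of the budget already emitted
CO2_PER_C = 44.0 / 12.0       # ~3.667: weight of CO2 vs carbon
FOSSIL_CARBON_2010_GTC = 9.1  # CDIAC advance estimate, Gt of carbon

annual_gtco2 = FOSSIL_CARBON_2010_GTC * CO2_PER_C   # ~33 Gt CO2 per year
remaining = BUDGET_GTCO2 * (1 - SHARE_USED_SINCE_2000)

# If emissions flat-line at the 2010 rate, when is the budget exhausted?
years_left = remaining / annual_gtco2
print(f"Remaining budget: {remaining:.0f} Gt CO2")
print(f"Exhausted around {2011 + years_left:.0f}")  # early 2030s
```

The sketch lands in the early 2030s, the same ballpark as the 2030 figure above; including land use change emissions, which the fossil fuel number excludes, would pull the date forward a little.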

Two of these assumptions are patently unrealistic: first, that near-term emissions growth will miraculously flatline; and second, that emissions will instantaneously revert to zero at some appointed year.

Why? Firstly, because economic growth is highly energy dependent over the near term (and probably the mid term as well). As such, we have now entered the realm of economic cause and effect, as opposed to physical-science cause and effect (for example, the link between atmospheric CO2 and global mean temperatures).

The correlation between economic growth and energy has long been known, but is not always truly appreciated. Indeed, many scholars now believe that the industrial revolution was as much a fossil fuel energy revolution (for example, see here) as a pure technology revolution (hard technology such as James Watt’s steam engine, and soft technology such as that cornerstone of capitalism, the joint stock company). If there had been no available coal (and later oil), there would have been no industrial revolution—and no miraculous jump in GDP growth and associated living standards.

The degree of correlation between growth and energy, however, can shift through time. Moreover, the correlation between energy and CO2 emissions can also alter with the years. Thus, we have a three-step chain: from GDP growth, to energy production growth, to CO2 emissions growth.
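To see how the three steps compound, here is an illustrative sketch in the spirit of a Kaya-style decomposition; all three growth rates are assumptions of my own, not measured values:

```python
# The three-step chain from GDP growth to emissions growth, in the spirit
# of a Kaya-style decomposition. All three rates are illustrative assumptions.

gdp_growth = 0.04              # assumed global real GDP growth per year
energy_per_gdp_change = -0.01  # energy intensity of GDP (efficiency gains)
co2_per_energy_change = -0.005 # carbon intensity of energy (fuel mix shift)

# Emissions growth compounds the three rates:
emissions_growth = ((1 + gdp_growth)
                    * (1 + energy_per_gdp_change)
                    * (1 + co2_per_energy_change)) - 1

print(f"Implied CO2 emissions growth: {emissions_growth:.1%} per year")  # ~2.4%
```

Even with steady efficiency gains and a slowly greening fuel mix, 4% global growth still implies emissions growing at over 2% a year; decarbonisation has to outpace growth before emissions actually fall.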

More specifically, an advanced service-orientated economy may be able to grow with little increase in energy inputs (although frequently we find that this is because it has, in effect, outsourced its energy needs to less-developed countries that make the solid ‘stuff’ complementing the services the advanced economy provides). In addition, such an economy may be able to fulfil what little additional energy need it has by developing renewables such as solar and wind, which produce minimal (at least explicitly counted) CO2 emissions.

If we put the developed and less-developed countries together, we get a sense of the dependency of global GDP on carbon emissions. PWC put out a report in 2006 that looked at the relationship between growth and carbon and then mapped out possible future scenarios (here).

As you can see from the chart above (click for a larger image), in all the scenarios carbon emissions grow more slowly than GDP, but the growth rates differ widely, from the ‘Scorched Earth’ scenario (we increasingly rely on dirtier sources of energy like coal and tar sands to meet our energy needs) to a green/technology nirvana scenario (Green Growth plus Carbon Capture and Storage) in which we have managed to almost completely decarbonise economic growth. A more detailed definition of the various scenarios is given below (click for larger image):

Note that since the PWC report was published in 2006, the link between carbon emissions and GDP has followed a path slightly worse than the ‘Baseline’ scenario.

So what happens if we continue on a path of relatively strong global growth (even if there is meagre growth in OECD countries) and relatively little success in decarbonisation (following on from the failure of the Copenhagen talks to come up with hard global goals on carbon emission mitigation)?

The United Nations Environment Programme (UNEP)’s “The Emissions Gap Report” provides some answers. It builds on the carbon budget concepts discussed in my last post and highlights the choice between an early peak in carbon emissions followed by a slow decline, and a late peak followed by a rapid decline.

The report then goes on to look at what emissions need to look like in 2020 if we wish to hold the rise in global mean temperature from pre-industrial levels to 2 degrees Celsius (what they term a ‘likely’ chance of success). They use 2005 as their base year, when a total of 45 giga tonnes of CO2 equivalent was emitted.

Note the use of CO2 equivalent. The term means CO2 emissions plus emissions of other, non-CO2 greenhouse gases translated into equivalent CO2 units. This is a different unit of measure from that found at the Carbon Dioxide Information Analysis Center (CDIAC) web site I referred to in my last post (they focus on giga tonnes of carbon), and different from the units the International Energy Agency (IEA) uses in its reports (they look at giga tonnes of CO2). We will return to this issue a little later, so let us stay with CO2 equivalent, or CO2-eq.
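Since the units trip up even careful readers, here is a small sketch showing how the measures relate. The 44/12 weight ratio is basic chemistry; everything else is just labelling:

```python
# The three units that crop up in climate reports: Gt of carbon (CDIAC),
# Gt of CO2 (IEA) and Gt of CO2-equivalent (UNEP). The first two are
# linked by simple chemistry; CO2-eq has no fixed conversion factor.

CO2_PER_C = 44.0 / 12.0  # ~3.667: molecular weight of CO2 vs atomic carbon

def carbon_to_co2(gt_carbon: float) -> float:
    """Giga tonnes of carbon -> giga tonnes of CO2."""
    return gt_carbon * CO2_PER_C

def co2_to_carbon(gt_co2: float) -> float:
    """Giga tonnes of CO2 -> giga tonnes of carbon."""
    return gt_co2 / CO2_PER_C

# CO2-eq folds in non-CO2 greenhouse gases (methane, nitrous oxide, etc.)
# weighted by global warming potential, so it must be read from the source
# report (e.g. UNEP's 45 Gt CO2-eq for 2005) rather than derived.

print(carbon_to_co2(9.1))  # CDIAC's 2010 fossil fuel carbon -> ~33 Gt CO2
```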

The first thing to note in the chart above is that the best estimate of having a 66% chance of keeping within the 2 degree limit requires CO2-eq emissions in 2020 to be actually below 2005 levels. How realistic is that? Looking at the alternative scenarios, the answer must be: “it isn’t”. Business as usual sees us a whopping 11 giga tonnes of CO2-eq above the required target (25% above), while even strict observance of binding pledges sees an overshoot.

So if we are likely to wander into the realm of ‘dangerous’ climate change, what is the likelihood of getting into the even worse situation of ‘extremely dangerous’ climate change, where our kids’ lives could be transformed? Building on UNEP’s analysis, an academic paper by Joeri Rogelj and co-authors in the journal Nature throws some light on this question (see here). They model a series of emission pathways, each consistent with a 66% probability of staying within a particular temperature increase, based on the spread of estimates of how sensitive temperature is to CO2.

Remember, if we get lucky, temperature could turn out to be less sensitive to a doubling of CO2 than our best estimate of 3 degrees; if we are unlucky, it could turn out to be more sensitive. In the former case, we can get away with emitting more CO2; in the latter, less. Unfortunately, we won’t get a better handle on which sensitivity is correct for a decade or two, and any CO2 emitted in the meantime will stay in the atmosphere for hundreds of years—and warm the earth for hundreds of years.

The Rogelj paper therefore provides a road map of where various CO2-eq emission paths will take us in terms of temperature rise, with signposts for the years 2020 and 2050. For a larger image, you can click on this link to the chart from the original article.

Now 2050 is a long way away, but 2020 is less than a decade off. Moreover, given the inertia in energy production systems (you can’t replace coal-fired power stations with renewables overnight), you can get a sense of where we will be in 2020 just by seeing whether current emissions growth is decelerating, accelerating or staying the same. So let’s look at this signpost in more detail (click on the table for a larger image).

In column three of the table above, the central number is the median estimate of CO2-eq emissions in 2020 consistent with the peak in global mean temperature given in the left-hand column. So for us to stay within 2 degrees of warming with a 66% probability, we would need to limit CO2-eq emissions to 44 giga tonnes in 2020, one giga tonne less than we emitted in 2005. What about ‘extremely dangerous’ climate change of 3 degrees of warming? The median emission given for that limit is 52 giga tonnes, 7 giga tonnes above 2005.

Unfortunately—and this goes back to my frustration with the dog’s dinner of units that the scientific community uses in climate change publications—there isn’t a readily accessible and regularly updated source of CO2-eq data. However, given that we are interested in changes, this doesn’t really matter. So let us use 2005 as our base year and see what fossil fuel carbon emissions were in that year according to the CDIAC (see here). You will find that in 2005 a total of 8.1 giga tonnes of carbon was emitted. For 2010, the CDIAC’s advance estimate has the world emitting 9.1 giga tonnes of carbon. So in those five years, carbon emissions increased by 1 giga tonne.

Extrapolating that growth rate, we can make a quick and dirty estimate that carbon emissions will rise by another 2 giga tonnes by 2020, making a grand total of 3 extra giga tonnes of carbon emissions over our 2005 base year. Now we have to translate carbon emissions into CO2 emissions by weight, remembering from the last post that the CO2 molecule is 3.667 times heavier than a single carbon atom. That means CO2 emissions will be roughly 11 giga tonnes higher in 2020 than in 2005.

Now let’s be conservative and assume that non-CO2 greenhouse gases flatline over this period (in reality, atmospheric concentrations of these gases will likely rise a bit), so the 11 giga tonne increase in CO2 from 2005 to 2020 is also an 11 giga tonne rise in CO2-eq for the period. Adding that figure to our starting amount of 45 giga tonnes of CO2-eq in 2005 gives us roughly 56 giga tonnes of CO2-eq emissions in 2020. And 56 giga tonnes of CO2-eq emissions in 2020 is consistent with an emissions pathway that leads us to around 5 degrees of warming. This, in turn, is in line with UNEP’s baseline scenario.
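For completeness, here is the whole quick-and-dirty extrapolation as a short sketch, using only the CDIAC and UNEP figures quoted above:

```python
# The quick-and-dirty 2020 extrapolation from the last few paragraphs,
# using only the CDIAC and UNEP figures quoted above.

CO2_PER_C = 44.0 / 12.0  # ~3.667
carbon_2005 = 8.1        # Gt of carbon (CDIAC)
carbon_2010 = 9.1        # Gt of carbon (CDIAC advance estimate)
co2eq_2005 = 45.0        # Gt CO2-eq (UNEP base year)

growth_per_5yr = carbon_2010 - carbon_2005           # 1 Gt of carbon per 5 years
carbon_2020 = carbon_2010 + 2 * growth_per_5yr       # extrapolate two more steps
extra_co2 = (carbon_2020 - carbon_2005) * CO2_PER_C  # 3 Gt C -> ~11 Gt CO2

# Assume non-CO2 gases flat-line, so the CO2 rise equals the CO2-eq rise:
co2eq_2020 = co2eq_2005 + extra_co2
print(f"Estimated 2020 emissions: {co2eq_2020:.0f} Gt CO2-eq")  # ~56
```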

True, we could get lucky even if we are emitting 56 giga tonnes of CO2-eq in 2020 (although it is unlikely that such luck would allow us to defend the 2 degree target, or even a somewhat higher one). Temperature could prove less sensitive to CO2 than our current best estimate. Some miraculous technology could appear within a decade or two that allows us to decarbonise energy production at a breakneck pace. But this is a blog about risk—and unfortunately there is an appreciable risk that our current emissions path will cook our kids. And to come to that conclusion, you don’t need to rely on the word of others: just look at the data yourself; the key numbers are all here.