Analysis of a carbon forecast gone wrong: the case of the IPCC FAR

Reposted from Dr. Judith Curry’s Climate Etc.

Posted on January 31, 2020 by curryja |

by Alberto Zaragoza Comendador

The IPCC’s First Assessment Report (FAR) made forecasts or projections of future concentrations of carbon dioxide that turned out to be too high.

From 1990 to 2018, the increase in atmospheric CO2 concentrations was about 25% higher in FAR’s Business-as-usual forecast than in reality. More generally, FAR’s Business-as-usual scenario expected much more forcing from greenhouse gases than has actually occurred, because its forecast for the concentration of said gases was too high; this was a problem not only for CO2, but also for methane and for gases regulated by the Montreal Protocol. This was a key reason FAR’s projections of atmospheric warming and sea level rise likewise have been above observations.

Some researchers and commentators have argued that this means FAR’s mistaken projections of atmospheric warming and sea level rise do not stem from errors in physical science and climate modelling. After all, emissions are for climate models an input, not an output. Emissions depend largely on economic growth, and can also be affected by population growth, intentional emission reductions (such as those implemented by the aforementioned Montreal Protocol), and other factors that lie outside the field of physical science. Under this line of reasoning, it makes no sense to blame the IPCC for failing to predict the right amount of atmospheric warming and sea level rise, because that would be the same as blaming it for failing to predict emissions.

This is a good argument regarding Montreal Protocol gases, as emissions of these were much lower than forecasted by the IPCC. However, it’s not true for CO2: the over-forecast in concentrations happened because in FAR’s Business-as-usual scenario over 60% of CO2 emissions remain in the atmosphere, which is a much higher share than has been observed in the real world. In fact, real-world CO2 emissions were probably higher than forecasted by FAR’s Business-as-usual scenario. The only reason one cannot be sure of this is the great uncertainty around emissions of CO2 from changes in land use. For the rest of CO2 emissions, which chiefly come from fossil fuel consumption and are known with much greater accuracy, there is no question they were higher in reality than projected by the IPCC.

In the article I also show that the error in FAR’s methane forecast is so large that it can only be blamed on physical science – any influence from changes in human behaviour or economic activity is dwarfed by the uncertainties around the methane cycle. Thus, errors or deficiencies in physical science are to blame for the over-estimation in CO2 and methane concentration forecasts, along with the corresponding over-estimation in forecasts of greenhouse gas forcing, atmospheric warming, and sea level rise. Human emissions of greenhouse gases may indeed be unpredictable, but this unpredictability is not the reason the IPCC’s projections were wrong.

Calculations regarding the IPCC’s First Assessment Report

FAR, released in 1990, made projections according to a series of four scenarios. One of them, Scenario A, was also called Business-as-usual and represented just what the name implies: a world that didn’t try to mitigate emissions of greenhouse gases. In FAR’s Summary for Policymakers, Figure 5 offered projections of greenhouse-gas concentrations out to the year 2100, according to each of the scenarios. Here’s the panel showing CO2:

I’ve digitized the data, and the concentration in the chart rises from 354.8 ppm in 1990 to 422.75 ppm by 2018; that’s a rise of 67.86 ppm. Please notice that slight inaccuracies are inevitable when digitizing, especially with a document like FAR, which was first printed, then scanned and turned into a PDF.

For emissions, the Annex to the Summary for Policymakers offers a not-very-good-looking chart; a better version is this one (Figure A.2(a) page 331, the Annex to the whole report):

Some arithmetic is needed here. The concentrations chart is in parts per million (ppm), whereas the emissions chart is in gigatons of carbon (GtC); one gigaton equals a billion metric tons. The molecular mass of CO2 (44) is 3.67 times that of carbon (12). Using C or CO2 as the unit is merely a matter of preference – both measures represent the same thing. The only difference is that, when expressing numbers as C, the figures will be 3.67 times smaller than when expressed as CO2. This means that, while one ppm of CO2 weighs approximately 7.81 gigatons of CO2, if we express emissions as GtC rather than GtCO2 the equivalent figure is 7.81 / 3.67 = 2.13.

Under FAR’s Business-as-usual scenario, cumulative CO2 emissions between 1991 and 2018 were 237.61GtC, which is equivalent to 111.55ppm. Since concentrations increased by 67.86ppm, that means 60.8% of CO2 emissions remained in the atmosphere.
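
As a quick sanity check, the arithmetic above can be reproduced in a few lines; these are the digitized figures quoted in the text, so the digitization uncertainty carries through:

```python
# Reproducing FAR's Business-as-usual airborne fraction from the
# digitized figures quoted above.
GTC_PER_PPM = 2.13            # 1 ppm of CO2 ~= 7.81 GtCO2 ~= 2.13 GtC

far_emissions_gtc = 237.61    # cumulative BaU emissions, 1991-2018
far_rise_ppm = 67.86          # digitized concentration rise, 1990-2018

emissions_ppm = far_emissions_gtc / GTC_PER_PPM
print(round(emissions_ppm, 2))                 # ~111.55 ppm
print(round(far_rise_ppm / emissions_ppm, 3))  # ~0.608, i.e. ~61%
```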

Now, saying that a given percentage of emissions “remained in the atmosphere” is just a way to express what happens in as few words as possible; it’s not a literally correct statement. Rather, all CO2 molecules (whether released by humankind or not) are always being moved around in a very complex cycle: some CO2 molecules are taken up by vegetation, others are released by the ocean into the atmosphere, and so on. There is also some interaction with other gases; for example, methane has an atmospheric lifespan of only a decade or so because it decays into CO2. What matters is that, without man-made emissions, CO2 concentrations would not increase. Whether the CO2 molecules currently in the air are “our” molecules, the same ones that came out of burning fossil fuels, is irrelevant.

And that’s where the concept of airborne fraction comes in. The increase in concentrations of CO2 has always been less than man-made emissions, so it could be said that only a fraction of our emissions remains in the atmosphere. Saying that “the airborne fraction of CO2 is 60%” may be technically incorrect, but it rolls off the keyboard more easily than “the increase in CO2 concentrations is equivalent to 60% of emissions”. And indeed the term is commonly used in the scientific literature.

Anyway, we’ve seen what FAR had to say about CO2 emissions and concentrations. Now let’s see what nature said.

Calculations regarding the real world

Here I use two sources on emissions:

  • BP’s Energy Review 2019, which has data up to 2018.
  • Emission estimates from the Lawrence Berkeley National Laboratory. These are only available until 2014.

BP counts only emissions from fossil fuel combustion: the burning of petroleum, natural gas, other hydrocarbons, and coal. And both sources are in very close agreement as far as emissions from fossil fuel combustion are concerned: for the 1991-2014 period, LBNL’s figures are 1% higher than BP’s. The LBNL numbers also include cement manufacturing, because the chemical reaction necessary for producing cement releases CO2; I couldn’t find a similarly authoritative source with more recent data for cement.

There is also the issue of flaring, or burning of natural gas by the oil-and-gas industry itself; these emissions are included in LBNL’s total. BP’s report does not feature the word “flaring”, and it seems unlikely they would be included, because BP’s method for arriving at global estimates of emissions is by aggregating national-level data on fossil fuel consumption. Now, I’ll admit I haven’t emailed every country’s energy statistics agency to be sure of the issue, but flared gas is by definition gas that did not reach energy markets; it’s hard to see why national agencies would include this in their “consumption” numbers, and many countries would have trouble even knowing how much gas is being flared. For what it’s worth, according to LBNL’s estimate flaring makes up less than 1% of global CO2 emissions.

For concentrations, I use data from the Mauna Loa Observatory. CO2 concentration in 1990 was 354.39ppm, and by 2014 this had grown to 398.65 (an increase of 44.26ppm). By 2018, concentrations had reached a level of 408.52 ppm, which meant an increase of 54.13 ppm since 1990.

It follows that the airborne fraction according to these estimates was:

  • In 1991-2014, emissions per LBNL were 182.9GtC, which is equivalent to 85.88 ppm. Thus, the estimated airborne fraction was 44.26 / 85.88 = 51.5%
  • In 1991-2018, emissions according to BP were 764GtCO2, equivalent to 97.82ppm. We get an airborne fraction of 54.13 / 97.82 = 55.3%
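
A minimal sketch of these two calculations, using the conversion factors derived earlier and the figures quoted above:

```python
# Real-world airborne-fraction estimates from the two emission sources.
GTC_PER_PPM = 2.13    # GtC per ppm of CO2
GTCO2_PER_PPM = 7.81  # GtCO2 per ppm of CO2

# LBNL, 1991-2014: 182.9 GtC emitted; Mauna Loa rise of 44.26 ppm
lbnl_fraction = 44.26 / (182.9 / GTC_PER_PPM)

# BP, 1991-2018: 764 GtCO2 emitted; Mauna Loa rise of 54.13 ppm
bp_fraction = 54.13 / (764 / GTCO2_PER_PPM)

print(round(lbnl_fraction, 3))  # ~0.515
print(round(bp_fraction, 3))    # ~0.553
```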

Unfortunately, there is a kind of emissions that isn’t counted by either LBNL or BP. So total emissions have necessarily been higher than estimated above, and the real airborne fraction has been lower – which is what the next section is about.

Comparison of FAR with observations

This comparison has to start with two words: land use.

Remember what we said about the airborne fraction of CO2: it’s simply the increase in concentrations over a given period, divided by the emissions that took place over that period. If you emit 10 ppm and concentrations increase by 6ppm, then the airborne fraction is 60%. But if you made a mistake in estimating emissions and those had been 12ppm, then the airborne fraction in reality would be 50%.

This is an issue because, while we know concentrations with extreme accuracy, we don’t know emissions nearly that well. In particular, there is great uncertainty around emissions from land use: carbon released and stored due to tree-cutting, agriculture, etc. The IPCC itself acknowledged in FAR that estimates of these emissions were hazy; on page 13 it provided the following emission estimates for the 1980-89 period, expressed in GtC per year:

  • Emissions from fossil fuels: 5.4 ± 0.5
  • Emissions from deforestation and land use: 1.6 ± 1.0

So, even though emissions from fossil fuels were believed to be about three-and-a-half times as high as those from land use, in absolute terms the uncertainty around land-use emissions was double that around fossil fuels.

(FAR didn’t break down emissions from cement; these were a smaller share of total emissions in 1990 than today, and presumably were lumped in with fossil fuels. By the way, I believe the confidence intervals reflect a 95% probability, but haven’t found any text in the report actually spelling that out).

Perhaps there was great uncertainty around land-use emissions back in 1990, but has it now been reduced? Well, the IPCC’s Assessment Report 5 (AR5) is a bit old now (it was published in 2013), but it didn’t look like uncertainty had been reduced much. More specifically, Table 6.1 of the report gives a 90% confidence interval for land-use CO2 emissions from 1980 to 2011. And the confidence interval is the same in every period: ± 0.8GtC/year.

Still, it’s possible to make some comparisons. Let’s go first with LBNL: for 1991-2014, emissions according to FAR’s Business-as-usual scenario would be 196.91GtC, which is 14.17GtC more than LBNL’s numbers show. In other words: if real-world land use emissions over the period had been 14.17GtC, then emissions according to FAR would have been the same as according to LBNL. That’s only 0.6GtC/year, which is well below AR5’s best estimate of land use emissions (1.5GtC/year in the 1990s, and about 1GtC/year in the 2000s).

For BP, emissions of 764.8GtCO2 convert to 208.58GtC. Now, to this figure we’d have to add, at a minimum, cement emissions. From 1991 to 2014 these were 7.46GtC, and by 2014 annual emissions from cement were well above 0.5GtC, so even a conservative estimate would put the additional emissions until 2018 at 2GtC, or 9.46GtC in total. This would mean BP’s figures, when adding cement production, give a total of 218.04GtC. I don’t consider flaring here, but according to LBNL those emissions were only about 1GtC.

Therefore BP’s fossil-fuel-plus-cement emissions would be 19.57 GtC lower than the figure for FAR’s Business-as-usual scenario (237.61GtC). For BP’s emissions to have matched FAR’s, real-world land-use emissions would have needed to average 0.7 GtC/year. Again, it seems real-world emissions exceeded this rate, and indeed the figures from AR5’s Figure 6.1 suggest total emissions for 1991-2011 alone were around 25GtC. But just to be clear: it is only likely that real-world emissions exceeded FAR’s Business-as-usual scenario. The uncertainty in land-use emissions means one can’t be sure of that.
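
The BP-side arithmetic can be sketched as follows (the 2 GtC cement add-on for 2015-2018 is the conservative guess described above, not a measured figure):

```python
# Gap between FAR's Business-as-usual emissions and BP-plus-cement,
# 1991-2018, using the figures quoted in the text.
far_bau_gtc = 237.61     # FAR Business-as-usual emissions, 1991-2018
bp_fossil_gtc = 208.58   # BP's 764.8 GtCO2 converted to GtC, as quoted
cement_gtc = 7.46 + 2.0  # 1991-2014 cement plus a conservative 2015-18 guess

gap_gtc = far_bau_gtc - (bp_fossil_gtc + cement_gtc)
implied_land_use = gap_gtc / 28  # GtC/year of land-use emissions needed to close the gap

print(round(gap_gtc, 2))           # ~19.57 GtC
print(round(implied_land_use, 2))  # ~0.7 GtC/year
```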

I’ll conclude this section by pointing out that FAR didn’t break down how many tons of CO2 would come from changes in land use as opposed to fossil fuel consumption, but its description of the Business-as-usual scenario says “deforestation continues until the tropical forests are depleted”. While this statement isn’t quantitative, it seems FAR did not expect the apparent decline in deforestation rates seen since the 1990s. If emissions from land use were lower than expected by FAR’s authors, yet total emissions appear to have been higher, the only possible conclusion is that emissions from fossil fuels and cement were greater than FAR expected.

The First Assessment Report greatly overestimated the airborne fraction of CO2

The report mentions the airborne fraction only a couple of times:

  • For the period from 1850 to 1986, airborne fraction was estimated at 41 ± 6%
  • For 1980-89, its estimate is 48 ± 8%

So according to the IPCC itself, the airborne fraction of CO2 in observations at the time of the report’s publication was 48%, with a confidence interval going no higher than 56%. But the forecast for the decades immediately following the report implied a fraction of 60 or 61%. There is no explanation or even mention of this discrepancy in the report; the closest the IPCC came is this line:

“In model simulations of the past CO2 increase using estimated emissions from fossil fuels and deforestation it has generally been found that the simulated increase is larger than that actually observed”

Further evidence of FAR’s over-estimate of the airborne fraction comes from looking at Scenario B. Under this projection, CO2 emissions would slightly decline from 1990 on, and then make a likewise slight recovery; in all, annual emissions over 1991-2018 would be on average lower than in 1990. But even under this scenario CO2 concentrations would reach 401 ppm by 2018, compared with 408.5ppm in reality and 422ppm in the Business-as-usual scenario.

So real-world CO2 emissions were probably higher than under the IPCC’s highest-emissions scenario, yet concentrations ended up closer to a different scenario in which emissions declined from their 1990 level.

The error in the IPCC’s forecast of methane concentrations was enormous

In this case the calculations I’ve done are rougher than for CO2, but you’ll see it doesn’t really matter. This chart is from FAR’s Summary for Policymakers, Figure 5:

From a 1990 level just above 1700 parts per billion (ppb), concentrations in the Business-as-usual scenario reach about 2500 ppb by 2018. Even in Scenario B methane reaches 2050 ppb by that year. In the real world, concentrations were only about 1850 ppb. In other words:

  • The increase in concentrations in Scenario B was about two-and-a-half times larger than in reality
  • For Scenario A, the concentration increase was five or six times bigger than in the real world
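
Using the rough digitized levels above, the ratios work out as follows (all figures approximate):

```python
# Rough ratios of projected to observed methane rise, 1990-2018,
# from the digitized levels quoted above (all in ppb).
base_1990 = 1700
observed_2018 = 1850
scenario_a_2018 = 2500  # Business-as-usual
scenario_b_2018 = 2050

real_rise = observed_2018 - base_1990  # ~150 ppb
print(round((scenario_b_2018 - base_1990) / real_rise, 1))  # ~2.3x
print(round((scenario_a_2018 - base_1990) / real_rise, 1))  # ~5.3x
```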

The mismatch arose because methane concentrations were growing very quickly in the 1980s, though a slowdown was already apparent; this growth slowed further in the 1990s, and essentially stopped in the early 2000s. Since 2006 or so methane concentrations have been growing again, but at nowhere near the rates forecasted by the IPCC.

Readers may be wondering if perhaps FAR’s projections of methane emissions were very extravagant. Not so: the expected growth in yearly emissions between 1990 and 2018 was about 30%, far less than for CO2. See Figure A.2(b), from FAR’s Annex, page 331:

There’s an obvious reason the methane miss is even more of a head-scratcher. One of the main sources of methane is the fossil fuel industry: methane leaks out of coal mines, gas fields, etc. But fossil fuel consumption grew very quickly during the forecast period – indeed faster than the IPCC expected, as we saw.

It’s also interesting that the differences between emission scenarios were smaller for methane than for CO2. This may reflect a view on the part of the IPCC (which I consider reasonable) that methane emissions are less actionable than those of CO2. If you want to cut CO2 emissions, you burn less fossil fuel: difficult, yet simple. If by contrast you want to reduce methane emissions, it probably helps to reduce fossil fuel consumption, but there are also significant methane emissions from cattle, landfills, rice agriculture, and other sources; even with all the uncertainty around total methane emissions, more or less everybody agrees that non-fossil-fuel emissions are a more important source for methane than for CO2. And it’s not clear how to measure non-fossil-fuel emissions, so it’s far more difficult to act on them.

CO2 and methane appear to account for most of FAR’s over-estimate of forcings

Disclosure: this is the most speculative section of the article. But as with land-use emissions before, it’s a case in which one can make some inferences even with incomplete data.

Let’s start with a paper by Zeke Hausfather and three co-authors; I hope the co-authors don’t feel slighted – I will refer simply to “Hausfather” for short.

Hausfather sets out to answer a question: how well have projections from old climate models done, when accounting for the differences between real-world forcings and projected forcings? This is indeed a very good question: perhaps the IPCC back in 1990 projected more atmospheric warming than has actually happened only because its forecast of forcing was too aggressive. Perhaps the IPCC’s estimates of climate sensitivity, which is to say how much air temperature increases in response to a given level of radiative forcing, were spot on.

(Although Hausfather’s paper focuses on atmospheric temperature increase, the over-projection in sea level rise has been perhaps worse. FAR’s Business-as-usual scenario expected 20 cm of sea level rise between 1990 and 2030, and the result in the real world is looking like it will be about 13 cm).

Looking at the paper’s Figure 2, there are three cases in which climate models made too-warm projections, yet after accounting for differences in realized-versus-expected forcing this effect disappears; the climate models appear to have erred on the warm side because they assumed excessively high forcing. Of the three cases, the IPCC’s 1990 report has arguably had the biggest impact on policy and scientific discussions. And for FAR, the authors estimate (Figure 1) that forecasted forcing was 55% greater than realized: the trend is 0.61 watts per square meter per decade, versus 0.39 in reality. Over the 1990-2017 period, the difference in trends adds up to 0.59 watts per square meter.

Now, there is a lot to digest in the paper, and I hope other researchers dig through the numbers as carefully as possible. I’m just going to assume the authors’ calculations of forcing and temperature increase are correct, but I want to mention why a calculation like this (comparing real-world forcings with the forcings expected by a 1990 document) is a minefield. Even if we restrict ourselves to greenhouse gases, ignoring harder-to-quantify forcing agents such as aerosols, there are at least three issues which make an apples-to-apples comparison difficult. (Hausfather’s Supplementary Information seems to indicate they didn’t account for any of this; they simply took the raw forcing values from FAR.)

First, some greenhouse gases simply weren’t considered in old projections of climate change. The most notable case in FAR may be tropospheric ozone. According to the estimate of Lewis & Curry (2018), forcing from this gas increased by 0.067w/m2 between 1990 and 2016, the last year for which they offer estimates (over the last decade of data forcing was still rising by about 0.002w/m2/year). Just to be sure, you can check Figure 2.4 in FAR (page 56), as well as Table 2.7 (page 57). These numbers do not include tropospheric ozone, but you’ll see the sum of the different greenhouse gases featured equals the total greenhouse forcing expected in the different scenarios. The IPCC did not account for tropospheric ozone at all.

Second, the classification of forcings is somewhat subjective and changes over time. For example, the depletion of stratospheric ozone, colloquially known as the ‘ozone hole’, has a cooling effect (a negative forcing). So, when you see an estimate of the forcing of CFCs and similar gases, you have to ask: is it a gross figure, looking at CFCs only as greenhouse gases? Or is it a net figure, accounting for both their greenhouse effect and their impact on the ozone layer? In modern studies stratospheric ozone has normally been accounted for as a separate forcing, but I’m not sure how FAR did it (no, I haven’t read the whole report).

Finally, even when greenhouse gases were considered and their effects had a more-or-less-agreed classification, our estimates of their effect on the Earth’s radiative budget change over time. For the best-understood forcing agent, CO2, FAR estimated a forcing of 4 watts/m2 if atmospheric concentrations doubled (the forcing from CO2 is approximately the same each time concentration doubles). In 2013, the IPCC’s Assessment Report 5 estimated 3.7w/m2, and now some studies say it’s actually 3.8w/m2. These differences may seem minor, but they’re yet another way the calculation can go wrong. And for smaller forcing agents the situation is worse. Methane forcing, for example, suffered a major revision just three years ago.

Is there a way around the watts-per-square-meter madness? Yes. While I previously described climate sensitivity as the response of atmospheric temperatures to an increase in forcing, in practice climate models estimate it as the response to an increase in CO2 concentrations, and this is also the way sensitivity is usually expressed in studies estimating its value in the real world. Imagine the forcing from a doubling of atmospheric CO2 is 3.8w/m2 in the real world, but some climate model, for whatever reason, produces a value of 3w/m2. Obviously, then, what we’re interested in is not how much warming we’ll get per w/m2, but how much warming we’ll get from a doubling of CO2.

Thus, for example, the IPCC’s Business-as-usual forecast of 9.90 w/m2 in greenhouse forcing by 2100 (from a 1990 baseline) could instead be expressed as equivalent to 2.475 doublings of CO2 (the result of dividing 9.90 by 4). Hausfather’s paper, or a follow-up, could then apply this to all models. Just using some made-up numbers as an illustration, it may be that FAR’s Business-as-usual forecast expected forcing between 1990 and 2017 equivalent to 0.4 doublings of CO2, while in reality the forcing was equivalent to 0.26 doublings. FAR would still have overshot real forcings by around 55%, but the numbers would be easier to interpret than a simple w/m2 measure.
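
A sketch of the proposed conversion; `forcing_to_doublings` is a hypothetical helper, and 4 w/m2 per doubling is FAR’s own value:

```python
# Hypothetical helper: express a forcing increase as equivalent CO2
# doublings, using FAR's value of 4 w/m2 per doubling of CO2.
def forcing_to_doublings(forcing_wm2, f2x=4.0):
    return forcing_wm2 / f2x

# FAR's Business-as-usual greenhouse forcing by 2100, from a 1990 baseline
print(forcing_to_doublings(9.90))  # 2.475 doublings
```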

Now, even with all these caveats, one can make some statements. First, there are seven greenhouse gases counted by FAR in its scenarios, but one of them (stratospheric water vapor) is created through the decay of another (methane). I haven’t checked whether water vapor forcing according to FAR was greater than in the real world, but if that happened the blame lies with FAR’s inaccurate methane forecast; in any case stratospheric H2O is a small forcing agent and did not play a major role in FAR’s forecasts.

Then there are three gases regulated by the Montreal Protocol, which I will consider together: CFC-11, CFC-12, and HCFC-22. That leaves us with four sources to be considered: CO2, methane, N2O, and Montreal Protocol gases. In previous sections of the article we already saw CO2 and methane, so let’s turn to the two remaining sources of greenhouse forcing. I use 2017 as the finishing year, for comparison with Hausfather’s paper. The figures for real-world concentrations and forcings come from NOAA’s Annual Greenhouse Gas Index (AGGI).

For N2O, Figure A.3 in FAR’s page 333 shows concentrations rising from about 307ppb in 1990 to 334 ppb by 2017. This is close to the level that was observed (2018 concentrations averaged about 332 ppb). And even a big deviation in the forecast of N2O concentration wouldn’t have a major effect on forcing; FAR’s Business-as-usual scenario expected forcing of only about 0.036w/m2 per decade, which would mean roughly 0.1w/m2 for the whole 1990-2017 period. Deviations in the N2O forecast may have accounted for about 0.01w/m2 of the error in FAR’s forcing projection – surely there’s no need to keep going on about this gas.

Finally, we have Montreal Protocol gases and their replacements: CFCs, HCFCs, and in recent years HFCs. To get a sense of their forcing effect in the real world, I check NOAA’s AGGI and sum the columns for CFC-11, CFC-12, and the 15 minor greenhouse gases (almost all of that is HCFCs and HFCs). The forcing thus aggregated rises from 0.284w/m2 in 1990 to 0.344 w/m2 in 2017; in other words, the increase in forcing from these gases between those years was 0.06 w/m2.

Here’s where Hausfather and co-authors have a point: the world really did emit far smaller quantities of CFCs and HCFCs than FAR’s Business-as-usual projection assumed. In FAR’s Table 2.7 (page 57), the aggregated forcing of CFC-11, CFC-12 and HCFC-22 rises by 0.24w/m2 between 2000 and 2025. And the IPCC expected accelerating growth: the sum of the forcings from these three gases would then increase by 0.28w/m2 between 2025 and 2050.

A rough calculation of what this implies for forcing between 1990 and 2017 now follows. In 2000-2025 FAR expected Montreal Protocol gases to add 0.0096 w/m2/year of forcing; multiplied by the 27 years that we’re analysing, that would mean 0.259w/m2. However, forcing growth was supposed to be slower over the first period than later, as we’ve seen; Table 2.6 in FAR’s page 54 also implies smaller growth in 1990-2000 than after 2000. So I round the previously-calculated figure down to 0.25w/m2; this is probably higher than the actual increase FAR was forecasting, but I cannot realistically make an estimate down to the last hundredth of a watt, so it will have to do.
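
The rough calculation can be written out as follows (figures as quoted in the text):

```python
# Rough estimate of FAR's expected 1990-2017 Montreal Protocol forcing,
# following the text's arithmetic.
rate_2000_2025 = 0.24 / 25        # w/m2 per year implied by FAR's Table 2.7
naive_27yr = rate_2000_2025 * 27  # ignores the slower pre-2000 growth

far_expected = 0.25               # the text's rounded-down figure
observed_rise = 0.344 - 0.284     # AGGI-based increase, 1990 to 2017

print(round(naive_27yr, 3))                    # ~0.259 w/m2
print(round(far_expected - observed_rise, 2))  # ~0.19 w/m2 overshoot
```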

If FAR expected 1990-2017 forcing from Montreal Protocol gases of 0.25w/m2, that would mean the difference between the real world and FAR’s Scenario A was 0.25 – 0.06 = 0.19w/m2. I haven’t accounted here for these gases’ effect on stratospheric ozone, as it wasn’t clear whether that effect was already included in FAR’s numbers. If stratospheric ozone depletion hadn’t been accounted for, then the deviation between FAR’s numbers and reality would be smaller.

Readers who have made it to this part of the article probably want a summary, so here it goes:

  • Hausfather estimates that FAR’s Business-as-usual scenario over-projected forcings for the 1990-2017 period by 55%. This would mean a difference of 0.59 w/m2 between FAR and reality.
  • Lower-than-expected concentrations of Montreal Protocol gases explain about 0.19 w/m2 of the difference. With the big caveat that Montreal Protocol accounting is a mess of CFCs, HCFCs, HFCs, stratospheric ozone, and perhaps other things I’m not even aware of.
  • FAR didn’t account for tropospheric ozone, and this ‘unexplains’ about 0.07 w/m2. So there’s still 0.45-0.5 w/m2 of forcing overshoot coming from something else, if Hausfather’s numbers are correct.
  • N2O is irrelevant in these numbers
  • CO2 concentration was significantly over-forecasted by the IPCC, and that of methane grossly so. It’s safe to assume that methane and CO2 account for most or all of the remaining difference between FAR’s projections and reality.
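
Putting the bullet points together, the residual attributed to CO2 and methane follows from simple subtraction (all figures are the rough estimates discussed above):

```python
# Rough reconciliation of FAR's forcing overshoot, 1990-2017,
# using the article's own approximate numbers.
total_overshoot = 0.59  # Hausfather's FAR-minus-reality estimate, w/m2
montreal = 0.19         # explained by lower-than-expected CFCs/HCFCs
trop_ozone = -0.07      # FAR's omission works in the opposite direction

residual = total_overshoot - montreal - trop_ozone
print(round(residual, 2))  # ~0.47 w/m2 left, mostly CO2 and methane
```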

Again, this is a rough calculation. As mentioned before, an exact calculation has to take into account many issues I didn’t consider here. I really hope Hausfather’s paper is the beginning of a trend in properly evaluating climate models of the past, and that means properly accounting for (and documenting) how expected forcings and actual forcings differed.

By the way: this doesn’t mean climate action failed

There is a tendency to say that, since emissions of CO2 and other greenhouse gases are increasing, policies intended to reduce or mitigate emissions have been a failure. The problem with such an inference is obvious: we don’t know whether emissions would have been even higher in the absence of emissions reductions policies. Emissions may grow very quickly in an economic boom, even if emission-mitigation policies are effective; on the other hand, even with no policies at all, emissions obviously decline in economic downturns. Looking at the metric tons of greenhouse gases emitted is not enough.

Dealing specifically with the IPCC’s First Assessment Report, its emission scenarios used a common assumption about future economic and population growth; however, the description is so brief and vague as to be useless.

“Population was assumed to approach 10.5 billion in the second half of the next century. Economic growth was assumed to be 2-3% annually in the coming decade in the OECD countries and 3-5 % in the Eastern European and developing countries. The economic growth levels were assumed to decrease thereafter.”

So it’s impossible to say what amount of emissions FAR expected per unit of economic or population growth. The question ‘are climate policies effective?’ can’t be answered by FAR.

Conclusions

The IPCC’s First Assessment Report greatly overestimated future rates of atmospheric warming and sea level rise in its Business-as-usual scenario. This projection also overestimated the rate of radiative forcing from greenhouse gases. A major part of the mis-estimation of greenhouse forcing happened because the world clamped down on CFCs and HCFCs much more quickly than the report’s projections assumed. This was not a mistake of climate science, but simply a failure to foresee changes in human behaviour.

However, the IPCC also made other errors or omissions, which went the other way: they tended to reduce forecasted forcing and warming. Its Business-as-usual scenario featured CO2 emissions probably lower than those that have actually taken place, and its forcing estimates didn’t include tropospheric ozone.

This means that the bulk of the error in FAR’s forecast stems from two sources:

  • The fraction of CO2 emissions that remained in the atmosphere was, in the projection, much higher than has been observed, either at the time of the report’s publication or since then. There are uncertainties around the real-world airborne fraction, but the IPCC’s figure of 61% is about one-third higher than emission estimates suggest. As a result, CO2 concentrations grew 25% more in FAR’s Business-as-usual projection than in the real world.
  • The methane forecast was hopeless: methane concentrations in FAR’s Business-as-usual scenario grew five or six times more than has been observed. It’s still not clear where exactly the science went wrong, but a deviation of this size cannot be blamed on some massive-yet-imperceptible change in human behaviour.

These are purely problems of inadequate scientific knowledge, or a failure to apply scientific knowledge in climate projections. Perhaps by learning about the mistakes of the past we can create a better future.

Data

This Google Drive folder contains three files:

  • BP’s Energy Review 2019 spreadsheet (original document and general website)
  • NOAA’s data on CO2 concentrations from the Mauna Loa observatory (original document)
  • My own Excel file with all the calculations. This includes the raw digitized figures on CO2 emissions and concentrations from the IPCC’S First Assessment Report.

The emission numbers from LBNL are available here. I couldn’t figure out how to download a file with the data, so these figures are included in my spreadsheet.

NOAA’s annual greenhouse gas index (AGGI) is here. For comparisons of methane and N2O concentrations in the real world with the IPCC’s forecasts, I used Figure 2.

The IPCC’s First Assessment Report, or specifically the part of the report by Working Group 1 (which dealt with the physical science of climate change), is here. The corresponding section of Assessment Report 5 is here.