Yesterday, scientists from the Universities of Exeter and Oxford, together with University College London, announced that climate change is not as severe as previously thought, owing to modelling errors. In a paper published in the journal Nature Geoscience, the scientists concluded that the global climate models used in the 2013 report of the Intergovernmental Panel on Climate Change (IPCC) tend to overestimate the extent of warming that has already occurred. (The paper can be found here, with supplementary data here.)

“An important uncertainty within this study is the specification of the attributable warming in the present-day climate state.”

The researchers used three approaches to evaluate the outstanding ‘carbon budget’ (the total amount of CO2 emissions compatible with a given level of global average warming) for 1.5°C. In all cases the emissions and warming to date were taken into account, and they found that the amount of carbon humanity can emit from 2015 onwards while holding temperatures below 1.5°C is nearly three times greater than the IPCC’s estimate – or even more if there is aggressive action on greenhouse gases other than carbon dioxide.
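The carbon-budget framing reduces to simple arithmetic: divide the remaining budget by the annual emission rate to get the number of years left at current rates. A minimal sketch using illustrative round numbers (roughly 800 GtCO2 remaining and 40 GtCO2 emitted per year; both figures are assumptions for illustration, not values taken from the paper):

```python
# Illustrative carbon-budget arithmetic. The numbers below are
# assumed round figures, not values from the Nature Geoscience paper.
remaining_budget_gtco2 = 800.0   # assumed remaining budget for 1.5°C, from 2015
annual_emissions_gtco2 = 40.0    # assumed current global CO2 emissions per year

# Years of emissions at the current rate before the budget is exhausted
years_remaining = remaining_budget_gtco2 / annual_emissions_gtco2
print(f"Years remaining at current emission rates: {years_remaining:.0f}")
```

With these assumed figures the sketch gives about 20 years, which is the same order of magnitude as the authors' own statement quoted in the comments below.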

“Previous estimates of the remaining 1.5°C carbon budget based on the IPCC 5th Assessment were around four times lower, so this is very good news for the achievability of the Paris targets,”
Professor Pierre Friedlingstein, University of Exeter.

The IPCC models suggest that global temperatures should be 1.3°C above mid-19th century levels, whereas recent data suggest the increase is only 0.9-1.0°C.

“We haven’t seen that rapid acceleration in warming after 2000 that we see in the models. We haven’t seen that in the observations,”
Myles Allen, professor of geosystem science at the University of Oxford, quoted in The Times.

The same article explains that the original forecasts were based on twelve separate computer models, built by universities and government institutes around the world and compiled a decade ago, so it is perhaps unsurprising that there is a degree of divergence between the models and observable data.


Paul Homewood writing on the blog Not A Lot of People Know That makes the following comments:

  • We have known for several years that the climate models have been running far too hot. This rather belated admission is welcome, but a cynic would wonder why it was not made before Paris.
  • I suspect part of the motivation is to keep Paris on track. Most observers, including even James Hansen, have realised that it was not worth the paper it was written on. This new study is designed to restore the belief that the original climate targets can be achieved, via Paris and beyond.
  • Although they talk of the difference between 0.9°C and 1.3°C, the significance is much greater. Making the reasonable assumption that a significant part of the warming since the mid-19th century is natural, this means that any AGW signal is much smaller than previously thought.
  • Given that they now admit they have got it so wrong, why should we be expected to have any faith at all in the models?
  • Finally, we must remember that temperatures since 2000 have been artificially raised by the recent record El Niño, and the ongoing warm phase of the AMO.


It is encouraging that this study highlights the disparity between the IPCC’s climate models and observable data. As I have noted previously, it is difficult to reconcile the IPCC’s conclusions about anthropogenic climate change with the fact that the data do not support the predictions of the climate models.

“For the period from 1998 to 2012, 111 of the 114 available climate-model simulations show a surface warming trend larger than the observations. There is medium confidence [66-100%] that this difference between models and observations is to a substantial degree caused by natural internal climate variability, which sometimes enhances and sometimes counteracts the long-term externally forced warming trend….”
IPCC 2014 Climate Change Report.

While the authors of the new study approached their work in terms of demonstrating that the Paris climate accord targets are achievable, this should not be taken as a signal to progress with aggressive de-carbonisation policies. The counter-argument to current climate change orthodoxy is that rising levels of atmospheric CO2 are the result of, and not the cause of, rises in global temperature (the data are presented and explained here with reference to the Vostok ice core samples).

If this alternative hypothesis is correct, it would explain the difference between the climate models and the observable data, meaning that expensive de-carbonisation schemes are unnecessary and will not affect climate change.


5 thoughts on “Further evidence that climate models overstate global warming”

  1. Hi Kathryn,

    One of the authors of the paper you quote (Dr Richard Millar) made a blog post yesterday (https://www.carbonbrief.org/guest-post-why-the-one-point-five-warming-limit-is-not-yet-a-geophysical-impossibility) relating to the paper entitled, “Why the 1.5C warming limit is not yet a geophysical impossibility”.

    In this blog post he states:

    “Our estimates suggest that we would have a remaining carbon budget equivalent to around 20 years at current emissions rates for a 2-in-3 chance of restricting end-of-century warming to below 1.5C.”

    and

    “However, in general, ESM projections don’t match exactly with how much warming we’ve seen – they display slightly more warming for slightly less cumulative CO2 than we’ve seen in the real world.

    Taken on their own, these discrepancies are relatively modest and are unsurprising given the inherent challenges in constructing computer models of the entire Earth system. But they can substantially affect estimates of the remaining carbon budget for 1.5C.”

    Nothing in the original paper nor in the blog post suggests the authors support your statement, “As I have noted previously, it is difficult to reconcile the IPCC’s conclusions about anthropogenic climate change, and the fact that the data do not support the predictions of the climate models.”

    Scientific consensus remains essentially unanimous in concluding that anthropogenic climate change is both real and detrimental. The “CO2 lags warming” argument expounded by Euan Mearns that you quote has already been explained by taking into account the Milankovitch cycles and their consequential effects. It is accessibly explained here: https://www.skepticalscience.com/co2-lags-temperature-intermediate.htm together with some helpful graphs.

    In summary: (a) no new technical argument has yet navigated the scientific method to change the global expert consensus; (b) any model is by definition imperfect, but that doesn’t mean models aren’t useful; and (c) the authors of the quoted paper intended to point out that the 1.5°C target is still achievable, and did not cast doubt on the scientific consensus.

    1. Hi Stefano,

      I get that the authors do not believe their study changes the “accepted” relationship between CO2 and temperature, but I can’t get past this:

      “111 of the 114 available climate-model simulations show a surface warming trend larger than the observations”

      I cannot understand a scientific process in which the models fail so dramatically but no effort is made to adjust them. This new study further underlines the problems with the IPCC modelling, which matters because the IPCC’s conclusions are used to motivate energy policy in relation to climate change.

      In terms of the importance or otherwise of Milankovitch cycles, that is addressed by Euan Mearns here: http://euanmearns.com/the-vostok-ice-core-and-the-14000-year-co2-time-lag/

      He concludes:

      “The overall pattern of glacial cycles are controlled by Milankovitch orbital cycles, but these alone are too weak to account for the large temperature fluctuations.”

      There is further discussion in the comments which is worth reading.

      For me, the key take-aways are:
      1. Climate systems are complex and chaotic;
      2. The relationship between CO2 and temperature is complex and non-linear;
      3. Models based on the CO2 leads temperature argument are failing to predict actual observed temperature variations;
      4. A simplistic “CO2 is to blame for climate change” narrative has been allowed to emerge which is not well grounded in science but is leading to extremely expensive policy choices.

      1. Hi Kathryn,

        I agree with your conclusions 1 and 2, but I think we diverge with 3 and are far apart on 4.

        On failing to predict actual observations, our friend Dr Millar explained that, “…these discrepancies are relatively modest and are unsurprising…”. Anyone who has undertaken modelling of any kind knows that simplifications are needed, particularly in complex systems, to avoid spending infinite time. What matters is to map out the possible spread of outcomes, often displayed probabilistically. I think you’ll agree that the chart in Dr Millar’s post clearly shows a spread but also an upward trend, hence their conclusion that it is still possible, although challenging, to stay within the 1.5°C limit.

        The article I pointed to also agrees with Mearns that the Milankovitch cycles by themselves are too weak to account for the temperature changes, but it goes on to explain how this relatively weak effect is amplified by collateral consequences in the complex global system and explains the observations adequately.

        It seems we’re agreeing that because it’s complicated stuff, it’s hard to perfectly predict detailed changes in the global climate. Where we diverge the most is on your conclusion 4: you argue that the interactions between atmospheric CO2 concentrations and climate change are not grounded in science because the wide range of models of the exceedingly complex climate system are imperfect and, by implication, valueless. This seems to ignore the fact that these models are definitely not ‘simplistic’, that quite a few very clever people spend their entire careers trying to improve them, and that, on aggregate, the global scientific community accepts the models as ‘good enough’ to support the opposite conclusion.

  2. A first-rate mind proceeds from agreed assumptions via impeccable logic.
    The second-rate mind reiterates received wisdom.
    The third-rate mind merely appeals to authority (‘consensus’).

    (There is, when you examine it closely, no scientific consensus on ‘anthropogenic climate change’ whatsoever: opinions [when you can extract them] vary from ‘dire catastrophe’ to ‘unmeasurable in its effect on natural climate change’. Most scientists of my acquaintance are deeply worried by what they see as a shocking abuse of science, but they are afraid to speak out in public for fear of losing their jobs. That is a way of achieving ‘consensus’ that is only used by totalitarians.)

    1. For me that’s another source of unease with the consensus. I came to my thinking by questioning the process, and then understanding something of the science…I was uneasy about the “denier” language and the scorn with which scepticism is treated. Scientific discourse shouldn’t be about shouting people down, and if that’s what you need to rely on, then probably the science itself is weak.
