  • Appraising Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates « Climate Audit
… solar, which is further adrift.

1. Kate Marvel, Gavin A. Schmidt, Ron L. Miller and Larissa S. Nazarenko et al.: Implications for climate sensitivity from the response to individual forcings. Nature Climate Change, DOI: 10.1038/NCLIMATE2888. The paper is pay-walled, but the Supplementary Information (SI) is not.

2. The Historical simulations have an average temperature anomaly of 0.84°C for 1996-2005, relative to 1850, whereas HadCRUT4v4 shows an increase of 0.73°C from 1850-1859 to 1996-2005, and Figure 7 of Miller et al. 2014 shows consistently greater warming for GISS-E2-R than per GISTEMP since 2000. The same simulations show average ocean heat uptake of 0.84 W/m² over 1996-2005 (mean slope estimate), compared to 0.40 W/m² using AR5 Box 3.1, Figure 1 data, or 0.67 W/m² using NOAA (Levitus et al. 2012) data.

3. Hansen, J., et al., 2005: Efficacy of climate forcings. J. Geophys. Res., 110, D18104, doi:10.1029/2005JD005776.

4. Chapter 8 of AR5 is available here.

5. See Section 10.8.1 in Chapter 10 of AR5 for a discussion of the use of these equations in estimating TCR and ECS.

6. Miller, R. L., et al.: CMIP5 historical simulations (1850-2012) with GISS ModelE2. J. Adv. Model. Earth Syst., 6, 441-477, 2014.

7. Or with climate state, but feedbacks vary little with climate state (within limits) in most GCMs.

8. I estimate GISS-E2-R's effective climate sensitivity applicable to the historical period as 1.9°C, and its ERF F_2xCO2 as 4.5 W/m², implying a climate feedback parameter of 2.37 W/m²/K, based on a standard Gregory-plot regression of (ΔF − ΔN) on ΔT for 35 years following an abrupt quadrupling of CO2 concentration. The efficacy-weighted mean period from the imposition of incremental forcing to the end of the historical period is of this order. I also estimate the model's effective climate sensitivity as 2.0°C from regressing the same variables over the first 100 years of its 1% p.a. CO2 increase simulation; this estimate is little affected by the F_2xCO2 value. (A sketch of this type of regression, using illustrative data, appears after these notes.)

9. Miller et al. 2014 noted a 15% increase in GHG forcing in GISS ModelE2 compared to the CMIP3 version, ModelE, despite their forcing (RF) for a doubling of CO2 being nearly identical, but were unable to identify the cause.

10. The 0.86 divisor comes from the coefficient on the integral of the TOA imbalance anomaly ΔN when regressing the ocean heat content (OHC) anomaly against both that integral and time, thus isolating any fixed offset between ΔQ and ΔN that may exist. (A sketch of this regression also appears after these notes.)

11. The 1996-2005 ΔT for the sum of the six single-forcing cases is 0.76°C, compared to 0.84°C for Historical (all forcings). For iRF, the corresponding ΔF values from the archived data are 2.53 W/m² and 2.75 W/m²; however, the values plotted are 2.74 W/m² and 3.05 W/m² respectively. For ERF, the sum-of-single-forcings and Historical ΔF values from the data are respectively 2.99 W/m² and 2.84 W/m², but the values plotted in Figure 1c are 3.03 W/m² and 2.93 W/m².

12. Otto et al. used regression-based estimates of ERF in multiple CMIP5 models. Lewis and Curry used estimates from Table AII.1.2 of AR5, which are stated to be ERFs but in most cases (aerosol forcing being the most notable exception) are assessed to be the same as their RFs.

13. The AR5 Glossary (Annex III) states: "The traditional radiative forcing is computed with all tropospheric properties held fixed at their unperturbed values, and after allowing for stratospheric temperatures, if perturbed, to readjust to radiative-dynamical equilibrium. Radiative forcing is called instantaneous if no change in stratospheric temperature is accounted for." And early in Chapter 8 it says that RF is hereafter taken to mean the stratospherically adjusted RF.

14. However, Hansen 2005 found that only in the cases of aerosol and BCsnow forcing was there a major difference between RF and ERF. AR5, after surveying a wider range of evidence, reached similar conclusions and accordingly in other cases estimated ERF to be the same as RF, with an implied efficacy estimate of one, but gave wider ranges for ERF to allow for uncertainty in the relationship between ERF and RF.

15. AR5 states (Section 7.5.1 of Chapter 7): "it is inherently difficult to separate RFaci from subsequent rapid cloud adjustments either in observations or model calculations. For this reason estimates of RFaci are of limited interest and are not assessed in this report."

16. Transient efficacy estimates using iRF, based respectively on unconstrained decadal regression from 1906-15 to 1996-2005 (as in Marvel et al.), on changes from 1850 to 1996-2005, and on zero-intercept regression, are: LU 3.89, 1.64, 1.03; Oz 0.60, 0.57, 0.70; SI 1.53, 1.68, 1.82; and VI 0.56, 26.45, 0.31. In principle, using changes is preferable to zero-intercept regression for transient estimation because of the cold-start issue, but the superior noise suppression of zero-intercept regression leads to more consistent estimation when forcing is small.

17. Schmidt, G. A., et al., 2014: Configuration and assessment of the GISS ModelE2 contributions to the CMIP5 archive. J. Adv. Model. Earth Syst., 6, 141-184, doi:10.1002/2013MS000265.

18. The GHG forcing in 1996-2005 is 10% higher in ERF than in iRF terms. GHG forcing in 1996-2005 was dominated by CO2, and Hansen 2005 found GHG had an efficacy of very close to one, both in terms of Fs (which is very similar to ERF) and using iRF (1.02 and 1.04 respectively). That suggests scaling the actual F_2xCO2 iRF of 4.1 W/m² by the ratio of Marvel et al.'s iRF and ERF values for GHG forcing, which implies a 10% higher F_2xCO2 ERF of 4.52 W/m². That value is also in line with the F_2xCO2 of 4.53 W/m² estimated from a Gregory-plot regression over the 35 years following an abrupt quadrupling of CO2.

19. There were no material differences between the digitised and data values for ΔT, so I used only the data values, which were more precise. Note that Marvel et al. do not specify whether, for ERF efficacy estimates, ensemble means are taken before or after calculating quotients. As only a single forcing value is given, and ensemble means were taken before regressing in the iRF case, I have assumed the former, which also seems more appropriate.

20. Lewis, N., Curry, J.A., 2014: The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Clim. Dyn., DOI: 10.1007/s00382-014-2342-y. Non-typeset version available here.

21. Shindell, D. T., et al., 2013: Interactive ozone and methane chemistry in GISS-E2 historical and future climate simulations. Atmos. Chem. Phys., 13, 2653-2689. This study found that iRF ozone forcing from 1850 to 2000 was 0.28 W/m² when the climate state was allowed to evolve in line with the Historical simulation and 0.22 W/m² when a fixed present-day climate was used, and ERF was calculated as 0.22 W/m². These values are substantially below those used in Marvel et al., of 0.45 W/m² (iRF) and 0.38 W/m² (ERF). Substituting Shindell et al.'s values for Marvel et al.'s would raise the ozone iRF and ERF transient efficacy values to 0.92 and 1.18 respectively.

22. If one excludes LU run 1, no individual run for any forcing (including Historical) produces a 1950-2005 mean GMST response that differs by more than 0.031°C from the ensemble-mean response for that forcing. But for LU run 1 the difference is 0.134°C, and would be 0.168°C were run 1 excluded from the ensemble mean.

23. Chapter 8 of AR5, referring to a seven-model study, states that "There is no agreement on the sign of the temperature change induced by anthropogenic land use change" and concludes that a net cooling of the surface, accounting for processes that are not limited to the albedo, is "about as likely as not".

24. Schmidt, H., et al., 2012: Solar irradiance reduction to counteract radiative forcing from a quadrupling of CO2: climate responses simulated by four earth system models. Earth Syst. Dynam., 3, 63-78.

25. The GISS-E2-R increase in GHG ERF is 3.39 W/m². The 1850-2000 increase in GHG RF (and ERF) per AR5 Table AII.1.2 is 2.25 W/m², but I use the higher 1842-2000 increase of 2.30 W/m², since the 1850 CO2 concentration in GISS ModelE2 was first reached in 1842 according to the AR5 data. (A worked version of the CO2 stripping-out comparison discussed in the comments appears after these notes.)

26. I calculate TCR and ECS values, as shown in the table below, from the efficacies stated in Marvel et al.'s SI Table 1 (digitising from their Figure 1 for GHG). "E = 1" means assuming all efficacies are one. (A sketch of the efficacy-weighted energy-budget arithmetic appears after these notes.)

   Median estimates                            Shindell 2014         Lewis and Curry 2014    Otto et al. 2013
                                              E=1   iRF    ERF      E=1   iRF    ERF        E=1   iRF    ERF
   TCR, as stated in SI Table 3               1.4   2.0    1.9      1.3   1.6    1.7        1.3   1.8    1.8
   TCR, from SI Table 1 (GHG from Fig. 1)      -    1.98   1.58      -    1.92   1.60        -    1.92   1.69
   ECS, as stated in SI Table 3               2.1   4.0    3.6      1.5   2.0    2.3        2.0   2.9    3.4
   ECS, from SI Table 1 (GHG from Fig. 1)      -    3.88   3.48      -    2.77   2.73        -    3.90   3.78

27. Sokolov, A. P., 2005: Does model sensitivity to changes in CO2 provide a measure of sensitivity to other forcings? J. Climate, 19, 3294-3305.

28. Shindell, D.T., 2014: Inhomogeneous forcing and transient climate sensitivity. Nature Clim. Chg., DOI: 10.1038/NCLIMATE2136.

29. Ocko, I.B., V. Ramaswamy and Y. Ming, 2014: Contrasting climate responses to the scattering and absorbing features of anthropogenic aerosol forcings. J. Climate, 27, 5329-5345.

30. Kummer, J. R., and A. E. Dessler, 2014: The impact of forcing efficacy on the equilibrium climate sensitivity. GRL, 10.1002/2014GL060046.

Update: Data and calculations are available here in Excel form.
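For readers who want to see the mechanics of the calculation described in note 8, here is a minimal sketch of a Gregory-plot regression: (ΔF − ΔN) is regressed on ΔT over the years following an abrupt CO2 quadrupling, the slope is the climate feedback parameter λ, and effective sensitivity follows as F_2xCO2 / λ. The arrays below are illustrative placeholders, not GISS-E2-R output, and the 9.0 W/m² value for the 4xCO2 forcing is an assumption made only for this example.

```python
import numpy as np

def gregory_feedback(dF, dN, dT):
    """Regress (dF - dN) on dT; the slope is the climate feedback
    parameter lambda, in W/m^2/K (cf. note 8)."""
    lam, intercept = np.polyfit(dT, dF - dN, 1)
    return lam, intercept

# Illustrative placeholder series for 35 years after an abrupt 4xCO2 step
# (synthetic, not actual GISS-E2-R output).
rng = np.random.default_rng(0)
years = 35
F_4xCO2 = 9.0                    # assumed ERF of 4xCO2 for this example, W/m^2
true_lambda = 2.37               # W/m^2/K, the value note 8 derives for GISS-E2-R
dT = 1.5 + 2.0 * (1 - np.exp(-np.arange(1, years + 1) / 10.0))  # warming path, K
dN = F_4xCO2 - true_lambda * dT + rng.normal(0, 0.3, years)     # TOA imbalance, W/m^2
dF = np.full(years, F_4xCO2)     # forcing held constant after the step

lam, _ = gregory_feedback(dF, dN, dT)
F_2xCO2 = F_4xCO2 / 2.0          # simple halving, ignoring any forcing non-linearity
print(f"lambda ~ {lam:.2f} W/m2/K, effective sensitivity ~ {F_2xCO2 / lam:.2f} K")
```

With the placeholder values above this recovers a feedback parameter near 2.37 W/m²/K and an effective sensitivity near 1.9°C, matching the figures quoted in note 8.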
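Note 10 describes recovering the ratio between ocean heat uptake and TOA imbalance by regressing the OHC anomaly on both the time-integral of ΔN and time. The sketch below shows that bivariate regression; the series are synthetic stand-ins, with a "true" coefficient of 0.86 built in purely so the recovery can be checked, and are not GISS-E2-R data.

```python
import numpy as np

# Synthetic stand-ins: annual TOA imbalance anomaly (W/m^2) and an OHC-like
# series that absorbs 86% of the integrated imbalance plus a small drift.
rng = np.random.default_rng(1)
years = np.arange(156.0)                       # e.g. 1850-2005
dN = 0.005 * years + rng.normal(0, 0.2, years.size)
int_dN = np.cumsum(dN)                         # time-integral of the imbalance
ohc = 0.86 * int_dN + 0.01 * years + rng.normal(0, 0.5, years.size)

# Regress OHC on [integral of dN, time, constant]; the first coefficient is the
# divisor discussed in note 10, isolating any fixed offset between dQ and dN.
X = np.column_stack([int_dN, years, np.ones_like(int_dN)])
coefs, *_ = np.linalg.lstsq(X, ohc, rcond=None)
print(f"coefficient on integrated dN ~ {coefs[0]:.2f}")
```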
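Note 25, and the reply to Sue in the comments below, compare the 1842-2000 GHG forcing increase in GISS-E2-R (3.39 W/m² ERF) with the AR5 figure (2.30 W/m²) after stripping out the CO2 contribution in each case. The short calculation below reproduces that arithmetic using the standard logarithmic CO2 forcing approximation and each dataset's own F_2xCO2; the CO2 concentrations are approximate round values assumed for illustration.

```python
import math

def co2_forcing(c_end, c_start, f_2xco2):
    """Logarithmic CO2 forcing approximation: F = F_2xCO2 * log2(C/C0)."""
    return f_2xco2 * math.log(c_end / c_start, 2)

# Approximate CO2 concentrations (ppm); 1842 is used because, per note 25,
# the 1850 GISS ModelE2 concentration was first reached in 1842 in the AR5 data.
c_1842, c_2000 = 285.0, 369.0

ghg_total = {"AR5": 2.30, "GISS-E2-R": 3.39}   # 1842-2000 GHG forcing increase, W/m^2
f_2xco2 = {"AR5": 3.71, "GISS-E2-R": 4.1}      # CO2 doubling forcing used for each

for src in ghg_total:
    co2_part = co2_forcing(c_2000, c_1842, f_2xco2[src])
    non_co2 = ghg_total[src] - co2_part
    print(f"{src}: CO2 ~ {co2_part:.2f} W/m2, non-CO2 GHG ~ {non_co2:.2f} W/m2")
```

With these inputs the CO2 contributions come out near 1.38 and 1.53 W/m², leaving roughly 0.92 W/m² (AR5) versus 1.86 W/m² (GISS-E2-R) for the other long-lived GHGs, the factor-of-two difference discussed in the comments.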
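Finally, note 26 derives TCR and ECS by weighting the historical forcing change by the efficacies before applying the usual energy-budget formulas. The sketch below shows the general mechanics of that calculation; the forcing breakdown, efficacies, ΔT, ΔQ and F_2xCO2 are placeholders chosen only for illustration, not the values used by Marvel et al. or by the observational studies.

```python
# Hedged sketch of an efficacy-weighted energy-budget calculation of the kind
# described in note 26.  All numbers are placeholders, not Marvel et al. values.
forcings = {          # change in forcing by agent, W/m^2 (placeholders)
    "GHG": 2.8, "aerosol": -0.9, "ozone": 0.35,
    "land_use": -0.15, "solar": 0.05, "volcanic": -0.1,
}
efficacy = {          # transient efficacy by agent (placeholders)
    "GHG": 1.0, "aerosol": 1.0, "ozone": 0.6,
    "land_use": 4.0, "solar": 1.0, "volcanic": 0.6,
}
dT = 0.75             # GMST change over the period, K (placeholder)
dQ = 0.40             # change in heat uptake, W/m^2 (placeholder)
F_2xCO2 = 3.71        # forcing for doubled CO2, W/m^2 (AR5-style value)

dF_raw = sum(forcings.values())
dF_eff = sum(efficacy[k] * forcings[k] for k in forcings)   # efficacy-weighted forcing

tcr = F_2xCO2 * dT / dF_eff
ecs = F_2xCO2 * dT / (dF_eff - dQ)
print(f"raw dF = {dF_raw:.2f} W/m2, efficacy-weighted dF = {dF_eff:.2f} W/m2")
print(f"TCR ~ {tcr:.2f} K, effective ECS ~ {ecs:.2f} K")
```

Because the large land-use efficacy multiplies a negative forcing, the efficacy-weighted forcing is smaller than the raw sum, which is exactly the mechanism by which Marvel et al.'s efficacies push the diagnosed TCR and ECS upward.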
This entry was written by niclewis, posted on Jan 8, 2016 at 4:42 PM, filed under Uncategorized and tagged Climate sensitivity, Efficacy. Bookmark the permalink. Follow any comments here with the RSS feed for this post. Post a comment or leave a trackback: Trackback URL.
« Update of Model-Observation Comparisons | Bob Carter »

83 Comments

michael hart, Posted Jan 8, 2016 at 6:21 PM | Permalink | Reply
"The efficacy of a forcing is defined as its effect on GMST relative to that of the same amount of forcing by CO2." Notwithstanding those who like to count joules in the deep oceans, if that definition is reasonable, then what does it say about whether the feedbacks should have (un)equal efficacies? In other words, if forcings are not all equal, then it seems reasonable to ask if feedbacks are not equal either.

Steve McIntyre, Posted Jan 8, 2016 at 6:24 PM | Permalink | Reply
Nic, thanks for this impressive discussion.

Michael Jankowski, Posted Jan 8, 2016 at 6:28 PM | Permalink | Reply
Why did they stop in 2005? Is that the last common year in Otto et al 2013, Lewis and Curry 2014, and Shindell 2014?

ristvan, Posted Jan 8, 2016 at 8:18 PM | Permalink | Reply
No, it is not. Cherrypick.

thomaswfuller2, Posted Jan 8, 2016 at 7:46 PM | Permalink | Reply
In your introduction, if you change "assert" to "contend" it would make the beginning of your paper sound a bit less charged. What's remarkable about your piece here is the clarity of the English writing. I was able to follow it all despite being a non-scientist. Thanks for the hard work. My only other suggestion would be a quick section on how you would recommend Marvel et al proceed to improve their work.

niclewis, Posted Jan 9, 2016 at 6:29 AM | Permalink | Reply
thomasfuller2, thanks for your comment. There was no intention on my part to use a charged term; I consider "assert" to be a more neutral term than "contend". See http://the-difference-between.com: "the difference between assert and contend is that assert is to declare with assurance or plainly and strongly, to state positively, while contend is to strive in opposition, to contest, to dispute, to vie, to quarrel, to fight."
Marvel et al could withdraw their paper and submit a new one using more satisfactory methodology and providing more detail, after performing a set of simulations that showed how the GISS model responded to each type of forcing as the climate state evolved during the historical period, preferably extended to 2012 to match the simulation results in Miller et al 2014, which is a much higher quality paper. But I see very little chance of that happening. There is in any case a question mark over how suitable a model GISS-E2 is for this purpose. As I indicate in the article, GISS-E2 seems to have amazingly high forcing from non-CO2 long-lived greenhouse gases (methane, nitrous oxide, CFCs etc.) and a remarkably strong GMST response to them, if the forcing from a doubling of CO2 in the model is as taken in Marvel et al.

Richard Drake, Posted Jan 9, 2016 at 6:52 AM | Permalink | Reply
Couple of questions on the last para:
1. Which GCM would in your view have been better? How easy is that to even evaluate?
2. How long, elapsed, would it have taken to run the various simulations for the different forcings on GISS-E2, leading to the write-up in Marvel et al? On the standard GISS supercomputer, under standard loading, or whatever they would have had available.
I realise it may only be the authors who can give any idea of the second, but it would be interesting to get a feel for how easy it would be for others to play around with this stuff. I'm still, after six years, struggling with what openness of climate software even means, compared to areas with which I'm much more familiar.

niclewis, Posted Jan 9, 2016 at 10:48 AM | Permalink
RichardDrake,
1. Probably best to use a number of unconnected AOGCMs from different groups, perhaps focussing on those from groups in western Europe and North America, judging from the views of the modellers I have met.
2. I suspect a fair while. See my response to Alberto Zaragoza Comendador below.

sue, Posted Jan 10, 2016 at 2:45 AM | Permalink | Reply
Nic, "GISS-E2 seems to have amazingly high forcing from non-CO2 long-lived greenhouse gases (methane…)". Very interesting, since Gavin discourages ppl from worrying about methane, even got into a row w/ Wadhams (sp?) over it. How different are their scenarios? I assume very different.

niclewis, Posted Jan 10, 2016 at 10:03 AM | Permalink
Sue, as I say in note 25: "The GISS-E2-R increase in GHG ERF is 3.39 W/m². The 1850-2000 increase in GHG RF (and ERF) per AR5 Table AII.1.2 is 2.25 W/m², but I use the higher 1842-2000 increase of 2.30 W/m², since the 1850 CO2 concentration in GISS ModelE2 was first reached in 1842."
If one strips out the CO2 contributions, of 1.38 W/m² for AR5 (based on an F2xCO2 of 3.71 W/m²) and of 1.53 W/m² for GISS-E2-R (based on an ERF F2xCO2 of 4.1 W/m²), then the contribution of the other long-lived GHG is 0.92 W/m² per AR5 and 1.86 W/m² for GISS-E2-R. That is, methane, nitrous oxide, CFCs and minor GHGs add TWICE as much forcing in GISS-E2-R as per the AR5 best estimate. As I wrote, it looks as if the radiative transfer computation in GISS-E2 may be
inaccurate Although methane is classed as a long lived GHG its lifetime is only of the order of a decade so it presents much less of a long term problem than CO2 part of which is expected to remain in the atmosphere for 1000 years On the other hand as well as being a powerful GHG it is a source of tropospheric ozone and stratospheric water vapour both of which add to the basic forcing from methane Brandon Shollenberger Posted Jan 11 2016 at 2 29 PM Permalink niclewis Although methane is classed as a long lived GHG its lifetime is only of the order of a decade so it presents much less of a long term problem than CO2 part of which is expected to remain in the atmosphere for 1000 years On the other hand as well as being a powerful GHG it is a source of tropospheric ozone and stratospheric water vapour both of which add to the basic forcing from methane Another interesting feature of methane is when it breaks down it largely breaks down into C02 There is far less methane in the atmosphere than C02 but that effect may well have contributed a couple percent to the observed rise in C02 levels wkernkamp Posted Jan 22 2016 at 1 36 AM Permalink There is no reason to believe that excess CO2 will remain in the atmosphere very long Already only about half of the human produced CO2 in any given year as can be calculated from the atmospheric CO2 increase The other half is immediately removed by nature This indicates that a 33 increase in CO2 causes the natural removal processes to increase by this amount Therefore if we stopped all emissions we should cause CO2 to decline at about the same rate as it now is increasing This is so because the increased absorption persists until CO2 is lower At that rate it would not take thousands of years to remove all the CO2 from fossil fuels but less than one hundred years This timescale is also in agreement with the rapid decline of the C14 spike due to atmospheric nuclear explosions in the fifties mpainter Posted Jan 8 2016 at 9 02 PM Permalink Reply Nic This definition is reasonable CO2 is the dominant greenhouse gas Next to water vapor you must mean niclewis Posted Jan 9 2016 at 4 29 AM Permalink Reply GHG here means long lived greenhouse gases which excludes ozone as well as water vapour But I am afraid the definition appears after the term GHG has already been used Geoff Sherrington Posted Jan 8 2016 at 9 08 PM Permalink Reply Thank you Nic for yet another detailed study There is a matter arising from observations about relations between land temperature and local rainfall For example at several Australian weather stations studied in detail with statistics recorder local rainfall correlates with recorded temperatures quite significantly That is GHG are not the only driver of temperature changes as recorded Wetter is cooler Rainfall does not seem to sit within the 7 individual forcings you have studied it might pls correct if I am wrong Given that local rainfall statistically can account for 30 50 of the variation in local temperatures and given that simple physics help explain this I am left wondering where the effect of rainfall on local temperatures in inserted into sensitivity studies if indeed it needs to be As studies become more detailed it is likely that many odd questions of this type will emerge Another is from the Dec 2015 Schmidtusen GRL paper claiming a cooling over the Antarctic as atmospheric CO2 increases IR emissions to space do not come from the ground surface there because it is too cold so the use of land surface as a reference layer elsewhere 
might be compromised While models might gather up local effects like these they can be hard to track down Even if they are incorporated one wonders if the mathematics in the models are set to sum or integrate only positive values of sensitivities at defined locations not ECS or TCR as globally defined but locally I hope I am not wasting your time here There are bigger problems for us at home preventing some detailed digging niclewis Posted Jan 9 2016 at 5 55 AM Permalink Reply Thanks Geoff Forcings generally have in GCMs similar global effects even if they are concentrated in particular regions or differ between the hemispheres Figure 24 of the Hansen 2005 paper that I provided a link to shows this very well But variation in local feedbacks and hence in local climate sensitivity does seem to have more local effects GCMs do incorporate this although their simulations of feedbacks and their effects may not be correct The models don t distinguish between positive and negative local sensitivities In many GCMs sensitivity is negative in the deep tropics net outgoing radiation goes down when surface temperature increases because water vapour and cloud feedbacks are so strongly positive there That means there would be runaway warming there if heat from the deep tropics couldn t be exported to higher latitudes Maybe not hte sort of negative sensitivity you had in mind but it proves the point Models generally aren t very good at simulating changes in rainfall patterns to increasing GHG and resulting global warming But they do all agree that total rainfall will increase In fact the lower climate sensitivity is the faster must total precipitation increase with GMST or the atmosphere would heat up too much But where the extra rain falls is a different question it could almost all be over the oceans Bishop Hill Posted Jan 9 2016 at 6 25 AM Permalink Reply Does that mean that recent flooding in the UK is evidence for low climate sensitivity Richard Drake Posted Jan 9 2016 at 6 35 AM Permalink Got there before me Bish But this last paragraph plugged an important gap in my understanding thank you Nic There may be others AntonyIndia Posted Jan 9 2016 at 9 22 PM Permalink Reply I asked Gavin Schmidt s comment on your review on his co article on Realclimate and he answered Mostly confused but there are a couple of points worth following up on Should have the relevant sensitivity tests available next week gavin http www realclimate org index php archives 2016 01 marvel et al 2015 part 2 media responses comment 640742 sue Posted Jan 10 2016 at 2 49 AM Permalink 1 Looking forward to his follow up gymnosperm Posted Jan 8 2016 at 11 48 PM Permalink Reply 1 00 for CO2 forcing C mon Water is what 1 9 The feedbacks are entirely hypothetical The radiative forcing of CO2 is expressed as unity entirely ignoring its saturation While the trends of temperature and Co2 are mysteriously different the variability of CO2 is substantially captured by temperature even in the last 35 years FerdiEgb Posted Jan 9 2016 at 3 50 AM Permalink Reply Gymnosperm As intensively discussed here http wattsupwiththat com 2015 11 25 about spurious correlations and causation of the co2 increase 2 Most of the variability of the CO2 rate of change is caused by the influence of temperature variability Pinatubo El Niño on tropical variation That is proven by the opposite CO2 and δ13C changes Vegetation is not the cause of the trend in CO2 it is an increasing sink for CO2 at least since 1990 http www sciencemag org content 287 5462 2467 short and 
http www bowdoin edu mbattle papers posters and talks BenderGBC2005 pdf Variability and trend of CO2 have nothing in common they are driven by different processes opluso Posted Jan 9 2016 at 8 00 AM Permalink Reply It would seem that their methodology single forcing model runs would be most valuable in identifying areas for improvement in the GISS E2 R model I am at a loss to see how that methodology would be superior to estimating TCR ECS directly from observational data sets Perhaps the answer lies behind the Marvel et al paywall but did they calculate the relative contribution from each single forcing estimate to the ultimate increase in their respective TCR ECS estimates kribaez Posted Jan 9 2016 at 8 56 AM Permalink Reply Observational based studies must make some estimate of the forcing which gave rise to the observed temperature If a large forcing is assumed estimated then this implies a low climate sensitivity Conversely a low forcing giving rise to the same observed temperature gain implies a high climate sensitivity By definition TCR and ECS relate only to CO2 forcing It is known that in the models at least not all forcings produce identical temperature responses some higher than expected from a CO2 equivalent forcing and some lower than expected Marvel et al argue that by an accident of history the apparent summed forcings are higher than they would be if all of the forcings were expressed in terms of their equivalence to CO2 forcings By so doing they argue that the total forcings used as input into observational studies are too high relative to CO2 equivalence and hence climate sensitivities which again have to be CO2 specific are therefore biased low Hope this helps niclewis Posted Jan 9 2016 at 9 46 AM Permalink Reply opluso Their work certainly highlights some peculiarities in the GISS model Your question is a good one Marvel don t give any results for the relative contributions of diferent forcings to their increases in observational TCR ECS estimates I have worked them out for TCR using ERF forcings this is the only case for which their methodology doesn t need changing the true ERF F2xCO2 value is unknown but varying it would change all contributions in the same direction Their very high efficacy for land use is the biggest contributor closely followed by the slightly sub unity efficacy of GHG and then by the pretty low efficacy for ozone broadly half as important as LU Aerosols and solar have similarly small but opposing effects Volcanic should be small but I think they ve got the wrong VI forcing for the Otto and Shindell studies they made their own estimates of this forcing I believe kribaez Posted Jan 9 2016 at 8 25 AM Permalink Reply Nic Thank you for the detailed and thoughtful input to this problem Before making a comment on the results I would like to underline that outwith the gross methodological errors in Marvel et al there are two elements which I find bizarre Firstly to do efficacy comparisons meaningfully requires carrying at least 3sf accuracy through the calculations of derivative data One piece of fundamental input is the evolution of net flux or at a minimum an accurate estimate of the change in net flux over a pre specified period In this context Marvel s choice of using OHC data rather than making direct use of the available net flux data from the model runs seems absurd In observation based estimates of CS and feedbacks researchers are forced to use OHC data as a means of accessing net flux estimates over the longer term This requires some fairly 
coarse assumptions to be made including what percentage of any net flux imbalance is converted to sensible heat in the ocean as you point out Going from model calculated OHC back to net flux imbalance in the model with any accuracy is extremely difficult since as well as the natural net flux variation in the pre industrial control which is integrated in some guise into the GCM s energy accumulation and needs to be discounted there is also conversion of radiative input into sensible heat and latent heat conversion to momentum flux and distribution of sensible heat between land sea and atmosphere In addition there is energy leakage from the model climate system it is not fully conserved All of these elements are model specific I can quite honestly think of no excuse for the use of OHC data in this context when the net flux data should be available to the GISS researchers Its sole consequence in the efficacy calculations is the introduction of unnecessary error and uncertainty Secondly engineers would recognise an efficacy calculation as a benchmark calibration study A fundamental requirement for such a study is to have the benchmark measurements available Hansen 2005 recognised this and took great pains to measure the forcing data for the CO2 cases which form the benchmark against which all other responses are calibrated He provided estimates of Fi iRF Fa RF and Fs ERF across a range of concentrations of CO2 Commendable For Marvel et al on the other hand we have a statement in Miller saying However forcing associated with a doubling of CO2 is nearly identical between the CMIP3 and CMIP5 models Hansen et al 2005 Schmidt et al 2014a This is then contradicted by the iRF value cited in Marvel et al and the Fa values in Schmidt 2014 No reference is provided at all for ERF values This is a dog s dinner a benchmark study without benchmarks The above two elements strongly suggest to me that this did not start as an efficacy study My speculation is that it started as a study to show that by applying the same methods used in observation studies to the GISS ER 2 data you got the wrong answer for sensitivity They then found that you actually got very compatible answers if done with reasonable estimates of historical forcing and a sensible treatment of OHC The study then morphed into one which had to show that the historical forcing had an overall weighted average efficacy less than unity I can think of no other explanation for carrying out an efficacy study which uses OHC instead of net flux and which is based on a woefully inadequate definition of the benchmark data on the CO2 cases stevefitzpatrick Posted Jan 9 2016 at 9 33 AM Permalink Reply kribaez The paper is clearly an effort to discount the lower sensitivity estimates from empirical studies how the Marvel et al work evolved is speculative but the overall objective is obvious discount all low empirical estimates of TCR and ECS There have been several other papers from GISS where GCM behavior was used to discount lower empirical estimates of sensitivity one paper critical Stephen Schwartz s temperature autocorrelation based estimate of sensitivity immediately comes to mind The general class of paper can be described as you can t ever show the GCM projections are too high by using actual data In other fields efforts to discredit empirical data rather than improve a model would be laughed at but is oddly enough taken very seriously in climate science I will go out on a limb and predict GISS will produce similar critiques of other empirical estimates 
in the future kribaez Posted Jan 9 2016 at 9 55 AM Permalink Reply Yes the rebuttal to Schwartz is a very pertinent analogue In that instance the GISS team argued that the Schwartz method could not be sound because when applied to GISS data it gave the wrong answer for climate sensitivity The reality was that it actually gave the correct answer for GISS climate sensitivity over the temperature interval tested The error in the rebuttal was the failure to recognise the difference between the effective equilibrium temperature and the model reported ECS Because of the curvature in the net flux vs temperature relationship for a step forcing which GISS exhibits like most GCMs the latter is not tested The identical error among others is being made in the Marvel et al study JamesG Posted Jan 12 2016 at 7 33 AM Permalink Reply And of course Giss have the unique advantage of adjusting their own empirical data to fit what their model predicts For a while the satellite data was a minor constraint on doing that but since Best sic avoided reconciling satellite data with a sideways swipe it seems Giss felt ok to follow suit The next step will be twisting Carl Mears arm to apply an upwards adjustment to RSS and leave UAH as the lone outlier run by easily dismissible skeptics It all plays like a handbook of how to distort research in support of a predetermined agenda niclewis Posted Jan 9 2016 at 10 06 AM Permalink Reply kribaez Thank you for your comment I compltely agree with you about the use of OHC rather than TOA radiative imbalance data and the lack of benchmark values for the forcing from a doubling of CO2 Using the OHC slope rather than TOA radiative imbalance N seems bizarre and scientifically indefensible It does of course produce biased low estimates of the model ECS from historical period forcings or indeed from any type of forcing Schmidt 2014 states GISS E2 R has a stratospherically adjusted Fa F2xCO2 value of 4 1 W m2 which is in line with the 4 08 4 12 W m2 for Fa in GISS E per Hansen 2005 But Hansen gives the iRF Fi value as 4 52 W m2 whereas Marvel uses 4 1 W m2 stevefitzpatrick Posted Jan 9 2016 at 9 04 AM Permalink Reply Nic Thanks for this clearly written post Two questions 1 Since you were a coauthor of two of the three empirical estimate papers which Marvel et al claim to be inaccurate it seems to me that the journal editor should have considered you as a reviewer Were you asked to review the paper 2 Are you and or others considering submitting a comment on Marvel et al to the journal niclewis Posted Jan 9 2016 at 10 30 AM Permalink Reply stevefitzpatrick Thanks I answer to your questions 1 No I wasn t 2 I shall reserve my position on that but I am aware that journals often seek to avoid publishing comments I suspect Nature CC may be worse than most in this regard Comments also have very tight length restrictions climategrog Posted Jan 11 2016 at 3 29 AM Permalink Reply IFAIK Nature not Nature CC has something very restrictive like 500 word limit and a 6mo shut out mpainter Posted Jan 11 2016 at 11 31 AM Permalink 500 words No problem give an abstract on each point and links to Climate Audit Alberto Zaragoza Comendador Posted Jan 9 2016 at 9 46 AM Permalink Reply Marvel et al say near the end that the historicalMisc archive is sparse and these experiments were a low priority in CMIP5 so very few groups performed comparable calculations of radiative forcings associated with each forcing agent Yeah I m using the free version and cannot copy paste But their point is clear replication 
will be difficult. The thing is, at least one paper mentioned by Nic (Ocko et al 2014, reference 29) had done the same kind of experiments and arrived at different conclusions, but Marvel et al don't cite Ocko, either in the paper itself or in the SI. My question to Nic would be: are these problems with historicalMisc (whatever that may be) real? Or is the lack of single-forcing experiments more due to plain lack of interest from researchers? Non-paywalled version here: http://www.nature.com
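Several of the exchanges above turn on which F_2xCO2 benchmark is appropriate for GISS-E2-R (4.1 versus roughly 4.5 W/m²). Since diagnosed effective sensitivity scales linearly with F_2xCO2 for a given feedback parameter, the choice matters directly. A two-line illustration, taking the feedback value from note 8 and treating the rest as simple arithmetic:

```python
lam = 2.37                      # feedback parameter from note 8, W/m^2/K
for f_2xco2 in (4.1, 4.52):     # the two benchmark values discussed in the comments
    print(f"F_2xCO2 = {f_2xco2} W/m2 -> effective sensitivity ~ {f_2xco2 / lam:.2f} K")
```

Moving from 4.1 to 4.52 W/m² raises the implied effective sensitivity by about 10%, from roughly 1.7°C to roughly 1.9°C.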

    Original URL path: http://climateaudit.org/2016/01/08/appraising-marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates/ (2016-02-08)


  • niclewis « Climate Audit
by Nicholas Lewis. In a paper published last year, Lewis & Curry (2014), discussed here, Judith Curry and I derived best estimates for equilibrium/effective climate sensitivity (ECS) and transient climate response (TCR). At 1.64°C, our estimate for ECS was below all those exhibited by CMIP5 global climate models, and at 1.33°C for … (Posted in Uncategorized; Comments: 87)

Marotzke and Forster's circular attribution of CMIP5 intermodel warming differences. Feb 5, 2015, 10:42 AM. A guest post by Nicholas Lewis. Introduction: A new paper in Nature by Jochem Marotzke and Piers Forster, "Forcing, feedback and internal variability in global temperature trends", investigates the causes of the mismatch between climate models that simulate a strong increase in global temperature since 1998 and observations that show little increase, and the influence … (Posted in Uncategorized; Comments: 889)

The implications for climate sensitivity of AR5 forcing and heat uptake estimates. Sep 24, 2014, 11:22 AM. A guest post by Nic Lewis. When the Lewis/Crok report "A Sensitive Matter", about climate sensitivity in the IPCC Fifth Assessment Working Group 1 report (AR5), was published by the GWPF in March, various people criticised it for not being peer reviewed. But peer review is for research papers, not for lengthy wide-ranging review … (Posted in Uncategorized; Comments: 62)
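The first excerpt above refers to the energy-budget approach used in Lewis & Curry (2014). As background, here is a minimal sketch of the standard energy-budget estimators, TCR from the forcing change alone and ECS from forcing minus heat uptake; the input numbers are round placeholders for illustration, not the values used in the paper.

```python
def energy_budget(dT, dF, dQ, f_2xco2=3.71):
    """Standard energy-budget estimators: TCR = F2x*dT/dF, ECS = F2x*dT/(dF - dQ)."""
    return f_2xco2 * dT / dF, f_2xco2 * dT / (dF - dQ)

# Round placeholder inputs (changes between a historical base period and a
# final period), not the values used in Lewis & Curry (2014).
tcr, ecs = energy_budget(dT=0.75, dF=2.0, dQ=0.4)
print(f"TCR ~ {tcr:.2f} K, ECS ~ {ecs:.2f} K")
```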

    Original URL path: http://climateaudit.org/author/niclewis/ (2016-02-08)

  • Bob Carter « Climate Audit
    or thermodynamics heat mass transfer and fluid dynamics but then all so called climate scientists have no understanding of these either richard Posted Jan 20 2016 at 10 51 AM Permalink Reply We seem to be losing the best Crichton Brietbart now Carter They don t come any better Not suggesting anything but I would encourage you Lindzin et al to watch your back Gerald Machnee Posted Jan 20 2016 at 11 09 AM Permalink Reply Good post Steve The job is not done The rest of us with principles must carry on Stacey Posted Jan 20 2016 at 2 04 PM Permalink Reply Time and tide wait for no man but this man will be sorely missed Neville Posted Jan 20 2016 at 3 59 PM Permalink Reply Another tribute to Bob Carter from Andrew Bolt http blogs news com au heraldsun andrewbolt index php heraldsun comments bob carter And Michael Smith s tribute http www michaelsmithnews com 2016 01 how bob carter cost me a career and made me a better person html Craig Loehle Posted Jan 20 2016 at 4 06 PM Permalink Reply I interacted with Bob several times on research projects Met him several times and always impressed with his courtesy professionalism and helpfulness He was very careful with words in his writing A real loss docmartin3 Posted Jan 21 2016 at 5 32 PM Permalink Reply I corresponded with Bob on many occasions He freely shared excellent careful advice on the many complex areas of climate science and in my own way I gave him feedback on the arts of argument and communication which he freely acknowledged was a little bit outside his range as first and foremost a practical scientist a geologist I reviewed his book The Counter Consensus for The Philosopher http www the philosopher co uk reviews counter consensus htm and I regretted that the powerful arguments that it contains did not get a better launch pad Surely amongst the rather shallow arguments of many skeptics he was a very solid reliable figure I note there is no mention on this page of his recent poor treatment by his university who treated him so shabbily In order I rather think to pursue funding and political gain James Cook withdrew Bob s academic privileges even to the extent of hampering his ability to help supervise research students I know this treatment was hurtful to him as is the continued hateful slander of people such as Wikipedia s sometime adminstrator censor on global warming matters William Connolly On the other hand I have no doubt that time will look kindly on Bob and his ideas Ross Posted Jan 21 2016 at 6 24 PM Permalink Reply Thank you Steve for the incite into your connection with Bob Carter He clearly encouraged and inspired many people from all around the world Yours and tributes from many others make William Connelly s comments about Bob s passing even more obnoxious I won t denigrate your

    Original URL path: http://climateaudit.org/2016/01/19/bob-carter/ (2016-02-08)

  • carter « Climate Audit

    Original URL path: http://climateaudit.org/2016/01/19/bob-carter/carter/ (2016-02-08)

  • Climate sensitivity « Climate Audit

    Original URL path: http://climateaudit.org/tag/climate-sensitivity/ (2016-02-08)

  • Efficacy « Climate Audit

    Original URL path: http://climateaudit.org/tag/efficacy/ (2016-02-08)

  • Update of Model-Observation Comparisons « Climate Audit
    along the tow path under the foot bridge many times a day Most times the mule shied at the bridge and Paddy had to drag it along One day with a bright idea he took a shovel to the path and took about six inches of gravel out from under the bridge His mate Patrick watched him work then declared that the fix would not work Paddy that mule it s his ears is too long not his legs I m still having conceptual problems re the meaning of the mean and variance of an assemblage of model runs There is a lot of distance between the mules ears hanging down and pointing up I trust that you are well on the path to recovery and Then There s Physics Posted Jan 6 2016 at 6 37 AM Permalink Reply Have the models in the comparison been redone with the updated forcings as suggested in this paper opluso Posted Jan 6 2016 at 8 59 AM Permalink Reply CMIP5 has been used by numerous peer reviewed papers so this question seems like another red herring Models are constantly being updated and modified Surface temperature anomaly estimates which by the way should always display an error range confidence interval are frequently revised as well The snapshot comparison displayed in this post is useful nonetheless and Then There s Physics Posted Jan 6 2016 at 9 03 AM Permalink Reply I know the snapshot is useful but the question of updated forcings is a valid question As I undertand it the original CMIP5 runs were done using forcings that we known or that weren t guesses up until 2005 and then estimated forcings for the period after 2005 It seems that the actual forcings post 2005 and some of the pre 2005 forcings are in reality different to what was assumed Given that the goal of the models is not to predict what the change in forcings will be but what the response will be to the change in forcings updating the forcings seems like an important thing to do if you want to do a proper comparison between the models and the observations Ron Graf Posted Jan 6 2016 at 9 39 AM Permalink updating the forcings seems like an important thing to do if you want to do a proper comparison between the models and the observations This is a relevant point the CMIP5 gets periodically adjusted particularly for volcanic aerosol cooling The 1991 1994 dip in plotted CMIP5 in the first figure at top is surely the adjustment post Mt Pinatubo The CMIP5 protocol is not to predict volcanic events This leaves the projection always at worst case intentionally for the future opluso Posted Jan 6 2016 at 11 05 AM Permalink aTTP As you point out CMIP5 is circa 2005 So the proper comparison is between observed temps well HADCRUT 4 4 and or RSS and post 2005 model projections Not by coincidence that is approximately the period during which models begin to consistently overestimate warming The earlier years are just eye candy for the unwary Steve McIntyre Posted Jan 6 2016 at 1 04 PM Permalink Not by coincidence that is approximately the period during which models begin to consistently overestimate warming Actually the sort of problem began much earlier The first patch was Hansen s discovery of aerosol cooling Steve McIntyre Posted Jan 6 2016 at 12 58 PM Permalink one of the large problems in forcings is trying to locate data on actual forcings other than CO2 on a consistent basis with forcings in the underlying model Can you tell me where I can find the aerosol forcing used in say a HadGEM run and then the observed aerosols Also data for observed forcings that are published on a timely basis and not as part of an ex post reconciliation exercise I ve spent 
an inordinate amount of time scouring for forcing data I m familiar with the obvious dsets but they are not satisfactory stevefitzpatrick Posted Jan 6 2016 at 5 21 PM Permalink Steve McIntyre I m unconvinced that the physics precludes lower sensitivity models Yes modelers make choices for parameters consistent with physics which influence the models and there for certain is a lot of room for different choices as evidenced by the comically wide range of sensitivity values diagnosed by different physics based state of the art GCMs The problem is that the modelers appear unwilling to incorporate reasonable external constraints on critical factors like aerosol effects and the rate of ocean heat accumulation Seems to me a couple of very important questions are being neither asked nor answered Do the individual model s heat accumulations match reasonably well the measured warming accumulation from Argo Do the aerosol effects which each model generates align reasonably well with the best estimates of net aerosol effects from aerosol experts say those who contributed to AR5 My guess is that were these questions asked and answered it would be clear why the models project much more warming than has been actually observed parameter choices which lead to too much sensitivity combined with too high aerosol offsets and or too much heat accumulation Some feet need to be put to the fire or the models ignored dpy6629 Posted Jan 6 2016 at 5 32 PM Permalink Yes SteveF holding feet to the fire is called for My informants tell me GCM s are intensely political and not a career enhancement vehicle DOE is building a new one by 2017 but have apparently been told in very clear terms to not stray too far from what existing models use It is depressing and sad By contrast turbulence modelers are generally more scientific and open minded dpy6629 Posted Jan 6 2016 at 10 36 PM Permalink Ken and Steve It seems to me that the main argument for constructing low sensitivity models is to understand the effects of the various choices and there are so many in a GCM that the sensitivity to these choices are I believe badly understudied and under reported That is true of turbulence models too modelers know these things but they are almost never reported in the literature A careful and systemic study would be a huge contribution and such a study has been started at NASA However large resources will be needed to do a rigorous job The real issue is the uncertainty in the models and since all the models are strongly related in terms of methods and data used the usual 95 confidence interval is surely an underestimate and possibly a bad underestimate This is what we found for CFD The models are closely related and yet the variety of answers can be very large We did study some methodological choices as well But it turns out that its really difficult to isolate the uncertainty in the underlying turbulence models and methods because there are so many other sources of uncertainty such as grid density level of convergence etc I personally don t see how it is possible to really rigorously tune parameters in a climate model given the incredibly course grid sizes and the limited time integration times that are achievable on current computers Alberto Zaragoza Comendador Posted Jan 7 2016 at 5 05 PM Permalink Potsdam Institute has a database actually ATTP gave me the link Not updated since 2011 apparently http www pik potsdam de mmalte rcps I downloaded the concentration and forcing Excels for RCP6 The former says 400ppm CO2eq for 2014 which is 
1 9w m2 assuming 3 7w m2 per doubling of CO2 But the forcing Excel disagrees says 2 2w m2 for 2014 So I wouldn t trust this stuff very much Steve that is not what I was asking for I am completely aware of RCP projections My request was for OBSERVED data in a format consistent with IPCC projections Giving me back the IPCC projections is not responsive It is too typical of people like ATTP to give an obtuse and unresponsive answer Also there is an important difference between EMISSIONS and CONCENTRATION AR5 seems to have taken a step back from SRES in not providing EMISSION scenarios Alberto Zaragoza Comendador Posted Jan 7 2016 at 6 52 PM Permalink Well shame on me the Potsdam website has files created in 2011 but the actual concentration data is indeed only for pre 2005 since that year it shows RCPs So everybody else ignore that link unless you have some fondness for historical methane forcing and Then There s Physics Posted Jan 6 2016 at 11 10 AM Permalink Reply So the proper comparison is between observed temps well HADCRUT 4 4 and or RSS and post 2005 model projections No the models are attempting to determine what will happen for a given concentration forcing pathway If the concentration forcing pathway turns out to be different to what was initially assumed then this should be updated in the models before doing the comparison Essentially the concentration forcing pathway is conditional the model output is really saying if the concentration forcing pathway is what we assumed this is what we would predict Hence if the concentration forcing pathway turns out to be different doing the comparison without updating the forcings is not a like for like comparison opluso Posted Jan 6 2016 at 11 30 AM Permalink Your herring is growing more red by the minute The various concentration forcing pathways are not the only source of flawed model projections MikeN Posted Jan 6 2016 at 7 18 PM Permalink Model output is grounded in physics and not adjusted The graph is from the NRC report and is based on simulations with the U of Victoria climate carbon model tuned to yield the mid range IPCC climate sensitivity http www realclimate org index php archives 2011 11 keystone xl game over Models can definitely produce low sensitivity outputs Older version of one developed by Prinn known for high sensitivity models had parameters you an set for oceans and aerosols and clouds and certain reasonable levels of these would produce warming close to 1C by 2100 MikeN Posted Jan 6 2016 at 7 19 PM Permalink It is reasonable to evaluate models based on updated emissions scenarios I have advocated that models should be frozen with code to allow for such evaluations at a later time and Then There s Physics Posted Jan 6 2016 at 11 35 AM Permalink Reply The various concentration forcing pathways are not the only source of flawed model projections The concentration forcing pathways aren t model projections at all they re inputs That s kind of the point It s a bit like saying I predict that if you drop a cannonball from the 10th floor of a building it will takes 2 5s to reach the ground and you claim that the prediction was wrong because it only took 2s when you dropped it from the 7th floor Steve as I understand it the scenarios are supposed to be relevant and realistic And rather than CO2 emissions being at the low end of the scenarios they are right up at the top end of the scenarios from the earlier IPCC reports Ron Graf Posted Jan 6 2016 at 2 20 PM Permalink ATTP the model output is really saying if the concentration 
Ron Graf Posted Jan 6, 2016 at 2:20 PM | Permalink
ATTP: "the model output is really saying if the concentration forcing pathway is what we assumed, this is what we would predict". This is the Gavin Schmidt game of separating projection from prediction. He didn't invent it; economists did. It is not fair in science to say, when predictions are correct, that they are validation, and when they are wrong, that they were qualified projections. That creates an unfalsifiable argument, which by Karl Popper's definition is the opposite of science.

Steve: I'm considering putting Popper on my list of proscribed words.

and Then There's Physics Posted Jan 6, 2016 at 2:34 PM | Permalink
Ron, what? Let's say I develop a model that is used to understand how some system will respond to some kind of externally imposed change. I then assume something about what that external change will probably be, and I run the model. I then report that if the change is X, the model suggests that Y will happen. If, however, in reality the change that is imposed is different to what I assumed would happen, then if I want to check how good the model is, I should redo it with what the actual external change was. The point is that climate models are not being used to predict what we will do AND what the climate will do. They're really only being used to understand the climate. That what was assumed about what we would do (the concentration pathway) turns out to be different to what we actually did doesn't mean that the models were somehow wrong.

Steve McIntyre Posted Jan 6, 2016 at 2:48 PM | Permalink
"That what was assumed about what we would do (the concentration pathway) turns out to be different to what we actually did doesn't mean that the models were somehow wrong." If observed CO2 emissions have been at the top end of scenarios (as they have been) and observed temperatures have been at the very bottom end of scenarios, it seems reasonable to consider whether the models are parameterized too warm. From a distance, it seems like far more effort is being spent arguing against that possibility than in investigating the properties of lower-sensitivity models.

opluso Posted Jan 6, 2016 at 2:36 PM | Permalink
"The concentration forcing pathways aren't model projections at all; they're inputs." I didn't say the pathways were projections. I said they were not the only source of flaws in model projections. Obviously, if the feedbacks and physics are poorly modeled, you can project significant warming even with a lower concentration pathway. Bottom line: if CMIP5 was good enough to demand global economic restructuring, I think it's good enough for the purposes of this post.

and Then There's Physics Posted Jan 6, 2016 at 2:47 PM | Permalink
opluso, none of what you say is really an argument against updating the concentration pathway if you know that what actually happened is different to what you initially assumed.

and Then There's Physics Posted Jan 6, 2016 at 2:54 PM | Permalink
"From a distance it seems like far more effort is being spent arguing against that possibility than in investigating the properties of lower sensitivity models." Except climate sensitivity is an emergent property of the models. You can't simply create a lower-sensitivity model if the physics precludes such an outcome. As you have probably heard before, the model spread is intended to represent a region where the observed temperatures will fall 95% of the time. If the observed temperatures track along or outside the lower boundary for more than 5% of the time, there would certainly be a case for removing some of the higher-sensitivity models and trying to understand why the models tend to produce sensitivities that are higher than seems reasonable, or trying to construct physically plausible models with lower sensitivity. However, this doesn't appear to be what is happening, and hence the case for trying to artificially construct lower-sensitivity models seems, IMO, to be weak.
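The "5% of the time" criterion is simple to operationalize. A minimal sketch, assuming you already have a 2-D array of model anomaly series and a matching observational series; the arrays below are synthetic placeholders, not any particular CMIP5 archive or temperature dataset:

```python
import numpy as np

# Placeholder data: model_anoms has shape (n_models, n_months),
# obs_anoms has shape (n_months,); both as temperature anomalies in deg C.
rng = np.random.default_rng(0)
model_anoms = rng.normal(0.01, 0.1, size=(30, 444)).cumsum(axis=1)
obs_anoms = rng.normal(0.008, 0.1, size=444).cumsum()

# 5-95% envelope of the ensemble at each time step
lower = np.percentile(model_anoms, 5, axis=0)
upper = np.percentile(model_anoms, 95, axis=0)

# Fraction of months the observations fall outside, and specifically below, the envelope
outside = np.mean((obs_anoms < lower) | (obs_anoms > upper))
below = np.mean(obs_anoms < lower)
print(f"outside 5-95% envelope: {outside:.1%}; below the lower bound: {below:.1%}")
```

Note that this treats the ensemble percentiles pointwise and implicitly treats the runs as exchangeable; it does nothing about non-independence of the models, which is precisely the objection raised further down the thread.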
Steve McIntyre Posted Jan 6, 2016 at 3:27 PM | Permalink
"if the physics precludes such an outcome". I'm unconvinced that the physics precludes lower-sensitivity models. In any other walk of life, specialists would presently be exploring their parameterizations to see whether they could produce a model with lower sensitivity that still meets other specifications. The seeming stubbornness of the climate community on this point is really quite remarkable: there are dozens of parameterizations within the model. There is obviously considerable play within these parameterizations to produce results of different sensitivity, as evidenced by the spread that includes very hot models like Andrew Weaver's. The very lowest-sensitivity IPCC models are still in error. Opposition to investigation of even lower-sensitivity parameterizations strikes me as more ideological than objective.

Steve McIntyre Posted Jan 6, 2016 at 5:18 PM | Permalink
Ken Rice says: "As you have probably heard before, the model spread is intended to represent a region where the observed temperatures will fall 95% of the time." Actually, I haven't heard that before. My understanding is that the models were independently developed and represented an ensemble of opportunity, rather than being designed to cover a 5-95% spread. What, if any, is your support for claiming that the model spread is "intended to represent a region where the observed temperatures will fall 95% of the time"? Can you provide a citation to IPCC or an academic paper?

Steve: in responding to Rice's outlandish assertion, I expressed myself poorly above. There is no coordination among developers so that the models cover a space, but it is incorrect to say that they are independently developed. There are common elements to most models, and systemic bias is a very real possibility, as acknowledged by Tim Palmer.

and Then There's Physics Posted Jan 6, 2016 at 3:34 PM | Permalink
"I'm unconvinced that the physics precludes lower sensitivity models." I didn't say that they did preclude it; I simply said "if they preclude it". The problem, as I see it, is that if we actively start trying to develop models that have low sensitivity, then that's not really any different to actively trying to develop ones that have high sensitivity. Even though there are parametrisations, they are still typically constrained in some way. "Opposition to investigation of even lower sensitivity parameterizations strikes me as more ideological than objective." What makes you think there's opposition? Maybe it's harder than it seems to generate such models, and maybe people who work on this don't think that there is yet a case for actively doing so.

Steve McIntyre Posted Jan 6, 2016 at 5:12 PM | Permalink
"maybe people who work on this don't think that there is yet a case for actively doing so". If the extraordinary and systemic overshoot of models in the period 1979-2015 doesn't constitute a case for re-opening examination of the parameter selections, I don't know what would be. In other fields (e.g. the turbulence example cited by a reader), specialists would simply re-open the file rather than argue against it.
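The "overshoot" referred to here is usually quantified as the distribution of 1979-2015 linear trends across the model ensemble versus the observed trend (the boxplot comparison discussed further down). A minimal sketch of that calculation, again with placeholder arrays rather than the actual CMIP5 runs or HadCRUT/RSS series used in the post:

```python
import numpy as np

def annual_trend(series: np.ndarray, years: np.ndarray) -> float:
    """OLS trend, converted to deg C per decade."""
    slope = np.polyfit(years, series, 1)[0]
    return slope * 10.0

years = np.arange(1979, 2016)                       # 1979-2015 inclusive
rng = np.random.default_rng(1)
# Synthetic stand-ins: 30 'model runs' warming faster than one 'observed' series
model_runs = 0.025 * (years - 1979) + rng.normal(0, 0.1, size=(30, years.size))
obs = 0.017 * (years - 1979) + rng.normal(0, 0.1, size=years.size)

model_trends = np.array([annual_trend(run, years) for run in model_runs])
obs_trend = annual_trend(obs, years)

print(f"model trends: median {np.median(model_trends):.2f} C/decade, "
      f"5-95% [{np.percentile(model_trends, 5):.2f}, {np.percentile(model_trends, 95):.2f}]")
print(f"observed trend: {obs_trend:.2f} C/decade")
```

A boxplot of model_trends with obs_trend overlaid is the kind of comparison described in the post; whether the observed trend sits in the lower tail is then a matter of reading off its percentile.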
opluso Posted Jan 6, 2016 at 4:41 PM | Permalink
"None of what you say is really an argument against updating the concentration pathway if you know that what actually happened is different to what you initially assumed." In fact, I pointed out that in far more important situations (e.g. COP21) CMIP5 projections have been acceptable. Therefore, in the context of this post, there is simply no need to compile a CMIP6 database before examining the existing hypotheses. I strongly suspect that even if SMc had satisfied your desire for an updated CMIP, you would say he should wait for HadCRUT 5.

dpy6629 Posted Jan 6, 2016 at 5:05 PM | Permalink
ECS is an emergent property, just as boundary layer health is for a turbulence model. Developers of models whom I know personally are much smarter than Ken Rice seems to believe. They know how to tweak the parameters, or the functional forms in models, to change the important emergent properties. For climate models, where many of the emergent properties lack skill, one needs to choose the ones you care most about. According to Richard Betts, for the Met Office model they care most about weather forecast skill. Toy models of planet formation are not the same ballgame at all.

and Then There's Physics Posted Jan 6, 2016 at 5:13 PM | Permalink
"Developers of models who I know personally are much smarter than Ken Rice seems to believe." I've no idea why you would say this, as I've said nothing about how smart or not model developers might be. All I do know is that no one can be as smart as you seem to think you are.

Steve: this is a needlessly chippy response. The commenter had made a useful, substantive point ("They know how to tweak the parameters or the functional forms in models to change the important emergent properties") in response to your assertion that the models were grounded on physics. Do you have a substantive response to this seemingly sensible comment?

dpy6629 Posted Jan 6, 2016 at 5:22 PM | Permalink
"Having almost infinitely better understanding of CFD modeling than you, Ken" is more accurate. Modelers could produce low-ECS models if they wanted to do so. I share Steve M's puzzlement as to why. There are some obvious explanations, having to do with things like the terrible job models do with precipitation, that may be higher priorities.

Steve: I'd prefer that you and Ken Rice tone down the comparison of, shall we say, manliness.

and Then There's Physics Posted Jan 6, 2016 at 5:20 PM | Permalink
"If the extraordinary and systemic overshoot of models in the period 1979-2015 doesn't constitute a case for re-opening examination of the parameter selections, I don't know what would be." Have the models had their concentration/forcing pathways updated? Have you considered sampling bias in the surface temperature dataset? Have you considered uncertainties in the observed trends? Have you considered the analysis where only models that have internal variability in phase with the observations show less of a mismatch? Maybe your supposed gotcha isn't quite as straightforward as you seem to think it is. "In other fields (e.g. the turbulence example cited by a reader)" (oooh, I wonder who that could be) "specialists would simply re-open the file rather than argue against it." I don't know of anyone who's specifically arguing against it. All I was suggesting is that it may not be as straightforward as it may seem. If a group of experts are not doing what you think they should be doing, maybe they have a good reason for not doing so.

Steve McIntyre Posted Jan 6, 2016 at 5:29 PM | Permalink
"maybe they have a good reason for not doing so". Perhaps. What is it? On the other hand, there's a lot of ideological investment in high-sensitivity models, and any backing down would be embarrassing. Had there been less publicity, it would have been easier to report on lower-sensitivity models, but unfortunately this would undoubtedly be felt in human terms as some sort of concession to skeptics. The boxplot comparisons deal with trends over the 1979-2015 period. This is a long enough period that precise phase issues are not relevant. Further, the comparison in the present post ends on a very large El Nino, and is the most favorable endpoint imaginable to the modelers.
and Then There's Physics Posted Jan 6, 2016 at 5:24 PM | Permalink
"Having almost infinitely better understanding of CFD modeling than you, Ken, is more accurate." I rest my case.

and Then There's Physics Posted Jan 6, 2016 at 5:48 PM | Permalink
"On the other hand there's a lot of ideological investment in high sensitivity models and any backing down would be embarrassing." I think there is a great deal of ideological desire for low climate sensitivity too. All I'm suggesting is that there are many factors that may be contributing to the mismatch, and that it may not be quite as simple as it at first seems. To add to what I already said, there's also the blending issue highlighted by Cowtan et al. As for your 95% question that you asked: you're correct, I think, that the models are intended to be independent, so I wasn't suggesting that they're somehow chosen or tuned so that the observations would stay within the spread 95% of the time (although I do remember having discussions with some, maybe Ed Hawkins, who were suggesting that some models are rejected for various reasons). I was suggesting that if the observations stayed outside for more than 5% of the time, then we'd have a much stronger case for arguing that the models have an issue, given that the observations would be outside the expected range for much longer than would be reasonable.

Steve McIntyre Posted Jan 6, 2016 at 7:39 PM | Permalink
"that the models are intended to be independent". In responding to your assertion that the models were designed to cover a model space, I did not mean to suggest that the models are independent in a statistical sense. For example, I said that the ensemble was one of opportunity. The models are not independent, as elements are common to all of them, a point acknowledged by Tim Palmer somewhere. The possibility of systemic bias is entirely real, and IMO there is convincing evidence that there is. I've added the following note to my earlier comment to clarify: "in responding to Rice's outlandish assertion, I expressed myself poorly above. There is no coordination among developers so that the models cover a space, but it is incorrect to say that they are independently developed. There are common elements to most models, and systemic bias is a very real possibility, as acknowledged by Tim Palmer."

dpy6629 Posted Jan 6, 2016 at 8:19 PM | Permalink
We recently did an analysis of CFD models for some very simple test cases and discovered that the spread of results was surprisingly large. These models also are all based on the same boundary layer correlations and data. This spread is virtually invisible in the literature. My belief is that GCMs are also all based roughly on common empirical and theoretical relationships. I also suspect that the literature may not give a full range of possible model settings or types, and may understate the uncertainty, but this would be impossible to prove without a huge amount of work.

Ron Graf Posted Jan 6, 2016 at 6:21 PM | Permalink
"Except climate sensitivity is an emergent property of the models." The argument that transient climate response (TCR) is an emergent property of the models is based on the assumption that all the model parameters are constrained by lab-validated physics. "It's just physics", as I've heard said. What I believe is remarkable is that a scientific body approved a protocol that leaves the mechanics of the physics blind to outside review. The CMIP5 models in fact are such black boxes that TCR does not emerge except with the use of multiple linear regressions on the output of multiple realizations. In other words, one run gives a TCR; the next run can give a different one. One can manipulate TCR not only by selective input, but also by selective choice of output or ensemble mix, and its method of analysis. If it were just physics, why are there 52 model pairs, each producing unique responses?
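For reference, the conventional CMIP diagnosis of TCR (as distinct from the regression approach described in the comment above) is the 20-year mean warming centred on the time of CO2 doubling in a 1%-per-year CO2 increase run. A minimal sketch of that diagnosis, with a made-up series standing in for any real model output:

```python
import numpy as np

# Conventional 1pctCO2 diagnosis: CO2 rises 1% per year, so it doubles at
# year ~70 (1.01**70 ~ 2); TCR is the mean warming over years 61-80,
# relative to the control climate.
def tcr_from_1pct_run(delta_t: np.ndarray) -> float:
    """delta_t: annual-mean warming (deg C) vs control, years 1..140 of a 1pctCO2 run."""
    return float(np.mean(delta_t[60:80]))   # years 61-80, zero-based slice

# Placeholder series: a hypothetical run with ~1.8 C of warming at doubling plus noise.
years = np.arange(1, 141)
rng = np.random.default_rng(2)
fake_run = 1.8 * (years / 70.0) + rng.normal(0, 0.15, size=years.size)

print(f"diagnosed TCR: {tcr_from_1pct_run(fake_run):.2f} C")
```

Different realizations of the same model give somewhat different numbers because of internal variability, which is the point made above about one run yielding a different TCR than the next.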
kneel63 Posted Jan 6, 2016 at 7:43 PM | Permalink
"The concentration forcing pathways aren't model projections at all; they're inputs." Indeed. Aren't the models in CMIP run using several scenarios (RCP8.5, RCP6, RCP4.5 and so on)? A valid comparison might then be: if real emissions were between RCP4.5 and RCP6, then let's compare those model runs to your preferred measurement metric. As Steve says, if the model outputs using RCPs that are consistently low (real forcing was higher) have temps that are consistently high (actual temps were lower), AND runs using RCPs that are consistently higher (real forcing was lower) project even higher temps (i.e. more wrong), it is reasonable to assume that using actual forcing data would fall somewhere in between, and that therefore the models are running too hot. I have no doubt that, even should you agree this is correct, you will then suggest that, e.g., we should only use those model runs that get ENSO, PDO etc. correct, or... It would be nice if we had an a priori agreed method to evaluate model performance, because it certainly seems to me that when they appeared to be correct it was evidence of goodness, but when they are wrong it's not evidence of badness.

Steve McIntyre Posted Jan 6, 2016 at 8:24 PM | Permalink
One of the large problems in trying to assess the degree to which model overshooting can be attributed to forcing projections rather than sensitivity is that there is no ongoing accounting of actual forcings in a format consistent with the RCP projections. This sort of incompatibility is not unique to climate: I've seen numerous projects in which the categories in the plan are not consistent with the accounting categories used in operations. This is usually a nightmare in trying to do plan vs actual. But given the size of the COP21 decisions, it is beyond ludicrous that there is no regular accounting of forcing. The RCP scenarios contain 53 forcing columns (some are subtotals). These are presumably calculated from concentration levels, which in turn depend on emission levels. But I challenge ATTP or anyone else to provide me with a location in which the observed values of these forcings are archived on a contemporary basis. To my knowledge, they aren't. All the forcings that matter ought to be measured and reported regularly at NOAA, who report forcings for only a few GHGs and do not report emissions.
1. TOTAL INCLVOLCANIC RF: Total anthropogenic and natural radiative forcing
2. VOLCANIC ANNUAL RF: Annual mean volcanic stratospheric aerosol forcing
3. SOLAR RF: Solar irradiance forcing
4. TOTAL ANTHRO RF: Total anthropogenic forcing
5. GHG RF: Total greenhouse gas forcing (CO2, CH4, N2O, HFCs, PFCs, SF6 and Montreal Protocol gases)
6. KYOTOGHG RF: Total forcing from greenhouse gases controlled under the Kyoto Protocol (CO2, CH4, N2O, HFCs, PFCs, SF6)
7. CO2CH4N2O RF: Total forcing from CO2, methane and nitrous oxide
8. CO2 RF: CO2 forcing
9. CH4 RF: Methane forcing
10. N2O RF: Nitrous oxide forcing
11. FGASSUM RF: Total forcing from all fluorinated gases controlled under the Kyoto Protocol (HFCs, PFCs, SF6), i.e. columns 13-24
12. MHALOSUM RF: Total forcing from all gases controlled under the Montreal Protocol (columns 25-40)
13-24. Fluorinated gases controlled under the Kyoto Protocol
25-40. Ozone Depleting Substances controlled under the Montreal Protocol
41. TOTAER DIR RF: Total direct aerosol forcing (aggregating columns 42 to 47)
42. OCI RF: Direct fossil fuel aerosol (organic carbon)
43. BCI RF: Direct fossil fuel aerosol (black carbon)
44. SOXI RF: Direct sulphate aerosol
45. NOXI RF: Direct nitrate aerosol
46. BIOMASSAER RF: Direct biomass-burning-related aerosol
47. MINERALDUST RF: Direct forcing from mineral dust aerosol
48. CLOUD TOT RF: Cloud albedo effect
49. STRATOZ RF: Stratospheric ozone forcing
50. TROPOZ RF: Tropospheric ozone forcing
51. CH4OXSTRATH2O RF: Stratospheric water vapour from methane oxidisation
52. LANDUSE RF: Land-use albedo
53. BCSNOW RF: Black carbon on snow

Matt Skaggs Posted Jan 7, 2016 at 11:11 AM | Permalink
Steve wrote: "But I challenge ATTP or anyone else to provide me with a location in which the observed values of these forcings are archived on a contemporary basis. To my knowledge they aren't. All the forcings that matter ought to be measured and reported regularly at NOAA, who report forcings for only a few GHGs and do not report emissions." I took a deep dive looking for this information as well, for the essay I wrote for Climate Etc. If the IPCC were to serve one major useful purpose, it would have been to develop a global system for collecting and collating direct measurement data on forcings. I say "would have been" because this effort should have started in the 90s, and here we are in 2016 with nothing more than scattered chunks of data in various formats.

davideisenstadt Posted Jan 7, 2016 at 12:22 PM | Permalink
Steve, I think your point regarding the independence of the various iterations of current GCMs was well put. Given that they all share data used as inputs and, although independently developed, have shared structural characteristics, it's a misapprehension to regard them as independent. Anyway, the tests for statistical independence have nothing whatsoever to do with the provenance of the respective models; it's their behavior that tells the tale, and they all exhibit a substantial degree of covariance, which is to say they aren't independent. Ken Rice should know better than to peddle this tripe.

Jeff Norman Posted Jan 9, 2016 at 2:15 PM | Permalink
Matt, I've said it before: if the IPCC truly cared about the future climate, there would be a WG IV dealing with sources of error, uncertainties, and recommendations for improving our climate knowledge. Very basic things like funding weather monitoring stations in those global voids.

David L Hagen Posted Jan 28, 2016 at 2:27 PM | Permalink
Curry quotes you on Popper in "Insights from Karl Popper: how to open the deadlocked climate debate".

Editor of the Fabius Maximus website Posted Jan 8, 2016 at 4:01 PM | Permalink | Reply
As ATTP's comments in this thread show, there is great potential from re-running the GCMs with updated forcings. That would give us predictions from the models instead of projections, since the input would be observations of forcings, not predictions of forcings: actual tests of the models. We could do this with older models to get multi-decade predictions of temperature, which could be compared with observations. These would be technically hindcasts, but more useful than those used today because they test the models with out-of-sample data,
i.e. data not available when they were originally run. Working with an eminent climate scientist, I wrote up such a proposal to do this: http://fabiusmaximus.com/2015/09/24/scientists-restart-climate-change-debate-89635/ These results might help break the gridlock in the climate policy debate. At least it would be a new effort to do so, since the debate has degenerated into a cacophony; each side blames the other for this, both with some justification.

Editor of the Fabius Maximus website Posted Jan 8, 2016 at 4:03 PM | Permalink | Reply
Follow-up to my comment: this kind of test might be the best way to reconcile the gap between models' projections and observations. As the comments here show, the current debate runs in circles at high speed. New data and new perspectives might help.

Steve McIntyre Posted Jan 8, 2016 at 6:18 PM | Permalink
One of the curiosities to the assertion that actual forcings have undershot those in the model scenarios is that actual CO2 emissions are at the very top end of model scenarios. So any forcing shortfall is not due to CO2. The supposed undershoot goes back, once again, to aerosols, which unfortunately are not reported by independent agencies on a regular basis. The argument is that negative forcing from actual aerosols has been much greater than projected: the same sort of argument made by Hansen years ago to explain the same problem.

Steve McIntyre Posted Jan 8, 2016 at 6:14 PM | Permalink | Reply
"We could do this with older models to get multi-decade predictions of temperature, which could be compared with observations." Some time ago I did this exercise using the simple relationship in Guy Callendar's long-ago article and subsequent forcing. In subsequent terms, it was low sensitivity. It outperformed all the GCMs when all were centered on 1920-40.

Editor of the Fabius Maximus website Posted Jan 8, 2016 at 6:44 PM | Permalink
Steve, exercises similar to yours have been done several times, but with inconclusive results; no effect on
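A Callendar-style hindcast of the kind mentioned above amounts to a simple logarithmic dependence of warming on CO2. The sketch below uses an assumed coefficient of roughly 2 C per doubling (i.e. "low sensitivity" by GCM standards) and illustrative CO2 concentrations; these are stand-ins, not Callendar's actual coefficients or the values used in the exercise described in the comment:

```python
import math

SENSITIVITY_PER_DOUBLING = 2.0   # assumed deg C per doubling, chosen for illustration
C0 = 301.0                       # assumed baseline CO2 (ppm), roughly the 1920-40 centering period
co2_ppm = {1940: 311.0, 1980: 339.0, 2015: 400.0}   # illustrative concentrations, ppm

for year, conc in co2_ppm.items():
    # Logarithmic relationship: warming proportional to log2 of the concentration ratio
    warming = SENSITIVITY_PER_DOUBLING * math.log(conc / C0) / math.log(2)
    print(f"{year}: predicted warming vs 1920-40 baseline ~ {warming:+.2f} C")
```

This ignores non-CO2 forcings and ocean lag entirely; it is only meant to show the shape of the calculation, not to reproduce the result described in the comment.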

    Original URL path: http://climateaudit.org/2016/01/05/update-of-model-observation-comparisons/ (2016-02-08)

  • cmip5 « Climate Audit
    this year David Whitehouse of GWPF drew attention to a striking decrease in the UK Met Office decadal temperature forecast that had been quietly changed by the Met Office on Christmas Eve. Whitehouse's article led to some contemporary interest in Met Office decadal forecasts. The Met Office responded (see here). Whitehouse was also challenged. By Steve McIntyre | Posted in Modeling, UK Met Office, Uncategorized | Also tagged decadal forecast, doug smith, hadcm3, hadgem2, hadgem3, met office, tollefson | Comments (88)

    Original URL path: http://climateaudit.org/tag/cmip5/ (2016-02-08)


