Update of Model-Observation Comparisons « Climate Audit
…the tow path under the foot bridge many times a day. Most times the mule shied at the bridge and Paddy had to drag it along. One day, with a bright idea, he took a shovel to the path and took about six inches of gravel out from under the bridge. His mate Patrick watched him work, then declared that the fix would not work: "Paddy, that mule, it's his ears is too long, not his legs." I'm still having conceptual problems re the meaning of the mean and variance of an assemblage of model runs. There is a lot of distance between the mule's ears hanging down and pointing up. I trust that you are well on the path to recovery.

…and Then There's Physics | Posted Jan 6, 2016 at 6:37 AM | Permalink | Reply

Have the models in the comparison been redone with the updated forcings, as suggested in this paper?

opluso | Posted Jan 6, 2016 at 8:59 AM | Permalink | Reply

CMIP5 has been used by numerous peer-reviewed papers, so this question seems like another red herring. Models are constantly being updated and modified. Surface temperature anomaly estimates (which, by the way, should always display an error range/confidence interval) are frequently revised as well. The snapshot comparison displayed in this post is useful nonetheless.

…and Then There's Physics | Posted Jan 6, 2016 at 9:03 AM | Permalink | Reply

I know the snapshot is useful, but the question of updated forcings is a valid one. As I understand it, the original CMIP5 runs were done using forcings that were known (or that weren't guesses) up until 2005, and then estimated forcings for the period after 2005. It seems that the actual forcings post-2005 (and some of the pre-2005 forcings) are in reality different to what was assumed. Given that the goal of the models is not to predict what the change in forcings will be, but what the response will be to the change in forcings, updating the forcings seems like an important thing to do if you want to do a proper comparison between the models and the observations.

Ron Graf | Posted Jan 6, 2016 at 9:39 AM | Permalink

"updating the forcings
seems like an important thing to do if you want to do a proper comparison between the models and the observations"

This is a relevant point: CMIP5 gets periodically adjusted, particularly for volcanic aerosol cooling. The 1991-1994 dip in the plotted CMIP5 in the first figure at top is surely the adjustment post-Mt Pinatubo. The CMIP5 protocol is not to predict volcanic events. This leaves the projection always at worst case, intentionally, for the future.

opluso | Posted Jan 6, 2016 at 11:05 AM | Permalink

aTTP: As you point out, CMIP5 is circa 2005. So the proper comparison is between observed temps (well, HadCRUT4.4 and/or RSS) and post-2005 model projections. Not by coincidence, that is approximately the period during which models begin to consistently overestimate warming. The earlier years are just eye candy for the unwary.

Steve McIntyre | Posted Jan 6, 2016 at 1:04 PM | Permalink

"Not by coincidence, that is approximately the period during which models begin to consistently overestimate warming."

Actually, this sort of problem began much earlier. The first patch was Hansen's discovery of aerosol cooling.

Steve McIntyre | Posted Jan 6, 2016 at 12:58 PM | Permalink

One of the large problems in forcings is trying to locate data on actual forcings (other than CO2) on a basis consistent with the forcings in the underlying model. Can you tell me where I can find the aerosol forcing used in, say, a HadGEM run, and then the observed aerosols? Also, data for observed forcings that are published on a timely basis, and not as part of an ex post reconciliation exercise? I've spent an inordinate amount of time scouring for forcing data. I'm familiar with the obvious datasets, but they are not satisfactory.

stevefitzpatrick | Posted Jan 6, 2016 at 5:21 PM | Permalink

Steve McIntyre: "I'm unconvinced that the physics precludes lower sensitivity models."

Yes, modelers make choices for parameters (consistent with physics) which influence the models, and there for certain is a lot of room for different choices, as evidenced by the comically wide
range of sensitivity values diagnosed by different physics-based, state-of-the-art GCMs. The problem is that the modelers appear unwilling to incorporate reasonable external constraints on critical factors like aerosol effects and the rate of ocean heat accumulation. Seems to me a couple of very important questions are being neither asked nor answered: Do the individual models' heat accumulations match reasonably well the measured warming accumulation from Argo? Do the aerosol effects which each model generates align reasonably well with the best estimates of net aerosol effects from aerosol experts (say, those who contributed to AR5)? My guess is that were these questions asked and answered, it would be clear why the models project much more warming than has actually been observed: parameter choices which lead to too much sensitivity, combined with too-high aerosol offsets and/or too much heat accumulation. Some feet need to be put to the fire, or the models ignored.

dpy6629 | Posted Jan 6, 2016 at 5:32 PM | Permalink

Yes, SteveF, holding feet to the fire is called for. My informants tell me GCMs are intensely political and not a career-enhancement vehicle. DOE is building a new one by 2017, but they have apparently been told in very clear terms not to stray too far from what existing models use. It is depressing and sad. By contrast, turbulence modelers are generally more scientific and open-minded.

dpy6629 | Posted Jan 6, 2016 at 10:36 PM | Permalink

Ken and Steve: it seems to me that the main argument for constructing low-sensitivity models is to understand the effects of the various choices, and there are so many in a GCM that the sensitivity to these choices is, I believe, badly understudied and under-reported. That is true of turbulence models too: modelers know these things, but they are almost never reported in the literature. A careful and systematic study would be a huge contribution, and such a study has been started at NASA. However, large resources will be needed to do a rigorous job. The real issue
is the uncertainty in the models, and since all the models are strongly related in terms of methods and data used, the usual 95% confidence interval is surely an underestimate, possibly a bad underestimate. This is what we found for CFD: the models are closely related, and yet the variety of answers can be very large. We did study some methodological choices as well, but it turns out that it's really difficult to isolate the uncertainty in the underlying turbulence models and methods, because there are so many other sources of uncertainty, such as grid density, level of convergence, etc. I personally don't see how it is possible to really rigorously tune parameters in a climate model, given the incredibly coarse grid sizes and the limited time-integration periods achievable on current computers.

Alberto Zaragoza Comendador | Posted Jan 7, 2016 at 5:05 PM | Permalink

Potsdam Institute has a database (actually ATTP gave me the link). Not updated since 2011, apparently: http://www.pik-potsdam.de/~mmalte/rcps

I downloaded the concentration and forcing Excels for RCP6. The former says 400 ppm CO2-eq for 2014, which is 1.9 W/m², assuming 3.7 W/m² per doubling of CO2. But the forcing Excel disagrees: it says 2.2 W/m² for 2014. So I wouldn't trust this stuff very much.

Steve: that is not what I was asking for. I am completely aware of RCP projections. My request was for OBSERVED data in a format consistent with IPCC projections. Giving me back the IPCC projections is not responsive. It is too typical of people like ATTP to give an obtuse and unresponsive answer. Also, there is an important difference between EMISSIONS and CONCENTRATION. AR5 seems to have taken a step back from SRES in not providing EMISSION scenarios.

Alberto Zaragoza Comendador | Posted Jan 7, 2016 at 6:52 PM | Permalink

Well, shame on me: the Potsdam website has files created in 2011, but the actual concentration data is indeed only for pre-2005; from that year on it shows RCPs. So everybody else, ignore that link unless you have some fondness for historical methane forcing.
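[Alberto's back-of-envelope conversion above can be checked directly. With the common logarithmic approximation, the forcing relative to a pre-industrial concentration C0 is ΔF = F2x · log2(C/C0). The baseline of 278 ppm below is a conventional pre-industrial value, an assumption on my part; the RCP files may use a slightly different one:]

```python
import math

def co2_forcing(c_ppm, c0_ppm=278.0, f2x=3.7):
    """Radiative forcing (W/m^2) for a CO2-eq concentration c_ppm,
    using the standard logarithmic approximation: f2x W/m^2 per
    doubling relative to the pre-industrial baseline c0_ppm."""
    return f2x * math.log(c_ppm / c0_ppm, 2)

print(round(co2_forcing(400.0), 2))  # ~1.94, matching the "1.9 W/m^2" figure
```

[Under the same assumptions, the forcing file's 2.2 W/m² would instead correspond to roughly 420 ppm CO2-eq, which quantifies the disagreement Alberto notes between the two spreadsheets.]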
…and Then There's Physics | Posted Jan 6, 2016 at 11:10 AM | Permalink | Reply

"So the proper comparison is between observed temps (well, HadCRUT4.4 and/or RSS) and post-2005 model projections."

No, the models are attempting to determine what will happen for a given concentration/forcing pathway. If the concentration/forcing pathway turns out to be different to what was initially assumed, then this should be updated in the models before doing the comparison. Essentially, the concentration/forcing pathway is conditional: the model output is really saying "if the concentration/forcing pathway is what we assumed, this is what we would predict." Hence, if the concentration/forcing pathway turns out to be different, doing the comparison without updating the forcings is not a like-for-like comparison.

opluso | Posted Jan 6, 2016 at 11:30 AM | Permalink

Your herring is growing more red by the minute. The various concentration/forcing pathways are not the only source of flawed model projections.

MikeN | Posted Jan 6, 2016 at 7:18 PM | Permalink

"Model output is grounded in physics and not adjusted." The graph is from the NRC report and is based on "simulations with the U. of Victoria climate-carbon model, tuned to yield the mid-range IPCC climate sensitivity": http://www.realclimate.org/index.php/archives/2011/11/keystone-xl-game-over

Models can definitely produce low-sensitivity outputs. An older version of one developed by Prinn, known for high-sensitivity models, had parameters you can set for oceans and aerosols and clouds, and certain reasonable levels of these would produce warming close to 1°C by 2100.

MikeN | Posted Jan 6, 2016 at 7:19 PM | Permalink

It is reasonable to evaluate models based on updated emissions scenarios. I have advocated that models should be frozen, with code, to allow for such evaluations at a later time.

…and Then There's Physics | Posted Jan 6, 2016 at 11:35 AM | Permalink | Reply

"The various concentration/forcing pathways are not the only source of flawed model projections."

The concentration/forcing
pathways aren't model projections at all; they're inputs. That's kind of the point. It's a bit like saying "I predict that if you drop a cannonball from the 10th floor of a building, it will take 2.5 s to reach the ground," and then claiming that the prediction was wrong because it only took 2 s when you dropped it from the 7th floor.

Steve: as I understand it, the scenarios are supposed to be relevant and realistic. And rather than CO2 emissions being at the low end of the scenarios, they are right up at the top end of the scenarios from the earlier IPCC reports.

Ron Graf | Posted Jan 6, 2016 at 2:20 PM | Permalink

ATTP: "the model output is really saying if the concentration/forcing pathway is what we assumed, this is what we would predict"

This is the Gavin Schmidt game of separating projection from prediction. He didn't invent it; economists did. It is not fair in science to say, when predictions are correct, that they are validation, and when they are wrong, that they were qualified projections. That creates an unfalsifiable argument, which by Karl Popper's definition is the opposite of science.

Steve: I'm considering putting Popper on my list of proscribed words.

…and Then There's Physics | Posted Jan 6, 2016 at 2:34 PM | Permalink

Ron, what? Let's say I develop a model that is used to understand how some system will respond to some kind of externally imposed change. I then assume something about what that external change will probably be, and I run the model. I then report that if the change is X, the model suggests that Y will happen. If, however, in reality the imposed change is different to what I assumed, then, if I want to check how good the model is, I should redo it with the actual external change. The point is that climate models are not being used to predict what we will do AND what the climate will do; they're really only being used to understand the climate. That what was assumed about what we would do (the concentration pathway) turns out to be different to what we actually did doesn't mean that the models were somehow wrong.
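[As an aside, the numbers in the cannonball analogy above are roughly self-consistent under free fall, ignoring drag: h = ½gt², so a 2.5 s drop corresponds to about 31 m and a 2.0 s drop to about 20 m, a plausible 10th-floor vs. 7th-floor difference. The floor heights are my inference, not stated in the thread:]

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_height(t_s):
    """Distance fallen in t_s seconds under free fall (no drag)."""
    return 0.5 * G * t_s**2

def drop_time(h_m):
    """Free-fall time from height h_m, inverting h = (1/2) g t^2."""
    return math.sqrt(2.0 * h_m / G)

print(round(drop_height(2.5), 1))  # ~30.7 m
print(round(drop_height(2.0), 1))  # ~19.6 m
```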
Steve McIntyre | Posted Jan 6, 2016 at 2:48 PM | Permalink

"That what was assumed about what we would do (the concentration pathway) turns out to be different to what we actually did doesn't mean that the models were somehow wrong."

If observed CO2 emissions have been at the top end of scenarios (as they have been) and observed temperatures have been at the very bottom end of scenarios, it seems reasonable to consider whether the models are parameterized too warm. From a distance, it seems like far more effort is being spent arguing against that possibility than in investigating the properties of lower-sensitivity models.

opluso | Posted Jan 6, 2016 at 2:36 PM | Permalink

"The concentration/forcing pathways aren't model projections at all; they're inputs."

I didn't say the pathways were projections. I said they were not the only source of flaws in model projections. Obviously, if the feedbacks and physics are poorly modeled, you can project significant warming even with a lower concentration pathway. Bottom line: if CMIP5 was good enough to demand global economic restructuring, I think it's good enough for the purposes of this post.

…and Then There's Physics | Posted Jan 6, 2016 at 2:47 PM | Permalink

opluso, none of what you say is really an argument against updating the concentration pathway if you know that what actually happened is different to what you initially assumed.

…and Then There's Physics | Posted Jan 6, 2016 at 2:54 PM | Permalink

"From a distance, it seems like far more effort is being spent arguing against that possibility than in investigating the properties of lower-sensitivity models."

Except climate sensitivity is an emergent property of the models. You can't simply create a lower-sensitivity model if the physics precludes such an outcome. As you have probably heard before, the model spread is intended to represent a region where the observed temperatures will fall 95% of the time. If the observed temperatures track along or outside the lower
boundary for more than 5% of the time, there would certainly be a case for removing some of the higher-sensitivity models and trying to understand why the models tend to produce sensitivities that are higher than seems reasonable, or trying to construct physically plausible models with lower sensitivity. However, this doesn't appear to be what is happening, and hence the case for trying to artificially construct lower-sensitivity models seems, IMO, to be weak.

Steve McIntyre | Posted Jan 6, 2016 at 3:27 PM | Permalink

"if the physics precludes such an outcome"

I'm unconvinced that the physics precludes lower-sensitivity models. In any other walk of life, specialists would presently be exploring their parameterizations to see whether they could produce a model with lower sensitivity that still meets other specifications. The seeming stubbornness of the climate community on this point is really quite remarkable. There are dozens of parameterizations within the models, and there is obviously considerable play within these parameterizations to produce results of different sensitivity, as evidenced by the spread that includes very hot models like Andrew Weaver's. The very lowest-sensitivity IPCC models are still in error. Opposition to investigation of even lower-sensitivity parameterizations strikes me as more ideological than objective.

Steve McIntyre | Posted Jan 6, 2016 at 5:18 PM | Permalink

Ken Rice says: "As you have probably heard before, the model spread is intended to represent a region where the observed temperatures will fall 95% of the time."

Actually, I haven't heard that before. My understanding is that the models were independently developed and represented an "ensemble of opportunity," rather than being designed to cover a 5-95% spread. What, if any, is your support for claiming that the model spread is "intended" to represent a region where the observed temperatures will fall 95% of the time? Can you provide a citation to IPCC or an academic paper?

Steve: in responding to Rice's outlandish assertion, I expressed
myself poorly above. There is no coordination among developers so that the models cover a space, but it is incorrect to say that they are independently developed. There are common elements to most models, and systemic bias is a very real possibility, as acknowledged by Tim Palmer.

…and Then There's Physics | Posted Jan 6, 2016 at 3:34 PM | Permalink

"I'm unconvinced that the physics precludes lower sensitivity models."

I didn't say that it did preclude it; I simply said "if" it precludes it. The problem, as I see it, is that if we actively start trying to develop models that have low sensitivity, then that's not really any different to actively trying to develop ones that have high sensitivity. Even though there are parametrisations, they are still typically constrained in some way.

"Opposition to investigation of even lower sensitivity parameterizations strikes me as more ideological than objective."

What makes you think there's opposition? Maybe it's harder than it seems to generate such models, and maybe the people who work on this don't think that there is yet a case for actively doing so.

Steve McIntyre | Posted Jan 6, 2016 at 5:12 PM | Permalink

"maybe people who work on this don't think that there is yet a case for actively doing so"

If the extraordinary and systemic overshoot of models in the period 1979-2015 doesn't constitute a case for re-opening examination of the parameter selections, I don't know what would. In other fields (e.g. the turbulence example cited by a reader), specialists would simply re-open the file rather than argue against it.

opluso | Posted Jan 6, 2016 at 4:41 PM | Permalink

"None of what you say is really an argument against updating the concentration pathway if you know that what actually happened is different to what you initially assumed."

In fact, I pointed out that in far more important situations (e.g. COP21), CMIP5 projections have been deemed acceptable. Therefore, in the context of this post, there is simply no need to compile a CMIP6 database before examining the existing hypotheses. I
strongly suspect that even if SMc had satisfied your desire for an updated CMIP, you would say he should wait for HadCRUT5.

dpy6629 | Posted Jan 6, 2016 at 5:05 PM | Permalink

ECS is an emergent property, just as boundary-layer health is for a turbulence model. Developers of models whom I know personally are much smarter than Ken Rice seems to believe. They know how to tweak the parameters, or the functional forms in the models, to change the important emergent properties. For climate models, where many of the emergent properties lack skill, one needs to choose the ones you care most about. According to Richard Betts, for the Met Office model they care most about weather-forecast skill. Toy models of planet formation are not the same ballgame at all.

…and Then There's Physics | Posted Jan 6, 2016 at 5:13 PM | Permalink

"Developers of models who I know personally are much smarter than Ken Rice seems to believe."

I've no idea why you would say this, as I've said nothing about how smart or not model developers might be. All I do know is that no one can be as smart as you seem to think you are.

Steve: this is a needlessly chippy response. The commenter had made a useful substantive point ("They know how to tweak the parameters or the functional forms in models to change the important emergent properties") in response to your assertion that the models were grounded on physics. Do you have a substantive response to this seemingly sensible comment?

dpy6629 | Posted Jan 6, 2016 at 5:22 PM | Permalink

"Having an almost infinitely better understanding of CFD modeling than you, Ken" is more accurate. Modelers could produce low-ECS models if they wanted to do so. I share Steve M's puzzlement as to why they don't. There are some obvious explanations, having to do with things like the terrible job models do with precipitation, that may be higher priorities.

Steve: I'd prefer that you and Ken Rice tone down the comparison of, shall we say, manliness.

…and Then There's Physics | Posted Jan 6, 2016 at 5:20 PM | Permalink

"If the extraordinary and systemic
overshoot of models in the period 1979-2015 doesn't constitute a case for re-opening examination of the parameter selections, I don't know what would."

Have the models had their concentration/forcing pathways updated? Have you considered sampling bias in the surface temperature dataset? Have you considered uncertainties in the observed trends? Have you considered the analysis in which only the models whose internal variability is in phase with the observations show less of a mismatch? Maybe your supposed "gotcha" isn't quite as straightforward as you seem to think it is.

"In other fields (e.g. the turbulence example cited by a reader)"

Oooh, I wonder who that could be.

"specialists would simply re-open the file rather than argue against it"

I don't know of anyone who's specifically arguing against it. All I was suggesting is that it may not be as straightforward as it seems. If a group of experts are not doing what you think they should be doing, maybe they have a good reason for not doing so.

Steve McIntyre | Posted Jan 6, 2016 at 5:29 PM | Permalink

"maybe they have a good reason for not doing so"

Perhaps. What is it? On the other hand, there's a lot of ideological investment in high-sensitivity models, and any backing down would be embarrassing. Had there been less publicity, it would have been easier to report on lower-sensitivity models, but unfortunately this would undoubtedly be felt, in human terms, as some sort of concession to skeptics.

The boxplot comparisons deal with trends over the 1979-2015 period. This is a long enough period that precise phase issues are not relevant. Further, the comparison in the present post ends on a very large El Niño and is the most favorable endpoint imaginable to the modelers.

…and Then There's Physics | Posted Jan 6, 2016 at 5:24 PM | Permalink

"Having almost infinitely better understanding of CFD modeling than you, Ken, is more accurate."

I rest my case.

…and Then There's Physics | Posted Jan 6, 2016 at 5:48 PM | Permalink

"On the other hand, there's a lot of
ideological investment in high-sensitivity models, and any backing down would be embarrassing."

I think there is a great deal of ideological desire for low climate sensitivity too. All I'm suggesting is that there are many factors that may be contributing to the mismatch, and that it may not be quite as simple as it at first seems. To add to what I already said, there's also the blending issue highlighted by Cowtan et al. As for your 95% question: you're correct, I think, that the models are intended to be independent, so I wasn't suggesting that they're somehow chosen/tuned so that the observations would stay within the spread 95% of the time (although I do remember having discussions with some, maybe Ed Hawkins, who were suggesting that some models are rejected for various reasons). I was suggesting that if the observations stayed outside for more than 5% of the time, then we'd have a much stronger case for arguing that the models have an issue, given that the observations would be outside the expected range for much longer than would be reasonable.

Steve McIntyre | Posted Jan 6, 2016 at 7:39 PM | Permalink

"that the models are intended to be independent"

In responding to your assertion that the models were designed to cover a model space, I did not mean to suggest that the models are independent in a statistical sense. For example, I said that the ensemble was one "of opportunity." The models are not independent, as elements are common to all of them, a point acknowledged by Tim Palmer somewhere. The possibility of systemic bias is entirely real, and IMO there is convincing evidence that there is. I've added the following note to my earlier comment to clarify: "in responding to Rice's outlandish assertion, I expressed myself poorly above. There is no coordination among developers so that the models cover a space, but it is incorrect to say that they are independently developed. There are common elements to most models, and systemic bias is a very real possibility, as acknowledged by Tim Palmer."
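[The 5-95% envelope test debated above can be made concrete: given an ensemble of model series and an observed series, count the fraction of time steps at which the observation falls outside the ensemble's 5th-95th percentile range. A minimal sketch with synthetic data; the arrays here are illustrative stand-ins, not CMIP5 output:]

```python
import numpy as np

def outside_fraction(models, obs, lo=5, hi=95):
    """Fraction of time steps where obs lies outside the [lo, hi]
    percentile envelope of the model ensemble.
    models: array of shape (n_models, n_times); obs: (n_times,)."""
    lower = np.percentile(models, lo, axis=0)
    upper = np.percentile(models, hi, axis=0)
    return float(np.mean((obs < lower) | (obs > upper)))

# Illustrative ensemble: 30 "models" with a warmer trend than "obs"
rng = np.random.default_rng(0)
t = np.arange(444)  # months, roughly 1979-2015
models = 0.0020 * t + rng.normal(0.0, 0.15, size=(30, t.size))
obs = 0.0013 * t + rng.normal(0.0, 0.10, size=t.size)
print(outside_fraction(models, obs))
```

[If the observations and models shared the same underlying trend, this fraction should hover near 0.10 (5% exceedance on each side); a persistently larger value, concentrated on the low side of the envelope, is the pattern the thread is arguing about.]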
dpy6629 | Posted Jan 6, 2016 at 8:19 PM | Permalink

We recently did an analysis of CFD models for some very simple test cases and discovered that the spread of results was surprisingly large. These models also are all based on the same boundary-layer correlations and data. This spread is virtually invisible in the literature. My belief is that GCMs are likewise all based, roughly, on common empirical and theoretical relationships. I also suspect that the literature may not give a full range of possible model settings or types, and may understate the uncertainty, but this would be impossible to prove without a huge amount of work.

Ron Graf | Posted Jan 6, 2016 at 6:21 PM | Permalink

"Except climate sensitivity is an emergent property of the models."

The argument that transient climate response (TCR) is an emergent property of the models rests on the assumption that all the model parameters are constrained by lab-validated physics ("it's just physics," as I've heard said). What I find remarkable is that a scientific body approved a protocol that leaves the mechanics of the physics blind to outside review. The CMIP5 models are in fact such black boxes that TCR does not emerge except through multiple linear regressions on the output of multiple realizations. In other words, one run gives a TCR; the next run can give a different one. One can manipulate TCR not only by selective input, but also by selective choice of output, or by the ensemble mix and its method of analysis. If it were "just physics," why are there 52 model pairs, each producing unique responses?

kneel63 | Posted Jan 6, 2016 at 7:43 PM | Permalink

"The concentration/forcing pathways aren't model projections at all; they're inputs."

Indeed. Aren't the models in CMIP run using several scenarios (RCP8.5, RCP6, RCP4.5 and so on)? A valid comparison might then be: if real emissions fell between RCP4.5 and RCP6, then let's compare those model runs to your preferred measurement metric. As Steve says, if the model runs using RCPs that were consistently low (real forcing was higher) and
temps are consistently high (actual temps were lower), AND runs using RCPs that were consistently higher (real forcing was lower) project even higher temps (i.e. more wrong), then it is reasonable to assume that using actual forcing data would fall somewhere in between, and that therefore the models are running too hot. I have no doubt that even should you agree this is correct, you will then suggest that, e.g., we should only use those model runs that get ENSO, PDO etc. correct, or... It would be nice if we had an a priori agreed method to evaluate model performance, because it certainly seems to me that when the models appeared to be correct it was evidence of goodness, but when they are wrong it's not evidence of badness.

Steve McIntyre | Posted Jan 6, 2016 at 8:24 PM | Permalink

One of the large problems in trying to assess the degree to which model overshooting can be attributed to forcing projections, rather than sensitivity, is that there is no ongoing accounting of actual forcings in a format consistent with the RCP projections. This sort of incompatibility is not unique to climate: I've seen numerous projects in which the categories in the plan are not consistent with the accounting categories used in operations. This is usually a nightmare in trying to do plan-vs-actual. But given the size of the COP21 decisions, it is beyond ludicrous that there is no regular accounting of forcing. The RCP scenarios contain 53 forcing columns (some are subtotals). These are presumably calculated from concentration levels, which in turn depend on emission levels. But I challenge ATTP, or anyone else, to provide me with a location in which the observed values of these forcings are archived on a contemporary basis. To my knowledge, they aren't. All the forcings that matter ought to be measured and reported regularly at NOAA, which reports forcings for only a few GHGs and does not report emissions.

1. TOTAL_INCLVOLCANIC_RF: Total anthropogenic and natural radiative forcing
2. VOLCANIC_ANNUAL_RF: Annual mean volcanic stratospheric aerosol forcing
3. SOLAR_RF: Solar irradiance forcing
4. TOTAL_ANTHRO_RF: Total anthropogenic forcing
5. GHG_RF: Total greenhouse gas forcing (CO2, CH4, N2O, HFCs, PFCs, SF6 and Montreal Protocol gases)
6. KYOTOGHG_RF: Total forcing from greenhouse gases controlled under the Kyoto Protocol (CO2, CH4, N2O, HFCs, PFCs, SF6)
7. CO2CH4N2O_RF: Total forcing from CO2, methane and nitrous oxide
8. CO2_RF: CO2 forcing
9. CH4_RF: Methane forcing
10. N2O_RF: Nitrous oxide forcing
11. FGASSUM_RF: Total forcing from all fluorinated gases controlled under the Kyoto Protocol (HFCs, PFCs, SF6; i.e. columns 13-24)
12. MHALOSUM_RF: Total forcing from all gases controlled under the Montreal Protocol (columns 25-40)
13-24. Fluorinated gases controlled under the Kyoto Protocol
25-40. Ozone-depleting substances controlled under the Montreal Protocol
41. TOTAER_DIR_RF: Total direct aerosol forcing (aggregating columns 42 to 47)
42. OCI_RF: Direct fossil-fuel aerosol (organic carbon)
43. BCI_RF: Direct fossil-fuel aerosol (black carbon)
44. SOXI_RF: Direct sulphate aerosol
45. NOXI_RF: Direct nitrate aerosol
46. BIOMASSAER_RF: Direct biomass-burning-related aerosol
47. MINERALDUST_RF: Direct forcing from mineral dust aerosol
48. CLOUD_TOT_RF: Cloud albedo effect
49. STRATOZ_RF: Stratospheric ozone forcing
50. TROPOZ_RF: Tropospheric ozone forcing
51. CH4OXSTRATH2O_RF: Stratospheric water vapour from methane oxidisation
52. LANDUSE_RF: Land-use albedo
53. BCSNOW_RF: Black carbon on snow

Matt Skaggs | Posted Jan 7, 2016 at 11:11 AM | Permalink

Steve wrote: "But I challenge ATTP, or anyone else, to provide me with a location in which the observed values of these forcings are archived on a contemporary basis. To my knowledge, they aren't. All the forcings that matter ought to be measured and reported regularly at NOAA, which reports forcings for only a few GHGs and does not report emissions."

I took a deep dive looking for this information as well, for the essay I wrote for Climate Etc. If the IPCC were to serve one major useful purpose, it would have been to develop a global system for collecting and collating direct measurement data on forcings.
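[The subtotal structure of the columns Steve lists (e.g. column 41 aggregating columns 42-47) lends itself to a simple consistency check when working with the RCP forcing tables. A sketch, assuming the data have already been loaded into a pandas DataFrame with the column names above; the numbers below are an illustrative toy table, not RCP values, and the layout of the actual Excel/.dat files varies, so the loading step is omitted:]

```python
import pandas as pd

# Toy stand-in for a few RCP forcing columns (W/m^2); values illustrative only
df = pd.DataFrame({
    "TOTAER_DIR_RF":  [-0.48, -0.49],  # column 41: direct-aerosol subtotal
    "OCI_RF":         [-0.08, -0.09],
    "BCI_RF":         [ 0.25,  0.26],
    "SOXI_RF":        [-0.40, -0.41],
    "NOXI_RF":        [-0.10, -0.10],
    "BIOMASSAER_RF":  [-0.05, -0.05],
    "MINERALDUST_RF": [-0.10, -0.10],
})
components = ["OCI_RF", "BCI_RF", "SOXI_RF", "NOXI_RF",
              "BIOMASSAER_RF", "MINERALDUST_RF"]

# Column 41 should equal the sum of columns 42-47; flag any mismatch
residual = (df["TOTAER_DIR_RF"] - df[components].sum(axis=1)).abs().max()
print(residual < 1e-9)  # True when the subtotal is internally consistent
```

[The same pattern applies to the other subtotal columns (5, 6, 7, 11, 12), and is a cheap sanity check before comparing any "observed" forcing compilation against the scenario files.]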
I say "would have been" because this effort should have started in the 90s, and here we are in 2016 with nothing more than scattered chunks of data in various formats.

davideisenstadt | Posted Jan 7, 2016 at 12:22 PM | Permalink

Steve, I think your point regarding the independence of the various iterations of current GCMs was well put. Given that they all share data used as inputs and, although independently developed, have shared structural characteristics, it's a misapprehension to regard them as independent. Anyway, the tests for statistical independence have nothing whatsoever to do with the provenance of the respective models; it's their behavior that tells the tale, and they all exhibit a substantial degree of covariance. That is to say, they aren't independent. Ken Rice should know better than to peddle this tripe.

Jeff Norman | Posted Jan 9, 2016 at 2:15 PM | Permalink

Matt, I've said it before: if the IPCC truly cared about the future climate, there would be a WG IV dealing with sources of error, uncertainties, and recommendations for improving our climate knowledge. Very basic things, like funding weather-monitoring stations in those global voids.

David L. Hagen | Posted Jan 28, 2016 at 2:27 PM | Permalink

Curry quotes you on Popper in "Insights from Karl Popper: how to open the deadlocked climate debate."

Editor of the Fabius Maximus website | Posted Jan 8, 2016 at 4:01 PM | Permalink | Reply

As ATTP's comments in this thread show, there is great potential in re-running the GCMs with updated forcings. That would give us predictions from the models instead of projections (since the input would be observations of forcings, not predictions of forcings): actual tests of the models. We could do this with older models to get multi-decade predictions of temperature, which could be compared with observations. These would technically be hindcasts, but more useful than those used today, because they test the models with out-of-sample data (i.e. not available when they were
originally run Working with an eminent climate scientist I wrote up such a proposal to do this http fabiusmaximus com 2015 09 24 scientists restart climate change debate 89635 These results might help break the gridlock in the climate policy debate At least it would be a new effort to do so since the debate has degenerated into a cacophony each side blames the other for this both with some justification Editor of the Fabius Maximus website Posted Jan 8 2016 at 4 03 PM Permalink Reply Follow up to my comment this kind of test might be the best way to reconcile the gap between models projections and observations As the comments here show the current debate runs in circles at high speed New data and new perspectives might help Steve McIntyre Posted Jan 8 2016 at 6 18 PM Permalink One of the curiosities to the assertion that actual forcings have undershot those in the model scenarios is that actual CO2 emissions are at the very top end of model scenarios So any forcing shortfall is not due to CO2 The supposed undershot goes back once again to aerosols which unfortunately are not reported by independent agencies on a regular basis The argument is that negative forcing from actual aerosols has been much greater than projected the same sort of argument made by Hansen years ago to explain the same problem Steve McIntyre Posted Jan 8 2016 at 6 14 PM Permalink Reply We could do this with older models to get multi decade predictions of temperature which could be compared with observations Some time ago I did this exercise using the simple relationship in Guy Callendar s long ago article and subsequent forcing In subsequent terms it was low sensitivity It outperformed all the GCMs when all were centered on 1920 40 Editor of the Fabius Maximus website Posted Jan 8 2016 at 6 44 PM Permalink Steve Exercises similar to yours have been done several times but with inconclusive results no effect on the

    Original URL path: http://climateaudit.org/2016/01/05/update-of-model-observation-comparisons/?replytocom=765659 (2016-02-08)

  • Update of Model-Observation Comparisons « Climate Audit
path under the foot bridge many times a day. Most times the mule shied at the bridge and Paddy had to drag it along. One day, with a bright idea, he took a shovel to the path and took about six inches of gravel out from under the bridge. His mate Patrick watched him work, then declared that the fix would not work: "Paddy, that mule, it's his ears is too long, not his legs." I'm still having conceptual problems re the meaning of the mean and variance of an assemblage of model runs. There is a lot of distance between the mule's ears hanging down and pointing up. I trust that you are well on the path to recovery.

and Then There's Physics | Posted Jan 6, 2016 at 6:37 AM | Permalink | Reply

Have the models in the comparison been redone with the updated forcings, as suggested in this paper?

opluso | Posted Jan 6, 2016 at 8:59 AM | Permalink | Reply

CMIP5 has been used by numerous peer-reviewed papers, so this question seems like another red herring. Models are constantly being updated and modified. Surface temperature anomaly estimates (which, by the way, should always display an error range/confidence interval) are frequently revised as well. The snapshot comparison displayed in this post is useful nonetheless.

and Then There's Physics | Posted Jan 6, 2016 at 9:03 AM | Permalink | Reply

I know the snapshot is useful, but the question of updated forcings is a valid question. As I understand it, the original CMIP5 runs were done using forcings that were known (or that weren't guesses) up until 2005, and then estimated forcings for the period after 2005. It seems that the actual forcings post-2005 (and some of the pre-2005 forcings) are in reality different to what was assumed. Given that the goal of the models is not to predict what the change in forcings will be, but what the response will be to the change in forcings, updating the forcings seems like an important thing to do if you want to do a proper comparison between the models and the observations.

Ron Graf | Posted Jan 6, 2016 at 9:39 AM | Permalink

"updating the forcings seems like an important thing to do if you want to do a proper comparison between the models and the observations"

This is a relevant point: the CMIP5 ensemble gets periodically adjusted, particularly for volcanic aerosol cooling. The 1991–1994 dip in plotted CMIP5 in the first figure at top is surely the adjustment post Mt Pinatubo. The CMIP5 protocol is not to predict volcanic events. This leaves the projection always at worst case (intentionally) for the future.

opluso | Posted Jan 6, 2016 at 11:05 AM | Permalink

aTTP: As you point out, CMIP5 is circa 2005. So the proper comparison is between observed temps (well, HadCRUT 4.4 and/or RSS) and post-2005 model projections. Not by coincidence, that is approximately the period during which models begin to consistently overestimate warming. The earlier years are just eye candy for the unwary.

Steve McIntyre | Posted Jan 6, 2016 at 1:04 PM | Permalink

"Not by coincidence, that is approximately the period during which models begin to consistently overestimate warming"

Actually, the sort of problem began much earlier. The first patch was Hansen's discovery of aerosol cooling.

Steve McIntyre | Posted Jan 6, 2016 at 12:58 PM | Permalink

One of the large problems in forcings is trying to locate data on actual forcings, other than CO2, on a basis consistent with forcings in the underlying model. Can you tell me where I can find the aerosol forcing used in, say, a HadGEM run, and then the observed aerosols? Also data for observed forcings that are published on a timely basis, and not as part of an ex post reconciliation exercise. I've spent an inordinate amount of time scouring for forcing data. I'm familiar with the obvious dsets, but they are not satisfactory.

stevefitzpatrick | Posted Jan 6, 2016 at 5:21 PM | Permalink

Steve McIntyre, "I'm unconvinced that the physics precludes lower sensitivity models."

Yes, modelers make choices for parameters consistent with physics which influence the models, and there certainly is a lot of room for different choices, as evidenced by the comically wide range of sensitivity values diagnosed by different physics-based, state-of-the-art GCMs. The problem is that the modelers appear unwilling to incorporate reasonable external constraints on critical factors like aerosol effects and the rate of ocean heat accumulation. Seems to me a couple of very important questions are being neither asked nor answered: Do the individual models' heat accumulations match reasonably well the measured warming accumulation from Argo? Do the aerosol effects which each model generates align reasonably well with the best estimates of net aerosol effects from aerosol experts, say those who contributed to AR5? My guess is that were these questions asked and answered, it would be clear why the models project much more warming than has actually been observed: parameter choices which lead to too much sensitivity, combined with too-high aerosol offsets and/or too much heat accumulation. Some feet need to be put to the fire, or the models ignored.

dpy6629 | Posted Jan 6, 2016 at 5:32 PM | Permalink

Yes SteveF, holding feet to the fire is called for. My informants tell me GCMs are intensely political and not a career-enhancement vehicle. DOE is building a new one by 2017, but have apparently been told in very clear terms not to stray too far from what existing models use. It is depressing and sad. By contrast, turbulence modelers are generally more scientific and open-minded.

dpy6629 | Posted Jan 6, 2016 at 10:36 PM | Permalink

Ken and Steve, it seems to me that the main argument for constructing low-sensitivity models is to understand the effects of the various choices, and there are so many in a GCM that the sensitivity to these choices is, I believe, badly understudied and under-reported. That is true of turbulence models too: modelers know these things, but they are almost never reported in the literature. A careful and systematic study would be a huge contribution, and such a study has been started at NASA. However, large resources will be needed to do a rigorous job. The real issue is the
uncertainty in the models, and since all the models are strongly related in terms of methods and data used, the usual 95% confidence interval is surely an underestimate, and possibly a bad underestimate. This is what we found for CFD. The models are closely related, and yet the variety of answers can be very large. We did study some methodological choices as well. But it turns out that it's really difficult to isolate the uncertainty in the underlying turbulence models and methods, because there are so many other sources of uncertainty, such as grid density, level of convergence, etc. I personally don't see how it is possible to really rigorously tune parameters in a climate model given the incredibly coarse grid sizes and the limited time-integration times that are achievable on current computers.

Alberto Zaragoza Comendador | Posted Jan 7, 2016 at 5:05 PM | Permalink

Potsdam Institute has a database; actually, ATTP gave me the link. Not updated since 2011, apparently: http://www.pik-potsdam.de/~mmalte/rcps

I downloaded the concentration and forcing Excels for RCP6. The former says 400 ppm CO2-eq for 2014, which is 1.9 W/m2 assuming 3.7 W/m2 per doubling of CO2. But the forcing Excel disagrees: it says 2.2 W/m2 for 2014. So I wouldn't trust this stuff very much.

Steve: that is not what I was asking for. I am completely aware of RCP projections. My request was for OBSERVED data in a format consistent with IPCC projections. Giving me back the IPCC projections is not responsive. It is too typical of people like ATTP to give an obtuse and unresponsive answer. Also, there is an important difference between EMISSIONS and CONCENTRATION. AR5 seems to have taken a step back from SRES in not providing EMISSION scenarios.

Alberto Zaragoza Comendador | Posted Jan 7, 2016 at 6:52 PM | Permalink

Well, shame on me: the Potsdam website has files created in 2011, but the actual concentration data is indeed only for pre-2005; from that year on it shows the RCPs. So everybody else, ignore that link unless you have some fondness for historical methane forcing.

and Then There's Physics | Posted Jan 6, 2016 at 11:10 AM | Permalink | Reply

"So the proper comparison is between observed temps (well, HadCRUT 4.4 and/or RSS) and post-2005 model projections"

No, the models are attempting to determine what will happen for a given concentration/forcing pathway. If the concentration/forcing pathway turns out to be different to what was initially assumed, then this should be updated in the models before doing the comparison. Essentially, the concentration/forcing pathway is conditional; the model output is really saying: if the concentration/forcing pathway is what we assumed, this is what we would predict. Hence, if the concentration/forcing pathway turns out to be different, doing the comparison without updating the forcings is not a like-for-like comparison.

opluso | Posted Jan 6, 2016 at 11:30 AM | Permalink

Your herring is growing more red by the minute. The various concentration/forcing pathways are not the only source of flawed model projections.

MikeN | Posted Jan 6, 2016 at 7:18 PM | Permalink

Model output is grounded in physics and not adjusted. The graph is from the NRC report and is based on simulations with the U of Victoria climate-carbon model, tuned to yield the mid-range IPCC climate sensitivity: http://www.realclimate.org/index.php/archives/2011/11/keystone-xl-game-over

Models can definitely produce low-sensitivity outputs. An older version of one developed by Prinn (known for high-sensitivity models) had parameters you can set for oceans and aerosols and clouds, and certain reasonable levels of these would produce warming close to 1C by 2100.

MikeN | Posted Jan 6, 2016 at 7:19 PM | Permalink

It is reasonable to evaluate models based on updated emissions scenarios. I have advocated that models should be frozen, with code, to allow for such evaluations at a later time.

and Then There's Physics | Posted Jan 6, 2016 at 11:35 AM | Permalink | Reply

"The various concentration/forcing pathways are not the only source of flawed model projections"

The concentration/forcing pathways
aren't model projections at all; they're inputs. That's kind of the point. It's a bit like saying "I predict that if you drop a cannonball from the 10th floor of a building, it will take 2.5 s to reach the ground," and you claim that the prediction was wrong because it only took 2 s when you dropped it from the 7th floor.

Steve: as I understand it, the scenarios are supposed to be relevant and realistic. And rather than CO2 emissions being at the low end of the scenarios, they are right up at the top end of the scenarios from the earlier IPCC reports.

Ron Graf | Posted Jan 6, 2016 at 2:20 PM | Permalink

ATTP: "the model output is really saying: if the concentration/forcing pathway is what we assumed, this is what we would predict"

This is the Gavin Schmidt game of separating projection from prediction. He didn't invent it; economists did. It is not fair in science to say, when predictions are correct, that they are validation, and when they are wrong, that they were qualified projections. That creates an unfalsifiable argument, which by Karl Popper's definition is the opposite of science.

Steve: I'm considering putting Popper on my list of proscribed words.

and Then There's Physics | Posted Jan 6, 2016 at 2:34 PM | Permalink

Ron, what? Let's say I develop a model that is used to understand how some system will respond to some kind of externally imposed change. I then assume something about what that external change will probably be, and I run the model. I then report that if the change is X, the model suggests that Y will happen. If, however, in reality the change that is imposed is different to what I assumed would happen, then if I want to check how good the model is, I should redo it with what the actual external change was. The point is that climate models are not being used to predict what we will do AND what the climate will do. They're really only being used to understand the climate. That what was assumed about what we would do (the concentration pathway) turns out to be different to what we actually did doesn't mean that the models were somehow wrong.

Steve McIntyre | Posted Jan 6, 2016 at 2:48 PM | Permalink

"That what was assumed about what we would do (the concentration pathway) turns out to be different to what we actually did doesn't mean that the models were somehow wrong"

If observed CO2 emissions have been at the top end of scenarios (as they have been) and observed temperatures have been at the very bottom end of scenarios, it seems reasonable to consider whether the models are parameterized too warm. From a distance, it seems like far more effort is being spent arguing against that possibility than in investigating the properties of lower-sensitivity models.

opluso | Posted Jan 6, 2016 at 2:36 PM | Permalink

"The concentration/forcing pathways aren't model projections at all; they're inputs"

I didn't say the pathways were projections. I said they were not the only source of flaws in model projections. Obviously, if the feedbacks and physics are poorly modeled, you can project significant warming even with a lower concentration pathway. Bottom line: if CMIP5 was good enough to demand global economic restructuring, I think it's good enough for the purposes of this post.

and Then There's Physics | Posted Jan 6, 2016 at 2:47 PM | Permalink

opluso, none of what you say is really an argument against updating the concentration pathway if you know that what actually happened is different to what you initially assumed.

and Then There's Physics | Posted Jan 6, 2016 at 2:54 PM | Permalink

"From a distance, it seems like far more effort is being spent arguing against that possibility than in investigating the properties of lower-sensitivity models"

Except climate sensitivity is an emergent property of the models. You can't simply create a lower-sensitivity model if the physics precludes such an outcome. As you have probably heard before, the model spread is intended to represent a region where the observed temperatures will fall 95% of the time. If the observed temperatures track along or outside the lower boundary for more than 5% of the time, there would certainly be a case for removing some of the higher-sensitivity models, and trying to understand why the models tend to produce sensitivities that are higher than seems reasonable, or trying to construct physically plausible models with lower sensitivity. However, this doesn't appear to be what is happening, and hence the case for trying to artificially construct lower-sensitivity models seems, IMO, to be weak.

Steve McIntyre | Posted Jan 6, 2016 at 3:27 PM | Permalink

"if the physics precludes such an outcome"

I'm unconvinced that the physics precludes lower-sensitivity models. In any other walk of life, specialists would be presently exploring their parameterizations to see whether they could produce a model with lower sensitivity that still meets other specifications. The seeming stubbornness of the climate community on this point is really quite remarkable: there are dozens of parameterizations within the model. There is obviously considerable play within these parameterizations to produce results of different sensitivity, as evidenced by the spread that includes very hot models like Andrew Weaver's. The very lowest sensitivity IPCC models are still in ore. Opposition to investigation of even lower-sensitivity parameterizations strikes me as more ideological than objective.

Steve McIntyre | Posted Jan 6, 2016 at 5:18 PM | Permalink

Ken Rice says: "As you have probably heard before, the model spread is intended to represent a region where the observed temperatures will fall 95% of the time."

Actually, I haven't heard that before. My understanding is that the models were independently developed and represented an "ensemble of opportunity" rather than being designed to cover a 5–95% spread. What, if any, is your support for claiming that "the model spread is intended to represent a region where the observed temperatures will fall 95% of the time"? Can you provide a citation to IPCC or an academic paper?

Steve: in responding to Rice's outlandish assertion, I expressed myself poorly above. There is no coordination among developers so that the models cover a space, but it is incorrect to say that they are independently developed. There are common elements to most models, and systemic bias is a very real possibility, as acknowledged by Tim Palmer.

and Then There's Physics | Posted Jan 6, 2016 at 3:34 PM | Permalink

"I'm unconvinced that the physics precludes lower sensitivity models"

I didn't say that they did preclude it; I simply said "if" they preclude it. The problem, as I see it, is that if we actively start trying to develop models that have low sensitivity, then that's not really any different to actively trying to develop ones that have high sensitivity. Even though there are parametrisations, they are still typically constrained in some way.

"Opposition to investigation of even lower sensitivity parameterizations strikes me as more ideological than objective"

What makes you think there's opposition? Maybe it's harder than it seems to generate such models, and maybe people who work on this don't think that there is yet a case for actively doing so.

Steve McIntyre | Posted Jan 6, 2016 at 5:12 PM | Permalink

"maybe people who work on this don't think that there is yet a case for actively doing so"

If the extraordinary and systemic overshoot of models in the period 1979–2015 doesn't constitute a case for re-opening examination of the parameter selections, I don't know what would be. In other fields (e.g. the turbulence example cited by a reader), specialists would simply re-open the file rather than argue against it.

opluso | Posted Jan 6, 2016 at 4:41 PM | Permalink

"None of what you say is really an argument against updating the concentration pathway if you know that what actually happened is different to what you initially assumed"

In fact, I pointed out that in far more important situations (e.g. COP21) CMIP5 projections have been acceptable. Therefore, in the context of this post, there is simply no need to compile a CMIP6 database before examining the existing hypotheses. I strongly suspect
that even if SMc had satisfied your desire for an updated CMIP, you would say he should wait for HadCRUT 5.

dpy6629 | Posted Jan 6, 2016 at 5:05 PM | Permalink

ECS is an emergent property, just as boundary-layer health is for a turbulence model. Developers of models who I know personally are much smarter than Ken Rice seems to believe. They know how to tweak the parameters, or the functional forms in models, to change the important emergent properties. For climate models, where many of the emergent properties lack skill, one needs to choose the ones you care most about. According to Richard Betts, for the Met Office model they care most about weather forecast skill. Toy models of planet formation are not the same ballgame at all.

and Then There's Physics | Posted Jan 6, 2016 at 5:13 PM | Permalink

"Developers of models who I know personally are much smarter than Ken Rice seems to believe"

I've no idea why you would say this, as I've said nothing about how smart or not model developers might be. All I do know is that no one can be as smart as you seem to think you are.

Steve: this is a needlessly chippy response. The commenter had made a useful substantive point ("They know how to tweak the parameters or the functional forms in models to change the important emergent properties") in response to your assertion that the models were grounded on physics. Do you have a substantive response to this seemingly sensible comment?

dpy6629 | Posted Jan 6, 2016 at 5:22 PM | Permalink

"Having almost infinitely better understanding of CFD modeling than you, Ken" is more accurate. Modelers could produce low-ECS models if they wanted to do so. I share Steve M's puzzlement as to why. There are some obvious explanations, having to do with things like the terrible job models do with precipitation, that may be higher priorities.

Steve: I'd prefer that you and Ken Rice tone down the comparison of, shall we say, manliness.

and Then There's Physics | Posted Jan 6, 2016 at 5:20 PM | Permalink

"If the extraordinary and systemic overshoot of models in the period 1979–2015 doesn't constitute a case for re-opening examination of the parameter selections, I don't know what would be"

Have the models had their concentration/forcing pathways updated? Have you considered sampling bias in the surface temperature dataset? Have you considered uncertainties in the observed trends? Have you considered the analysis where only models that have internal variability that is in phase with the observations show less of a mismatch? Maybe your supposed gotcha isn't quite as straightforward as you seem to think it is.

"In other fields (e.g. the turbulence example cited by a reader)"

Oooh, I wonder who that could be.

"specialists would simply re-open the file rather than argue against it"

I don't know of anyone who's specifically arguing against it. All I was suggesting is that it may be that it's not as straightforward as it may seem. If a group of experts are not doing what you think they should be doing, maybe they have a good reason for not doing so.

Steve McIntyre | Posted Jan 6, 2016 at 5:29 PM | Permalink

"maybe they have a good reason for not doing so"

Perhaps. What is it? On the other hand, there's a lot of ideological investment in high-sensitivity models, and any backing down would be embarrassing. Had there been less publicity, it would have been easier to report on lower-sensitivity models, but unfortunately this would undoubtedly be felt in human terms as some sort of concession to skeptics.

The boxplot comparisons deal with trends over the 1979–2015 period. This is a long enough period that precise phase issues are not relevant. Further, the comparison in the present post ends on a very large El Niño and is the most favorable endpoint imaginable to the modelers.

and Then There's Physics | Posted Jan 6, 2016 at 5:24 PM | Permalink

"Having almost infinitely better understanding of CFD modeling than you, Ken, is more accurate"

I rest my case.

and Then There's Physics | Posted Jan 6, 2016 at 5:48 PM | Permalink

"On the other hand, there's a lot of ideological investment in high-sensitivity models, and any backing down would be embarrassing"

I think there is a great deal of ideological desire for low climate sensitivity too. All I'm suggesting is that there are many factors that may be contributing to the mismatch, and that it may not be quite as simple as it at first seems. To add to what I already said, there's also the blending issue highlighted by Cowtan et al.

As for your "95%" question that you asked: you're correct, I think, that the models are intended to be independent, so I wasn't suggesting that they're somehow chosen/tuned so that the observations would stay within the spread 95% of the time (although I do remember having discussions with some, maybe Ed Hawkins, who were suggesting that some models are rejected for various reasons). I was suggesting that if the observations stayed out for more than 5% of the time, then we'd have a much stronger case for arguing that the models have an issue, given that the observations are outside the expected range for much longer than would be reasonable.

Steve McIntyre | Posted Jan 6, 2016 at 7:39 PM | Permalink

"that the models are intended to be independent"

In responding to your assertion that the models were designed to cover a model space, I did not mean to suggest that the models are independent in a statistical sense. For example, I said that the ensemble was one of "opportunity". The models are not independent, as elements are common to all of them, a point acknowledged by Tim Palmer somewhere. The possibility of systemic bias is entirely real, and IMO there is convincing evidence that there is. I've added the following note to my earlier comment to clarify: "in responding to Rice's outlandish assertion, I expressed myself poorly above. There is no coordination among developers so that the models cover a space, but it is incorrect to say that they are independently developed. There are common elements to most models, and systemic bias is a very real possibility, as acknowledged by Tim Palmer."

dpy6629 | Posted Jan 6, 2016 at 8:19 PM | Permalink

We recently did an analysis of CFD models for some very simple test cases and discovered that the spread of results was surprisingly large. These models also are all based on the same boundary-layer correlations and data. This spread is virtually invisible in the literature. My belief is that GCMs are also all based roughly on common empirical and theoretical relationships. I also suspect that the literature may not give a full range of possible model settings or types, and may understate the uncertainty, but this would be impossible to prove without a huge amount of work.

Ron Graf | Posted Jan 6, 2016 at 6:21 PM | Permalink

"Except climate sensitivity is an emergent property of the models"

The argument that transient climate response (TCR) is an emergent property of the models is based on the assumption that all the model parameters are constrained by lab-validated physics: "It's just physics," as I've heard said. What I believe is remarkable is that a scientific body approved a protocol that leaves the mechanics of the physics blind to outside review. The CMIP5 models in fact are such black boxes that TCR does not emerge but with the use of multiple linear regressions on the output of multiple realizations. In other words, one run gives a TCR; the next run can give a different one. One can manipulate TCR not only by selective input but also by selective choice of output or ensemble mix and its method of analysis. If it were "just physics", why are there 52 model pairs, each producing unique responses?

kneel63 | Posted Jan 6, 2016 at 7:43 PM | Permalink

"The concentration/forcing pathways aren't model projections at all; they're inputs"

Indeed. Aren't the models in CMIP run using several scenarios: RCP8.5, RCP6, RCP4.5 and so on? A valid comparison might then be: if real emissions fall between RCP4.5 and RCP6, then let's compare those model runs to your preferred measurement metric. As Steve says, if the model outputs using RCPs that are consistently low (real forcing was higher) and temps are consistently high
(actual temps were lower), AND runs using RCPs that are consistently higher (real forcing was lower) project even higher temps (i.e. more wrong), it is reasonable to assume that using actual forcing data would fall somewhere in between, and that therefore the models are running too hot. I have no doubt that, even should you agree this is correct, you will then suggest that e.g. we should only use those model runs that get ENSO, PDO etc. correct, or It would be nice if we had an a priori agreed method to evaluate model performance, because it certainly seems to me that when they appeared to be correct, it was evidence of goodness, but when they are wrong, it's not evidence of badness.

Steve McIntyre | Posted Jan 6, 2016 at 8:24 PM | Permalink

One of the large problems in trying to assess the degree to which model overshooting can be attributed to forcing projections rather than sensitivity is that there is no ongoing accounting of actual forcings in a format consistent with the RCP projections. This sort of incompatibility is not unique to climate. I've seen numerous projects in which the categories in the plan are not consistent with the accounting categories used in operations. This is usually a nightmare in trying to do plan vs actual. But given the size of the COP21 decisions, it is beyond ludicrous that there is no regular accounting of forcing.

The RCP scenarios contain 53 forcing columns (some are subtotals). These are presumably calculated from concentration levels, which in turn depend on emission levels. But I challenge ATTP or anyone else to provide me with a location in which the observed values of these forcings are archived on a contemporary basis. To my knowledge, they aren't. All the forcings that matter ought to be measured and reported regularly; NOAA report forcings for only a few GHGs and do not report emissions.

1. TOTAL INCLVOLCANIC RF: Total anthropogenic and natural radiative forcing
2. VOLCANIC ANNUAL RF: Annual mean volcanic stratospheric aerosol forcing
3. SOLAR RF: Solar irradiance forcing
4. TOTAL ANTHRO RF: Total anthropogenic forcing
5. GHG RF: Total greenhouse gas forcing (CO2, CH4, N2O, HFCs, PFCs, SF6 and Montreal Protocol gases)
6. KYOTOGHG RF: Total forcing from greenhouse gases controlled under the Kyoto Protocol (CO2, CH4, N2O, HFCs, PFCs, SF6)
7. CO2CH4N2O RF: Total forcing from CO2, methane and nitrous oxide
8. CO2 RF: CO2 forcing
9. CH4 RF: Methane forcing
10. N2O RF: Nitrous oxide forcing
11. FGASSUM RF: Total forcing from all fluorinated gases controlled under the Kyoto Protocol (HFCs, PFCs, SF6), i.e. columns 13–24
12. MHALOSUM RF: Total forcing from all gases controlled under the Montreal Protocol (columns 25–40)
13–24. Fluorinated gases controlled under the Kyoto Protocol
25–40. Ozone Depleting Substances controlled under the Montreal Protocol
41. TOTAER DIR RF: Total direct aerosol forcing (aggregating columns 42 to 47)
42. OCI RF: Direct fossil-fuel aerosol (organic carbon)
43. BCI RF: Direct fossil-fuel aerosol (black carbon)
44. SOXI RF: Direct sulphate aerosol
45. NOXI RF: Direct nitrate aerosol
46. BIOMASSAER RF: Direct biomass-burning-related aerosol
47. MINERALDUST RF: Direct forcing from mineral dust aerosol
48. CLOUD TOT RF: Cloud albedo effect
49. STRATOZ RF: Stratospheric ozone forcing
50. TROPOZ RF: Tropospheric ozone forcing
51. CH4OXSTRATH2O RF: Stratospheric water vapour from methane oxidation
52. LANDUSE RF: Land-use albedo
53. BCSNOW RF: Black carbon on snow

Matt Skaggs | Posted Jan 7, 2016 at 11:11 AM | Permalink

Steve wrote: "But I challenge ATTP or anyone else to provide me with a location in which the observed values of these forcings are archived on a contemporary basis. To my knowledge, they aren't. All the forcings that matter ought to be measured and reported regularly; NOAA report forcings for only a few GHGs and do not report emissions."

I took a deep dive looking for this information as well, for the essay I wrote for Climate Etc. If the IPCC were to serve one major useful purpose, it would have been to develop a global system for collecting and collating direct
measurement data on forcings. I say "would have been" because this effort should have started in the 90s, and here we are in 2016 with nothing more than scattered chunks of data in various formats.

davideisenstadt | Posted Jan 7, 2016 at 12:22 PM | Permalink

Steve: I think your point regarding the independence of the various iterations of current GCMs was well put. Given that they all share data used as inputs and, although independently developed, have shared structural characteristics, it's a misapprehension to regard them as independent. Anyway, the tests for statistical independence have nothing whatsoever to do with the provenance of the respective models; it's their behavior that tells the tale, and they all exhibit a substantial degree of covariance, that is to say, they aren't independent. Ken Rice should know better than to peddle this tripe.

Jeff Norman | Posted Jan 9, 2016 at 2:15 PM | Permalink

Matt, I've said it before: if the IPCC truly cared about the future climate, there would be a WG IV dealing with sources of error, uncertainties, and recommendations for improving our climate knowledge. Very basic things, like funding weather monitoring stations in those global voids.

David L. Hagen | Posted Jan 28, 2016 at 2:27 PM | Permalink

Curry quotes you on Popper in "Insights from Karl Popper: how to open the deadlocked climate debate".

Editor of the Fabius Maximus website | Posted Jan 8, 2016 at 4:01 PM | Permalink

As ATTP's comments in this thread show, there is great potential from re-running the GCMs with updated forcings. That would give us predictions from the models instead of projections (since the input would be observations of forcings, not predictions of forcings): actual tests of the models. We could do this with older models to get multi-decade predictions of temperature, which could be compared with observations. These would technically be hindcasts, but more useful than those used today because they test the models with out-of-sample data (i.e. not available when they were originally run). Working with an eminent climate scientist, I wrote up such a proposal to do this: http://fabiusmaximus.com/2015/09/24/scientists-restart-climate-change-debate-89635. These results might help break the gridlock in the climate policy debate. At least it would be a new effort to do so, since the debate has degenerated into a cacophony; each side blames the other for this, both with some justification.

Editor of the Fabius Maximus website | Posted Jan 8, 2016 at 4:03 PM | Permalink

Follow-up to my comment: this kind of test might be the best way to reconcile the gap between models' projections and observations. As the comments here show, the current debate runs in circles at high speed. New data and new perspectives might help.

Steve McIntyre | Posted Jan 8, 2016 at 6:18 PM | Permalink

One of the curiosities to the assertion that actual forcings have undershot those in the model scenarios is that actual CO2 emissions are at the very top end of model scenarios. So any forcing shortfall is not due to CO2. The supposed undershoot goes back once again to aerosols, which unfortunately are not reported by independent agencies on a regular basis. The argument is that negative forcing from actual aerosols has been much greater than projected, the same sort of argument made by Hansen years ago to explain the same problem.

Steve McIntyre | Posted Jan 8, 2016 at 6:14 PM | Permalink

"We could do this with older models to get multi-decade predictions of temperature, which could be compared with observations."

Some time ago, I did this exercise using the simple relationship in Guy Callendar's long-ago article and subsequent forcing. In subsequent terms, it was low sensitivity. It outperformed all the GCMs when all were centered on 1920-40.

Editor of the Fabius Maximus website | Posted Jan 8, 2016 at 6:44 PM | Permalink

Steve: Exercises similar to yours have been done several times, but with inconclusive results (no effect on the policy debate).
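The Callendar-style exercise Steve describes can be sketched as a one-parameter logarithmic CO2-temperature relationship, centered on a baseline period and run forward for comparison with observations. This is a minimal illustrative sketch, not Callendar's actual calculation: the sensitivity value and CO2 concentrations below are assumed round numbers for demonstration only.

```python
# Hedged sketch of a Callendar-style hindcast: temperature anomaly as a
# logarithmic function of CO2 concentration, centered on a 1920-40 baseline.
# The sensitivity and CO2 values are illustrative assumptions, not
# Callendar's actual figures.
import math

def log_co2_anomaly(co2_ppm, co2_ref_ppm, sensitivity_per_doubling):
    """Temperature anomaly (deg C) at a CO2 level, relative to a reference level."""
    return sensitivity_per_doubling * math.log(co2_ppm / co2_ref_ppm, 2)

co2_baseline = 305.0   # assumed rough 1920-40 mean CO2, ppm
sensitivity = 1.7      # assumed deg C per CO2 doubling ("low sensitivity")

for year, co2 in [(1930, 305.0), (1980, 339.0), (2015, 400.0)]:
    print(year, round(log_co2_anomaly(co2, co2_baseline, sensitivity), 2))
```

Such a one-line model has no internal variability, which is why a fair comparison centers both it and the GCM runs on the same baseline period before comparing trends.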

    Original URL path: http://climateaudit.org/2016/01/05/update-of-model-observation-comparisons/?replytocom=765691 (2016-02-08)

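The 53-column RCP forcing breakdown discussed in the thread above includes several subtotal columns (e.g. a CO2/CH4/N2O total alongside the individual gas columns), which lends itself to a simple plan-vs-actual bookkeeping check: each subtotal should equal the sum of its components. A sketch under the assumption that the table is held as a dict keyed by the column names quoted above; the forcing values are invented for illustration.

```python
# Sketch of a subtotal consistency check for an RCP-style forcing table.
# Column names follow the list quoted in the thread; the values are
# invented illustrative numbers (W/m^2), not real RCP data.

def check_subtotal(forcings, total_name, component_names, tol=1e-6):
    """Return True if the named subtotal equals the sum of its components."""
    return abs(forcings[total_name] - sum(forcings[c] for c in component_names)) < tol

example = {
    "CO2 RF": 1.8,
    "CH4 RF": 0.5,
    "N2O RF": 0.2,
    "CO2CH4N2O RF": 2.5,  # should equal the sum of the three lines above
}

print(check_subtotal(example, "CO2CH4N2O RF", ["CO2 RF", "CH4 RF", "N2O RF"]))
```

The same check applies to the other aggregates (FGASSUM over columns 13-24, TOTAER DIR over 42-47), which is one way to verify that an "actual forcings" table is internally consistent before comparing it to the scenario columns.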
  • Raobcore Adjustments « Climate Audit
to gather meteorological-quality data, which is generally coarser than what climatologists need. The attempts to refine the upper-air data are, in a sense, like trying to pretty up a pig. A sense of the problems and methodology can be gathered from papers linked here. Be prepared for comments like this:

"Although our original intent was merely to adjust for the effects of artificial steplike changes, it became obvious that some maladies could not be handled in such a fashion. As a result, deletion of selected portions of individual time series was added as one of the decisions made. As shown in Part II, overall the impact of data deletions is substantial and of comparable magnitude to adjustment of artificial changepoints."

and this:

"Our previous attempts to develop objective schemes to homogenize radiosonde data (Gaffen et al. 2000b) did not yield useful time series, but did suggest that completely objective methods are not well suited to this particular problem. The statistical methods employed to identify abrupt shifts in mean temperature could not distinguish between real and artificial changepoints (i.e. discontinuities) and resulted in adjustments that removed practically all of the original trends."

steven mosher | Posted May 3, 2008 at 7:10 PM | Permalink

RE 6, pat: I will never forget the time a manager of defensive electronics got assigned to the high school liaison program. The General came in for the review, and this guy danced around a particular inconsistency between the model and the data. Halfway through his presentation the general turned to our management and said, "He is finished; get him off the stage." The poor guy kept talking and finally was convinced to take his seat. Next month he was assigned to the high school liaison program, as detailed in the company newsletter with all the nice photos. Thereafter he retired, growing weary of the sweaty and ignorant youth.

Pat Keating | Posted May 3, 2008 at 7:47 PM | Permalink

9: Did he appeal to the CA Supreme Court based on the "cruel and unusual" clause? We had an off-site presentation which the technical part of the team didn't get to due to fog. One of the sales guys had a copy of the slides and did his best. At one point the customer asked, "Is that to 1 sigma or 2 sigma?" The sales guy responded, "Which would you prefer?"

Raven | Posted May 3, 2008 at 7:54 PM | Permalink

I am curious about the relationship between error bars and data adjustments. If one establishes that there are issues with the data and you correct the data, shouldn't the error bars on the corrected data be widened to reflect the uncertainty associated with the adjustment? IOW, adding a 0.5 degC adjustment to data should result in error bars that are at least 0.5 degC. Does this make sense?

Pat Keating | Posted May 3, 2008 at 8:01 PM | Permalink

8, David: It's hard to understand why the need for homogenization, especially when the whole point was to measure trends. A well-planned and managed program would retain the same instrumentation for all measurements, so comparisons can be usefully made. In the infrequent event when switching instrumentation was necessary, the new version would be fully calibrated against the old version before making the switch. These precautions are so obvious and elementary that one has to wonder about the competence of the people involved. But perhaps I don't understand all the constraints involved.

David Smith | Posted May 3, 2008 at 9:25 PM | Permalink

Pat, I think that their practices have improved and probably will provide good data going forward. Part of the historical problem is that different countries used different instrumentation and practiced different levels of care in processing the data. I seem to recall reading that data from some places is suspected to include falsified values; basically, they never launched the instruments, or launched at times other than those recorded. It's a mess.

JM | Posted May 3, 2008 at 10:03 PM | Permalink

It has been highlighted in other threads, but how does this article square with the current discussion? http://climatesci.org/2008/01/01/important-new-paper-using-limited-time-period-trends-as-a-means-to-determine-attribution-of-discrepancies-in-microwave-sounding-unit-derived-tropospheric-temperature-time by rmrandall and bm herman

Jeff C | Posted May 3, 2008 at 10:38 PM | Permalink

As a payload systems engineer for a major satellite manufacturer, I have played these data adjustment games in the past. However, it is only under one circumstance: the satellite is in orbit, it is malfunctioning, and during troubleshooting we find the factory test data is inadequate or compromised. We can't get the satellite back and we don't have good data, but we have to figure out something to keep our product operational. In other words, we are desperate, because we very well could lose hundreds of millions of dollars. I have a feeling the motivation of the climate science community is similar.

Dennis Wingo | Posted May 3, 2008 at 10:39 PM | Permalink

There is a tale of an error in the measured data regarding a rather important instrument. It seems that the data showed an error in the finish of a certain part. The technician that made the measurement was quite sure that the finished part was correct and the measurement device in error. So he put a 5-cent washer in the measuring device, which caused the measurement that he took to be consistent with what he knew was the state of the part being measured. The part was then stamped as quality-assured and was integrated into the overall system. This system was then fully signed off on and shipped to the customer. When the system was launched and put on orbit, the scientists noticed a blurring of the optics. It seemed that the primary mirror was ground just slightly out of specification, a problem later traced to a flaw in the measuring device at Perkin-Elmer. It cost NASA another billion dollars to fabricate an optical corrector for what you have now figured out is the Hubble Space Telescope. This is what happens when people know that their preconceived notion is right and
the instruments wrong.

Jeff C | Posted May 3, 2008 at 11:00 PM | Permalink

And of course, during the post-mortem there will be a full failure review board convened, with independent external reviewers. Next time it will be done right, without the need for firefighting heroics.

Brooks Hurd | Posted May 3, 2008 at 11:03 PM | Permalink

It would be much easier to forget about instruments altogether and simply create the data as needed. This would avoid the problem of collecting data which was not in agreement with the researchers' preconceived notions of what the data should look like. This would also save the problem of erasing the original data.

Andrew | Posted May 4, 2008 at 12:35 AM | Permalink

Douglass mentioned here (http://www.climateaudit.org/?p=3058) an addendum to their paper, which I have located here: http://www.pas.rochester.edu/douglass/papers/addendum%20A%20comparison%20of%20tropical%20temperature%20trends%20with%20model JOC1651 20s1 ln377204795844769 1939656818Hwf 88582685IdV9487614093772047PDF HI0001

Andrew | Posted May 4, 2008 at 1:29 AM | Permalink

14, JM (actually this is just to anybody): that link doesn't work for me; indeed, I can't get to Roger's site, period. How about anybody else?

Ivan | Posted May 4, 2008 at 2:16 AM | Permalink

Steve, what about journal papers both on the Douglass analysis and the Raobcore adjustment?

Stefan | Posted May 4, 2008 at 3:45 AM | Permalink

The Douglass paper can be downloaded from http://icecap.us/images/uploads/DOUGLASPAPER.pdf

George M | Posted May 4, 2008 at 5:36 AM | Permalink

In 50 years of dealing with cranky measuring instruments and trying to find the real data among the noise, I observed that error direction was random and unpredictable. How then is it that all the climate science adjustments are in the same direction, indicating that all this varied instrumentation erred in the same opposite direction? And I read the UAH post Andrew kindly provided in another thread about the adjustments of the satellite data. Therein was a description of the calibration routine. Now, the satellite derives temperature by looking at the frequency of microwave emission of the oxygen molecule, which is temperature-dependent. But calibration is by aiming at outer space (no oxygen temp) and an internal black panel (no oxygen temp). Really? Did I read the paper too fast? Anyway, I now understand why all this weather data is subject to "corrections". Dennis' reminder above about the Hubble telescope fiasco is appropriate in more ways than just one. All these are NASA programs, with about the same level of credibility.

Jon | Posted May 4, 2008 at 6:58 AM | Permalink

2, 3, 18, 23: Should data never be subject to revision/correction? Interesting dilemma for those bemoaning adjustments a priori, yet calling for same in other areas. Quite similar to the phenomenon of slagging climate models in reference to long-term projections of mean temp increase while citing some of the same if they produce superficially agreeable (to anti-AGW) results on shorter timescales or other metrics. All data are subject to revision. Isn't that indeed the premise of this blog, dedicated to such?

Steve: Please lay off the editorializing. I think that I made a quite reasonable observation in the post: that people who are concerned about inhomogeneities in the surface record can hardly cavil at the possibility of inhomogeneities in the radiosonde record merely because they like the results. I would characterize my own viewpoint on adjustments to data as this: if the size of the adjustments is equal to the size of the trend, then the adjustments need to be comprehensively documented and examined carefully. Not that all data are subject to revision. Indeed, if data is revised, it needs to be carefully marked and the original data preserved, so that subsequent people can analyze the adjustment process. This means that the adjustment code needs to be published, not just loose sketches. It means that new adjustments need to be announced and their effect analyzed, unlike what Hansen did last September. I don't view any of the radiosonde data as showing very much. Indeed, a real concern, one expressed by some posters, is whether the potential of this data set for monitoring changes has been botched by unrecorded inhomogeneities. The disquieting thing about the inhomogeneity adjustments in RAOBCORE is that so many have occurred during the IPCC period, when climate change issues were on the radar screen and care to ensure instrumentation continuity should have been on the minds of climate scientists.

Pat Keating | Posted May 4, 2008 at 7:18 AM | Permalink

15, Jeff: "because we very well could lose hundreds of millions of dollars". I can understand that. But the fact that this equipment is so expensive should mean that there is extra effort to make sure the instrument is properly calibrated before use. Isn't there any data from balloons and well-calibrated conventional thermometry?

25, Jon: If the data are bad, they should be replaced by new data, not adjusted. Adjustment is too prone to personal bias, already an issue in science even with unadjusted data.

bender | Posted May 4, 2008 at 7:23 AM | Permalink

11, Raven: In theory, yes. Calibrations have error in them, and adjustments increase error. In practice, nobody actually does anything about this.

Jon | Posted May 4, 2008 at 7:44 AM | Permalink

25: "Please lay off the editorializing." My comments were directed at specific posts for a reason. How can you find my post objectionable while countless comments implicitly or explicitly accusing scientists of outright fraud go untouched? Interesting choice of moderation.

Steve: I've made it clear that such accusations of fraud are against blog rules, and far from leaving such posts untouched, I make a practice of deleting such posts. In the earlier days of the blog, I made a point of not deleting anything, but I changed that policy and will enforce these rules. You say that there are countless posts explicitly accusing scientists of outright fraud. Such accusations are against the policies here. I would appreciate it if you would identify even a few of the posts or comments in question
so that I can attend to them. If there are countless such posts, it should be easy to find a few of them.

Steve McIntyre | Posted May 4, 2008 at 9:16 AM | Permalink

Some other usages of "inconsistent" in IPCC AR4 chapter 9:

"The observations in each region are generally consistent with model simulations that include anthropogenic and natural forcings, whereas in many regions the observations are inconsistent with model simulations that include natural forcings only."

"They find that a much higher percentage of grid boxes show trends that are inconsistent with model-estimated internal variability than would be expected by chance, and that a large fraction of grid boxes show changes that are consistent with the forced simulations, particularly over the two shorter periods. This assessment is essentially a global-scale detection result because its interpretation relies upon a global composite of grid-box-scale statistics."

"Thus the anthropogenic signal is likely to be more easy to identify in some regions than in others, with temperature changes in those regions most affected by multidecadal-scale variability being the most difficult to attribute, even if those changes are inconsistent with model-estimated internal variability and therefore detectable."

"Stott et al. (2004) apply the FAR concept to mean summer temperatures of a large part of continental Europe and the Mediterranean. Using a detection and attribution analysis, they determine that regional summer mean temperature has likely increased due to anthropogenic forcing, and that the observed change is inconsistent with natural forcing."

"It is very unlikely that the 20th-century warming can be explained by natural causes. The late 20th century has been unusually warm. Palaeoclimatic reconstructions show that the second half of the 20th century was likely the warmest 50-year period in the Northern Hemisphere in the last 1300 years. This rapid warming is consistent with the scientific understanding of how the climate should respond to a rapid increase in greenhouse gases like that which has occurred over the past century, and the warming is inconsistent with the scientific understanding of how the climate should respond to natural external factors such as variability in solar output and volcanic activity."

"Observed changes in ocean heat content have now been shown to be inconsistent with simulated natural climate variability, but consistent with a combination of natural and anthropogenic influences, both on a global scale and in individual ocean basins."

"Observed decreases in arctic sea ice extent have been shown to be inconsistent with simulated internal variability, and consistent with the simulated response to human influence, but SH sea ice extent has not declined."

Steve McIntyre | Posted May 4, 2008 at 9:20 AM | Permalink

CCSP uses the term "discrepancies" on some occasions where IPCC and Douglass used "inconsistent", commenting on the precise issue in question here:

"While these data are consistent with the results from climate models at the global scale, discrepancies in the tropics remain to be resolved."

"For recent decades, all current atmospheric data sets now show global-average warming that is similar to the surface warming. While these data are consistent with the results from climate models at the global scale, discrepancies in the tropics remain to be resolved. Nevertheless, the most recent observational and model evidence has increased confidence in our understanding of observed climatic changes and their causes."

"Comparing trend differences between the surface and the troposphere exposes potentially important discrepancies between model results and observations in the tropics. In the tropics, most observational data sets show more warming at the surface than in the troposphere, while almost all model simulations have larger warming aloft than at the surface."

"In the stratosphere, the radiosonde products differ somewhat, although there is an inconsistent relationship involving the two stratospheric measures (T(100-50) and T4) regarding which product indicates a greater decrease in temperature in the mid-1970s."

"The issue of changes at the surface relative to those in the troposphere is important because larger surface warming (at least in the tropics) would be inconsistent with our physical understanding of the climate system, and with the results from climate models. The concept here is referred to as vertical amplification or, for brevity, simply 'amplification': greater changes in the troposphere would mean that changes there are amplified relative to those at the surface."
fraud I make a point of avoiding the imputation of motives as much as possible We observed for example that Mann withheld adverse verification r2 results I intentionally did not apply any labels to this I merely reported the facts If the facts are unpleasant then that s the fault of the author not mine I ve said on a number of occasions that misconduct and fraud are quite different things and no purpose is served by conflating the two as you are doing here Yes I filed an academic misconduct complaint against Caspar Ammann Or Ammann and Wahl issuing a press release stating that all our results were unfounded when their calculations of the verification statistics in the Table in MM2005a reported only after the academic misconduct complaint proved to be virtually identical to ours Is this a practice that you endorse bender Posted May 4 2008 at 9 46 AM Permalink 30 Off the top of my head bender at 363 in the tropical troposphere thread Excuse me You ll have to defend or retract that statement my friend I suggest retracting it I have never accused anyone of what you say I did Not in 363 not anywhere bender Posted May 4 2008 at 9 48 AM Permalink snip bender calm down Jon Posted May 4 2008 at 9 53 AM Permalink None of the examples are an explicit accusation of fraud I said explicit or implicit and bender s comment is on the explicit side Please explain to me how the 1984 allusion could be construed as anything other than a deliberate implication of misconduct As to Mann etc I ve said I wasn t referring to your personal posts or other actions I think that you have a good opportunity here to contribute positively and your conversations with Curry lead me to believe you are ultimately going to follow that route I have no problem with you bringing as much scrutiny to bear on any aspect of the science you wish to Raven Posted May 4 2008 at 9 57 AM Permalink Jon says My comments were directed at specific posts for a reason How can you find my post objectionable while countless 
comments implicitly or explicitly accusing scientists of outright fraud go untouched Have you heard of the term confirmation bias It is a trap that even the most diligent scientist can fall into Scientists should be skeptical and always ask themselves whether they are trying to impose their beliefs on the data instead of allowing the data to tell them what their beliefs should be The potential for confirmation bias is painfully obvious to those not involved in the process Especially when we see dataset after dataset being revised in ways that always preserve the original hypotheses Such lopsided adjustments are not proof that the climate science community has a big problem with confirmation bias but it does raise enough suspicions to justify a concern For this reason the climate science community has an obligation to confront the issue confirmation bias directly on and demonstrate to the wider community that their methods are sound This requires full disclosure of the adjustment algorithms in a way that allows others to verify that they can come up with the same numbers It also means that errors must always be reported with the data lest people get the impression that the data is more certain than it is Failing to address the issue of confirmation bias will result in accusations of fraud If people in the climate science community don t like those accusations then they should address the legimate concerns regarding confimation bias directly and honestly Expressions of outrage and insistence of infallibility will only increase suspicion Incidently I have no reason to believe that corporate executives regularily engage in fraud when it comes to reporting their results However I would never accept their word alone and would never consider investing in a company that refused to have their financial numbers audited assuming that was an option I see no difference between investing in a stock based on financial data and making massive public investments based on scientific 
data The same standards of external audit and review must apply to both Kenneth Fritsch Posted May 4 2008 at 10 03 AM Permalink Is anyone here as amazed and somewhat perplexed as I am that Gavin Schmidt does not appreciate the statistical tool using the standard error of the mean SEM to compare averages or an average with true mean estimate of it or a target value Should not it be obvious that if one were comparing a twenty third climate model to the previous 22 models one would use the standard deviation and not the standard error of the mean to determine whether that twenty third model was outside the distribution of the previous twenty two On the other hand should it not be just as obvious that if one is comparing the mean of twenty models to a target value or in this case the instrumental results one would use the standard error of the mean SEM Take an alternate case where one group of climate models results were to be compared to another group of models let us say because of a difference in methodology between the groups The averages of the two groups would be compared by taking the number of samples used to determine of the averages of the two groups into consideration in calculating a standard deviation SEM like What is so difficult for Gavin Schmidt to understand about that We could argue separately that climate models do not fit well for such a statistical test but that is not what Gavin Schmidt is arguing I noticed on rereading the Douglas paper that the authors commented about other papers one coauthored by Karl that used the range of the climate models in comparing model output to observed data and apparently some of these models had outliers that did not realistically reproduce the surface temperature trends So I guess that these papers set the precedent for a certain group of climate scientists to throw together an array of climate models measure the range regardless of obvious outliers and then treat the observed data as just another model result I 
think "fatal errors" would be appropriately used to describe that approach.

Steve McIntyre | Posted May 4, 2008 at 10:24 AM | Permalink

At Matt Briggs' blog, Gavin Schmidt accused Douglass et al. of having received v1.4 data and not reporting it:

"However, you were sent three versions of the RAOBCORE radiosonde data (v1.2, 1.3 and 1.4). You chose to use only v1.2, which has the smallest tropospheric warming. You neither mentioned the other, more up-to-date versions, nor the issue of structural uncertainties in that data (odd, since you were well aware that the different versions gave significantly different results). Maybe you'd like to share the reasons for this with the readership here."

Douglass denied that they had been sent v1.4 data: "Contrary to your information, we were never sent the RAOBCORE ver1.4 data; check your source." He added the following:

"However, we did realize that we had not explained our use of ver1.2 in our paper, so we sent an addendum to the Journal on Jan 3, 2008 clarifying two points. The first point is quoted below.

1. The RAOBCORE data: choice of ver1.2. Haimberger (2007) published a paper in which he discusses ver1.3 and the previous ver1.2 of the radiosonde data. He does not suggest a choice, although he refers to ver1.2 as 'best estimate'. He later introduces, on his web page, ver1.4. We used ver1.2, and neither ver1.3 nor ver1.4, in our paper for the satellite era (1979-2004). The reason is that ver1.3 and ver1.4 are much more strongly influenced by the first guess of the ERA-40 reanalyses than ver1.2. Haimberger's methodology uses radiosonde-minus-ERA-40-first-guess differences to detect and correct for sonde inhomogeneities. However, ERA-40 experienced a spurious upper-tropospheric warming shift in 1991, likely due to inconsistencies in assimilating data from the HIRS 11 and 12 satellite instruments, which would affect the analysis for the 1979-2004 period, especially as this shift is near the center of the time period under consideration. This caused a warming shift mainly in the 300-100 hPa layer in the tropics, and was associated with (1) a sudden upward shift in 700 hPa specific humidity, (2) a sudden increase in precipitation, (3) a sudden increase in upper-level divergence, and thus (4) a sudden temperature shift. All of these are completely consistent with a spurious enhancement of the hydrologic cycle. Thus ver1.3 and ver1.4 have a strange and unphysical vertical trend structure, with much warming above 300 hPa but much less below 300 hPa, actually producing negative trends for 1979-2004 at some levels of the zonal-mean tropics. Even more unusual is the fact that the near-surface air trend in the tropics over this period in ERA-40 is a minuscule 0.03°C/decade (Karl et al. 2006), and so is at odds with actual surface observations, indicating problems with the assimilation process. This inconsistent vertical structure as a whole is mirrored in the direct ERA-40 pressure-level trends, and has been known to be a problem, as parts of this issue have been pointed out by Uppala et al. (2005), Trenberth and Smith (2006) and Onogi et al. (2007). Thus we have chosen ver1.2, as it is less influenced by the ERA-40 assimilation of the satellite radiances."

Gerry Parker | Posted May 4, 2008 at 10:28 AM | Permalink

Jon said: "I would characterize my own viewpoint on adjustments to data as this: if the size of the adjustments is equal to the size of the trend, then the adjustments need to be comprehensively documented and examined carefully."

Hi Jon. I would suggest that if the adjustments are anywhere near the same size as the trend, you need a better measurement system. There's a lot of statistical process control literature available for manufacturing that outlines this kind of thing, and the magnitude of errors that can be tolerated. I've been through a lot of the training, and enough process reviews, to know this wouldn't fly if you were manufacturing widgets. It's difficult to understand why it should be adequate for something as important as this. From my experience, uncontrolled variation is significantly influencing the measured data. It is remarkably risky to assume, and highly unlikely, that the data can adequately be corrected for errors of the magnitude represented. I cannot think of any good examples in engineering where we would accept this level of error in the measurement system vs. the trend. My analyst would tell me the measurement system couldn't be trusted, and not to draw any conclusions before improving the measurement system.

Gerry

Jonathan Schafer | Posted May 4, 2008 at 10:35 AM | Permalink

#24, Of course data can and should be subject to revision when it is shown to be wrong. However, there is a responsibility that goes along with those revisions. Namely, you can't adjust the data silently, then excoriate someone who publishes a paper based on data that was previously published and then silently updated, which seems to happen a lot. Even part of the blog entry from Steve mentions this, in an alternate fashion:

"RAOBCORE is a re-analysis of radiosonde data by Leopold Haimberger and associates. RAOBCORE 1.2 was published in April 2007, though presumably available in preprint prior to that. Douglass et al. 2007 was submitted in May 2007, when the ink was barely dry on the publication of RAOBCORE 1.2. Nonetheless, Schmidt excoriates Douglass et al. for using RAOBCORE 1.2."

Another case in point was a recent thread about a paper published by Rob Wilson et al., where he used data provided directly to him and Steve used a version from the ITRDB database. There were differences between the two, leading to different results. These are major issues in climate science, and have been discussed repeatedly on so many threads you can't even keep track anymore. As Steve pointed out in #31 above: "We observed, for example, that Mann withheld adverse verification r2 results. I intentionally did not apply any labels to this. I merely reported the facts. If the facts are unpleasant, then that's the fault of the author, not mine." In the stock market, pharmaceutical world, mining, etc., withholding adverse results could lead to
[snip] even when they make statements like this:

Q: There's a lot of debate right now over the best way to communicate about global warming and get people motivated. Do you scare people or give them hope? What's the right mix?

A: I think the answer to that depends on where your audience's head is. In the United States of America, unfortunately, we still live in a bubble of unreality. And the Category 5 denial is an enormous obstacle to any discussion of solutions. Nobody is interested in solutions if they don't think there's a problem. Given that starting point, I believe it is appropriate to have an over-representation of factual presentations on how dangerous it is, as a predicate for opening up the audience to listen to what the solutions are, and how hopeful it is that we are going to solve this crisis.

[snip]

Kenneth Fritsch | Posted May 4, 2008 at 10:36 AM | Permalink

I think it is important to notice the relative openness of the adjustment processes used for, and discussions about, the radiosonde temperature data sets, as compared to their counterparts for the surface data sets. One should also be keenly aware of the homogeneity adjustments being made, and why they are made. Firstly, for the surface record, homogeneity adjustments are made on a station basis, and would affect only a small part of the total data set, while those (as I see them) made for the radiosondes would have a larger effect on the total data set. The major issue in either case is whether to make a homogeneity adjustment based on a coinciding change in instrumentation or methodology, or to make it based simply on finding statistically significant break or change points in the time series. I believe the intent of the latest version of GHCN was to look at the time series by station for any break points and make more or less automatic adjustments. We know there are breakpoints in the combined surface temperature series, and the station-by-station approach for homogeneity adjustment to the total series would obviously negate what are probably real break points. There are probably, then, real break points in the station data that may not be discriminated by the newer approach for homogeneity adjustment. As I recall, the homogeneity adjustments for the radiosonde series were made based on break points plus corroborating evidence that a coinciding change was made, and in light of whether it made physical sense. Regardless, it is these criteria that I think should be discussed in this thread, along with a follow-up analysis of the reasons given by Douglass for not using the most currently corrected radiosonde data set. It should be much less difficult than doing the analysis for the surface data sets.

Armagh Geddon | Posted May 4, 2008 at 2:35 PM | Permalink

Re #39, Jonathan Schafer: re your last quote, I am starting to collect statements like that, where influential AGW advocates argue the need to exaggerate the problems so that the public can be mobilised. I have examples from Stephen Schneider and Al Gore. Can you please attribute that quote? Thank you.

kuhnkat | Posted May 4, 2008 at 6:00 PM | Permalink

Jon, #24: the use of outlier models by "deniers" to support the NO WARMING meme is more to irritate people like Gavin than to imply the SKILL shown by those particular models. There is no argument they will accept. To get across to them that they simply do NOT have enough understanding of this extremely complex system to allow their work to be used for policy, or anything other than continued research, using their own tools against them becomes a desperation move in irony. If the models are so loosely built that they can validate everything from no warming to catastrophic warming, what is the value? Even if the modelers could have one model do runs that reasonably matched temps across the globe and elevations, it still would not PROVE they have the values and signs attributed to the correct components of the system. The fact that the modelers trade on the idea that a particular model is able to show one segment of the climate reasonably totally mystifies me. What it shows is that the values and/or signs are misapplied, and that they can tune the model to replicate a known phenomenon. This actually falsifies the model as a whole.

Jonathan Schafer | Posted May 4, 2008 at 6:52 PM | Permalink

#41, An interview with accidental movie star Al Gore.

beaker | Posted May 5, 2008 at 4:51 AM | Permalink

Steve, this may be nitpicking, but I think it is important to maintain the most moderate language possible in discussing disagreements between scientists, whether they deserve it or not. I don't think it is reasonable to describe Gavin's criticism regarding RAOBCORE 1.4 as "excoriation" (verbal flaying, scathing criticism, invective).
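Kenneth Fritsch's comment above distinguishes homogeneity adjustments justified by a documented change in instrumentation from adjustments triggered purely by a statistically detected break point. The purely statistical variety can be sketched on synthetic data as follows. This is a hypothetical toy (the mean-shift statistic and both function names are mine), not the GHCN or RAOBCORE procedure:

```python
import numpy as np

def detect_breakpoint(series, min_seg=12):
    """Crude single-break detector: scan candidate split points and
    return the one maximizing a t-like statistic for a shift in mean.
    (Hypothetical sketch; operational schemes use formal tests,
    neighbour series, and station metadata.)"""
    n = len(series)
    best_k, best_stat = None, 0.0
    for k in range(min_seg, n - min_seg):
        left, right = series[:k], series[k:]
        se = np.sqrt(left.var(ddof=1) / len(left) + right.var(ddof=1) / len(right))
        stat = abs(left.mean() - right.mean()) / se
        if stat > best_stat:
            best_k, best_stat = k, stat
    return best_k, best_stat

def adjust_for_break(series, k):
    """Step-change homogeneity adjustment: shift the pre-break segment
    so its mean matches the post-break segment."""
    adjusted = series.copy()
    adjusted[:k] += series[k:].mean() - series[:k].mean()
    return adjusted

rng = np.random.default_rng(0)
# Synthetic monthly anomalies: white noise plus a +0.5 instrument step at month 120.
data = rng.normal(0.0, 0.2, 240)
data[120:] += 0.5

k, stat = detect_breakpoint(data)
homogenized = adjust_for_break(data, k)
```

With the true step at month 120, the detector should land close to it, and the adjustment removes the mean offset across the break. The comment's caveat applies directly: run on a series containing a genuine climate shift, this machinery would "correct" that too.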

    Original URL path: http://climateaudit.org/2008/05/03/raobcore-adjustments/ (2016-02-08)

  • Leopold in the Sky with Diamonds « Climate Audit
selective quote given in the top is based. I assume this was culled from the AFP cull, and is a highly selective version of my views as a result. Undoubtedly, further work is needed to verify or deny the result and understand the issues, but when temperature analyses alone patently aren't working, use of winds may help remove the roadblock and allow better understanding. Those with access to Nature Geoscience can read my full comment, which expands on all of this far more lucidly than a rapidly written note here.

On other comments raised: we and others have been applying our methodologies to test-bed cases precisely to try to ascertain what can and cannot be said, and those test cases are becoming increasingly complex and realistic. Papers in the pipeline will address this further, but McCarthy et al. gives a start, and both Steve Sherwood's IUK and Leo's RAOBCORE include a degree of verification in test cases, so this criticism seems born more of not reading the manual than anything else.

I agree that we should be making observations for climate. Sadly, to date, we haven't. GRUAN offers that opportunity, but needs broad support to happen. Ditto CLARREO. Never heard of them? That potentially gives you some idea where climate comes in the pecking order. It is unfair to assert that we do not care about observations. Many people spend a lot of time making sure really dumb decisions are not made, and trying to protect the observing system, but "climate has no observing budget" is, sadly, the bottom line. Please feel free to write to your politicians to demand the billions necessary. Not so keen all of a sudden?

Finally, rather than ragging on the radiosonde community, it would be nice if those who constantly carp on here about availability of metadata and audit trails were to recognise that, as a community, the radiosonde experts do actually provide that trail for nearly all the datasets that are publicly available. If all that is forthcoming constantly is criticism, then this forum rapidly approaches the status of irrelevance to the climate community. Some balance and encouragement, highlighting of positive aspects, is never remiss if you want to be taken seriously.

Nylo | Posted May 28, 2008 at 3:46 AM | Permalink

I have recently learnt that RSS data comes from the same source as UAH data, but uses a correction of UAH procedures when adjusting diurnal temperatures of the lower troposphere. It is said that the adjustment matters mostly in the tropics. I have verified that myself, but I am a bit astonished at the results I got. Not only do the adjustments matter mostly in the tropics; in truth, the adjustment only affects the tropical trend. And it changes the trend by a full 0.1°C/decade. This means that it DOUBLES the tropical tropospheric temperature trend shown by UAH data. Does anybody have a clue as to what has happened to the diurnal temperature in the tropical troposphere, but didn't happen to the diurnal temperatures of the remaining troposphere? I would have liked to download and thoroughly read the Mears and Wentz 2005 article that explains the corrections performed, but it is not available without paying. The correction trend is obtained by cooling the past and warming the present. Does it ring a bell here at Climate Audit?

JamesG | Posted May 28, 2008 at 4:24 AM | Permalink

At RC it is quoted that Thorne concludes: "The new analysis adds to the growing body of evidence suggesting that these discrepancies are most likely the result of inaccuracies in the observed temperature record rather than fundamental model errors." This quote is not at all culled, as he says of the long-awaited "experimental verification of model predictions" quote, but nevertheless both statements are inconsistent with the uncertainties and caveats that he has stated above. Perhaps he should post his real views on the realclimate.org blog, in order to give the proper balance that he so desires the rest of us to display. And while he's at it, he should tell them he disapproves of such quote-mining being used for public disinformation. I won't hold my breath waiting. We are well used to scientists saying one thing in private and quite another in public, but it remains unacceptable behaviour. It is precisely this kind of mis-presentation of controversial and preliminary adjustments to the raw data as "evidence" that we are often ragging about. At what point does it become just plain dishonesty, I'd like to know?

Michael Smith | Posted May 28, 2008 at 4:55 AM | Permalink

From #40: "If all that is forthcoming constantly is criticism, then this forum rapidly approaches the status of irrelevance to the climate community. Some balance and encouragement, highlighting of positive aspects, is never remiss if you want to be taken seriously." Does that criterion for being taken seriously also apply to the pro-AGW people, with respect to the arguments put forth by those who are skeptical?

James Bailey | Posted May 28, 2008 at 5:13 AM | Permalink

Billions have been spent on climate research because of AGW, yet nobody gets any money to improve our ability to measure climate change. Does it all go to supercomputers? The data sets we have are flatly not capable of proving anything. Superhuman efforts are needed, and bragged about, to try to make sense out of the mess. Yet time and again, when somebody pulls back the curtain on the problems the superhumans have glossed over, or even created, they respond with attacks, and then change the very same data to prove the critic wrong. Of course, no new money goes into this data taking; the superhumans are claiming they get the right answer with what they have. Rare admissions, like that in #40 above, which are needed to justify the improvements, undermine the claims that the answers are right. The politicians handing out the money want power now, and the activists pushing to drastically change the world in damaging ways claim that we can't afford to wait. It is long past the time where the scientists of this field regrouped and put together proposals on how to improve the measuring systems, so that the
new data will not have all these well-known problems that make the old data worthless to the task at hand. Quit griping about the lack of money, and insist on the share you need to advance your field and do the science.

MarkW | Posted May 28, 2008 at 5:27 AM | Permalink

If the temperature data from radiosondes is so fraught with uncertainty, how is it that the wind data from the same probes is better?

Steve McIntyre | Posted May 28, 2008 at 5:50 AM | Permalink

#40. Thank you for the comment. Before I make any other comment on Peter Thorne's post, I would like to note that my post did not survey the state of archiving in radiosonde data. However, I would like to observe that I was able to promptly locate and download several key radiosonde data sets (Angell, RATPAC-B, HadAT2). While I've not spent enough time on the data to provide an opinion on the completeness of the archives, as Dr Thorne observes, the authors in the field have made substantial efforts to make their data publicly available, and should be commended for it. Dr Thorne is also justified in reproaching me for not giving credit where credit is due. Point taken, and my apology; I will add this information to the head post. I'll discuss other issues separately.

RomanM | Posted May 28, 2008 at 6:22 AM | Permalink

Peter Thorne: you make some valid points. However, there seem to be a variety of issues that the climate science community does not seem to understand, or does not seem to be willing to take steps to deal with. In particular, my specific concern is with statistical methodology and its application to climate issues. For whatever reason, there is a distinct lack of trained statisticians working with the researchers. Given the increasing complexity of statistical methodology developed in the post-BC (Before Computers) era, this lack means that data analysts without proper training or understanding are less likely to grasp the implications of using that methodology, or of making ad hoc, theoretically unjustified changes to it. This may lead to spurious results and/or serious underestimation of the inherent uncertainty of estimates of important parameters. When I read quotes like "The new study provides long-awaited experimental verification of model predictions" based on such results, my professional hackles are raised, and my response is to (somewhat vehemently) point out the shortcomings in the analysis. Some examples from your post:

"The upshot is that the raw data is a mess, and the choices that a dataset creator makes imparts non-negligible and unintentional bias into the resulting database. So how do you address this? You get many people to look at the data independently. Yes, some may make dumb choices, but only through this multi-effort approach can you begin to understand what you can and cannot say about the data."

This is one example where I feel that a good statistician could be helpful. Why do you have the idea that it is appropriate to over-manipulate the data with whole series of often subjective, ad hoc adjustments? Looking at your document at http://www.cru.uea.ac.uk/cru/posters/2003-07-PT-HadAT1.pdf:

"We homogenise the individual station series by near-neighbour checks to maintain spatio-temporal consistency. Neighbours are drawn from the contiguous region with correlation r > 1/e for each target station, as defined by NCEP reanalyses fields [10]. Weightings used to develop neighbour averages for each station are the NCEP correlations. We apply a moving Kolmogorov-Smirnov test through the difference series (target station minus neighbour average) on a level-by-level and seasonal basis to identify potential jump points, using metadata, where available, to confirm these jump points. Time series are corrected based upon the change in the mean of the difference series across the break point. We only proceed if this is > 0.1 K, to avoid artificially reddening the time series. The process is then iterated to a subjectively assessed degree of convergence on a station-by-station basis. Our QC procedure has been through a total of six iterations. The resulting zonal trends are spatio-temporally smoother than either the uncorrected analysis or HadRT2.1s (Figure 3). Importantly, the observed tropical tropospheric cooling over the satellite period remains."

Just how much of what is now in the data set is an artifact of the QC process? Is there some theoretical basis for applying the K-S test in the moving manner that you have? What are its properties as a jump-point detector? What inherent uncertainty has been introduced? What's wrong with doing genuine quality control, which is typically necessary on large data sets, using metadata and other available real information? The remaining factors should be dealt with through an appropriate statistical model in the subsequent analysis, where the uncertainty can be assessed in a more realistic fashion.

"We and others have been applying our methodologies to test-bed cases precisely to try to ascertain what can and cannot be said, and those test cases are becoming increasingly complex and realistic. Papers in the pipeline will address this further, but McCarthy et al. gives a start, and both Steve Sherwood's IUK and Leo's RAOBCORE include a degree of verification in test cases, so this criticism seems born more of not reading the manual than anything else."

This is another situation where a competent statistician would be helpful. Test-bedding is not appropriately performed by running one or two cases and then making an eyeball comparison. Statisticians will run several thousand cases, using multiple underlying scenarios, and report statistics on bias, variability, robustness, etc., to evaluate the behaviour of procedures where the derivation of a theoretical basis is not possible. The seat-of-the-pants approach ("we tried a couple of situations and got the same results") doesn't cut it in statistics.

Again from your post: finally, rather than ragging on the radiosonde community, it would be nice if those who constantly carp on here about availability of metadata and audit trails were to recognise that, as a
community, the radiosonde experts do actually provide that trail for nearly all the datasets that are publicly available. If all that is forthcoming constantly is criticism, then this forum rapidly approaches the status of irrelevance to the climate community. Some balance and encouragement, highlighting of positive aspects, is never remiss if you want to be taken seriously.

Your comments regarding the openness and archiving of data seem to be justified, given the relative ease with which I was able to locate data. My attitude in an earlier post on the Sherwood paper, regarding the relative impossibility of reproducing the results, was based on the fact that a lot of the steps done were probably, unintentionally, incompletely described. Criticism of a paper, or of any research, is the way to evaluate the results: if it can stand up to the criticism, then it is good work and the results are reliable. I like to think that most of the posters here are honest enough to recognize and applaud good work when they see it.

JP | Posted May 28, 2008 at 6:27 AM | Permalink

Large errors in rawinsonde data are fairly easy to pick out visually once the data is plotted on a Skew-T chart. Most operational forecasters with a few years under their belt can pick out screwy data quite quickly. Both the NWS and the Air Force have data-validation subroutines that kick the observations back to the operators if the lapse rates reach or exceed a certain threshold. If for some reason the error is missed and is inserted into the model runs, one can be fairly certain that the first model run will have some really screwy forecasts. Rawinsonde stations are few and far between; any significant uncaught errors will quickly show up in the forecast models.

Steve McIntyre | Posted May 28, 2008 at 7:07 AM | Permalink

Turning to Dr Thorne's comment, I'm having trouble understanding what, if anything, he substantively disagrees with in the above post, other than the fact that he wants to be patted on the head. I come at the radiosonde data as a third party, as do readers here. Our first interest is in knowing what reliance can be placed on this data set in terms of understanding climate change.

I said that the raw data was a mess, and the inhomogeneities far worse than in the surface record, which is more familiar to readers here. Dr Thorne said there were "numerous changes in instrumentation and observing practice over time which make the purported issues with the surface record look like a walk in the park in comparison. The upshot is that the raw data is a mess." Seems to me that Dr Thorne agrees 100% with one of the key comments in my post. This is unsurprising, since I drew this observation from the specialist literature, which is quite candid on the topic, though this is less clear as the reports get "culled", in Dr Thorne's turn of phrase, for the public.

I expressed concern that the changepoint methodologies used in dealing with extremely inhomogeneous temperature data were potentially very problematic: these techniques were not well established by the general statistical community, and could bias the results. Dr Thorne stated that "the choices that a dataset creator makes imparts non-negligible and unintentional bias into the resulting database". Again, it seems to me that Dr Thorne has agreed with the second key point of the above post. Again, this is unsurprising, since the specialist literature in the field (e.g. the two quotes from Sherwood) says exactly the same thing.

I concluded that the radiosonde data seemed to be of insufficient quality to support any conclusion, one way or the other, as to whether radiosonde observations were or were not inconsistent with models (though I did not comment on the satellite record, as it has its own set of issues). Dr Thorne said: "The fact of the matter is that we cannot definitively say whether the troposphere is warming less quickly, as quickly, or more quickly than the surface, either from sondes or satellites, although no one doubts it is warming." Again, I see no point of disagreement between this observation and my conclusion in respect of the radiosonde data.

I said that Allen and Sherwood argued that the radiosonde wind data was less screwed up than the radiosonde temperature data. I'm not in a position to comment one way or the other on whether this is the case; now that the issue is raised, other scientists may well disagree. But I stated that this was an attempt to use a different portion of the radiosonde data: "Allen and Sherwood 2008 try a different tack: they try to create a homogenized wind data series, on the basis that the radiosonde wind data is much less screwed up. They then argue that the trends in wind are consistent with tropical troposphere warming. They use this as evidence for the side of the argument that the UAH satellite temperature trends in the tropics are incorrect. I guess that we'll see more about tropospheric wind data in the next while." Again, I see no material point of difference between this observation and Thorne's corresponding observation.

So why the petulant tone in Dr Thorne's post? He felt that I had not properly praised the radiosonde community for archiving their data. While I've not fully surveyed their archival practices, as noted above, I was able to quickly locate and readily download data from the relevant data sets. It would have done no harm to have noted this, and I've amended the above post to express this. On a scale of 1 to Lonnie Thompson, they're pretty good. But by now, archiving data should be regarded merely as a type of hygiene. Hopefully we'll reach a point where praising a climate scientist for archiving data would seem as ridiculous as praising George Bush or Hillary Clinton for brushing their teeth. But since Dr Thorne wishes some praise of this sort from these quarters, as noted above, I'm happy to recognize their archiving efforts, and have done so.

Dr Thorne makes the Rabbettesque, Halperinesque accusation that criticisms of their adjustment methodologies come from a failure to read the manuals. Thorne: we and others have been applying our methodologies to
test-bed cases precisely to try to ascertain what can and cannot be said, and those test cases are becoming increasingly complex and realistic. Papers in the pipeline will address this further, but McCarthy et al. gives a start, and both Steve Sherwood's IUK and Leo's RAOBCORE include a degree of verification in test cases, so this criticism seems born more of not reading the manual than anything else.

In my post, I quoted criticisms from Sherwood et al. 2008, a paper that is current, of all prior adjustment efforts, and stated that I was prepared to stipulate to these criticisms. Yes, there are a slew of new changepoint analyses. To carry out a complete deconstruction of all these changepoint analyses was beyond the scope of this post, or my interest. Perhaps one of these new adjustment methods will cut the Gordian knot where previous methods have failed. The track record of these prior efforts, according to the most recent survey by Sherwood, is not encouraging. My point was that these particular changepoint analyses were homemade methods developed by the climate science community, and their properties were not well understood by the general statistical community. I can't go to a statistics textbook and look up calculations of confidence intervals for any of these new techniques. As Dr Thorne observes, "the choices that a dataset creator makes imparts non-negligible and unintentional bias into the resulting database". Quite so. I see no justification for Dr Thorne's petulant tone, which is all too reminiscent of the Gavinesque sigh that we've all become used to.

Craig Loehle | Posted May 28, 2008 at 7:15 AM | Permalink

Re Peter Thorne's comments: to add to what RomanM said, the entire procedure of multiple adjustments, models, assumptions and analysis procedures, while perhaps all very reasonable individually, leaves too much wiggle room for arbitrary choices, and is without an overall theoretical or experimental frame. Such a frame would be, for example: in a randomized block experimental design, there is an established statistical framework for how to handle errors and do the analysis. When multiple ad hoc and/or complex analyses are done, we end up afloat in theory-land, with no way to evaluate the results rigorously. This is even with perfectly honest scientists. My own field, ecology, suffers from this quite a bit, which makes competing theories hard to test for many decades, and study results sometimes not convincing to others. In such a setting, it is really too bad when people make claims about the certainty of their results that can't be supported by any rigor.

Filippo Turturici | Posted May 28, 2008 at 7:43 AM | Permalink

Dr Thorne, having made a rude criticism of that work, I think you are right to ask for explanations. I could say I above all agree with Dr McIntyre's point of view, but I want to be more clear, maybe because I am younger and ruder than he is. [snip: sorry about this, but it's too diffuse a question to expect Dr Thorne to answer]

Kenneth Fritsch | Posted May 28, 2008 at 8:14 AM | Permalink

Re JamesG, #42:

"At RC it is quoted that Thorne concludes: 'The new analysis adds to the growing body of evidence suggesting that these discrepancies are most likely the result of inaccuracies in the observed temperature record rather than fundamental model errors.' This quote is not at all culled, as he says of the long-awaited 'experimental verification of model predictions' quote, but nevertheless both statements are inconsistent with the uncertainties and caveats that he has stated above."

I agree, and find Thorne's post a long way around saying there are uncertainties in temperature measurements. I suspect one could just as readily apply what he says about the troposphere to the surface. The problem I have with some of these scientific opinions is that they seem to come with an outcome in mind. I also have a problem with climate scientists who make corrections by concentrating on, and doing them primarily in, one direction. Notice that Thorne does not confine his argument to the reliability of radiosondes, but of MSU also. Thorne and RC continue to present good evidence for the case that those of us interested in climate science need to do our own investigations and analyses.

Ivan | Posted May 28, 2008 at 8:52 AM | Permalink

This could be a little bit off-topic, but not entirely. In his RC critique of the Douglass et al. 2007 paper, Gavin Schmidt displays two graphs, representing greenhouse and solar forcing, both showing a tropical hot spot, and states that GISS model simulations show that hot spot quite IRRESPECTIVE of the type of forcing involved. So, if Schmidt is right, vertical amplification of warming in the tropical troposphere IS NOT a unique signature of greenhouse warming, as commonly understood, but a regular consequence of any kind of warming. This is quite contrary to what the IPCC says, and to what Kenneth posted in #36: that only greenhouse warming shows that characteristic tropospheric fingerprint. An interesting question to clarify, by someone better qualified in physics than me: is NASA GISS (Schmidt) wrong, or the IPCC? If the IPCC is wrong, then this whole fight to correct and adjust radiosonde and satellite data is misplaced; it would achieve nothing in terms of attribution of recent warming to any particular cause. If Schmidt is wrong, is it possible that no one spotted his mistake thus far, waiting for me, a philosopher-economist, to do it?

Ivan | Posted May 28, 2008 at 9:01 AM | Permalink

Kenneth, #36: I have now read that you also referred to Schmidt's strange redefinition of tropical warming amplification, but you didn't emphasize that Schmidt displayed, for greenhouse and solar forcing, two basically identical graphs, apart from the stratosphere. That is quite different from the IPCC graphs you have posted.

yorick | Posted May 28, 2008 at 9:03 AM | Permalink

What you cannot say is that this new analysis confirms the models. I guess the best you can say is that the radiosonde data does not falsify the models. The point was made, and it is apparently true, that the way "good" data is determined is by reference to agreement with the model
output when as Steve points out good data should be identified by careful qc analysis of the collection methods irrespective of the results as long as the results are not unphysical in some way and no you cannot defined unphysical through GCM output Can somebody explain to me why the results here are not circular Steve McIntyre Posted May 28 2008 at 9 04 AM Permalink PErhaps someone could look through IPCC AR4 and see whether they reported Dr Thorne s above observations that the data was a mess and that attempts to create data sets were fraught with problems and that the field experienced the under funding teported above by Dr Thorne M Simon Posted May 28 2008 at 9 41 AM Permalink Saint Stephen with a rose in and out of the garden he goes Country garden in the wind and the rain Wherever he goes the people all complain Fortune comes a crawlin calliope woman spinnin that curious sense of your own Can you answer Yes I can But what would be the answer to the answer man St Stephen by the Grateful Dead for all the Old Hippies and Climate Auditors Steve McIntyre Posted May 28 2008 at 9 46 AM Permalink IPCC AR4 said Within the community that constructs and actively analyses satellite and radiosonde based temperature records there is agreement that the uncertainties about long term change are substantial Changes in instrumentation and protocols pervade both sonde and satellite records obfuscating the modest longterm trends Historically there is no reference network to anchor the record and establish the uncertainties arising from these changes many of which are both barely documented and poorly understood Therefore investigators have to make seemingly reasonable choices of how to handle these sometimes known but often unknown influences It is difficult to make quantitatively defensible judgments as to which if any of the multiple independently derived estimates is closer to the true climate evolution This reflects almost entirely upon the inadequacies of the historical 
observing network and points to the need for future network design that provides the reference sonde-based ground truth. Karl et al 2006 provide a comprehensive review of this issue. Although the language is polite, the conclusion is that the radiosonde data is a mess, as Thorne observes above, and that no individual adjustment method can be selected as "right", as Thorne also observes. I guess what Thorne doesn't like is us saying that here.

Erv Leonard, Posted May 28, 2008 at 10:08 AM, Permalink: I have doubted the ability to determine or measure AGW from the beginning. Before I get into the details: the first problem that jumped out at me was the potential quality of measurements of very small changes measured over long periods of time. The second problem, and the most dangerous, is that people stand to gain financially from getting folks to believe in AGW. Though I will be the first to admit I am ignorant of the nomenclature used on this site, one thing is obvious: all the data comes from some form of measuring instrument, and the data is interpreted and then manipulated based on acceptable standards. This site touches on all three aspects, and it is obvious many AGW believers have played a bit fast and loose with how "the data is interpreted and then manipulated based on acceptable standards". I will leave that debate to the experts here. What I would like to comment on, and it is touched on here, is the quality of the data from the measuring instruments and their operators. A little personal history is needed. I spent 13 years installing, repairing and instructing on the proper use of analytical instruments. They included gas chromatographs, liquid chromatographs, mass spectrometers, capillary electrophoresis, atomic emission detectors, diode array detectors and many more. In those 13 years I learned the following:

1. 80% of all problems were operator error. Our call centers reported, year after year, that 80% of trouble calls were fixed over the phone by correcting user practices or misconceptions.
2. Operating instruments in even the most controlled environments still did not eliminate environmental effects on the instrument: talking in front of a refractive index detector too long, or sunlight reflected off a pane of glass heating instrument parts, all can affect results.
3. Calibrating an instrument only guaranteed it was in specification at the moment of the calibration.
4. If it has electronic components, the number of potential errors is astronomical.
5. Expecting a result GREATLY affected the objectivity of the operator in evaluating data.
6. Being human, operators get sloppy and circumvent proper protocols. I once had a NASA PhD taking CO2 samples of the atmosphere by walking out in the hall and drawing a sample. I asked him if he felt that was appropriate, and his response floored me: he said, "I wait until no one has walked by for a few minutes."
7. And lastly (I know I may insult a large number of this site's readers, so I will qualify it to say I do not know if it applies to this field of science), so here goes: without a doubt, the most incompetent, unethical and bottom-of-the-barrel operators were PhDs in academia. On average I spent one day a week in a college or university lab; the other four days were in the private sector: pharmaceutical labs, chemical manufacturing labs, food production labs, plastic manufacturing. The difference between academia and private sector PhDs was not just a little bit, but dramatic. It was so bad that we (my coworkers) would make bets on sports events, and the loser had to do the academia service calls for a given period. Again, I am sorry if this does not apply to this field of study.

Bottom line: in 13 years of being totally immersed in extremely accurate measurement instruments, I seriously doubt the ability of many instruments in diverse and harsh environments to accurately measure the very small variances claimed by the AGW crowd with any reliability.

Dave Dardinger, Posted May 28, 2008 at 10:24 AM, Permalink: I think in one sense, Steve, you're being too hard on Dr Thorne. It doesn't look to me
as though he was complaining about your basic post. A lot of what he says is basically a repeat of what you said, but he doesn't do it in a context of saying you were wrong. Likewise, it's useful to have a specialist in the field back up what you said. What's a bit more contentious were his last three paragraphs, which were explicitly addressed to other comments. I haven't parsed the entire thread to see who he's addressing, but I expect they are his general comments on the threaded blog comments rather than on the head post. Unfortunately, a lot of scientists who come to a blog like this seem to think that the comments of other people are to be taken as representative of the blog owner's thought, though except on the most highly censored sites this isn't the case. Since active scientists don't have time to learn much about how blogs work, this is understandable, but it does mean they're often tilting at windmills. That's what I believe is the case here.

Paul Maynard, Posted May 28, 2008 at 10:26 AM, Permalink: Re Peter Thorne. So what Thorne and the IPCC are saying is that none of the post-war temperature records are much cop. We know they place great emphasis on the surface record, but we know from this blog and SurfaceStations.org what the quality of that record is. One comment of Thorne's at the beginning that flummoxed me was: "Firstly, and most importantly, neither the radiosonde or the satellite programs have answered primarily (or at all, in some cases) to the needs of climate. Rather, these measurements have been made with operational forecasts in mind." I don't want to start the weather-versus-climate debate again, but I don't see how this can hold up. If the radiosondes are not good enough for climate, how are they good enough for operational forecasts? Can somebody tell me what the difference in accuracy required for climate and for operational forecasts is? If the radiosondes are so poor at their job, why are they used? Perhaps this explains why forecasts can only be made for 4 days ahead and are invariably wrong, but projections can be made for 100 years from models that are now validated by wrong data. And how, after these admissions, can the IPCC claim the certainty it does for AGW? Cheers, Paul

MarkW, Posted May 28, 2008 at 11:01 AM, Permalink: Paul, the difference is that nobody cares if the forecast for tomorrow's temperature is off by 2.1 degrees vs 2.2 degrees, so the need to be accurate to tenths of a degree is just not there. When it comes to climate, we are looking for a signal that is believed to be a few tenths of a degree per century. For that you need instruments that are accurate to hundredths of a degree.

Rusty S, Posted May 28, 2008 at 11:23 AM, Permalink: If I may be so bold as to give my reading of Thorne's post, I think what Thorne is really saying he doesn't like is the radiosonde community taking all the heat for the messy data when it is used in climate analysis. "Firstly, and most importantly, neither the radiosonde or the satellite programs have answered primarily (or at all, in some cases) to the needs of climate. Rather, these measurements have been made with operational forecasts in mind. This has meant that there have been numerous changes in instrumentation and observing practice over time which make the purported issues with the surface record look like a walk in the park in comparison. Unlike surface observations, radiosondes are single-use instruments (fire and forget), and satellites have, in a climate sense, very short lifetimes and are subject to all that space can throw at them. Neither is appealing as a raw database from which to construct an unimpeachable dataset, either in isolation or in combination." My reading of this is that the system was designed for operational forecasts, and the data has been carefully collected and logged, with metadata and audit trails, as would be expected; but this is all within the parameters of operational forecasts. The problems that arise from using this data in climate analysis are varied and complex, and they are acknowledged within the radiosonde community, and some attempts to identify useful information from the existing datasets are happening. I think I would get upset too if I were fielding complaints about how someone else was using my data for something it was never designed to be used for in the first place. To me that

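MarkW's remark in the thread above, that a climate signal of a few tenths of a degree per century demands far tighter accuracy than weather forecasting, can be made concrete with a toy calculation. The sketch below uses entirely made-up numbers (a 0.5 degC/century trend, 0.2 degC of year-to-year noise, 60 annual values), not real station records; it computes an ordinary least-squares trend and its standard error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration, not real station data: a 0.5 degC/century
# trend buried in 0.2 degC of year-to-year noise over 60 annual values.
years = np.arange(1950, 2010)
true_trend = 0.005                      # degC per year = 0.5 degC/century
temps = true_trend * (years - years[0]) + rng.normal(0.0, 0.2, years.size)

# Ordinary least-squares slope and its standard error.
x = years - years.mean()
slope = np.sum(x * (temps - temps.mean())) / np.sum(x ** 2)
resid = temps - temps.mean() - slope * x
se = np.sqrt(np.sum(resid ** 2) / (years.size - 2) / np.sum(x ** 2))

print(f"trend: {slope * 100:.2f} +/- {2 * se * 100:.2f} degC/century (2 sigma)")
```

With these assumed noise levels, the 2-sigma trend uncertainty comes out at a few tenths of a degree per century, comparable to the assumed signal itself; that is MarkW's point in numbers: resolving tenths of a degree per century leaves little headroom for instrument drift or changeover artifacts.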
    Original URL path: http://climateaudit.org/2008/05/27/leopold-in-the-sky-with-diamonds/ (2016-02-08)

  • Guy Callendar vs the GCMs « Climate Audit
    timestamp of 2002 had been prepared especially for me Mann s claim that we had not checked with him when we noticed problems with the data Mann s claim that the dataset which he moved into a public folder in November 2003 had been online all the time notice the CG2 email in which Mann accused me in September 2003 of trying to break into his data Your blog article distributed these lies to the public though Mann had himself distributed them widely through email distribution When I asked you in the politest possible manner appealing to your journalistic ethics to correct Mann s untruthful allegations you refused fastfreddy101 Posted Jul 27 2013 at 12 58 PM Permalink The trouble with most of us is that we d rather be ruined by praise than saved by criticism Brandon Shollenberger Posted Jul 27 2013 at 4 23 PM Permalink Steve you might be interested in an exchange I had with David Appell about a year ago He requested a single example of Michael Mann lying I responded How about Mann s repeated assertion that Steve McIntyre and Ross McKitrick made errors because they had asked for an Excel spreadsheet I m sure you should be able to find documentation for that one since you were following the topic when it first came up Mann s repeated that lie in to one of the groups investigating things after Climategate and they accepted his answer without any attempt at verifying it He also repeated that lie in his book a page number for which I can provide if you d like I then pointed out we have the correspondence between you and Michael Mann David Appell s response How do you know it s the entire correspondence How do you know what transpired in other correspondence In other words it s no proof at all but still one person s word against the others On a subject that frankly has gotten to be a broken record and hence a waste of time from important issues There is no excuse too great for David Appell He seriously suggested you deceptively posted only a portion of the 
correspondences you had with Mann et al in order to paint Michael Mann as a liar It d have been easy for Mann to prove this about you but nobody else has ever suggested it Still Appell thinks it s a valid enough idea to dismiss the overwhelming evidence that Mann lied That s what made me decide David Appell is like a less intelligent less honest version of Nick Stokes I figured there was no point in pursuing the matter further as I m sure he could find ways to deny any evidence one might present Steve As you observe Appell s attempt to rationalize Mann s lying is very Nick Stokesian But even this bizarre excuse doesn t rationalize the other elements of Mann s lying at the time 1 the data that I was referred to was timestamped in 2002 and was not prepared in response to my request in April 2003 2 we had contacted Mann to confirm that this was the data that he had used and I provided an email to show it Also in the Climategate emails there s an email from Mann to Climategate emails described as containing my original request and which was the email that I had placed online I don t regard Appell as harshly as some readers but I m very disappointed in his irrationality on this topic TerryMN Posted Jul 27 2013 at 9 39 AM Permalink Of course that s because people like you and your crowd have harassed the latter to no end The irony it burns Steven Mosher Posted Jul 28 2013 at 1 14 AM Permalink Of course that s because people like you and your crowd have harassed the latter to no end So what exactly is your point That if only you d have had the chance to harass Callendar he too have been just as upset about it David Appel argues that science is rough and tumble http judithcurry com 2013 07 27 the 97 consensus part ii comment 353709 Yet asking for data is harassment snip gallopingcamel Posted Jul 27 2013 at 12 10 AM Permalink David Appell As a master of obfuscation you seem unable to make precise statements What I got out of your comment was a complaint that Steve has 
somehow been unfair to the Climategate Correspondents In the opinion of virtually everyone Steve is overly kind and generous What he sees as errors in statistical analysis look like dishonesty or fraud to many of us The problem with your web site Quark Soup is that honest discussion is not possible when censorhip rules You will get better treatment here than you deserve sue Posted Jul 27 2013 at 1 37 AM Permalink Steve why do the observations on your graph go past the present I know I said I was done here but this is bothering the heck out of me In the past you were a detail guy now it doesn t matter trade marked by you via climate scientists isn t going to work Bender Bender Bender or RC Please help me I m dazed and confused Steve I ve included the most recent UK Met Office decadal forecasts as a dotted line and noted Decadal in the legend I ll make the caption clearer sue Posted Jul 27 2013 at 10 34 AM Permalink oh Thanks for the reply Martin A Posted Jul 27 2013 at 2 50 AM Permalink This last comment was noted up in Hawkins and Jones 2013 who sniffed in contradiction that great progress had subsequently been made in determining whether warming was beneficial or not quoting Callendar but surgically removing Callendar s reference to direct benefits heat and power and carbon dioxide fertilization surgically removing Bowdlerization is an alternative term In this case it is an example of altering a scientific publication to change its message drissel Posted Jul 27 2013 at 1 31 PM Permalink Snipping a quote to change its meaning is technically known as Dowdification after Maureen Dowd of NYT http www urbandictionary com define php term dowdification Regards Bill Drissel Grand Prairie TX FerdiEgb Posted Jul 27 2013 at 2 53 AM Permalink Steve nice work The lack of skill of the current GCM s always wondered me and I have been suspicious about the use of human aerosols as a convenient tuning knob to fit the past and even so not so good especially not in current times 
where the reduction of aerosols in the Western world is near fully compensated by the increase in SE Asia With virtually no change in global aerosols there is no way that aerosols can explain the current standstill in temperature with ever increasing CO2 levels Thus what else can be the cause of the standstill 1945 1975 and 2000 current Large ocean oscillations may be one of the culprits as these are not reflected in any GCM But if they are responsible for the standstill of the past and today with record CO2 emissions then they may at least in part responsible for the increase in temperature inbetween As a side note Callendar was the man who did throw out a lot of local CO2 measurements made by chemical methods to show a curve for what he thought that the real increase of CO2 over time was until then He used several predefined criteria to do that not the post editing we see from many in modern climate research The remarkable point is that his estimates for the increase of CO2 over that period was confirmed several decades later by ice cores Steve there was a recent article on tuning which Judy Curry covered I haven t parsed the topic but it sounded like there are knobs connected to cloud parameterization clouds needless to say being a sort of black hole in terms of comprehensive understanding David L Hagen Posted Jul 27 2013 at 1 12 PM Permalink Steve re clouds being a sort of black hole in terms of comprehensive understanding In his TRUTHS presentation 2010 slide 7 Nigel Fox of the UK National Physical Lab summarizes the IPCC s uncertainty of 0 24 for clouds compared to 0 26 total 2 sigma I e Clouds form 92 of all uncertainty in the feedback factor Furthermore Uncertainty in feedback limits the ability to discriminate to 30 years Need to constrain models with data more accurate than natural variability kim Posted Jul 27 2013 at 5 52 PM Permalink I think I ve never heard so loud The quiet message in a cloud MikeN Posted Jul 27 2013 at 10 42 PM Permalink I ve worked 
with an MIT climate model I think it was EPPA 2 Presumably a simplified version as century runs would run in about 10 minutes For this you were explicitly providing values for ocean sensitivity aerosols and clouds And yes changing these numbers would give you huge variations in the sensitivity The professor even acknowledged that certain values which were possible would give you an amount of warming equal to the previous century and no big deal jorgekafkazar Posted Aug 4 2013 at 6 18 PM Permalink And yes changing these numbers would give you huge variations in the sensitivity Which is congruent with the precision MIT Monty Carthon climate study tool shown here This is not science this is a Big Six carnival game michael Kelly Posted Jul 27 2013 at 4 45 AM Permalink Even more primitive than Callendar is the more recent work of Akasofu 2010 2013 who has looked at the simplest non trivial fit to the data he predicted the present temperature stasis and predicts that it will last another 15 years It may but is not likely to be a fluke Akasofu made his prediction based on the attribution of the first harmonic to oceanic patterns It is a concern that most practitioners of GCMS have not taken it at all seriously If they had engaged in 2000 they might have thought more carefully about what is happening and the systematic divergence of the GCMS from the real world data that ahs occurred since then might have not have emerged into the serious problem it is today Syun Ichi Akasofu On the Present Halting of Global Warming Climate 2013 1 4 11 doi 10 3390 cli1010004 S Y Akasofu On the recovery from the little ice age 2010 Natural Science 2 1211 24 Chris Wright Posted Jul 27 2013 at 5 39 AM Permalink This chimes in beautifully with a recent post by Willis at WUWT He found that a simple formula that can easily be run on a laptop performed as well or possibly better than the climate models running on million dollar supercomputers snip overeditorializing Jimmy Haigh Posted Jul 27 2013 
at 6 25 AM Permalink Nice to see good old fashioned science in action Excellent stuff Steve Top quality Thanks Geronimo love this As Lord Rutherford once said presciently in my view If you can t explain your theory to a barmaid it probably isn t very good physics And no surprises on reading David Appel s response Doc Snow Posted Jul 27 2013 at 9 55 AM Permalink I don t know why folks here are characterizing Callendar as some sort of unknown As has been pointed out above his work is well described in Weart s Discovery of Global Warming and JR Fleming has written a biography at book length http www rmets org shop publications callendar effect One offshoot is an online article https secure ametsoc org amsbookstore viewProductInfo cfm productID 13 His bio is also the primary source for my article on Callendar here http doc snow hubpages com hub Global Warming Science And The Wars Callendar is scarcely unknown to readers of RC or of the climate mainstream For example his papers form an important and treasured archive at the University of East Anglia https www uea ac uk is collections G S Callendar Notebooks I ve characterized Callendar as the man who brought CO2 theory into the 20th century He corresponded with several of the important mid century figures in that study notably Gilbert Plass but also if I m not mistaken Dave Keeling And Revelle was certainly aware of his work probably so too was Bert Bolin first chair of the IPCC All of which explains why it was that scientists celebrated the 75th anniversary of his 9138 paper back in April Googling guy callendar anniversary produced 697 000 hits Steve I certainly did not characterize Callendar as some sort of unknown Quite the opposite In my post I cited and linked your article http doc snow hubpages com hub Global Warming Science And The Wars noting that I had drawn on it for my profile of Callandar I also referred to the Callendar archive at the University of East Anglia contrasting Callendar s meticulous 
recordkeeping with Phil Jones casual failure to preserve original station data I cited Hawkins and Jones retrospective article on Callendar tho I criticized it for overly focusing on Callendar s temperature accountancy work I also mentioned Plass also Canadian born I don t understand your complaint Doc Snow Posted Jul 27 2013 at 9 56 AM Permalink Sorry 1938 paper Doc Snow Posted Jul 30 2013 at 6 43 AM Permalink Thanks for linking the article I m not complaining about your original text I m expressing surprise at the characterizations of a number of commenters and for clarity some sort of unknown is my phrase relating only to my perception of a number of comments and is not meant to refer to any particular comment Steve if you are criticizing someone I think that you have an obligation to accurately quote and reference precisely what you are criticizing Gavin Schmidt at Real Climate has a reprehensible habit of not doing this It s a bad habit that you should try to avoid David L Hagen Posted Jul 27 2013 at 1 33 PM Permalink Doc Snow With nominally 425 citations to Guy Callendar 1938 it is surprising that the IPCC ignores his climate sensitivity estimate Steve Archer and Rahnstorf Climate Crisis reported that Callendar s sensitivity estimate was 2 deg C and that he had supported water vapor feedbacks AJ Posted Jul 27 2013 at 5 30 PM Permalink I also mentioned Plass also Canadian born So was Nesmith the inventor of basketball This gives credence to the old joke Q How many Canadians does it take to screw in a light bulb A Ten One to screw in the lightbulb and nine to say Look He s Canadian kim Posted Jul 27 2013 at 5 44 PM Permalink Naismith AJ Posted Jul 27 2013 at 6 11 PM Permalink I staannd coorectid tomdesabla Posted Jul 28 2013 at 10 40 PM Permalink Nesmith was the Monkey who flew out of Moshers Akasofu AJ Posted Jul 27 2013 at 6 26 PM Permalink You wouldn t have noticed if I used an accent aigu as in Nésmith eh kim Posted Jul 27 2013 at 7 18 PM Permalink Heh I 
wouldn t have been able to find my way home AJ Posted Jul 27 2013 at 8 27 PM Permalink David Smith Posted Jul 27 2013 at 9 26 PM Permalink Good one Makes me remember very fondly my years living in Ontario kim Posted Jul 27 2013 at 5 54 PM Permalink I ve asked Spencer Weart a number of times when he is going to write The Discovery of Global Cooling bernie1815 Posted Jul 27 2013 at 11 35 AM Permalink Doc Snow I found your article clear and helpful The links were also helpful Alas Flemings book appears to be hard to find Doc Snow Posted Jul 30 2013 at 6 34 AM Permalink Thanks much appreciated I don t have a copy of the Fleming myself my local library was able to get me one via ILL Inter Library Loan Maybe yours can too David L Hagen Posted Jul 27 2013 at 12 59 PM Permalink Brunt s Discussion to Callendar s paper appears as pertinent today in terms of evaluating GCM s skill in hindcasting forecasting or lack thereof as Steve quantifies Prof D Brunt referred to the diagrams showing the gradual rise of temperature during the last 30 years and said that this change in mean temperature was no more striking than the changes which appear to have occurred in the latter half of the eighteenth century p 238 dearieme Posted Jul 27 2013 at 1 00 PM Permalink A steam technologist who did furnace calculations would be familiar with methods of calculating radiation through an atmosphere containing CO2 and H2O because flue gas contains those two species Steve his practical experience clearly enabled him to see things that academic climate scientists of his day were unfamiliar with AJ Posted Jul 27 2013 at 5 53 PM Permalink Steve his practical experience clearly enabled him to see things that academic climate scientists of his day were unfamiliar with Much like our host Eli Rabett Posted Aug 25 2013 at 8 31 PM Permalink You seriously underestimate how hard and low resolution IR measurements were in those days klee12wp Posted Jul 27 2013 at 4 34 PM Permalink Excellent work Steve 
McIntyre. But: (1) I never had much faith in the models used by the IPCC that projected global temperatures to 2100, because they could not be validated before 2100. Consistency requires that I restrain my faith in Callendar's model until then; however, I have much more faith in it than in the IPCC's models, because it does seem to fit the data better than the IPCC models. (2) What global temperature would Callendar's model project using various scenarios? Am I correct in assuming Callendar's model is mostly CO2 forcing without much feedback? Then Callendar is assuming, say, that sunspots are not important. Whatever the answers to the above questions are, I think the current modelers might want to reevaluate their models. klee12

Steve: I'm not asking readers to take a position on whether Callendar was right. As I observed, his parameters are hardly engraved in stone. The question was why, after so much resources and effort, GCMs had no skill in GLB temperature relative to Callendar.

kim, Posted Jul 27, 2013 at 5:55 PM, Permalink: Can't wait for Bel Tol to chime in.

durango12, Posted Jul 27, 2013 at 6:51 PM, Permalink: Evidently the establishment has already defined Callendar's work as consistent with the IPCC (http://en.wikipedia.org/wiki/Guy_Stewart_Callendar), though on the low end. Never mind the difference between a climate sensitivity of 2 deg per Archer and 1.67 deg, the latter value lying outside of the range of likelihood defined by the IPCC.

climategrog, Posted Jul 28, 2013 at 2:42 AM, Permalink: "Steve: there was a recent article on tuning which Judy Curry covered. I haven't parsed the topic, but it sounded like there are knobs connected to cloud parameterization, clouds needless to say being a sort of black hole in terms of comprehensive understanding." This is what I was trying to point to in the last thread, and it got snipped as OT. The cloud amount is the biggest fiddle factor in the whole story, especially in the tropics, where most of the energy input to the system comes in. Roy Spencer calculated that it would take a 2%
change in cloud cover to equal the CO2 forcing and I don t think anyone claims the current guestimates of cloud cover are accurate to within 2 That means modellers can just pick cloud parametrisations that give the results they like Also tropical cloud cover is not just some wandering internal variability it is a strong negative feedback climate mechanism The problem is that individual tropical storms are well below the geographical resolution of any model so don t get modelled AT ALL Just parametrisation We have a natural experiment to look at climate response to changes in radiative input in the from of major eruptions And when we look at the tropics we see a very different response to extra tropical regions http climategrog wordpress com attachment id 310 The cumulative integral of degree days or growth days to farmer seems to be fully maintained in the tropics This implies a strong non linear negative feedback mechanism It think it is very likely that it is tropical storms that provide the physical mechanism Willis has posted a number of times on this at WUWT calling it a governor A governor would maintain a roughly constant value of the control variable I think my graphs show this is tighter control that a temperature governor since it appears to maintain the cumulative integral This would be closer to an industrial PID controller To do this requires a self correcting mechanism not just a passive negative feedback The nature of tropical storms where the negative feedback is amplified and self maintaining seems to provide that The series of inter linked graphs I ve provided demonstrate that there is a fundamental control mechanism in the tropics that leaves them with a near zero sensitivity to changes in radiative forcing That may go some way to explaining why a low sensitivity model works better on global averages but it s not just a case of playing with the global tuning knob Steve McIntyre Posted Jul 28 2013 at 5 45 AM Permalink OK Arguing for specific knobs 
is non Callendar but I guess that I opened the theme FWIW it seems to me that clouds also function as a sort of regulator in mid latitude summers Today is another cool cloudy day in Ontario in what has been a rather cool cloudy summer In our mid latitude summers heat waves seem to occur when there are blocking patterns that enable the sun to pour in as in the 1936 and 2012 heat waves Although we are often told that cloud feedbacks are positive in our mid latitude summers when the total solar insolation is very large even in tropical terms cloudy days are cool Given that the planet is warmest in NH summer even though it is then almost at the farthest in its orbit mid latitude thermostats are probably worth mulling over as well climategrog Posted Jul 28 2013 at 9 36 AM Permalink mid latitude thermostats are probably worth mulling over as well Indeed and my graphs demonstrate that too The same one I linked above covers ex tropical SH and shows that between 3 and 5 years after the average eruption the integral is flat ie SST is at the SAME temp as the four year pre eruption reference period http climategrog wordpress com attachment id 285 How much of this is the stabilising effect of tropics and how much is local thermostatic effects would need investigation The down step is a loss of degree days not a permanent temperature drop Given that the planet is warmest in NH summer This is why I shy away from global averages especially land sea averages Land temps change about twice as fast as SST http climategrog wordpress com attachment id 219 That means that looking at SST should be sufficient to follow any warming patterns and avoids introducing the land sea ratio bias of NH I think it is quite important to get any further with understanding that we move beyond unified global average metrics That kind of approach is OK as a first approximation but to understand why models are not working and have even a chance of determining system behaviour beyond a trivial CO2 curve we 
need to stop muddying the water my mixing all the paints in one pot Jeff Norman Posted Jul 31 2013 at 2 41 PM Permalink Steve Yes a cool and cloudy summer but not cooler than the average at Pearson since 1938 anyway BTW and COTSSAR Environment Canada changed how they register temperatures at Pearson GTAA in July Completely Off Topic So Snip As Required mrsean2k Posted Jul 31 2013 at 7 22 PM Permalink Given the fact that GCM Q is a few orders of magnitude simpler and less resource intensive than the GCM s you re pitting it against how feasible would it be to exhaustively test parameterisation and goal seek a better fit Not a guarantee of the plausibility reality of the winning parameters but maybe interesting Ed Hawkins ed hawkins Posted Jul 28 2013 at 5 33 AM Permalink Hi Steve I think Callendar did some amazing work Our focus in the 75th anniversary paper was on his temperature records but his work in collecting the various CO2 observations and also inferring that the ocean would not take up all the excess human emissions of CO2 was also excellent and way ahead of Revelle who proved this much later His model of the atmosphere was advanced for the time but he did consider the radiative balance at the surface whereas we now consider that this is flawed and the balance at the top of the atmosphere TOA is more appropriate Interestingly Arrhenius used TOA balance Regards Ed PS The two links which look they should go to my blog article on this are wrong one points to the paper first para and the other is missing in postscript Ed Hawkins ed hawkins Posted Jul 28 2013 at 1 15 PM Permalink PPS The article I mention is this one http www climate lab book ac uk 2013 75 years after callendar And the accepted paper on the 75th anniversary is now online in it s final form and is open access http onlinelibrary wiley com doi 10 1002 qj 2178 abstract Ed FerdiEgb Posted Jul 28 2013 at 5 19 PM Permalink Ed Hawkins I agree that Callendar did amazing work in his time But I have some 
objections against your story in the first link, for the mixing in of the role of aerosols in the cooling period 1945-1975, which isn't part of the Callendar story, but part of the tuning story of current GCMs to explain that period with increasing CO2 levels. While SO2 emissions may have had some small role in that period, they can't have a role in the current standstill, as the increase of emissions in SE Asia is compensated by the decrease in emissions in the Western world; thus there is hardly any increase in cooling aerosols, while CO2 levels are going up at record speed and temperatures are stalled. That makes it quite doubtful that the same aerosols would have had much impact in the previous period of temperature standstill/cooling.

Ed Hawkins Posted Jul 29, 2013 at 2:02 AM Permalink

FerdiEgb, the effect of aerosols is thought to be more complicated than you imply. For example, a simple shift of emissions from one location to another could still have a global temperature impact, because (a) emissions into a regionally cleaner atmosphere have a larger impact, and (b) any cloud/circulation response will depend on the mean state in the region where the emissions occur, i.e. the effect is likely non-linear. As the emission of aerosols in the 1940s onwards tended to be into a cleaner atmosphere, they may have had a larger effect. There is still much debate about possible causes of the recent slowdown in temperatures, but the natural (solar, volcanic) forcings are very likely to have had an effect: http://www.climate-lab-book.ac.uk/2013/recent-slowdown/ As an aside, Steve's forcing estimate used above for the GCM Q doesn't, I believe, include the natural forcings. Regards, Ed

FerdiEgb Posted Jul 29, 2013 at 2:53 AM Permalink

Ed Hawkins, I have more the impression that aerosols were a convenient way to explain the non-change in temperature 1945-1975 with increasing CO2 levels. When stringent measures were taken in industrial and residential areas (see the London smog), that should have given a huge difference in temperature downwind of the most polluting sources, as the average residence time of tropospheric SO2 is only 4 days, but that was not measurable at all. But this is an aside of the main article, which is about the performance of the complex GCMs compared to the simplest model possible, or any simple model (see http://www.economics.rpi.edu/workingpapers/rpi0411.pdf). Maybe worth another discussion about the causes of the current standstill.

Richard Drake Posted Jul 29, 2013 at 3:50 AM Permalink

"But this is an aside of the main article" I've found the asides on this thread particularly educational, though, those allowed to remain by the rumbling zamboni. Another sign the main post may just be onto something.

Ed Hawkins Posted Jul 29, 2013 at 6:26 AM Permalink

FerdiEgb, the direct effect of aerosols is fairly well understood and produces a cooling effect; it is not just a convenient way to explain the flat period. The indirect effects of aerosols have more uncertainty. And when the clean air acts were implemented and the cooling aerosols were removed, the temperature started to increase. Remember that the global temperature changes are not instantaneous with changes in forcing; this lag is missing from Steve's model, as he acknowledges above. Richard Drake: I think Steve should add the volcanic forcings to GCM Q, and use a skill measure which doesn't penalise the GCMs for having internal variability. Simple models are useful, but have their limitations.

Richard Drake Posted Jul 29, 2013 at 8:12 AM Permalink

Ed Hawkins: Steven Mosher's also suggested adding Leif Svalgaard's new TSI forcing to the mix. I don't know enough to know how wise and/or uncontroversial that might be. I'm also not at my best with skill measures, indeed some would say with skill itself. But that's what I mean by educational. I do have some more confident reflections on regression testing, though, arising from my own experience of software engineering, which I will post further up the thread.

FerdiEgb Posted Jul 29, 2013
at 8:56 AM Permalink

Ed Hawkins, I agree that the direct effect of aerosols is well understood, but I have the impression that the models exaggerate their effect. If you look at the effect of the Pinatubo, and compare that to what humans emit, both in quantity and accumulation (lifetime), then the net global effect of human SO2 emissions is less than 0.05 K. I received a plot of the HADcm3 model which shows the regional impact of the reduction of human aerosols in Western Europe in the period 1990-1999. That gives a 4-6 K increase in temperature downwind of the main emissions area, with the highest effect near the Finnish-Russian border, over that period. But if you look at the temperature record of several stations upwind and downwind of the industrial areas, there is hardly any difference over a long period, except for a stepwise change in 1989, which is directly attributable to the switch of the NAO from negative to strongly positive. That has more effect over the far-inland stations than over the seaside ones. See http://www.ferdinand-engelbeen.be/klimaat/aerosols.html

Ed Hawkins Posted Jul 29, 2013 at 9:25 AM Permalink

FerdiEgb, the figure in your link shows a modelled temperature change that may or may not be due to aerosols. It could be entirely due to climate variability, as it looks like the difference in two 10-year means, which can be very different by chance, and you have no way of knowing the cause. And volcanic eruptions are very different from human-produced aerosols, as they put the aerosol into the stratosphere rather than the troposphere. The effects are then very different, and are not comparable.

FerdiEgb Posted Jul 29, 2013 at 11:03 AM Permalink

Ed Hawkins, the model shows the difference between the influence of all human influences (including GHGs, aerosols and ozone) and GHGs only, thus the influence of aerosols and ozone over the period with the largest decrease in aerosols. I suppose that the codes on top should give the type of runs which were done. Further, chemically and physically there is no difference in effect between SO2 in the troposphere and the stratosphere. The difference is in the residence time, mainly due to the lack of water vapour: the stratospheric injection of SO2 by the Pinatubo did last 2-3 years before the reflecting drops were large enough to fall out of the atmosphere. The human emissions in the lower troposphere drop out on average in 4 days.

Ed Hawkins (ed hawkins) Posted Jul 29, 2013 at 4:46 PM Permalink

Even if, as it looks from the codes at the top of the plots, it is the difference between two simulations with and without aerosol, you are still ignoring the possible effect of variability on the pattern and magnitude of response. With only one simulation of "with" and "without" aerosol you CANNOT separate these two effects. And you are wrong again on the aerosols. Chemically, the volcanic and human-caused aerosols are very different; even each eruption has a very different chemical signature. Physically, the response is also very different if the aerosol emission is into a part of the atmosphere with or without clouds. Ed

FerdiEgb Posted Jul 30, 2013 at 1:41 AM Permalink

Ed, the IPCC TAR (haven't found something similar in the FAR) shows a similar cooling effect of SO2 aerosols (primary effect), see Fig 6.7d in http://www.grida.no/publications/other/ipcc_tar/?src=/climate/ipcc_tar/wg1/fig6-7.htm albeit more eastward, while the main wind direction in NW Europe is from the SW. Thus the one simulation of the HADcm3 models can't be far off for the warming effect for the period 1990-1999, when SO2 emissions were drastically reduced. While there are huge differences in aerosol injection from different volcanoes, most of the heavy particulates injected into the stratosphere are dropping out within a few months. What is retained is SO2, which is oxidized to SO3 (I suppose by ozone), which attracts water to form drops that reflect sunlight. That is a much slower process, as there is little water vapor in the stratosphere. The average time that these drops grow before dropping out of the stratosphere/atmosphere is 2-3 years. Human emissions are quite different in composition, as some also contain brown/black soot, which may absorb more sunlight and thus may have more a warming than a cooling effect, especially over India. But the first effect of SO2 is exactly the same as for volcanic aerosols: oxidizing to SO3 via ozone and OH radicals, attracting water, and the formation of reflecting drops. The main difference is in the residence time: on average 4 days (the IPCC FAR even gives 1 day, http://www.ipcc.ch/graphics/ar4-wg1/jpg/fig-2-20.jpg), thanks to lots of water (rain) in the troposphere. Ferdinand

Ben Posted Jul 30, 2013 at 2:42 PM Permalink

Another possibility to consider for the 1945-1975 lull is the injection of aerosols via nuclear detonation. Over 1900 nuclear tests were performed by the USA, Russia, UK, France, China and India. Nuclear blasts are stratospheric injectors. Open-air testing was rare by the 1980s, and likely only France used open air in the 1990s.

Jeff Norman Posted Jul 31, 2013 at 2:58 PM Permalink

So, "the direct effect of aerosols is fairly well understood and produces a cooling effect". I guess this is why they were forecasting dire cooling
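The residence-time argument in the exchange above (tropospheric SO2 raining out in roughly 4 days, stratospheric sulfate persisting 2-3 years) can be made quantitative with a one-box relation: at steady state, the standing atmospheric burden is the source rate times the residence time. A minimal Python sketch of that bookkeeping, using illustrative placeholder numbers rather than measured emission rates:

```python
# One-box steady-state burden: the steady solution of dB/dt = S - B/tau
# is B = S * tau. All source-rate numbers below are illustrative only.

def steady_state_burden(source_rate_per_day, residence_days):
    """Steady-state burden for a one-box model dB/dt = S - B/tau."""
    return source_rate_per_day * residence_days

# Hypothetical continuous tropospheric SO2 source, ~4 day residence time
tropo = steady_state_burden(source_rate_per_day=100.0, residence_days=4)

# The same source rate routed into a reservoir with a ~2.5 year residence
# time builds a standing burden larger by the ratio of residence times.
strato = steady_state_burden(source_rate_per_day=100.0,
                             residence_days=2.5 * 365)

print(tropo, strato, strato / tropo)
```

The point of the sketch is only that, for equal injection rates, the burden ratio equals the residence-time ratio (here about 228x), which is why a stratospheric injection like Pinatubo's can matter far more per tonne than continuous tropospheric emissions.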

    Original URL path: http://climateaudit.org/2013/07/26/guy-callendar-vs-the-gcms/ (2016-02-08)

  • New Paper by McKitrick and Vogelsang comparing models and observations in the tropical troposphere « Climate Audit
through these in a way that gets their impact right on average. The predicted climate from the models needs to behave like the real world on multi-decadal time scales; if they wander off somewhere else, they aren't going to be much use. So the question for you is: how do you expect these models to behave, both individually and in aggregate, while the real world is having a particular step change?

HR Posted Jul 24, 2014 at 7:40 PM Permalink

I'm with Mosher on this, in part. I don't think there is any expectation that models should match the timing of particular physical processes. Having said that, it still seems important that outside the step change the models and obs do not match.

Beta Blocker Posted Jul 24, 2014 at 2:04 PM Permalink

I have a question concerning Figure 3. Has the IPCC assigned its own confidence intervals to the LT and MT model ensemble mean predictions, CIs which could be plotted as continuous lines on Figure 3 along with the plots of the central tendencies?

Kenneth Fritsch Posted Jul 24, 2014 at 3:02 PM Permalink

Individual climate model runs can show weather noise, and that should result in breakpoints. It is the timing of the weather noise and the breaks that the individual climate models, and of course the combined runs, cannot handle.

HAS Posted Jul 25, 2014 at 2:30 PM Permalink

The problem is when they show approximations to break points (aka linear trends) when they shouldn't be showing anything. It does suggest the models are gaining information on break-point surrogates from somewhere other than the initial conditions, tuning, parameter estimations and assumptions.

Matt Skaggs Posted Jul 24, 2014 at 3:54 PM Permalink

There are two paths to root cause attribution of a phenomenon: eliminate all potential causes but one, or show that the phenomenon produces unique output that can only be explained by one cause. The former is far away for GHG warming; the latter is a depressingly short list, consisting of only polar and tropospheric amplification, AFAIK. Kudos to Dr McKitrick for keeping his eye on the ball. While I lack the math chops to fully follow the arguments, I will be reading as many of the cited references as I can get for free, to catch up on the tropospheric amplification debate.

Colin Wernham Posted Jul 24, 2014 at 4:20 PM Permalink

"Neither weather satellites nor radiosondes (weather balloons) have detected much, if any, warming in the tropical troposphere." There is the paper "Warming maximum in the tropical upper troposphere deduced from thermal winds" by Robert J. Allen & Steven C. Sherwood, http://www.nature.com/ngeo/journal/v1/n6/abs/ngeo208.html, that uses wind as a proxy for temperature and finds the warming: "direct temperature observations from radiosonde and satellite data have often not shown this expected trend. However, non-climatic biases have been found in such measurements. Here we apply the thermal wind equation to wind measurements from radiosonde data, which seem to be more stable than the temperature data". Can anyone comment on this?

Matt Skaggs Posted Jul 24, 2014 at 5:30 PM Permalink

Colin, That paper was refuted the following year, here: Pielke Sr., R.A., T.N. Chase, J.R. Christy, B. Herman, and J.J. Hnilo, 2009: Assessment of temperature trends in the troposphere deduced from thermal winds. Int. J. Climatol. (Not sure if this one is available on line; I made a couple of unsuccessful attempts.)

Colin Wernham Posted Jul 25, 2014 at 1:41 AM Permalink

Thanks Matt. Here's a link to Pielke's blog on it: http://pielkeclimatesci.wordpress.com/2009/01/28/submitted-paper-assessment-of-temperature-trends-in-the-troposphere-deduced-from-thermal-winds-by-pielke-sr-et-al/

bk51 Posted Jul 24, 2014 at 9:55 PM Permalink

So if I understand this correctly, the paper is saying that the actual temps aren't doing what they want, so they'll use a proxy. Unless the proxy goes the wrong way, in which case they will probably revert back to the actual temps. How Mannian of them.

William Larson Posted Jul 24, 2014 at 5:28 PM Permalink

McKitrick: "For readers who skip that part and wonder
why it is even necessary, the answer is that in serious empirical disciplines that's what you are expected to do to establish the validity of novel statistical tools before applying them and drawing inferences." I myself always appreciate Dr. McKitrick's writing, and especially so in this quote. Yes, it is transparently obvious that his prescription is what one in fact MUST do in serious empirical disciplines. I feel sad that some who ascribe to themselves a great love of climate science do not treat their field as a serious empirical discipline in this way.

Mark Lewis Posted Jul 25, 2014 at 12:30 AM Permalink

William, I was going to quote that paragraph as well. I laughed out loud when I read it.

Bob Posted Jul 24, 2014 at 11:09 PM Permalink

One of the more enjoyable blog discussions that ever occurred happened a couple of years ago on Bart Verheggen's site. Commenter VS took on a good part of the climate community in a marathon discussion. My guess was VS is Tim Vogelsang.

Tim Vogelsang Posted Jul 25, 2014 at 10:06 AM Permalink

I am not VS.

Tom T Posted Jul 25, 2014 at 11:09 AM Permalink

Can we call you VS for short? Or TS?

Bob Posted Jul 25, 2014 at 4:43 PM Permalink

Re Tim Vogelsang (Jul 25, 10:06): Sorry for the surmise, Tim. VS was mathematically gifted, and an econometrician with state-of-the-art mastery of time series.

wkernkamp Posted Jul 25, 2014 at 10:37 PM Permalink

That discussion is worth going back to if you want to understand that the published confidence intervals for climate predictions are consistently too narrow for systems, such as the climate, with a near unit root. There was a high probability for the climate to leave the bounds. No surprise that it is doing so.

James Smyth Posted Jul 26, 2014 at 2:16 PM Permalink

How about a link?

wkernkamp Posted Jul 28, 2014 at 1:38 PM Permalink

https://ourchangingclimate.wordpress.com/2010/03/01/global-average-temperature-increase-giss-hadcru-and-ncdc-compared/

Jeremy Harvey Posted Jul 26, 2014 at 3:42 PM Permalink

If you look at that thread, you'll see that VS can write in fluent Dutch.

EdeF Posted Jul 25, 2014 at 12:46 AM Permalink

Is the 1977 step change real, or is it just a bad transistor in the radiosonde receiver?

Ross McKitrick Posted Jul 25, 2014 at 11:19 AM Permalink

The Pacific Climate Shift was first identified in the fisheries literature, when major shifts in locations of tuna and anchovy harvests (IIRC) started to be noted around 1980. As the years went on, people noted step changes in numerous parameters around the Pacific rim, such as the FANB record in Alaska; see http://climate.gi.alaska.edu/Bowling/FANB.html Tsonis et al. have published several studies looking at step changes in the climate system as chaotic bifurcations.

Steven Mosher Posted Jul 28, 2014 at 2:28 PM Permalink

In fact, Tsonis 2007 argues that the synchronization and regime shift is captured by GFDL.

Stephen Richards Posted Jul 25, 2014 at 3:21 AM Permalink

"Steven Mosher Posted Jul 24, 2014 at 5:24 PM Permalink Reply: Not true. 1) they can show a step change." Steven, this sounds like the "they can produce 15-yr pauses, but not when they are observed". That means to me that they are useless models.

S Geiger Posted Jul 25, 2014 at 10:08 AM Permalink

McKitrick: "As for timing, never mind getting the year right; across a 55-year window, it's just not there, period." Are we sure the step change does not show up in the models? My understanding would be that results of individual models (i.e. GFDL 2.0, for instance) are still reported as ensembles. As such, realizations of a particular model would (if we accept that the timing of a step change would be an initial-value issue) show the step at different times throughout the period. Seems the ensemble for that model would be a sloped line, i.e. averaging out all the step changes from the individual realizations. The final full model average (i.e. as shown in the Figure 3 panels) would average the results even more. Assuming the initial-value issue is accurate, and that individual models' realizations could indeed get a step change in MT/LT temps,
then I see no reason why anyone would expect the step change to show up in an average of the averaged model ensembles.

Ross McKitrick Posted Jul 25, 2014 at 11:07 AM Permalink

Think of it this way. The models all exhibit positive trends, in all cases but one statistically significant. None of the models yield a significant mean shift, and half are + and half −. So in terms of what the models are programmed to do, it's a safe bet that they are programmed to mimic a physical process that yields an upward trend, but not programmed to mimic a physical process that yields step changes, bifurcations, etc. If the representation of the trend component were unbiased, the models would yield a flat line that runs parallel to the blue line but doesn't jump at the 1979 step. Instead the models yield a trend biased significantly high. If you look closely at Figure 3 you can see that the blue and red lines meet at the top of the step change, as if the size of the step was the exact measure of the average trend bias in the models.

S Geiger Posted Jul 25, 2014 at 11:21 AM Permalink

"Instead the models yield a trend biased significantly high. If you look closely at Figure 3 you can see that the blue and red lines meet at the top of the step change, as if the size of the step was the exact measure of the average trend bias in the models." I guess I see your point: the actual physics are not captured. However, I would think this criticism will be rather easily dismissed. We are but one (or maybe 2 for the MT) real-world step changes away from being in general accord. I don't think there are claims that the models capture these types of emergent phenomena yet (i.e. an abrupt step change, maybe due to some sort of physical regime change, vs longer-term average changes), or are there such claims?

S Geiger Posted Jul 25, 2014 at 12:06 PM Permalink

More importantly, should have mentioned that I really enjoy the Spirits Bright CD.

Ross McKitrick Posted Jul 25, 2014 at 12:42 PM Permalink

Hey, thanks!

cdquarles Posted Jul 25, 2014 at 12:45 PM Permalink

Thanks for that, Ross. Can the models capture any kind of a phase change not directly coded into them? I'm thinking of threshold functions similar to neuronal activation. These are not initial-value problems, in my mind. It seems to me that any damped, driven dynamic system will have emergent behaviors of some kind.

HAS Posted Jul 25, 2014 at 2:44 PM Permalink

S Geiger (Jul 25, 2014 at 11:21 AM): "We are but one (or maybe 2 for the MT) real-world step changes away from being in general accord." The problem is that, given the behaviour being discussed here, the models will show a further trend response when the real-world step change occurs, moving them increasingly ahead (assuming upward step changes). For this argument to work, I think you need to assume models in aggregate smooth out the step changes (perhaps not unreasonable), but something in the climate is causing a bias to upward step changes that the models do pick up. However, if this were the case, the trend result should be smoothed over the complete time period between step changes, not much more acute, as appears to be shown here.

jim z Posted Jul 26, 2014 at 11:00 PM Permalink

Ross said: "Think of it this way. The models all exhibit positive trends". That sentence is all of it. Complexity: the reality, the molecules of matter of the atmosphere and oceans, can't yet be modeled. [Steve: this comment is coat-racked onto a technical article. Please avoid generalized complaining.]

jim z Posted Jul 26, 2014 at 11:46 PM Permalink

Terry, Complexity theory is the logical understanding of a complex system. Abstraction of some detail of a complex system only gives you an abstract simplification of that detail. Global Average Temperature is a very abstract simplification of the global climate state. Does anyone think that global temperature over time is an understanding of the climate history?

Terry Oldberg Posted Jul 27, 2014 at 12:24 AM Permalink

jim z: Thanks for giving me the opportunity to clarify. In addition to providing a simplified description,
abstraction increases statistical significance. Thus, for example, the more abstract description "male OR female" references no fewer humans than the less abstract description "male", and may reference more humans. If it references more, then the sample size is increased, with a consequential increase in the statistical significance of the conclusions. Abstraction is one of the ideas that led to thermodynamics: the macrostate of thermodynamics is an abstraction from the associated system's microstates. This idea seems, however, to be misunderstood by many climatologists.

Craig Loehle Posted Jul 25, 2014 at 10:45 AM Permalink

There are two possible reasons for a step change. 1) Internal ocean/air circulation dynamics which lead to a change of dynamic state (the Pacific climate shift). If the models miss this, then they aren't handling the fluid dynamics right and/or the internal feedbacks right. The key question here is: do they ever produce a simulated step change? I don't think so. 2) A step change in external forcing. Since the sun is also a fluid-dynamics system and could influence the earth in various ways (not just the direct visible spectrum), a change in its state, or a transition over some threshold, could produce a step change on earth. In this case the models should be much more synchronized than in case 1, and missing the step indicates errors in the forcing data inputs and/or in how the system responds to forcings.

DayHay Posted Jul 25, 2014 at 11:16 AM Permalink

If there are step changes, just how does linearly increasing CO2 cause that? It causes everything, I guess.

Matt Skaggs Posted Jul 25, 2014 at 12:14 PM Permalink

There are quite a few comments lamenting the inability to model step changes, but Mosh is right that the significance is unclear (this takes nothing away from Dr. McKitrick's analysis). If your car is climbing a hill and the transmission downshifts, the rpm will make a step change. But if your model is just supposed to predict the speed of the car, capturing the step change in rpm adds nothing. Climate models don't necessarily need to capture local step changes caused by ocean current migration, as long as basic poleward heat flow is adequately modeled.

Ross McKitrick Posted Jul 25, 2014 at 12:54 PM Permalink

In your example there's a fixed mechanical connection between RPM and wheel speed. Suppose you don't know it, and you are constructing a regression model to estimate it. Your model is of the form Speed = a + b*RPM. You ride in a car with 2 speeds and collect data on speed and RPM as you go up a hill, but you ignore the gear change. Obviously the line you fit will yield a biased value for b. Now suppose you are tweaking a climate model to warm up X degrees over an interval where CO2 rose Y, and you ignore the step change along the way. Again you will wind up with a biased model, because you will attribute the portion associated with the step change to the CO2 increase. I'm not suggesting the process of fiddling the knobs on a GCM is that simplistic, but that would explain the mismatch we found.

Matt Skaggs Posted Jul 25, 2014 at 2:44 PM Permalink

I see what you are saying about fitting the linear regression; I guess my analogy fell short. AFAIK all GCMs are "control volume" in engineering lingo: very roughly, heat input − heat loss = delta T. If something inside the controlled volume has capacitance, it might slowly store heat and then rapidly release that heat in a cyclical fashion, and each release will look like a step change in the T data. That won't introduce any error in the control-volume equation, though, as long as you have enough data in the time domain. The way to follow the control-volume equation is to draw a straight line through the step changes, because in the end the capacitance won't change the trajectory. What am I missing?

Ross McKitrick Posted Jul 25, 2014 at 3:30 PM Permalink

"What am I missing?" You are describing a brick in an oven that acts nothing like a brick. What you are missing is the mechanism that explains why your brick stays cool for an hour, then suddenly gets hot.
Capacitance won't get you that. You need a mechanism, like an air conditioner inside the oven blowing on the brick until the oven melts the power line. Once you postulate a mechanism, then you need to see if the data are consistent with it. And that's what we tested.

Harold Posted Jul 26, 2014 at 9:56 AM Permalink

Or to put it another way, thermodynamics rules out behavior like an automatic transmission. There's more to this than mathematical abstraction; it has to be physical.

Matt Skaggs Posted Jul 27, 2014 at 9:03 AM Permalink

Analogies help me, but they are not helping me here. Dr. McKitrick wrote: "The Pacific Climate Shift in the late 1970s is a well documented phenomenon (see ample references in the paper) in which a major reorganization of ocean currents induced a step change in a lot of temperature series around the Pacific rim." If solar insolation is flat throughout, how can that be anything but a manifestation of capacitance? You can go to SkS or RC and find some just-so stories about how the pause is caused by sequestration of heat in the oceans. It is not my argument, but I cannot refute it. If it is true that heat can pass through the atmosphere, be stored in the oceans, and later manifest as a step change in atmospheric temperature due to a change in ocean currents, then this can be considered capacitance. A control-volume equation need not consider intermittent capacitance to show the correct trend over long intervals. So the mechanism you are asking me for is whatever mechanism you are invoking when you say that a change in ocean currents caused a step change in temperature. Specifically, what mechanism are you invoking for the Pacific Climate Shift in the statement I quoted above, and how can it be differentiated from the engineering concept of capacitance?

Matt Skaggs Posted Jul 27, 2014 at 10:33 AM Permalink

To be clear, I agree with a claim that if a GCM cannot model a decadal-scale step change such as the PCS, it is unlikely to have skill in forecasting climate on a decadal scale. I don't think this is a major concern regarding the ability of a GCM to forecast runaway heat; those problems lie elsewhere. On the other hand, tropospheric amplification, based upon first principles in physics, is primary evidence about AGW, for the reasons outlined in my first comment.

Ross McKitrick Posted Jul 27, 2014 at 1:13 PM Permalink

Not being an expert in the underlying physics, I am loath to try and postulate how a reorganization of ocean currents could manifest as a rise in the average temperature without it necessarily involving release of heat previously stored, so take the following with appropriate caveats. First, energy is not equivalent to temperature. In the analogy I gave, the brick heats up suddenly even though the total amount of energy used by the system drops, because the AC unit shuts off. In a system as complex as the climate, I have no difficulty imagining that there are circumstances in which, following a reorganization of ocean currents, the same amount of energy is distributed in such a way that the average over the tropical air temperature field steps up or down. Willis points toward one such mechanism in his comment below. All you really need is to allow one or two parameters normally assumed to be invariant in a model to change in response to changes in other parameters, such as allowing the lapse rate to vary, or cloud formation processes to vary, and there should be no difficulty coming up with any number of possible mechanisms that exhibit discontinuities. Second, if the step change is merely a release of stored-up heat, the point of my analogy was that your model can't resemble a brick, because bricks don't work that way. You need a more complex mechanism that explains storage and periodic release of heat on some deterministic timetable. GCMs don't seem to generate step changes even to match the historical record, so it looks like they are brick models, so to speak. Third, if capacitance discharge really is the story, you still need to ascertain at what point in time the heat began to get stored; you can't assume the time window coincides with the time window of the rise in CO2 forcings. The curious match between the red line and the size of the step change over the 1958-1978 interval suggests to me that such an assumption is being made. But if, for instance, the PCS event released heat stored over a millennium, and the models attributed that rise to the effects of GHG emissions since the 1950s, then there will obviously be a bias in the representation of the forcing mechanism in the model.

Matt Skaggs Posted Jul 29, 2014 at 5:05 PM Permalink

OK, I think we hammered out some agreement. When you wrote: "Now suppose you are tweaking a climate model to warm up X degrees over an interval where CO2 rose Y, and you ignore the step change along the way. Again you will wind up with a biased model, because you will attribute the portion associated with the step change to the CO2 increase." This is true if, and only if, the initializing assumption of equilibrium in the control-volume equation is false. It would be false if heat had been sequestered in the oceans for a millennium.

Terry Oldberg Posted Jul 25, 2014 at 10:26 PM Permalink

Dear Dr. McKitrick: A car's transmission is a putative analogy to the climate, via the formula Speed = a + b*RPM for a transmission. In the development of this analogy, the change in the global equilibrium temperature at Earth's surface substitutes for the speed of the car, and the change in the logarithm of the CO2 concentration substitutes for the RPM of the car's engine. However, there is a pitfall en route to this analogy. This is that the speed is an observable feature of the real world, but the change in the equilibrium temperature is not. In consequence, the RPM provides perfect information about the speed, but the change in the logarithm of the CO2 concentration provides no information about the change in the equilibrium temperature. This conclusion follows from the definition, in information theory, of the mutual information as the
information theoretic measure of the intersection between observable state spaces The mutual information is the information that is available for the control of the associated system In the relation between the change in the logarithm of the CO2 concentration and the change in the equilibrium temperature the mutual information is nil Thus the controllability of the equilibrium temperature that is evident to the makers of governmental policy on CO2 emissions is illusory Misunderstanding of the important difference between the climate and a car s transmission is leading the people of the world into a public policy disaster As I strive to avert this disaster I need your help if you are able to provide it Nic Lewis Posted Jul 25 2014 at 1 29 PM Permalink Ross Excellent work congratulations to you and Tim Vogelsang on the paper And thank you for a very clear article It would be great if you could also post the results applying to CMIP5 models at CA when they become available When you analyse the CMIP5 models might you also look at trends over the period 1979 2013 with no shift allowed for thus updating your previous analysis of the satellite era Although it is a shorter period I suspect the results will be fairly similar to those for the full period with a break allowed And the extra data sets and avoidance of the break issue may make the results seem more robust to some climate scientists No doubt you are aware that Figure 9 9 in AR5 WG1 shows that the average model trend over 1988 2012 in tropical 20 S to 20 N lower tropospheric temperature over the oceans was 3x that of the satellite observations and the best reanalysis dataset ERA interim Kenneth Fritsch Posted Jul 25 2014 at 1 47 PM Permalink If you are modeling the residuals of a series with step changes those residuals and thus the model will change between ignoring the step change and accounting for it As an aside I think the overall trend will be different depending whether you account for or ignore the step 
changes, i.e. assuming one linear trend versus segmented linear trends.

Kenneth Fritsch (Jul 25, 2014, 1:40 PM): I did a breakpoint analysis of the 149 climate model runs for the historical series for the global surface from 1964-2005, using the strucchange library in R and the breakpoints function. The series were all used as monthly. I did two analyses: for the first, the h parameter was set at 0.15, meaning a maximum of 6 breakpoints; for the second, breaks was set at 2, i.e. a maximum of 2 breakpoints. In the first analysis, all the individual climate model runs had at least 2 breaks and a maximum of 5 breaks, with most having between 3 and 4. In the second analysis, all runs had 2 breakpoints, with the first occurring on average around 1982 and the second on average around 1993. The scatter around the averages was broad, particularly so for 1982. If nothing else, the analysis shows that the individual climate model runs for surface temperature have some structure. I believe the paper being discussed on this thread deals with the lower- and mid-troposphere temperatures, and my analysis was for surface temperatures. I would be surprised if the individual climate model runs for the troposphere had a much different structure than the surface, but I could check a few; KNMI is a great source for this data.

Ross McKitrick (Jul 25, 2014, 2:08 PM): Kenneth, the R breakpoints routine assumes independent error terms, so it's not robust to autocorrelation. There are methods in the literature that apply in the AR(1) case (see our lit review). Our method is robust to higher-order autocorrelation. Still, it sounds like you detected breaks at the major volcanoes, which are programmed into the models.

Ross McKitrick (Jul 25, 2014, 2:02 PM): Thanks for the comments, Nic. I hadn't really studied Figure 9.9 before, but I see two important features in it. As you note, the average model temperature trend is about 3x the average observed trend, which is why I assume the CMIP5 versions of our results will be much like the CMIP3 versions. The other interesting point is that the observed temperature trends are almost invariant to the observed H2O trends, whereas in the GCMs the two move in lockstep. This is another trend comparison crying out to be examined. They don't put a trend through the observational points (admittedly there are only 5), but we could exploit the time-series dimension to deal with that. So it looks at a quick glance like the observations fall on the same line as the models, but that's because there's only one line in the chart. Just compare UAH to ERA-Interim to get the sense of the invariance issue. The 5 observed H2O trends vary over 0.1 to 1.4 per decade, and temperature trends over that range only increase from 0.08 to 0.14 C/decade, a span of 0.06. The same vertical range of modeled H2O trends implies an increase in temperature trends of 0.2 C, about 3x larger. If the feedback process takes a change in the water vapour trend and yields a change in the temperature trend, that chart implies the feedback strength is 3x stronger in the models than in the observations.

Kenneth Fritsch (Jul 25, 2014, 3:32 PM): Ross, your point is well taken. I would guess that autocorrelation can screw up the information-criteria evaluation. It is time for me to read your paper. My point remains, though, that for CMIP5 individual model surface historical temperature series I can see visual structure that appears as step changes.

Kenneth Fritsch (Jul 25, 2014, 9:38 PM): I need also to look at the tropics area, which is the focus of the paper under discussion; I have been looking at global data. I have started to read the paper, and saw what I was going to suggest about applying this breakpoint method to the algorithms used to adjust station temperature series. It was a bit off-putting to see that the method deals with single breakpoints and likes to know that one exists.

Willis Eschenbach (Jul 25, 2014, 9:58 PM):
Kenneth Fritsch (Jul 25, 2014, 3:32 PM): "Ross, your point is well taken. I would guess that autocorrelation can screw up the information-criteria evaluation. It is time for me to read your paper. My point remains, though, that for CMIP5 individual model surface historical temperature series I can see visual structure that appears as step changes."

Thanks for that, Kenneth. I and others have shown that the global temperature output of GCMs is merely a linear lagged transformation of the input. As a result, they show step changes for e.g. volcanoes, and in many cases show variations from the 11-year cycle of solar forcing. However, such changes from the volcanoes and from the sunspot cycle are NOT present in the observational record, and thus appear to be spurious effects of the linear nature of the models. Finally, Ross and Tim: very nice work, well cited and well explained. My congratulations. Best regards, w.

David L. Hagen (Jul 25, 2014, 4:29 PM): Compliments on key statistical insights. This is an excellent example of Einstein's Razor: "Everything should be as simple as it can be, but not simpler."

Geoff Sherrington (Jul 25, 2014, 10:36 PM): Traditional concepts of accuracy and precision are typically ignored or treated wrongly in exercises such as CMIP. One of my earliest posts was about the need to include all runs from a given model, even those rejected by the modellers, in the estimation of error bounds. You cannot set proper error bounds if you preselect which outcomes are going into the error estimation and which ones are not. Then there is a problem with taking an average between modellers, as is commonly done. The resulting average suffers because not all prior model runs are included in the average. There are some which are excluded on justifiable grounds, when a proper error is found; but if runs are excluded from an average because the modeller did not like the look of the outcome, that is not scientific. There are reasons to be cautious about the results of some runs. Here is an example from Australia's CSIRO, as reported in Douglass, D.H., J.R. Christy, B.D. Pearson and S.F. Singer, 2007, "A comparison of tropical temperature trends with model predictions", International Journal of Climatology 27, doi:10.1002/joc.1651, Table II(a). The left column shows barometric altitude in hPa, the middle column the temperature trends modelled by CSIRO Mark 3, expressed as millidegrees C per decade, and the right column the average of 22 GCMs for CMIP3:

Surface  163  156
1000     213  198
925      174  166
850      181  177
700      199  191
600      204  203
500      226  227
400      271  272
300      307  314
250      299  320
200      255  307
150      166  268
100       53   78

Those used to working with numbers might raise an eyebrow at the closeness of temperatures modelled versus average at 600, 500 and 400 hPa. More eyebrow exercises might follow from the claim of trends of 1 thousandth of a degree per decade inherent in this overuse of significant figures. One wonders about the amount of cross-comparison done before the final preferred model run is submitted. As a generality, modellers tend to calculate statistical precision while downplaying the importance of accuracy. It is not OK to report temperatures to a thousandth of a degree when the instruments are incapable of that performance. There is abundant literature about measurement problems (accuracy errors) in balloons and sondes, as Ross McKitrick and Tim Vogelsang note. Ditto for ocean temperatures, TOA satellite radiation balances, ocean pH, etc., etc. New generations of instruments so often show accuracy errors in what was formerly thought to be the best of instruments. No amount of dense sampling can overcome such accuracy errors, though precision should be improved.

David L. Hagen (Jul 26, 2014, 7:59 AM): Geoff, you highlight the systemic failure to report Type B uncertainties, and ignorance of BIPM JCGM 100:2008, "Evaluation of measurement data: Guide to the expression of uncertainty in measurement".

Frank (Jul 26, 2014, 10:44 AM): If I remember
correctly, there is an annual seasonal rise and fall in surface temperature, partially driven by the eccentricity of the earth's orbit, which is removed by converting to anomalies. This annual rise and fall in temperature in the tropics is amplified in the upper atmosphere, as expected from models, at least according to a Santer 2005 paper that deserves more scrutiny. So we are stuck with a dilemma: seasonal warming is transmitted to the upper atmosphere with amplification by a well-mixed turbulent atmosphere, but decadal warming clearly has not been. Yet decadal warming is the net result produced by decades of seasonal warming and cooling. Although I am convinced that the best data we have shows a discrepancy between models and observations over decades, data on seasonal changes should be innately more reliable. I'd like to see the seasonal and decadal amplification of warming analyzed side by side, to see which is more robust. Did Santer analyze all of the seasonal data, or just part of it?

solomon green (Jul 26, 2014, 1:40 PM): More than twenty years ago, after three years of analysis, a working party established to validate a widely used stochastic

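Kenneth Fritsch's two-breakpoint analysis above (strucchange's breakpoints function in R, with breaks = 2 and a minimum-segment-length fraction h) can be illustrated with a short, self-contained least-squares search. This is a minimal sketch of the idea only: it fits a separate mean to each of three segments and picks the break pair minimising the residual sum of squares, and it runs on synthetic step data, not the CMIP5 series; as Ross McKitrick notes above, this style of search also assumes independent errors and is not robust to autocorrelation.

```python
# Two-breakpoint search by exhaustive least squares, analogous in
# spirit to R's strucchange::breakpoints(..., breaks = 2): fit a
# separate mean to each of the three segments and minimise the
# total residual sum of squares. Synthetic data, NOT CMIP5 runs.

import random

def sse(seg):
    """Sum of squared deviations of a segment from its own mean."""
    m = sum(seg) / len(seg)
    return sum((x - m) ** 2 for x in seg)

def two_breaks(y, h=0.15):
    """Return indices (i, j) of the best pair of break points,
    with each segment at least h * len(y) observations long
    (h plays the same role as strucchange's h parameter)."""
    n = len(y)
    min_len = max(1, int(h * n))
    best = (float("inf"), None, None)
    for i in range(min_len, n - 2 * min_len + 1):
        for j in range(i + min_len, n - min_len + 1):
            total = sse(y[:i]) + sse(y[i:j]) + sse(y[j:])
            if total < best[0]:
                best = (total, i, j)
    return best[1], best[2]

# Step series: mean 0.0, then 0.5, then 1.0, with small noise.
random.seed(0)
y = ([random.gauss(0.0, 0.1) for _ in range(40)] +
     [random.gauss(0.5, 0.1) for _ in range(40)] +
     [random.gauss(1.0, 0.1) for _ in range(40)])
i, j = two_breaks(y)
print(i, j)   # recovered breaks land near the true change points 40 and 80
```

On real monthly model-run series one would of course use strucchange itself (or a method robust to serial correlation, per the McKitrick and Vogelsang paper); the brute-force search here is only to show what the breaks = 2 fit is doing.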
    Original URL path: http://climateaudit.org/2014/07/24/new-paper-by-mckitrick-and-vogelsang-comparing-models-and-observations-in-the-tropical-troposphere/ (2016-02-08)

  • Does the observational evidence in AR5 support its/the CMIP5 models’ TCR ranges? « Climate Audit
1.8-2 C; even 50% is an over-conservative scaling-down. To do as you suggest, and use the unscaled OHU estimate from a sensitive model for the recent period, would bias ECS and TCR estimation towards the high model sensitivity value. Surely that is obvious?

"If I were to take the AR5 estimate for the last 30 years, from 1980-2011, Fig 8.20 in AR5 provides a total effective anthropogenic forcing of 0.95 W/m2 for the specified period. The GISS trend is 0.16 K/decade, which makes it roughly 0.5 K. Using the effective forcing for a doubling of 3.44 W/m2, TCR is 1.8 K. Natural forcing is small over that period, as solar and volcanic forcing tend to balance each other. I argue that aerosol forcing wasn't negative as assumed by the AR5 authors, which would bring TCR down to 1.6 K."

Yes, it is unclear whether aerosol forcing has become more negative over the last 30 years. But your figures are wrong. The AR5 best estimate of the change in anthropogenic forcing over 1980-2011 is 1.04 W/m2, with a linear-trend-based change of 1.01 W/m2. Where did you get your 0.95 W/m2 figure from? As it happens, that error of yours largely cancels out with your use of 3.44 W/m2 for a doubling of CO2 concentration, when in fact the AR5 Figure 8.20 etc. forcing data are based on a value of 3.71 W/m2. But I wouldn't attempt to estimate TCR over a 30-year period using crude surface temperature data, because of the large impact of multidecadal internal variability. Over 1980-2011, Delsole et al. (J. Climate, 2011) estimate the forced trend in sea surface temperature (SST) as fairly constant at 0.122 K/decade over 1977-2008, during which the AR5 forcing trend (with constant aerosol forcing) was 0.516 W/m2 per decade. That implies a TCR of 0.88 C for SST. Scaling that up by the ratio of global to SST temperature of 1.30 (per HadCRUT4/HadSST3) raises the TCR estimate to 1.14 C, far below your 1.6 C. That looks rather low to me. I suggest the lesson probably is: don't try to estimate TCR or ECS over periods of only a few decades.

"The AR5 forcing of 1.72 W/m2 for 1950-2011 which you are using seems to neglect changes in natural forcing." Of course it is an anthropogenic-only forcing, since I was comparing it with an anthropogenic-only temperature change. An apples-to-apples comparison, yes.

"Are you aware of the numerous poorly constrained assumptions which have to be made in order to obtain the forcing from remote sensing products?" Yes, I am aware it is a tricky job. But the AR5 aerosol forcing uncertainty range far exceeds the stated uncertainty ranges for most satellite-based observational estimates, so it seems to me to take plenty enough account of the uncertainties involved; quite possibly more than is needed, particularly given the constraints provided by inverse estimates of aerosol forcing.

K a r S t e N (Dec 10, 2013, 10:13 PM): Nic, re the OHU discussion, I'm afraid I don't have much to add. If you think you can learn something from what you've done, fine with me. Re 1980-2011: as referenced, my 1980-2011 forcing (0.95 W/m2) is taken from Fig 8.20, Chapter 8, and it is undoubtedly the effective radiative forcing (ERF). Your forcing (1.04 W/m2) is from Fig SPM.5, but unfortunately it seems to be only the non-adjusted forcing, as total aerosol forcing is only -0.82 W/m2 rather than the central estimate of -0.9 W/m2. On top of that, it is only the anthropogenic forcing, while we need the total forcing, i.e. a slight negative solar forcing term for 1980-2011 has to be added. Doing so brings both figures into very good agreement. Given that Fig 8.20 provides ERF, the forcing for a doubling should be 3.44 W/m2. With dT of 0.5 K, TCR is 1.81. Taking the forcing from SPM.5 and combining it with 3.71 W/m2, TCR is 1.78 K. Again, very good agreement here. I'm actually not so sure that I'm the one who is mistaken. Apart from that, I'm not here to debate ocean oscillations; we won't agree on that one anyway. Note, however, that regionally, aerosol forcing has changed dramatically at multidecadal timescales. That's all I have to say. Re 1950-2011: it's only now that I
see what you actually did; apologies for the slight confusion on my part here. You are using the temperature attributed to anthropogenic forcing from Chapter 10. Fair enough. Interesting, though, that you suddenly seem to fancy GCM results, given that all these attribution studies are based on CMIP5 data scaled to match HadCRUT4. That also explains the small difference when it comes to the forcing estimates. The best estimate for temperature change 1950-2011 attributable to natural forcing is zero, while the actual natural forcing is estimated to be slightly negative due to declining solar activity, as pointed out in my previous comment. A minor inconsistency, which indicates that any detection-and-attribution study is afflicted with uncertainties; not surprisingly. No reason for me to dismiss them. After all, I'm certainly happy to note that you put trust in GCM results. In fact, Fig 10.4 provides the required scaling factors to match HadCRUT4, which could allow us to deduce the corresponding "true" model TCR. The multimodel mean seems to be 1.8 K. With the multimodel scaling factor of 0.8 (Fig 10.4b), TCR comes down to 1.44. An extremely rough, quick-and-dirty guess, but one which is fairly consistent with my earlier estimates. Re remote sensing: I tend to disagree on the uncertainty range. Given that you won't convince me of the opposite, we should leave it at that.

Nicholas (Dec 11, 2013, 2:13 AM): Excuse my ignorance, but isn't it rather fraught estimating climate sensitivity by dividing the measured temperature change over a given period by the estimated change in CO2-based forcing? The biggest problem I see is that we have no way of knowing that the change in temperature is entirely due to the change in forcing from CO2. Some of it could be natural variation, up or down, and there could be other forcings too. If the change in temperature from the change in CO2 over a given period is x, the change in temperature from other sources is y, and the change in forcing from CO2 is z, what we want to know is x/z, but what you're actually calculating is (x + y)/z, where the magnitude of y could be close to, or possibly even exceed, x, and is unknown. Surely then this result can only give us a very rough ballpark figure, certainly not a figure with two decimal places. Or am I missing something?

niclewis (Dec 11, 2013, 4:50 AM): Nicholas: "The biggest problem I see is that we have no way of knowing that the change in temperature is entirely due to the change in forcing from CO2." In the usual method, the measured change in mean temperature between two periods is divided by the change in estimated total forcing from all sources, not just CO2. Natural variation is indeed a source of uncertainty. The effect of shorter-term variations can be minimised by making each of the two periods long (typically one to four decades) and trying to get a similar El Nino/La Nina balance in both periods. Balancing their volcanic activity levels is also important. The effect of quasi-cyclical multidecadal variations can be minimised by matching the positions of the two periods in the cycle. And maximising the difference in mean temperature between the two periods dilutes the effect of internal variations; hence taking one period in the second half of the nineteenth century and the other as recently as possible. Accuracy is clearly not even to a single decimal place in TCR, in terms of standard error, but stating results to two d.p. is helpful to avoid large rounding uncertainty.

Nicholas (Dec 11, 2013, 5:22 AM): Nic, thanks for clarifying; that all makes sense. But what about long-term natural warming/cooling trends, such as those which may have caused the Roman Warm Period, Medieval Warm Period, Little Ice Age and so on? While they are somewhat controversial, there is a certain amount of evidence that natural variability can create centuries-long trends which ultimately shift temperatures by several degrees. So, for example, some warming included in the TCR calculation may be recovery from the LIA, which may not be due to a known forcing at all; my understanding is that nobody really knows what causes these natural fluctuations. So surely the uncertainty for a temperature delta of 0.5 K going into this formula must be large. It's hard to see how you can get a result with a 95% CI that doesn't overlap both low and high TCR estimates, and thus fails to help narrow down the answer at all.

Ross McKitrick (Dec 9, 2013, 1:50 PM): One of the really remarkable points Nic makes here is that, just using numbers from the IPCC report itself and applying their own formula for transient climate response, an estimate of around 1.3 C is unavoidable. Yet most of the models they employ have TCRs of 1.6 or higher, and quite a few are even above 2, implying way too much sensitivity to CO2 emissions. Yet the IPCC goes on to say things like: "There is very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period, including the more rapid warming in the second half of the 20th century, and the cooling immediately following large volcanic eruptions" (Ch 9, p. 3). The whole summary section of Ch 9 gives the impression that models and observations are beautifully in alignment. Something's gotta give here.

Craig Loehle (Dec 9, 2013, 2:23 PM): They are in agreement in the same sense that an inkblot looks just like a pony, and each storm is clearly due to global warming. Just say "I do, I do believe".

Skiphil (Dec 9, 2013, 8:58 PM): It is all too reminiscent of the Phil Jones stance on his work as a peer reviewer: gut feelings are "scientific", and The Team knows what the answers should look like. Confirmation bias and groupthink, anyone?

MarkB (Dec 9, 2013, 2:09 PM): Ross McK: "Something's gotta give here." I suspect that they are hoping that the next few years will show an uptick in surface temps and save them. It's the bitter-ender method of fighting: refuse to
surrender, and pray for a miracle.

stevefitzpatrick (Dec 9, 2013, 3:41 PM): Interesting analysis. Only continued divergence from reality will force the modelers to fix the models. A few have already undergone a substantial reduction in diagnosed sensitivity (e.g. GISS Model E-R). Another decade should force the hand of most modeling groups. Which is not to say I think the models will then accurately represent reality; cloud feedbacks will probably be adjusted only enough for the model projections to fall in the range of just barely plausible.

Ian H (Dec 9, 2013, 9:00 PM): There are many ways a model could be adjusted. Let's classify them into two types. (1) Adjustments which lower TCR but not ECS. These are mechanisms of delay: catastrophe is not cancelled, merely postponed. (2) Adjustments which lower both TCR and ECS. Under these kinds of adjustments the catastrophe may need to be cancelled. Faced with a choice of adjustment methods, what logic will be used in choosing which one to apply? Am I being too cynical in expecting that every possible type 1 adjustment will be explored in depth before type 2 adjustments are even considered?

stevefitzpatrick (Dec 9, 2013, 10:13 PM): Ya, well, higher aerosol offsets and hidden deep-ocean heat uptake are getting a lot of attention lately; both type 1 adjustments. But these things are only delaying tactics against the inevitable: reality will not be denied forever.

mpainter (Dec 10, 2013, 11:32 AM): You are not taking into account the inextinguishable creative spark of the species. They will never concede, but simply invent new ways to keep the ball in the air. But their antics will isolate them from serious science, as has already occurred to a large extent.

RB (Dec 10, 2013, 12:52 PM): ECS may most likely be a second-order effect in terms of expected temperature rise, and therefore policy. I don't see the need for biased motive-seeking, even if it were assumed that ocean heat uptake numbers are likely to be revised in the future.

DocMartyn (Dec 9, 2013, 6:26 PM): This is a graph of the changes in global optical depth at 550 nm from 1970 to the end of 2012, which is a very good measure of atmospheric aerosol light scattering. What is noteworthy is how little aerosol has been flung into the atmosphere over the last decade or so. Note that this is the same period where we have observed the pause in global warming. This aerosol get-out-of-jail-free card for the modelers has been pretty much defunct for the last 12 years.

Geoff Sherrington (Dec 9, 2013, 6:40 PM): Dependence on adjusted temperature records still worries me greatly. Although not representative of the globe, there are many, many records from Australia that have been studied in detail here. The official Bureau of Met line is that positive adjustments balance negative just nicely, but we do not find that. We find cooling of older years to give an exaggerated trend since about 1880 or so. The correctness or otherwise of the adjusted global record is so basic to estimates of TCR and ECS. I sometimes wonder if mismatches in CMIP5 and earlier comparisons are due in part to more rigid physics meeting less rigid T adjustments. Even in the satellite era there are differences between RSS and UAH, some of which can be large; see a regional comparison at http://www.warwickhughes.com/blog/?p=2496. I realise that this comment is not helpful because I have not given a solution to the problem; apologies.

Jeff Id (Dec 9, 2013, 10:02 PM): Wow, Nic, Herculean work continues, I see. Do you have a job? Amazing amounts of detail poured through on every paper, to an extent that is very unusual. If you get a CS degree they will pay you high six figures for that work, and you can do it in your spare time; I think you can get one of those online these days. Then again, they might pay you the same money not to do it. What happened here? "Note that the PDFs and ranges given for Otto et al. 2013 are slightly too high in the current version of Figure 10.20a. It is understood that those in the final version of AR5 will agree to the ranges in the published study."

troyca (Dec 10, 2013, 1:01 AM): Very interesting analysis. A few months ago I looked at something similar, although I did not examine the other TCR estimate papers as you did here, trying to figure out how the SPM could say that 2 K warming was "likely" and "more likely than not" with high confidence for the RCP6.0 and RCP4.5 scenarios, when the most up-to-date TCR estimates (Otto et al. 2013 in particular) implied quite the opposite: http://troyca.wordpress.com/2013/10/17/how-well-do-the-ipccs-statements-about-the-2c-target-for-rcp4-5-and-rcp6-0-scenarios-reflect-the-evidence As far as I can tell, these statements stem from the CMIP5 projections and the seemingly mistaken idea that the somewhat arbitrary, expert-selected TCR range matches up with the CMIP5 TCR range, and that this apparent match endorses more specific aspects of the CMIP5 TCR distribution.

niclewis (Dec 10, 2013, 6:16 AM): Troy, yes, I like your 17 October blog article. I'm not sure to what extent the AR5 Chapter 10 authors would regard the 1-2.5 C TCR range as expert-selected, rather than, as it should in principle be, based on observational evidence. The wording of the final paragraph of Section 10.8.1 implies it is based primarily on observational evidence. Whilst it would have been difficult for the Ch 10 authors not to have been influenced by the CMIP5 TCR range, that influence may mainly have operated indirectly, through observational TCR studies that were consistent with the similar CMIP3/AR4 TCR range being more likely to be published and given weight to in Chapter 10. However, it would no doubt in any case have been very awkward for the Ch 10 authors to propose a TCR range that conflicted substantially with the CMIP5 TCR range.

laterite (Dec 10, 2013, 1:33 AM): Nic Lewis rocks. Thank you so much. Massive contradictions within the IPCC
report. Perfectly objective exposition. I stand in awe.

Paul K (Dec 10, 2013, 5:57 AM): Nic, nice article. And credit to Troy above, who did get there a bit before you on the issue of the flawed comparison of the pdfs from observation and model. I can't really fault you for using the same assumptions as the IPCC to demonstrate the irregularity of its conclusions. On a higher plane, however, making any TCR estimate without a clear statement of assumptions about the multidecadal oscillations seems to me to be bad science. There is now an embarrassing richness of papers which confirm the existence of quasi-60-year cycles in temperature and other climate indices going back many centuries. There is an embarrassing lack of credible papers which explain root cause. If they are caused by an internal mode, an internal redistribution of heat, then simply ignoring them in an energy-budget calculation renders such calculation highly dependent on the time period chosen for the analysis: choosing 1950 to 1979, for example, gives a very different answer from choosing 1970 to 1999. On the other hand, if the cycles are caused by, or if they induce, an unaccounted-for flux forcing, then the total forcing on the system will be under- or over-estimated. In particular, the total forcing in the late 20th century would be underestimated, giving rise to overestimation of TCR. I can reproduce an estimate of TCR of 1.34 which matches ocean heat content and surface temperature data available from 1955 to the present day. Hindcasting this model match reveals that the model temperature neatly lops off the peaks and troughs of the previous multidecadal cycles in the observed temperature data. In fact, the model result looks very similar to the matches achieved by the AOGCMs when their results are averaged over a few runs. I am fairly confident that this TCR represents an upper bound on the modal estimate, before accounting for uncertainties in the input data series. I have been trying for several weeks to pull an article together on this for Lucia, including the indications of some external flux forcing, but keep getting distracted by my wife telling me that I should be doing something useful instead.

niclewis (Dec 10, 2013, 6:58 AM): Paul, thanks. I agree about the importance of multidecadal oscillations, and I don't ignore them. One can use the Internal Multidecadal Pattern (IMP) shown in Fig 4 of Delsole et al. (J. Climate, 2011) as a guide to selecting base periods to match the final period used in energy-budget estimation of TCR and ECS. The IMP was high over 1995-2011; it was also fairly high over 1859-1882, the base period I use. It was also fairly high from the late 1920s through to 1960. So, rather than taking a fixed 65-year periodicity and using 1923-1946 for my shorter-baseline TCR estimate, there is an argument for using 1930-1960. Doing so gives almost identical TCR estimates. Your point about the possibility of the cycles actually being in forcing (presumably mediated through changes in clouds), rather than in redistribution of heat primarily between the ocean and atmosphere, is interesting. If that is the case, then energy-budget ECS estimates, as well as TCR estimates, would be affected. In principle, internal changes in heat distribution should not affect such ECS estimates, since an increase in energy flow into the ocean would be accompanied by a depression of surface temperature (a favourite explanation for the current hiatus). So a comparison of the relationship of energy-budget TCR and ECS estimates over different states of the IMP might throw light on which explanation for multidecadal oscillations is valid. However, I'm not sure whether decent records of heat uptake, or its counterpart, satellite-measured TOA radiative imbalance, are yet long enough to reach solid conclusions. I shall be very interested in your article at Lucia's blog; let me know when you post it. You can tell your wife that others think you are doing useful and important climate science work.

Frank (Dec 12, 2013, 4:33 PM): Nic, for the detection of the IMP you refer to ftp://wxmaps.org/pub/delsole/dir/ipcc_dts_jclim_2010.pdf. Maybe one can have a clearer impression of the impact of multidecadal oscillations when calculating the stability indices (R²) of running linear trends, with lengths as noted in the diagram http://www.dh7fb.de/reko/bestgiss.gif (data: GISS). With trends shorter than 70 years you get some oscillation in R², with clear maxima and minima of R² and a period length of about 60 years. So it's clear that for a calculation of TCR only intervals of more than 70 years are useful, because the shorter ones have not enough stability over time.

Paul K (Dec 13, 2013, 3:25 AM): Frank, yes, that's one way to expose the periodicity. In practice, any halfway decent spectral analysis, including a straight Fourier analysis, a bandpass filter or empirical mode decomposition, will expose the cycles in the modern temperature datasets. That's not where the big challenge lies, IMO. The first challenge is to demonstrate that the cycles are not just persistent autoregressive stochastic wanderings. With only two and a half cycles available in the modern global series, there is no statistical test which confirms unambiguously that they are predictably recurrent. To confirm predictable recurrence requires the use of longer-term temperature records, or the use of long-term proxy records; see for example http://depts.washington.edu/amath/research/articles/Tung/journals/Tung_and_Zhou_2013_PNAS.pdf So far at least, the mainstream modelers have rejected the evidence for predictable recurrence. The GCMs treat these cycles as stochastic natural variation and cannot reproduce the phasing; that's official. One consequence is that the cycles from 1850 to around 1960 disappear when averages of temperature are taken over several GCM runs, so the peaks and troughs of the observed temperature series are lopped off. Thereafter the model average tracks the temperature gain for the late
20th century natural plus forced and continues to climb after the predictable cycle peak around the end of the 20th century My specific concern relevant to Nic s article is that many applications of energy balance or energy budget models actually make de facto assumptions which are similar to the GCM modelers and hence produce an upper bound on rather than most likely estimates of transient sensitivity As Nic indicates it is possible that the error is minimised by comparing energy balance sheets over periods when the multidecadal variations are in the same phase Fair enough There is however still a problem with mathematical coherence in doing this which is one of the things I am trying to write up at the moment In short form under the assumption that there are no external fluxes other than the predefined forcings and feedbacks I believe that we can eliminate the likelihood that the transfer and storage mechanism for net heat redistribution is between ocean and atmosphere on several grounds Latent heat flux is in the wrong direction to add heat flux to the mixed layer when needed Sensible heat flux is almost in phase between atmosphere and ocean as evidenced by temperature movements no source and sink available Regional surface distribution doesn t work either since local variations are almost in phase with each other globally by latitude and by ocean basin within a 10 year timespan This leaves only sensible heat movement within the oceans from somewhere to the surface mixed layer This is not a global diffusion process it is likely to be controlled entirely by local or regional changes in the deep convective regions and in the equatorial Pacific See Kosaka and Xie 2013 If this is valid then we do not expect to see the oscillations in any valid measure of total ocean heat content since the internal heat fluxes are self cancelling However since the oscillations are not visible in the forcing series apart from an artful artifactual kink around 1950 but are included in 
the temperature feedback, then mathematically the oscillations should be visible in the net total ocean heat flux. A clear contradiction. So we either accept the mathematical incoherence of the energy balance model, or we reject the assumption that the cycles have no external flux forcing. Or you can reject my logic that the net flux movement must be entirely internal to the oceans. Additional to the above is the fact that long-term MSL data shows crystal-clear evidence of 60-year cycles. So can we conclude that there must be an unaccounted-for external flux forcing? Maybe.

niclewis
Posted Dec 15, 2013 at 2:00 PM | Permalink

Frank, thanks for sharing this graph. I concur with your conclusion that estimating TCR over intervals separated by less than 60-70 years, or trends over periods of at least that length, is not reliable. In principle, if multidecadal natural cycles in surface temperature are caused by fluctuations in ocean heat uptake, then if OHU (or its counterpart, TOA radiative imbalance) can be measured accurately, it would be feasible to estimate ECS over a shorter period than 60-70 years. But I'm not sure it would be a good idea to attempt that; ECS might vary over the multidecadal cycle.

Paul, thanks for your detailed comments. I agree that the Tung and Zhou paper is helpful in that it considers a much longer period than the DelSole one. I tend to cite DelSole as it identifies the post-1850 IMP/AMO contribution quite well. As you may know, some UK Met Office GCM modellers (Booth et al 2012) have sought to explain the AMO as the result of changes in aerosol forcing; Zhang et al 2013 contained a pretty devastating criticism of that explanation. As you say, multidecadal quasi-cycles without a predictable period cause problems for TCR estimation. If those fluctuations are the counterpart of changes in OHU, it may be that first estimating ECS, taking account of those changes, offers a better route. TCR could then be derived from ECS using an OHU measure, e.g. assuming a mixed-layer diffusive ocean
that was estimated over a long period to iron out the effects of multi-decadal cycles.

Re your argument about the net flux movement being internal to the ocean: why does that prevent it causing fluctuations in global surface temperature? If cold deep water upwells and is warmed by (and cools) the atmosphere, or there is more mixing of warm surface water with cold deeper water by increased isopycnal transport, or whatever, that would depress the surface temperature and equate to a higher rate of OHU. Or are you saying that you don't think the global rate of downwards heat transport in the ocean can change much? It is certainly possible that multidecadal changes in the distribution of surface temperature are causing changes in global cloud forcing, which in turn cause fluctuations in global mean surface temperature, which I think is what you are suggesting. But is there any decent evidence for that?

Paul K
Posted Dec 16, 2013 at 7:06 AM | Permalink

Nic, thanks for the additional comments. Re aerosol forcing, I agree it was total nonsense.

"Re your argument about the net flux movement being internal to the ocean: why does that prevent it causing fluctuations in global surface temperature?" It doesn't, Nic. That's not the problem. The problem is one of coherence in application of an energy balance. Definitionally, the net downward flux at TOA = Forcing - lambda x T. Assume the secular forcing is constant or zero, and let us consider the part of the oscillation when surface temperature is increasing. If the surface temperature increases because of solely internal ocean heat flux, then we will see a decrease in net incoming radiation: a net cooling of the upper ocean mixed layer plus atmosphere to space, in net terms. At the same time, the surface mixed layer is displaying a heat gain which tracks SST, which heat comes, by assumption, from the deeper ocean. The heat flux from the deeper ocean to the mixed layer must be greater in magnitude than the radiative loss from the planet in order to support net heat gain by the
mixed layer. In summary: the heat flux into the mixed layer is greater in amplitude than, and out of phase with, the TOA radiative flux; the latter is cooling when the former is heating, and vice versa.

So far this is OK, but now consider what happens when we make the common assumption in energy balance models that the integral of the net TOA flux must all end up as ocean heat. The ocean heat flux into the mixed layer is exactly equal to the ocean heat flux out of the deep under this model, so these fluxes should be self-cancelling in any sum of total ocean heat content. The only thing that should be left visible is the net radiative flux due to surface temperature change. But this radiative flux is 180 degrees out of phase with the temperature oscillation, as explained above. So when we plot total OHC, we expect to see the oscillations 180 degrees out of phase with temperature (cooling at the temperature peaks), but we don't; we see the opposite. The total heat flux moves in phase with temperature, using the very limited data we have available. For a longer-term proxy dataset, have a look at Figure 3 here: http://www.psmsl.org/products/reconstructions/jevrejevaetal2008.php. Note that the MSL is a proxy for total heat, and so its derivative should reflect flux. The flux oscillation is almost exactly in phase with the temperature oscillations, and 180 degrees out of phase with what we should expect under the assumption that the oscillations are driven entirely by internal fluxes.

"It is certainly possible that multidecadal changes in the distribution of surface temperature are causing changes in global cloud forcing which in turn cause fluctuations in global mean surface temperature." I believe that this is one source of unaccounted-for forcing, but I am fairly sure that it is induced by ENSO. I know that you have looked at Forster and Gregory 2006 in terms of total feedback and uncertainty. If you haven't done so, I would strongly recommend that you have a close look at the results obtained for SW and
LW separately. The LW shows a range of feedbacks from small positive to very large negative. The SW shows a huge positive feedback: a forcing in disguise. I do not believe that this cloud albedo change is the trigger for the oscillations. I think that the oscillations are controlled by another unaccounted-for flux which has the correct periodicity, one which has been measured for a very long time but which has to date been largely ignored by climate science. I will remain mysterious until I have written it up properly.

Paul K
Posted Dec 16, 2013 at 7:11 AM | Permalink

Correction: I wrote "When we plot total OHC" when I meant to write "When we plot the derivative of total OHC, we expect to see the oscillations 180 degrees out of phase with temperature". Sorry.

niclewis
Posted Dec 16, 2013 at 2:04 PM | Permalink

Paul, thanks. I agree that the SL reconstruction you cite supports your argument, but it's unclear to me whether the SL data is accurate/reliable enough to establish it for certain. Whether ocean heat content data supports your argument or not over 1960-2000 depends on which dataset you look at. But since the late 1990s, surface temperature has been pretty flat whilst the ocean heat content datasets show a continuing rise. The Forster and Gregory 2006 SW surface-temperature vs TOA radiation relationship is statistically insignificant in all or most cases, so I'm not sure how much it shows. But it could indeed represent a forcing. The Lindzen and Choi lagged-regression papers are quite interesting on the forcing-vs-feedback issue, as you are probably aware. I look forward to reading your piece at Lucia's.

Frank
Posted Dec 16, 2013 at 4:30 PM | Permalink

Paul, Nic: did you notice this paper about OHCA: http://www.pmel.noaa.gov/people/gjohnson/OHCA_1950_2011_final.pdf? It seems to be interesting because, when one selects a climatology without too much infill (due to the fuzziness of the early data before Argo), the record of OHCA gets another face, as you can see in Fig 4 of the paper. Any impact on the discussion about IMV
vs forcing?

niclewis
Posted Dec 16, 2013 at 6:07 PM | Permalink

Frank, thanks. I'd missed that, although I've read a previous paper of theirs on the influence of climatology on OHC timeseries. I'll look at their new one.

Paul K
Posted Dec 17, 2013 at 2:50 AM | Permalink

Thanks for the reference, Frank, which I had not seen. If we accept the 0-1800 m data series as (a) true and (b) representative of the total OHC, then it suggests that net flux (the derivative of the curve shown in Fig 4) is roughly tracking surface temperature. Over the period shown, it starts at a high, decreases to a low value, climbs to a peak around 2001, and then decreases after that, pretty well in phase with the multidecadal temperature oscillation, which achieved an oscillatory high in the 1940s and just after the turn of the century. This is the exact opposite in phase of what we would expect if the surface temperature oscillation was caused by internal ocean flux movement, if you accept my previous argument. So, applying my previous logic, we reject the hypothesis that the oscillations are caused by heat redistribution within the ocean and look for an external flux. If I were a climate scientist then I would describe
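[Editor's note] The phase argument running through this thread can be illustrated numerically. This is a minimal sketch, not any commenter's actual code: it assumes a pure sinusoidal 60-year temperature oscillation, zero secular forcing, and illustrative parameter values (the feedback parameter and mixed-layer depth are assumptions, not figures from the thread).

```python
import numpy as np

# Assumed, illustrative parameters (not from the thread):
lam = 1.3            # climate feedback parameter, W m^-2 K^-1
C = 4.0e8            # mixed-layer heat capacity, J m^-2 K^-1 (~100 m of water)
years = 3.156e7      # seconds per year
period = 60 * years  # 60-year cycle
t = np.linspace(0.0, period, 2000)
w = 2.0 * np.pi / period

T = 0.1 * np.sin(w * t)                    # surface temperature anomaly, K
N_toa = -lam * T                           # net TOA flux: N = F - lam*T, with F = 0
mixed_layer_gain = C * np.gradient(T, t)   # mixed-layer heat gain, C dT/dt

# Under the usual energy-balance assumption that the integral of N
# ends up as ocean heat, the internal deep-to-mixed-layer flux cancels
# in total OHC, so d(OHC)/dt = N_toa, i.e. anti-phase with T. The
# observed derivative of OHC (or of MSL as a heat proxy) instead moves
# roughly in phase with T, which is the contradiction described above.
r_toa = np.corrcoef(N_toa, T)[0, 1]
r_mixed = np.corrcoef(mixed_layer_gain, T)[0, 1]
print(f"corr(d OHC/dt, T) under internal-flux model: {r_toa:+.2f}")  # ~ -1 (anti-phase)
print(f"corr(mixed-layer gain, T): {r_mixed:+.2f}")                  # ~ 0 (quadrature)
```

The point of the sketch is only the sign structure: with purely internal redistribution, the derivative of total ocean heat content must be anti-phase with the temperature oscillation, while the mixed-layer heat gain leads it by 90 degrees.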

    Original URL path: http://climateaudit.org/2013/12/09/does-the-observational-evidence-in-ar5-support-itsthe-cmip5-models-tcr-ranges/ (2016-02-08)
    Open archived version from archive
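[Editor's note] The thread above also touches on deriving TCR from ECS via an ocean heat uptake measure. A minimal sketch of the standard single-parameter ("kappa") relation, with purely illustrative numbers (none taken from the thread), not Nic Lewis's actual calculation:

```python
# During the transient, ocean heat uptake is represented as N = kappa * T,
# so the TOA balance F = lam*T + N gives F2x = (lam + kappa) * TCR, while
# at equilibrium N = 0 gives F2x = lam * ECS. All values are assumptions.
F2x = 3.71    # forcing from doubled CO2, W m^-2
lam = 1.3     # climate feedback parameter, W m^-2 K^-1 (assumed)
kappa = 0.7   # ocean heat uptake efficiency, W m^-2 K^-1 (assumed)

ECS = F2x / lam            # equilibrium climate sensitivity, K
TCR = F2x / (lam + kappa)  # transient climate response, K
print(f"ECS ~ {ECS:.2f} K, TCR ~ {TCR:.2f} K")
```

With any positive kappa, TCR is necessarily below ECS, which is why an OHU measure is needed to get from one to the other.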

  • niclewis « Climate Audit

    Original URL path: http://climateaudit.org/tag/niclewis/ (2016-02-08)
    Open archived version from archive