  • IPCC: Fixing the Facts « Climate Audit
I only said I'd hit the barn. Doesn't matter where, does it? It would be cynical of me to suggest that if observations had matched the AR4 A1B multi-model mean fairly well, the IPCC would claim that validates the use of the multi-model mean. In a like vein, it's disingenuous to include the entire range of FAR predictions, from business-as-usual to Scenario D (stringent controls in industrialized countries combined with moderated growth of emissions in developing countries). Scenario D has clearly not occurred. The IPCC should compare observations with the low/best/high estimates of business-as-usual given in FAR SPM Figure 8, all time series re-baselined to some common period such as 1961-1990.

Bruce Cunningham | Posted Oct 1, 2013 at 1:59 PM | Permalink

All these shenanigans just so they could say that temps are consistent with the models, and hope that the public buys it. They know that we "flat Earthers" know it isn't true. What utter tosh! Do the few CMIP3 and CMIP5 model runs that do not run hotter than observations have values of atmospheric CO2 concentration that even closely approximate what actually occurred? In my opinion, only model runs that had CO2 at realistic, observed levels should even be considered.

Sven | Posted Oct 1, 2013 at 2:03 PM | Permalink

According to the new graph, there could be a cooling trend right through to 2035 and it would still match the models' projections.

Gail | Posted Oct 1, 2013 at 2:07 PM | Permalink

We're very lucky those earlier drafts were leaked. Does any other science rely so crucially on leaks to make any progress?

rgbatduke | Posted Oct 1, 2013 at 2:17 PM | Permalink

Steve, as I've pointed out on a number of equations, there are much worse considerations in the graphs above. The spaghetti they present deliberately provides the illusion, as you (and indeed they) directly point out, that current temperatures lie within an ensemble of single model runs drawn from the collection of models presented. Let us count the sins.

a) Let me choose the single model runs to put into the figure, and by running each model a few dozen times and picking the one run I include, I can make the figure look like anything at all. God invented Monte Carlo to help stupid, confirmation-biased sinners avoid the deliberate or accidental abuse of statistics described by its own chapter in How to Lie with Statistics. To Hell with it.

b) We cannot be certain that they did in fact choose the model runs to include. Maybe they did just pick them randomly. In that case their conclusion is a clear case of data dredging, only worse. This is a mortal sin even without cherry-picking. When one does ordinary data dredging, one takes 20 jars of jelly beans, feeds them to lots of people, counts the number with acne, and discovers that green jelly beans can be positively correlated with (and hence "cause") acne, because they beat the usual but meaningless cut-off of 0.05 where all of the others fail. Of course, with 20 jars it is PROBABLE that one will make the cut-off, and with enough colors one can beat even more stringent limits; there are, what, over thirty colors of GCM jelly beans in this ensemble?

If only this were the worst of it, it would be easy enough to fix. One has to use a more stringent distribution and statistical test when one has an ensemble of independent jars of jelly beans, but there are still levels of correlation between green jelly beans and acne that would be difficult to explain with the null hypothesis of no correlation. But now take one of the actual jelly beans OUT of the jar: the graph above contains thirty different colors of jelly beans in a SINGLE jar. Yes, there are places where the green jelly beans are correlated with acne; some people that got acne did indeed eat green jelly beans. Most, however, did not. Some people that got acne ate more red jelly beans and not so many green. Most, however, did not. In fact, every single one of the jelly bean colors individually FAILS a simple hypothesis test of good correlation with acne (really, even barely marginal correlation with acne), but nearly all colors of jelly bean had a few days (not the same days) where they were well correlated, among many more days where they were not. The graph above is in the unique position of stating that, while EVERY color of jelly bean INDEPENDENTLY fails a hypothesis test against the data, we can be certain that jelly beans cause acne because every color of jelly bean has at least a few people who ate that color and got acne.

This isn't a small, ignorable error. This leads to a simple pair of possibilities. Either the assemblers of the graph, and drawers of conclusions from the graph, are completely incompetent at statistical hypothesis testing and data dredging, and managed to put a poster-child case of data dredging front and center in the report for policy makers (in which case they should be summarily fired for incompetence and replaced with competent statisticians); or else, worse, they are COMPETENT statisticians and deliberately assembled a misleading graph that openly encourages the ignorant to dredge the data, by interpreting the fact that nearly every model dips for TINY INTERVALS OF TIME down to where they reach the measured GAST (but they all do it at different times, spending much less than 5% of their time down there) as evidence that collectively the model spread includes reality. Oh My God. To Hell with you, sinner.

c) The next two sins are closely related. In AR4, and the early draft of AR5, the mean and standard deviation of the collection of models was presented (graphically, at least) as a physically meaningful quantity. I say "standard deviation" because, without the usual normal/erf assumptions, how can they generate confidence levels AT ALL? The basis of nearly all such measures in hypothesis testing is the central limit theorem, especially lacking even a hint of knowledge of the underlying distribution. However, this is in and of itself a horrible mortal sin against the holy writ of statistics. The central limit theorem explicitly refers to drawing independent, identically distributed samples out of a fixed underlying distribution. There is no POSSIBLE sense in which the GCMs included in the graphs above are iid samples from a statistical distribution of physically correct GCMs. There IS NO SUCH THING yet as the latter; the GCMs don't even MUTUALLY agree within a sensible hypothesis test. Started with identical initial conditions in trivial toy problems, they converge to entirely distinct answers, and if one does Monte Carlo with the starting conditions, the correctly formed ensemble averages per GCM will often fail to overlap for different GCMs, certainly if you run enough samples. The variations between GCMs are not random variations. They share a common structure, coordinatization, and in many cases similar physics similarly implemented. The mean of many runs of INDEPENDENT GCMs is not a statistically meaningful quantity in any sense defensible by the laws of statistics. The standard deviation of that mean is not a meaningful predictor of the actual climate. One can average HUNDREDS of failed models and get nothing but a very precise failed model, or average a single successful model and have a successful model. So to present such a figure in the first place is utterly misleading. To Hell with it.

d) The GCMs are not drawn from an iid distribution of correct GCMs. Therefore their mean and standard deviation is already a meaningless quantity, no matter how it is presented. There is no basis in statistics for the quantitative evaluation of a confidence interval lacking iid samples and any possibility of applying, e.g., the central limit theorem. Evil. Sin. To Hell with it.

I was afraid AR5 would persist in the statistical sins told in the summary for policy makers in AR4, and it appears that they have indeed done so, and even added to them. To CORRECT their errors, though, is simple. Just draw each jelly-bean-colored strand of spaghetti against the data ALONE. For EACH model, ask: is this a successful model? Not when it spends well over 95% of the time too warm. Repeat for the next one. Ooo, reject it too. Then the next one. Outta here. In the end you might end up with ONE OR TWO models from the entire collection, that only spend 80% of their time too warm, that aren't rejected by a one-at-a-time hypothesis test per independent GCM. Those models are merely probably wrong, not almost certainly wrong.

Or apply a Bonferroni analysis in order to obtain the p-value for the complete set. Oooo, does THAT fail the hypothesis test of "what is the probability of getting the actual data, given the null hypothesis that all of these models are in fact drawn from a hat of correct models"? Since NONE of them are even CLOSE to the actual trajectory, and one would expect at least one to BE close by mere chance given over 30 shots at it, we can reject the whole set (slightly fallaciously).

Finally, we could look at, I dunno, second moments: the FLUCTUATIONS of the models. Do they bear any resemblance to the actual fluctuation in the data? No, they do not, not even as single model runs. Indeed, the single model runs could be rejected on this basis alone. Why would the year-to-year variation of the climate be changing, when it has historically been remarkably stable in the entire HADCRUT record, with the exception of a single decade (that is almost certainly a spurious error) back in the 19th century? To Hell with it.

rgb

William Larson | Posted Oct 1, 2013 at 10:29 PM | Permalink

rgbatduke: Well, for one, I appreciate your taking the time to write up this comment. For me, at least, a non-statistician, it does an excellent job of explaining the "sins". Well, for two, posts/comments like this are a major reason that I read CA: I get to be educated about it all. Thanks to you here, I believe I come away with a much clearer understanding. "But I am in / So far in blood that sin will pluck on sin." (IPCC, aka Richard III.)

johanna | Posted Oct 2, 2013 at 9:16 PM | Permalink

Thanks, Prof Brown. Once again you help to educate and inform us in language that non-scientists can understand. Given that the choice of baselines is so critical in these exercises, my flabber is
gasted at the way they did this. When I was involved in (quite different) research, one of the first things we did was to play around with different baselines as a reality check. Choosing your baseline is one of the most important decisions you make, and requires a lot of thought and testing.

Bernie Hutchins | Posted Oct 3, 2013 at 12:03 AM | Permalink

In this excellent post, Dr. Brown (rgbatduke) has provided, yet again, a superb framework on which physicists and engineers who have at least a tentative sense of distrust in the proffering of AGW alarmists can organize their thoughts. In this instance we may feel that the mainstream climate scientists are moving the road signs of doubtful models, trying to justify an envelope of model outcomes based on the contention that an 18-wheeler once went through a guard rail here and into a cornfield, to say that the muddy ruts are really part of their model's road. Dr. Brown has called out the statistical sins involved. And he has told us exactly what to look for: "To CORRECT their errors, though, is simple. Just draw each jelly-bean-colored strand of spaghetti against the data ALONE. For EACH model, ask: is this a successful model? Not when it spends well over 95% of the time too warm. Repeat for the next one. Ooo, reject it too. Then the next one. Outta here." This insight he has provided is of immense value. Thanks again, Dr. Brown. Please give his post careful study.

Truthseeker | Posted Oct 3, 2013 at 1:36 AM | Permalink

Rgb, excellent summary. However, shouldn't the first sentence use "occasions" instead of "equations"?

rgbatduke | Posted Oct 3, 2013 at 3:30 PM | Permalink

Funny you should ask. Yes, but the error is subtle enough to be a halfway decent pun. As for elevating it to a full post (later comments): if I were going to do an actual post on it, I'd only feel comfortable doing so if I had the actual data that went into 1.4, so I could extract the strands of spaghetti one at a time. As it is, I can only see what a very few strands of colored noodles do, as they are literally interwoven to make it impossible to track specific models.

For example, at the very top of the figure there is one line that actually spends all of its time at or ABOVE the upper limit of even the shaded line from the earlier, leaked AR5 draft. It is currently a spectacular 0.7 to 0.8 C above the actual GAST anomaly. Why is this model still being taken seriously? As not only an outlier but an egregiously incorrect outlier, it has no purpose but to create alarm as the upper boundary of "model-predicted warming", one that somebody unversed in hypothesis testing might be inclined to take seriously.

But then it is very difficult to untangle the lower threads. A blue line has an inexplicable peak in the mid-2000s, 0.6 C warmer than the observed temperatures, with all of the warming rocketing up in only a couple of years from something that appears much cooler. Not even the 1997-1998 ENSO or Pinatubo produce a variation like this anywhere in the visible climate record. This sort of sudden, extreme fluctuation appears common in many of the models: excursions two or three times the size of year-to-year fluctuations in the actual climate, even during times when the climate did in fact rapidly warm over a 1-2 year period. This is one of the things that is quite striking even within the spaghetti. Look carefully and you can make out whole sawtooth bands of climate results, where most of the GCMs in the ensemble are rocketing up around 0.4 to 0.5 C in 2-3 years, then dropping equally suddenly, then rocketing up again. This has to be compared to the actual annual variation in the real-world climate, where a year-to-year variation of 0.1 or less is typical, 0.2 in a year is extreme, and where there are almost no instances of 3-4 year sequential increases.

I have to say that I think the reason they present EITHER spaghetti OR simple shaded regions against the measurements isn't just to trick the public and ignorant policy makers into thinking that the GCMs embrace the real-world data; it is to hide lots of problems, problems even with the humble GAST anomaly, problems so extreme that presenting the GCM results one at a time against the real-world data would cause even the most ardent climate zealot to think twice. Even in the grey-shaded, unheralded past before 1990, the climates have an excursion and autocorrelation that is completely wrong, an easy factor of two too large, and this is in the fit region.

Autocorrelation matters. In fact, it is the ONLY way we can look at external macroscopic quantities like the GAST anomaly and try to assess whether or not the internal dynamics of the model is working. It is the decay rate of fluctuations produced by either internal feedbacks or sudden changes in external forcings. In the crudest of terms, many of the models above exhibit:

Too much positive feedback: they shoot up too fast.

Too much negative feedback: they fall down too fast.

Too much sensitivity to perturbations: presuming that they aren't whacking the system with ENSO-scale perturbations every other year, small perturbations within the model are growing even faster, and with greater impact, than the 1997-1998 ENSO, which involved a huge bolus of heat rising up in the Pacific.

Too much gain: they go up more on the upswings than they go down on the downswings, which means that the effects of overlarge positive and negative oscillations bias the trend in the positive direction.

That's all I can make out in the mass of strands, but I'm sure that more problems would emerge if one examined individual models without the distraction of the others. Precisely the same points, by the way, could be made of (and were apparent in) the spaghetti produced by individual GCMs for lower troposphere temperatures, as presented by Roy Spencer before Congress a few months ago. There the problem was dismissed by warming enthusiasts as being irrelevant, because it only looked at a single aspect of the climate, and they could claim "but the GASTA predictions, they're OK". But the GASTA predictions above are the big deal, the big kahuna, global warming incarnate. And they're not OK; they are just as bad as the LTT, and everybody knows it.

That's the sad part. As Steve pointed out, they acknowledged the problem in the leaked release. We spent another year or two STILL without warming, with the disparity WIDENING, and their only response is to pull the acknowledgement, sound the alarm, and obfuscate the manifold failures of the GCMs by presenting them in an illegible graphic that preserves a pure statistical illusion of marginal adequacy. Most of the individual GCMs, however, are clearly NOT adequate. They are well over 0.5 C too warm. They have the wrong range of fluctuation. They have absurd time constants for growth from perturbations. They have absurd time constants for decay from perturbations. They aren't even approximately independent; one can see bands of similar fluctuations, slightly offset in time, for supposedly distinct models, all of them too warm, all of them too extreme and too fast. Any trained eye can see these problems. The real-world data has a completely different CHARACTER, and if anything the problems WORSEN in the future.

I cannot imagine that the entire climate community is not perfectly well aware of the "travesty" referred to in Climategate: that the models are failing, and nobody knows why. Why is honesty so difficult in this field? As Steve Mosher pointed out, none of this should ever have been used to push energy policy or CAGW fears on an unsuspecting world. It is not, as he seems to finally be admitting, settled science. It's not surprising that models that try to microscopically solve the world's most difficult computational physics problem get the wrong answer across the board; rather, it's perfectly reasonable to be expected. If it weren't for the world-saving heroic angst, the politics, and the bags full of money, building, tuning, fixing, and comparing the models would be what science is all about, as Steve also notes. So why not ADMIT this to a world that has been fooled into thinking that the model results were actually authoritative, bombarded by phrases like "very likely" that have no possible defensible basis in statistical analysis? All they are doing in AR5 Figure 1.4 is delaying the day of reckoning, and that not by much. If its information content is unravelled strand by strand and presented to the world for objective consideration, all it will succeed in doing is proving beyond any doubt that they are indeed trying to cover up their very real uncertainty, and perpetuate for a little while longer the illusion that GCMs are meaningful predictors and a sound basis for diverting hundreds of billions of dollars and costing millions of lives per year, mostly in developing countries where increased costs of energy are directly paid for in lives, paid right now, not in a hypothetical 50 years.

I think they are delaying on the basis of a prayer. They are praying for another super-ENSO, a CME, a huge spike in temperature like the ones their models all produce all the time, one sufficient to warm the world 0.5 C in a year or two and get us back on the track they predict. However, we are at solar maximum in solar cycle 24, at a 100-year low, and the coming minimum promises to be long and slow, with predictions of an even lower solar cycle 25. We are well into the PDO, at a point in its phase where, in the recent past, the temperature has held steady or dropped. Stratospheric water vapor content has dropped, and nobody quite knows why, but it significantly lowers the greenhouse forcing in the water channel; I've read NASA estimates for the lowering of sensitivity as high as 0.5 C all by itself. Volcanic forcings appear to have been heavily overestimated in climate models, and again, the forcings have the wrong time constants. It seems quite likely that the pause could continue, well, indefinitely, or at least until the PDO changes phase again or the sun's activity goes back up. Worse yet, it might even cool, because GCMs do not do particularly well at predicting secular trends or natural variability, and we DO NOT KNOW what the temperature outside should be in the absence of increased CO2, in any way BUT from failed climate models.

So sad. So expensive.

rgb

Steve McIntyre | Posted Oct 3, 2013 at 4:37 PM | Permalink

RGB, thanks for this. BTW, I have a collation of CMIP3 and CMIP5 GLB tas spaghetti strands and will upload them. I've also written an R function that will ping KNMI and obtain CMIP runs; it works for a number of variables.

Dan Hughes | Posted Oct 3, 2013 at 5:51 PM | Permalink

The GCM results for the GAST reported in AR5 are consistent with projections made in the peer-reviewed literature in 2001: "Long-range correlations and trends in global climate models: Comparison with real data". Abstract: "We study trends and temporal correlations in the monthly mean temperature data of Prague and Melbourne derived from four state-of-the-art general circulation models that are currently used in studies of anthropogenic effects on the atmosphere: GFDL-R15-a, CSIRO-Mk2, ECHAM4/OPYC3 and HADCM3. In all models the atmosphere is coupled to the ocean dynamics. We apply fluctuation analysis and detrended fluctuation analysis, which can systematically overcome nonstationarities in the data, to evaluate the models according to their ability to reproduce the proper fluctuations and trends in the past, and compare the results with the future prediction."

Dan Hughes | Posted Oct 3, 2013 at 6:04 PM | Permalink

Ooops, I forgot. From the conclusions: "From the trends one can estimate the warming of the atmosphere in future. Since the trends are almost not visible in the real data and overestimated by the models in the past, it seems possible that the trends are also overestimated for the future projections of the simulations. From this point of view, it is quite possible that the global warming in the next 100 yr will be less pronounced than that predicted by the models."

kim | Posted Oct 3, 2013 at 5:27 AM | Permalink

Final capital, pertinent.

Skiphil | Posted Oct 3, 2013 at 1:14 PM | Permalink

Thank you, Dr. Brown, for this
insightful discussion I think this would provide the basis for a terrific guest post at Climate Etc or WUWT Any chance you would submit it or could someone get it considered at one of those sites or perhaps Steve would consider elevating it to a lead post here with suitable edits I think a lot of ppl would find the discussion illuminating Beta Blocker Posted Oct 1 2013 at 2 30 PM Permalink Accurate or not honestly derived or not Figure 1 4 is an exceptionally effective means of conveying a message to the general public that temperature observations are in alignment with model predictions Perception is reality In the public s mind Figure 1 4 has the strong look and feel of science and so therefore it must be the product of science As iconic graphs go Figure 1 4 will stand right up there with the hockey stick as a means of effectively communicating the AGW narrative to government policy makers and to the public WillR Posted Oct 1 2013 at 3 12 PM Permalink Re Beta Blocker Oct 1 14 30 Some day I suspect the comment by Beta Blocker will become known as the most astute comment ever made about the latest IPCC release AR5 Beta Blocker Posted Oct 2 2013 at 11 02 AM Permalink Re WillR Oct 1 15 12 IMHO global mean surface temperature must decline continuously for a period of from thirty to fifty years doing so in the face of ever rising greenhouse gas emissions before the climate science community ever begins to seriously question its AGW narrative If the Central England Temperature record between 1659 and 2007 is taken as a rough guide for predicting future trends in GMST then we may see a small decline in GMST over the next ten to twenty years at which point a warming trend will resume CET is the only continuous instrumental record we have that goes back as far as it does and it accurately reflects warming trends over the last 100 years Using the historical pattern of CET s rising falling trends over 350 years as a rough guide to predicting future GMST rising falling trends 
I think it is only a matter of time before a warming trend resumes If a warming trend resumes within the next decade regardless of how small that warming trend might be relative to IPCC s predictive models the climate science community will consider itself completely off the hook for explaining The Pause Unless of course Figure 1 4 has for all practical purposes already accomplished that objective for them at least for the next six years anyway Keith DeHavelle Posted Oct 3 2013 at 6 55 PM Permalink IMHO global mean surface temperature must decline continuously for a period of from thirty to fifty years doing so in the face of ever rising greenhouse gas emissions It would not take you an hour To come to sensibility You underestimate the power Of human gullibility Sun declining Aerosols rising They d buy into sun s effect And endless rationalizing To state that they re still correct When money and position Might be in some contention You ll see no real attrition Just rhetorical invention But that doesn t mean they ll win this For science is gaining ground You and all of us who re in this Are doing something quite profound Keith DeHavelle MJFriesen Posted Oct 2 2013 at 11 05 AM Permalink re Figure 1 4 is an exceptionally effective means of conveying a message to the general public that temperature observations are in alignment with model predictions Perhaps But as I point out at the bottom although Fig 1 4 is a way of showing the CMIP3 projections made in AR4 compared to history through 2012 going forward the comparisons should be history from 2013 onward compared to AR5 projections using CMIP5 models Future evaluation should be the realized history vs the CMIP5 models until such time as better models than CMIP5 are available Beta Blocker Posted Oct 2 2013 at 11 44 AM Permalink Re MJFriesen Oct 2 11 05 MJFriesen in saying Figure 1 4 is an exceptionally effective means of conveying a message to the general public that temperature observations are in alignment with 
model predictions I am not making a judgement as to the graph s validity or accuracy as a scientific exercise Rather I am saying merely that it is a highly effective tool for communicating that particular message which the climate science community and the IPCC now greatly desire to communicate to the public and to policy makers i e Observations are in alignment with IPCC s past predictions Regardless of any issues that exist concerning its accuracy and validity Figure 1 4 is such an effective communications tool for influencing the lay public that it may very well get the IPCC and the climate science community off the hook for explaining The Pause at least until the AR6 review cycle begins later on in this decade Bob Koss Posted Oct 2 2013 at 2 04 PM Permalink MJFriesen See my comment below at 1 46 PM which includes link to CMIP5 projections No better than CMIP3 ianl8888 Posted Oct 3 2013 at 7 49 PM Permalink the comparisons should be history from 2013 Nope The baseline will simply be changed As with Beta Blocker my view is that convincing a majority of the population is regarded as the real achievement by the IPCC when this is threatened is when the defence becomes the most vociferous Bob Posted Oct 1 2013 at 3 05 PM Permalink Dana Nuke in the Guardian blog McIntyre has the goods Is that why he doesn t understand simple baselining or even look at the modeled vs observed trends For someone who has the goods that was a pathetically worthless blog post A high school maths student could have done better analysis It is fair to say that the alarmists are the true science deniers clivebest Posted Oct 1 2013 at 3 16 PM Permalink Consider instead Fig TS 9 Three observational estimates of global mean surface temperature black lines from HadCRUT4 GISTEMP and MLOST compared to model simulations C graph here if not loaded above As far as I can work out natural variation is based on post hoc data assimilation matching of GCM model outputs to measured temperatures after 
including effects of volcanoes and aerosols These are not derived empirically but instead fitted to agree with past results which is one reason why hindcasting is so successful TS 9 c shows just model predictions of greenhouse effects only without natural forcing Now we see that there is an underlying discrepancy between CMIP5 model predictions and rality CMIP5 models currently cannot predict natural variations because they are still not understood Observed warming lies significantly lower than pure AGW predictions man in a barrel Posted Oct 1 2013 at 4 32 PM Permalink Many people in the last few years have been saying that the models are running hot even folks such as Annan Why does the IPCC continue to defy reality jbenton2013 Posted Oct 2 2013 at 10 43 AM Permalink Don t you mean DENY rather than DEFY kevstest Posted Oct 1 2013 at 3 53 PM Permalink anip over editorializing Richard Betts Posted Oct 1 2013 at 5 17 PM Permalink There two points to make here 1 The final AR5 figure presents both model projections and observations as changes relative to a common baseline of 1961 1990 just as was done in AR4 see here The SOD graph for some odd reason used a baseline of 1990 for the models and 1961 1990 for the observations That doesn t make any sense which is presumably why they corrected it for the final draft Incidentally Steve you yourself chose to plot a model against observations in terms of changes relative to a common baseline of 1961 1990 here so you clearly agree with the AR4 and AR5 authors that this is the most appropriate thing to do The fact that the final AR5 figure is consistent with the equivalent AR4 figure shows that they haven t introduced anything new here they ve just done what they did before 2 The AR4 envelope from the SOD figure which is based on AR4 Figure 10 26 is from a Simple Climate Model SCM which only represents the long term trend and does not include natural variability like a GCM see here for the figure the legend says its from an SCM 
The new AR5 figure shows the spaghetti diagram from the CMIP3 GCMs which do include natural variability Since natural variability is important on the timescales under consideration here it makes more sense to compare the observations with models that include natural variability GCMs rather than those which don t SCMs So in both aspects the published AR5 figure is scientifically better than the SOD version as the model obs comparison is done like with like Steve perhaps in your opinion it would have been better for AR4 to have done Figure 10 26 using a different method than the one that they selected Nonetheless that s what AR4 elected to show and comparison to Figure 10 26 is a natural starting point Nor did the AR5 authors have any compunction about comparison to AR2 Figure 19 which is constructed from a single energy balance model Tamino misrepresented its construction in his blogpost a point that IPCC appears not to have adequately considered when they adopted the Tamino bodge Laurie Childs Posted Oct 1 2013 at 9 34 PM Permalink Richard Betts I m not sure that they did what you say they did with that AR4 graphic The text below it states Figure 1 1 Yearly global average surface temperature Brohan et al 2006 relative to the mean 1961 to 1990 values and as projected in the FAR IPCC 1990 SAR IPCC 1996 and TAR IPCC 2001a my bold It appears to me that the projections were still based on 1990 as in earlier Ars I could find no further discussion or explanation of what was done on this graphic in the relevant AR4 chapter either but perhaps I missed it Do you know where this was discussed I ve left a similar comment at Bishop Hill mt Posted Oct 2 2013 at 8 17 AM Permalink It looks to me that the issue is how 1990 is defined The old AR5 graph uses the value of the observation at 1990 The new AR5 graph and Richard s AR4 link above used the value of the smoothed series at 1990 Steve this is true for the rendering of the AR2 comparison where IPCC has applied the method 
proposed by Tamino. But it does not apply for the AR4 comparison, where IPCC has done something different.

mtobis Posted Oct 2, 2013 at 11:16 AM | Permalink
I own no copyright on the initials, but I do often style myself lowercase "mt". I just want to point out that I am not the mt in question here. However, I appreciate other mt's constructive approach in this particular case.

lucia Posted Oct 4, 2013 at 2:09 PM | Permalink
"The final AR5 figure presents both model projections and observations as changes relative to a common baseline of 1961-1990, just as was done in AR4 (see here). The SOD graph, for some odd reason, used a baseline of 1990 for the models and 1961-1990 for the observations. That doesn't make any sense, which is presumably why they corrected it for the final draft."
The proper baseline for comparing model projections to observations is whichever was selected by those making the projections. For the AR4 that was the 20-year mean from 1980-1999. Under this baseline, the model mean temperature averaged over 1980-1999 should match the observed temperatures for the same period, and in fact the average temperature in every run during that period should match the average of observations averaged over those 20 years. Of course, one may first do the comparison and then shift everything by the same
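The re-baselining at issue throughout this thread amounts to subtracting each series' mean over a chosen reference period before comparing. A minimal sketch in Python (the series and numbers below are invented for illustration, not actual CMIP or HadCRUT data):

```python
def rebaseline(series, base_start, base_end):
    """Re-express an annual {year: value} series as anomalies relative
    to the series' own mean over the years [base_start, base_end]."""
    base = [v for y, v in series.items() if base_start <= y <= base_end]
    mean = sum(base) / len(base)
    return {y: v - mean for y, v in series.items()}

# Illustrative toy series (not real data)
obs   = {1961: 0.0, 1975: 0.1, 1990: 0.3, 2010: 0.6}
model = {1961: 0.2, 1975: 0.3, 1990: 0.5, 2010: 1.0}

obs_rb   = rebaseline(obs, 1961, 1990)
model_rb = rebaseline(model, 1961, 1990)
# After re-baselining, each series averages to zero over 1961-1990,
# so constant offsets in absolute level drop out of the comparison.
```

The dispute is precisely about which reference to use: a 1961-1990 mean, the 1980-1999 mean chosen by the projection authors, or the single (smoothed or observed) 1990 value; each choice changes the apparent offset between models and observations.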

    Original URL path: http://climateaudit.org/2013/09/30/ipcc-disappears-the-discrepancy/ (2016-02-08)
    Open archived version from archive


  • ar5 « Climate Audit
2012, 5:43 PM
Unfortunately, IPCC seems far more concerned about secrecy than in requiring its contributors to archive data. I received another request to remove discussion of IPCC draft reports. On this issue, David Appell and I are in full agreement; see David Appell's collection of ZOD chapters here. Jan 30 Update: see below. By Steve McIntyre. Posted in Uncategorized. Also tagged midgley, stocker. Comments (164)

Neukom and the Steig Over/Under (Jan 19, 2012, 11:29 AM)
Earlier this year I reported on the refusal of Raphael Neukom, an associate of IPCC confidentiality advocate and WG1 Co-Chair Thomas Stocker at the University of Bern, to archive data used in a then recent multiproxy study, Neukom et al 2011 (Clim Dyn). In his refusal letter, Neukom stated that "Most of the non-publicly available ..." By Steve McIntyre. Posted in Uncategorized. Also tagged neukom, pages. Comments (133)

Stocker's Earmarks (Jan 12, 2012, 7:52 PM)
In December the WG1 TSU of the IPCC sent me a formal notice asking me to remove Climate Audit discussion of the IPCC Zero Draft. In this notice they stated: "It has come to our attention that several Chapters of the Zero Order Draft (ZOD) of WGI AR5 are being cited, quoted and discussed on ..." By Steve McIntyre. Posted in Uncategorized. Also tagged stocker. Comments (166)

Older posts Tip Jar The Tip Jar is working again via a temporary location Pages About Blog Rules and Road Map CA Assistant CA blog setup Contact Steve Mc Econometric References FAQ 2005 Gridded Data High Resolution Ocean Sediments Hockey Stick Studies Proxy Data Station Data Statistics and R Subscribe to CA Tip Jar Categories Categories Select Category AIT Archiving Nature Science climategate cg2 Data Disclosure and Diligence Peer Review FOIA General Holocene Optimum Hurricane Inquiries Muir Russell IPCC ar5 MBH98 Replication Source Code Spot the Hockey Stick Modeling Hansen Santer UK Met Office Multiproxy Studies Briffa Crowley D Arrigo 2006 Esper et al 2002 Hansen Hegerl 2006 Jones Mann 2003 Jones et al 1998 Juckes et al
2006 Kaufman 2009 Loehle 2007 Loehle 2008 Mann et al 2007 Mann et al 2008 Mann et al 2009 Marcott 2013 Moberg 2005 pages2k Trouet 2009 Wahl and Ammann News and Commentary MM Proxies Almagre Antarctica bristlecones Divergence Geological Ice core Jacoby Mann PC1 Medieval Noamer Treeline Ocean sediment Post 1980 Proxies Solar Speleothem Thompson Yamal and Urals Reports Barton Committee NAS Panel Satellite and gridcell Scripts Sea Ice Sea Level Rise Statistics Multivariate RegEM Spurious Steig at al 2009 Surface Record CRU GISTEMP GISTEMP Replication Jones et al 1990 SST Steig at al 2009 UHI TGGWS Uncategorized Unthreaded Articles CCSP Workshop Nov05 McIntyre McKitrick 2003 MM05 GRL MM05 EE NAS Panel Reply to Huybers Reply to von Storch Blogroll Accuweather Blogs Andrew Revkin Anthony Watts Bishop Hill Bob Tisdale Dan Hughes David Stockwell Icecap Idsos James Annan Jeff Id Josh Halpern Judith Curry Keith Kloor Klimazweibel Lubos Motl Lucia s Blackboard Matt Briggs NASA GISS Nature Blogs

    Original URL path: http://climateaudit.org/tag/ar5/ (2016-02-08)
    Open archived version from archive

  • Met Office Hindcast « Climate Audit
do the same. With hindcasts you can put in some fudge factors for the occasional volcanic explosion etc. As a modeler (although not in the field of climate) of 30 years' experience in running very large computer simulations, I find the task of trying to model future climate very daunting. You can only model what you understand, and as Don Rumsfeld said, there are things you don't know and things you don't know that you don't know. I would take a different approach, much like Dr Lindzen: mainly to try to do a simpler model of the sensitivity of the climate to changes in GHGs, although it is in no way simple. A half degree C divergence in the models just a few years out raises questions.

Martin A Posted Jul 22, 2013 at 5:04 AM | Permalink
I don't see the point in running hindcasts at all. You could always tweak this or tweak that parameter until your model approximates the station data. I have a Lotus 1-2-3 spreadsheat that produces perfect hindcasts. However, its forecasts are as useless as an ashtray on a motorbike.

Martin A Posted Jul 22, 2013 at 5:05 AM | Permalink
*sheet. It's a lookup table of past values.

Richard Drake Posted Jul 22, 2013 at 5:14 AM | Permalink
Fair enough, but there was no need to swear to start with.

Craig Loehle Posted Jul 20, 2013 at 9:55 AM | Permalink
Are these curves what climate scientists (TM) call "remarkably similar"?

Carrick Posted Jul 20, 2013 at 12:08 PM | Permalink
My take: the reconstructed temperatures are not reliable enough prior to 1950 to draw many conclusions about whether the models are reliable or not. Beyond that, there is a substantial amount of tuning in the models. I think the good agreement between a particular model and data during the backcasting period is more of a statement that somebody worked hard to tune their model, and possibly had more money and other resources to get their model to look good. Forecasting skill would be the appropriate place to test the validity of the models.

William Newman Posted Jul 20, 2013 at 2:59 PM | Permalink
Carrick wrote: "Forecasting
skill would be the appropriate place to test the validity of the models." Another way to test validity would be to match enough data sufficiently closely that the match can't be due to overfitting, because (roughly) the number of independent observations being matched is much larger than the number of degrees of freedom available for tuning to improve the fit. (The number-of-degrees-of-freedom notion can be refined in various ways, e.g., Vapnik-Chervonenkis dimension.) I would guess (indeed, guess somewhat wildly, because I'm ignorant of lots of things like measurement uncertainty and observed correlations at different time and spatial scales) that we already have enough data to convincingly check a realistically complex model, if the model extended its predictions down to the detailed level where we have lots of data: detailed data like individual and cross-correlated statistics of local stations and individual satellite pixels. In principle it might even be possible to use recorded weather observations to work backwards to estimate important unrecorded inputs (like particulates and land use) and still get a mathematically convincing can't-be-overfitting fit. In practice, given the heroic approximations needed to model a system as complicated as the earth, it seems unbelievably unlikely that anyone will be able to do that, so probably a climate modeler's best bet is to stick to predicting a few degrees of freedom that are easy to overfit, then pound the table about how closely the hindcast matches a few degrees of freedom in historical data. But if tomorrow a superadvanced civilization sent us a superfast computer and a model that actually captured the physics, I think we could reliably recognize the model as good (and not explainable by overfitting) with a few months of analysis of very detailed hindcasts, without waiting for decades to see how it does on forecasts.

stevepostrel Posted Jul 20, 2013 at 6:48 PM | Permalink
I've often wondered why we don't see Vapnik-Chervonenkis analyses of the
existing simulators. That would at least put a bound on the degree of tuning involved.

Bob Tisdale Posted Jul 20, 2013 at 7:35 PM | Permalink
Carrick, the models are tuned, but we know that models can't simulate most metrics even over the past three decades, including sea surface temperatures:
http://bobtisdale.wordpress.com/2013/02/28/cmip5-model-data-comparison-satellite-era-sea-surface-temperature-anomalies
Precipitation over land and oceans:
http://bobtisdale.wordpress.com/2013/07/08/models-fail-global-land-precipitation-global-ocean-precipitation
Daily Tmax and Tmin and Diurnal Temperature Range:
http://bobtisdale.wordpress.com/2013/06/20/model-data-comparison-daily-maximum-and-minimum-temperatures-and-diurnal-temperature-range-dtr
Hemispheric sea ice area:
http://bobtisdale.wordpress.com/2013/06/15/model-data-comparison-hemispheric-sea-ice-area-2
Etc. Regards

Gerald Browning Posted Jul 20, 2013 at 10:47 PM | Permalink
Climate models have been used for forecasting and the results are terrible (Dave Williamson). Sylvie Gravel's manuscript shows how quickly a forecast model goes astray because of the dominant error (boundary layer nonsense). They are only brought back to reality by inserting new obs every 6 or 12 hours, a process known as updating (a tuned blend of obs and model data). Jerry

David Young Posted Jul 20, 2013 at 11:46 PM | Permalink
Jerry, I'm very interested in this. Can you provide a reference or link for Gravel's paper? You may remember me from Boulder around 1978 or so; I went to one of your NCAR seminars, on sound wave filtering I think. I've since been working on Navier-Stokes, and we have found that eddy viscosity, such as is used for boundary layers, is not very accurate. It's usually overly dissipative. Best, Dave Young

tchannon Posted Jul 20, 2013 at 12:46 PM | Permalink
Can someone clarify the meaning of "hindcast" please, because this looks like another word hijack and meaning twist. I expect it to mean forecasting with time reversed, where in this context we take conditions today and forecast
the model from then backwards. I suspect the meaning used is nothing of the kind, but merely taking some past point in time and then forecasting, i.e. forwards in time from that point, which is not hindcasting. Forecasting with withheld known data is a normal development technique which I assumed was entirely normal in climatic work, yet the word seems to have appeared often recently as though this is new.

fastfreddy101 Posted Jul 20, 2013 at 1:28 PM | Permalink
An old Chinese saying goes: "Those who have knowledge don't predict. Those who predict don't have knowledge."

Speed Posted Jul 20, 2013 at 2:51 PM | Permalink
Overfitting: "The possibility of overfitting exists because the criterion used for training the model is not the same as the criterion used to judge the efficacy of a model. In particular, a model is typically trained by maximizing its performance on some set of training data. However, its efficacy is determined not by its performance on the training data but by its ability to perform well on unseen data." XKCD

Speed Posted Jul 20, 2013 at 2:52 PM | Permalink
That was supposed to point here: http://xkcd.com/1122

Richard Drake Posted Jul 20, 2013 at 3:14 PM | Permalink
And yet I tell you, nobody will produce a cartoon on overfitting as good as that for a very long time.

rpielke Posted Jul 20, 2013 at 3:50 PM | Permalink
Hi Steve, I have a comment on "We often hear about the supposed success of current GCMs in hindcasting 20th century from first principles". The GCMs are not first-principle models. Except for the pressure gradient force, advection and gravity, the models are constructed with parameterizations that are always using parameters and functions that are tuned, usually from a very limited set of observational data during ideal conditions and/or from a higher resolution model with its own set of tuned adjustments. Then the parameterizations are applied to situations for which they were not tuned. I discuss this issue at length for mesoscale models (and the same restraint exists for GCMs) in my book:
Pielke Sr., R.A., 2002: Mesoscale Meteorological Modeling, 2nd Edition, Academic Press, San Diego, CA, 676 pp. http://cires.colorado.edu/science/groups/pielke/pubs/books/mesoscalemodeling.html
Pielke Sr., R.A., 2013: Mesoscale Meteorological Modeling, 3rd Edition, Academic Press, in press.
The same issues of tuning of parameterizations apply to all the other components of the climate models, i.e. in the representation of physics, chemistry and biology in the oceans, snow and ice, soil, vegetation, etc. I also recently documented the failings of the CMIP5 hindcast runs in my guest post at http://www.climatedialogue.org/are-regional-models-ready-for-prime-time
Best Regards, Roger Sr.

Richard Drake Posted Jul 20, 2013 at 4:04 PM | Permalink
It's very nice to see Roger Sr. on CA. On a more anecdotal note, I've just found this from an interview of James Lovelock by Leo Hickman in the Guardian in March 2010:
"The great climate science centres around the world are more than well aware how weak their science is. If you talk to them privately they're scared stiff of the fact that they don't really know what the clouds and the aerosols are doing. They could be absolutely running the show. We haven't got the physics worked out yet. One of the chiefs once said to me that he agreed that they should include the biology in their models, but he said they hadn't got the physics right yet and it would be five years before they do. So why on earth are the politicians spending a fortune of our money when we can least afford it on doing things to prevent events 50 years from now? They've employed scientists to tell them what they want to hear. The Germans and the Danes are making a fortune out of renewable energy. I'm puzzled why politicians are not a bit more pragmatic about all this. We do need scepticism about the predictions about what will happen to the climate in 50 years, or whatever. It's almost naive, scientifically speaking, to think we can give relatively accurate predictions for future climate. There are so many unknowns that it's wrong to do it."
"We haven't got the physics worked out yet." But the folks at the Met Office don't always say that as clearly as they might publicly, do they?

Tom Fuller Posted Jul 20, 2013 at 10:56 PM | Permalink
Umm, Gerald, do you realize who you are addressing your comment to?

Steve McIntyre Posted Jul 20, 2013 at 4:25 PM | Permalink
I've added an update showing the UK Met Office contribution to IPCC AR5 against a couple of graphics from Ed Hawkins' blog. Hawkins' blog has some interesting posts and is worth a visit.

Richard Drake Posted Jul 20, 2013 at 5:01 PM | Permalink
I didn't make the original plot either, but it's not that hard to do an overlay (see code below). With the updated data, observations are outside the dashed lines. The planet's at stake, and it's this volunteer who takes time (in both senses) and becomes first to witness such naughty observations. Satisfying moment.

RoyFOMR Posted Jul 20, 2013 at 4:36 PM | Permalink
Broad-brush template for func CS(tm) postLIA HindcastProjection: REM apply explicit casting on unarchived data where convenient, e.g. established physics on adjusted (as appropriate) GAT. UNREM switch maraschino as public; case maraschino startyear 1920 to maraschino endyear 1930: call AdjustParametersToFitFunding(maraschino); break; repeat and adjust parameters as required for correct conclusion. Broad-brush template for func CS(tm) postPresent HindcastProjection: Forecast = SettledForecastFundingFunc(Random 0 97, CONSENSUS); Call CS(tm) postPresent HindcastProjection(Forecast)

RoyFOMR Posted Jul 20, 2013 at 4:39 PM | Permalink
Darn, it doesn't compile, but at least it does comply.

Kenneth Fritsch Posted Jul 20, 2013 at 5:09 PM | Permalink
SteveM, while looking at an average scenario model result compared to an observed series, such as you show with RCP4.5 here, is revealing, some readers here might think that there is one or a few magic model runs that get it right vis-a-vis the observed record. To that end I have taken all the difference series from all the RCP4.5 model runs minus the
GHCN observed series, then did a breakpoint determination that divides each entire difference series into linear segments, then regressed those segments against time, and finally summarized those results, showing the number of significant trends in the linear segments and whether the trends are negative or positive for each model run/GHCN difference pair. I also include the number of years in each linear segment. I'll post the results at this thread with a link to the tables showing the results. I can say right now that no difference series of RCP4.5 model runs versus GHCN has no breakpoints.

ColinD Posted Jul 20, 2013 at 5:26 PM | Permalink
Slightly OT, but I was at a symposium recently where a very warmist climate scientist presenter had several digs at the deniers (yes, that branding was used). One of their points, in countering the discrepancy between models and the actual temperature record, was that only 5% of the difference could be attributed to the models themselves. I hadn't heard of this before; has anyone here?

RomanM Posted Jul 20, 2013 at 5:57 PM | Permalink
"One of their points, in countering the discrepancy between models and the actual temperature record, was that only 5% of the difference could be attributed to the models themselves."
I don't think that such a statement makes any sort of scientific sense. What was the other 95% of the difference attributed to? I get the distinct impression that the presenter might have made a rookie misinterpretation of a confidence interval or a statistical test.

NicL Posted Jul 20, 2013 at 5:43 PM | Permalink
The UK Met Office HadGEM2 AOGCM has an exceptionally high climate sensitivity (ECS): 4.59 K, topped only by the MIROC-ESM model at 4.67 K, and the highest TCR (2.50 K) of any CMIP5 model analysed, per Forster et al. 2013 (JGR). That would account for its very high projected future warming. At the same time, HadGEM2 has a low radiative forcing for a doubling of CO2 concentrations: 2.93 W/m2, c.f. 4.26 W/m2 for MIROC-ESM, a mean of 3.44 W/m2 for CMIP5 models analysed,
and a generally accepted figure, used in AR5 WG1, of 3.71 W/m2. HadGEM2's sensitivity to forcing is therefore much higher than MIROC-ESM's: 1.57 vs 1.10 K/(W/m2) for ECS, and 0.85 vs 0.52 K/(W/m2) for TCR. HadGEM2 also has a high negative aerosol forcing level, which would have had a depressing effect on its simulated change in global temperature between 1950 and 1980 or so, when aerosol loading rose rapidly. As a result of its high aerosol cooling, HadGEM2's net forcing change from pre-industrial to 2010 was only 1.0 W/m2, under half the estimated change per the leaked draft AR5 WG1 report. That explains why HadGEM2's hindcast rise in global temperature over the 20th century is unrealistically low. But under the RCP45 scenario, aerosol loadings are projected to fall from now on, so simulated future global temperature changes by HadGEM2 will fully reflect its very high TCR. There is no doubt that HadGEM2 is an outlier model in terms of its simulation of, and response to, radiative forcings, however good it may be at simulating weather patterns in the short term.

Speed Posted Jul 21, 2013 at 6:34 AM | Permalink
So there are two categories of uncertainty in modeling future climate: (1) sensitivity of a model to forcing (the code in the model), and (2) what the modeler has predicted future forcings will be (CO2 concentration, aerosols, land use changes, etc.). An engineer can design a car and predict what its fuel economy numbers will be for a given set of conditions. Separately, an engineer can predict, but can't know, under what conditions (forcings) the customer will use the car. Your mileage may vary.

Matt Skaggs Posted Jul 21, 2013 at 9:37 AM | Permalink
Thanks for this, Nic. I was hoping someone would post on the "why". If I am interpreting what you wrote correctly, you are saying that HadGEM2 had high sensitivity to non-CO2 radiative forcing but low sensitivity to CO2 radiative forcing. The twin uses of "climate sensitivity" in the vernacular make it a bit confusing. At any rate, a consensus seems to be forming that
Trenberth's missing heat was blocked by aerosols, but the heat is still coming as aerosols decline.

NicL Posted Jul 22, 2013 at 3:22 PM | Permalink
"you are saying that hadGEM2 had high sensitivity to non CO2 radiative forcing but low sensitivity to CO2 radiative forcing"
Almost right. I am saying that HadGEM2 has high sensitivity to non-CO2 radiative forcing (in W/m2), but lower sensitivity to CO2 doubling than one would expect given its sensitivity to non-CO2 forcing.

FerdiEgb Posted Jul 21, 2013 at 9:52 AM | Permalink
Aerosol load and influence is the largest tuning button, besides cloud influence, that they do use in models. If you compare the human emissions of SO2 with that of the Pinatubo, then the maximum influence is a 0.1 K global cooling, if taken into
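NicL's "sensitivity to forcing" figures are simply ECS (or TCR) divided by each model's forcing for a doubling of CO2. A quick arithmetic check in Python, using only the ECS, TCR and F2x values quoted in his comment (attributed there to Forster et al. 2013); the dictionary layout is illustrative, not any published dataset:

```python
# Sensitivity to forcing = temperature response / forcing for 2xCO2,
# using only the numbers quoted in NicL's comment above.
models = {
    # name: (ECS in K, TCR in K or None if not quoted, F2x in W/m2)
    "HadGEM2":   (4.59, 2.50, 2.93),
    "MIROC-ESM": (4.67, None, 4.26),
}

ratios = {}
for name, (ecs, tcr, f2x) in models.items():
    ratios[name] = {"ECS/F2x": round(ecs / f2x, 2)}   # K per W/m2
    if tcr is not None:
        ratios[name]["TCR/F2x"] = round(tcr / f2x, 2)
# HadGEM2: ECS/F2x = 1.57 and TCR/F2x = 0.85; MIROC-ESM: ECS/F2x = 1.10,
# matching the 1.57 vs 1.10 and 0.85 figures in the comment.
```

MIROC-ESM's TCR is not quoted directly in the comment (only its 0.52 K/(W/m2) ratio), so it is left out rather than back-calculated.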

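Martin A's lookup-table quip, and Carrick's point about tuning, can be made concrete with a toy sketch (pure Python; all numbers invented for illustration): a "model" that is free to memorize the training period achieves a perfect hindcast while saying nothing about forecast skill.

```python
# Toy illustration: a "model" that memorizes the past hindcasts
# perfectly but has no forecasting skill. All numbers are invented.

past = {2000: 0.30, 2001: 0.35, 2002: 0.28, 2003: 0.40}   # "training" years
future = {2004: 0.45, 2005: 0.33}                          # withheld years

def lookup_model(year):
    """Perfect hindcast: look the answer up in the training data;
    fall back to the last known value (persistence) when forecasting."""
    return past.get(year, past[max(past)])

hindcast_err = max(abs(lookup_model(y) - t) for y, t in past.items())
forecast_err = max(abs(lookup_model(y) - t) for y, t in future.items())
# hindcast_err is exactly 0.0; forecast_err is not.
```

This is why the "forecast with withheld known data" that tchannon describes is the meaningful test: skill has to be measured on data the tuning never saw.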
    Original URL path: http://climateaudit.org/2013/07/19/met-office-hindcast/ (2016-02-08)
    Open archived version from archive

  • Modeling « Climate Audit
orientated readers will find it of ... By niclewis. Also posted in Uncategorized. Tagged lewis, sensitivity. Comments (34)

New Nic Lewis Paper (Apr 16, 2013, 1:49 PM)
Nic Lewis's paper on climate sensitivity is available. See his BH post here. Also see discussion at Judy Curry and WUWT. By Steve McIntyre. Also posted in Uncategorized. Tagged lewis, sensitivity. Comments (11)

Mike's AGU Trick (Mar 2, 2013, 12:29 PM)
There has been considerable recent discussion of the fact that observations have been running cooler than models; see for example Lucia's discussion of IPCC AR5 SOD Figure 9.8 here. However, Michael Mann at AGU took an entirely different line. Mann asserted that observations were running as hot or hotter than models. Mann's assertion ... By Steve McIntyre. Also posted in Hansen, Uncategorized. Tagged agu, Mann, scenario b. Comments (330)

Nic Lewis on Statistical errors in the Forest 2006 climate sensitivity study (Nov 8, 2012, 8:03 AM)
Nic Lewis writes as follows (see related posts here, here): First, my thanks to Steve for providing this platform. Some of you will know of me as a co-author of the O'Donnell, Lewis, McIntyre and Condon paper on an improved temperature reconstruction for Antarctica. Since then I have mainly been investigating studies of equilibrium climate ... By Steve McIntyre. Also posted in Uncategorized. Tagged forest, nic lewis. Comments (122)

Older posts

    Original URL path: http://climateaudit.org/category/modeling/ (2016-02-08)
    Open archived version from archive

  • UK Met Office « Climate Audit

    Original URL path: http://climateaudit.org/category/modeling/uk-met-office/ (2016-02-08)
    Open archived version from archive

  • hadgem2 « Climate Audit

    Original URL path: http://climateaudit.org/tag/hadgem2/ (2016-02-08)
    Open archived version from archive

  • hawkins « Climate Audit

    Original URL path: http://climateaudit.org/tag/hawkins/ (2016-02-08)
    Open archived version from archive

  • More Met Office Hypocrisy « Climate Audit
    wrong. While I think that the Nature News graphic was misleading because it did not show the most recent Met Office forecast, I didn't use the word "wrong" to describe it, though it was much more worthy of the term "wrong" than my graphic.

    Mark T | Posted Jul 20, 2013 at 1:02 AM | Permalink
    Don: Though my reply to Richard was snipped (ok, I was naughty), in no way was I implying that everyone at BH is apologetic to Betts. I was simply noting a general trend of sorts that was highlighted by recent protests that any comment critical of Betts should be censored. IMHO, such a concept is the antithesis of skepticism. Steve openly referred to Betts' comments as hypocritical, a point I wholeheartedly agree with (but not limited to this instance), in spite of Drake's protests to the contrary. Also IMHO, Montford himself has become increasingly critical, though again much more subtly than I. Sorry if you disagree, Richard, but my observation, and like comments around the various blogs, are pretty obvious upon inspection. Mark

    Marion | Posted Jul 17, 2013 at 5:22 PM | Permalink
    Unfortunately, untrue and unwarranted allegations seem to be the MO of climate scientists. Just look at the hype the Met Office pushed out in the run-up to the Copenhagen negotiations back in 2009 in their brochure "Warming: Climate Change, the Facts" (no longer available on the Met Office web site, as Betts had it quietly removed): http://people.virginia.edu/rtg2t/future_gcc/UK_Met_quick_guide.pdf
    Page O4 of the brochure has the most exaggerated hockey stick I've ever seen, and comments such as those below are strewn throughout:
    "It's now clear that man-made greenhouse gases are causing climate change. The rate of change began as significant, has become alarming and is simply unsustainable in the long term. It's a problem we all share because every single country will be affected. Together, today, we must take action to adapt to it and stop it, or at least slow it down."
    "What will happen if we don't reduce emissions? If emissions continue to grow at present rates, CO2 concentration in the atmosphere is likely to reach twice pre-industrial levels by around 2050. Unless we limit emissions, global temperature could rise as much as 7 C above pre-industrial temperature by the end of the century and push many of the world's great ecosystems, such as coral reefs and rainforests, to irreversible decline. Even if global temperatures rise by only 2 C, it would mean that 20-30% of species could face extinction. We can expect to see serious effects on our environment, food and water supplies, and health."
    "Are computer models reliable? Yes. Computer models are an essential tool in understanding how the climate will respond to changes in greenhouse gas concentrations, and other external effects such as solar output and volcanoes. Computer models are the only reliable way to predict changes in climate. Their reliability is tested by seeing if they are able to reproduce the past climate, which gives scientists confidence that they can also predict the future."
    And bearing in mind that in 2009 the Met Office would have been well aware of the comparative flatlining of temperatures for over a decade, that hockey stick image was grossly dishonest. Interesting to compare it to the decadal forecast they quietly slipped out on Christmas Eve 2012.

    Joe Born | Posted Jul 17, 2013 at 7:14 PM | Permalink
    Hilary Rostov, Chris Leamy: You aren't the only ones who don't know what "initialized" is supposed to mean. While I have a high regard for Mr. McIntyre's work, he needs an interpreter. If there is someone who professes to understand all of his posts and the points he is trying to make, that person would be doing a public service to provide an "interlinear McIntyre", similar to those sites that show the Koine and English New Testaments side by side. I only occasionally have the time to tease meaning out of his posts.

    Hilary Ostrov (aka hro001) | Posted Jul 17, 2013 at 10:44 PM | Permalink
    Joe, I respectfully disagree with your criticism and/or expectation. If I don't understand something in one of Steve's posts, I consider it my responsibility to educate myself, or to determine that a post is way beyond my pay grade and/or specific interests and wait for Steve's next post. In this instance I drew an inference, which may or may not be correct, from Steve's revised graph in his previous post, combined with a recollection of Betts' intermittent and IMHO very selective responses to questions posed at Bishop Hill. Particularly informative, IMHO, were those questions, politely and respectfully asked, to which Betts did not respond. For an example of Betts "in action", so to speak, you might want to take a look at the rather long thread of comments pursuant to Andrew Montford's post in January of this year, "Spot the difference". And speaking of Andrew Montford: apart from his The Hockey Stick Illusion and its sequel Hiding the Decline, in which he succeeds in conveying the "translations" you seek, there are many of Steve's key posts (of which this is probably not one) in which Montford can be counted on to provide such a translation. Not to mention that one of the hallmarks of climate science is a rather unique "Dynamic Language Library" which is so flexible that it seems to permit redefinition on the fly. IMHO it is also worth noting that "initialized" (or "not initialized") was introduced by Betts, not Steve. This being the case, surely the onus of explanation lies with Betts, who conveniently chose not to post his objection in a comment on the post but to broadcast it to his followers via Twitter. YMMV, but, well, that's the view from here.

    Mooloo | Posted Jul 17, 2013 at 11:31 PM | Permalink
    Joe, you appear to want to understand the maths without having to understand the maths. The world doesn't work like that: to understand maths you have to learn the language of maths and take time to learn the concepts. I think one of the great benefits of this blog is that it does dwell on the details and doesn't try to avoid the unavoidable statistical intricacies. Like Hilary, I feel it my duty to catch up when it is heavy going. A side
    effect is that I have learnt a lot of statistics on the way.

    intrepid wanders | Posted Jul 18, 2013 at 1:26 AM | Permalink
    Joe: "Initializing" is mostly modeling terminology for where you set your model spin-up to take in a variety of initial conditions/parameters. Think of it this way: you came up with a process to throw a ball through a tire on a windless day. Two weeks from then, on a windy day, you still want to throw the ball through the tire. You would have to re-initialize your throwing parameters from two weeks ago to compensate for the current wind, with the angle of your throw or maybe the force. For those with programming/micro-controller experience, it is the "setup" portion of the program/macro. What "initialise" could possibly mean for data that is already out of the modeling machine, only Richard's MetO Bizzaro World can know: data can only be initialised before it produces a product. Nature produces mostly three-leaf clovers, but Richard may insist, by his argument, that one has not been properly initialised after it has been created as a three-leaf, and that it should be a four-leaf clover. Steve M indulges the very poor use of terminology and finds one of Betts' co-workers using the post data in a similar fashion to Steve's "un-initialized". So tomorrow Richard will need to: (a) talk to Smith about his un-initialized data; (b) talk to Steve about how Smith's three-leaf clover is unrelated to Steve's three-leaf clover; or (c) ignore the entire mis-speak and comment on Steve's three-leaf, and hope people will forget by Tuesday next week.

    Steven Mosher | Posted Jul 19, 2013 at 11:39 AM | Permalink
    "What 'initialise' could possibly mean for data that is already out of the modeling machine, only Richard's MetO Bizzaro World can know: data can only be initialised before it produces a product."
    Let's see if I can give you a reasonable description. In an uninitialized run you spin the model up from a zero state to an equilibrium state. What does that mean? At the start of the simulation, let's say the velocity state of an ocean grid is zero. It's not moving; there is no wind. The spin-up proceeds and forcings are applied, say a TSI of 1360 watts. The oceans start to move, the winds start to blow. The spin-up takes hundreds of years, even more. At some point the temperature stops fluctuating: you achieve balance at the TOA; you are at equilibrium. One thing you check for is drift in key variables, like global temps. Your simulation is now ready to run. It is year ZERO. You now apply forcings for 1850 to the present, and year zero becomes year 1850. This means that even if your simulation produces emergent phenomena like El Nino, the timing of these events will differ from reality. Let's imagine that in 1851 there was a La Nina starting. Your sim won't be able to match that. Why? Because your simulated 1850 was in equilibrium and the actual 1850 was not. The hope is that if you run a bunch of realizations, these natural cycles will integrate to zero, and the average of all your realizations will show the effect of changes in forcings. This also means that at any point in your collection of simulations you will "miss" when you compare it to the real observations, if the period of comparison is dominated by an emergent phenomenon. To address this problem, and the problem of shorter-term projections, you can try to initialize the simulation to the historical observations: take 1960, take observations from that time period, and initialize the model using those.

    Eric Worrall | Posted Jul 17, 2013 at 11:38 PM | Permalink
    DodgerBlue: hilarious.

    PJB | Posted Jul 19, 2013 at 10:19 AM | Permalink
    Perhaps "Artful Dodger Blue" might be more appropriate.

    Martin A | Posted Jul 18, 2013 at 12:20 AM | Permalink
    'Scuse my ignorance, but what is an "initialised forecast"?

    mrsean2k | Posted Jul 18, 2013 at 2:22 AM | Permalink
    Steve's "untrue and unwarranted" raises the temperature in a way that's not helpful in eliciting a more nuanced explanation from RB, IMO. As are references to occasional snipes. RB was asked to respond to the previous article by means of a question on Twitter from
    Anthony Watts, and he replied on the same forum, within its confines. He didn't take any kind of position until prodded by AW. And it is misleading to claim, as AW does, that pointing someone at an article researched by Steve in his usual meticulous way amounts to a simple question. Just because it took him a second or so to ask it doesn't make it simple. I suffer at the hands of this sort of question regularly: an email request beginning "Quick question" invariably means that the person asking just wants a quick answer, and will generally offer no thanks or acknowledgment when I take time to explain why no quick, simple answer exists. Framing someone's disagreement with your conclusions as an attack or accusation is a tactic that's not warranted by either side.

    Salamano | Posted Jul 18, 2013 at 3:49 AM | Permalink
    The last time I talked like this on a climate science website I was accused of being a "Concern Troll". Oh well. Hyperbole aside, does it not look like Steve has a point: there is a significant difference between the IPCC-submitted numbers and the latest?

    Joe Born | Posted Jul 18, 2013 at 7:25 AM | Permalink
    intrepid wanders: Thanks for the input. Actually, I think I understand what initialization is in general; it's just that the use here specifically was obscure. If you're modeling a physical system you must assume some initial state, yet reference in the post is made to an "uninitialized" model, so perhaps I can be forgiven for wondering whether "initialize" means something different here. My guess is that "uninitialized" doesn't mean that the simulations had no initialization, but rather that the initial conditions were applied to a time decades in the past, and/or that different contributions to the "uninitialized" mean curve came from multiple simulations which were initialized at a variety of times with various initial conditions. But since Mr. McIntyre was confident enough in the meaning to say, as he did, "needless to say, there is no scientific or statistical principle forbidding the illustration of initialized and uninitialized forecasts on the same graphic", it would have been helpful to at least three of his readers if he had set forth his understanding explicitly.

    Joe Born | Posted Jul 18, 2013 at 8:03 AM | Permalink
    Hilary Rostov and Mooloo: "If I don't understand something in one of Steve's posts, I consider it my responsibility to educate myself." You are no doubt to be commended for that attitude, but the purpose of a blog is to communicate, and the more effort a post requires of the reader, the fewer readers there will be who get communicated to effectively. Mooloo: "Joe, you appear to want to understand the maths without having to understand the maths. The world doesn't work like that: to understand maths you have to learn the language of maths and take time to learn the concepts." Believe me, I am painfully aware of my mathematical shortcomings, but the problem with Mr. McIntyre's posts is not mathematics but English exposition. His previous post, for example, set forth several interesting facts, but one might infer from its title, "Nature mag Hides the Decline", that Mr. McIntyre considered those facts to evidence something nefarious; at least to me, though, it's far from clear what he thought it was. The Jeff Tollefson article to which Mr. McIntyre's previous post drew attention dealt with applying the climate models to decade-scale forecasts. In contrast to those models' regular simulations, which Mr. Tollefson characterized as starting as far back as before the industrial era, these efforts initialize the models to conditions actually measured at the beginning of the decade being forecast. Graphs comparing the thus-predicted temperatures with actual temperatures and with the outputs of regular simulations indicated that the resultant temperature forecasts for recent periods tended to be lower than regular simulations but significantly higher than those that actually eventuated, and this was in tune with the article's verbal content: the Smith et al. study given as the lead example gave reasonable results for about a year out but then predicted dramatic warming that didn't happen. Read in light of the article as a whole, its title, "Climate Change Forecast for 2018 Is Cloudy with Record Heat", could have been taken merely as a cute way of saying "Now they're trying to make weather forecasts decades in advance", particularly in view of the "cloudy" part. True, that title was a poor hint at the above-described content, as is often the way of headlines, and it can justifiably be criticized on that basis. But Mr. McIntyre instead refers to it in connection with the fact that the Met Office decadal forecast represented by the graph's rightmost blue trace is not that organization's most recent, which does not exhibit that trace's pronounced increase. Is Mr. McIntyre saying the author made that forecast choice in order to justify the headline? Or that the headline was based on that trace? What is his point? And if we don't know for sure what his point is, how are we to judge whether the facts he marshals support it? I appreciate that many technical topics are just inherently hard to understand, and the reader needs to pore over them no matter how good the writer is. But most of the subjects of Mr. McIntyre's posts are not inherently as hard as he makes them; those Mr. Montford or Prof. McKitrick explained turned out to be quite easy to grasp. So if there's someone who can translate, that would be a great public service.

    Matt Skaggs | Posted Jul 18, 2013 at 9:50 AM | Permalink
    In the Figure 2 graphic from Smith et al. 2012, the blue line is obviously uninitialized, as it starts from a point above the observed data, while the red initialized curve points back to the observed data. When I first looked at Steve's plot in the last post, all three curves appeared to originate from the observed data, making it look like all three were initialized. Now I see the little red spur to the left of 2010 on the HadGEM2 curve, so I am guessing that it was just a coincidence that it touched the HadCRUT4 data in 2010.
    Steve: yes and no; while it wasn't overtly initialized on 2010, the IPCC submissions were done at that time.

    Kenneth Fritsch | Posted Jul 18, 2013 at 10:10 AM | Permalink
    HaroldW (Posted Jul 17, 2013 at 10:56 PM) noted that Richard Betts explained "initialized": "This is still early days of course, there is still a lot more work to do, but you can see from the 2012 figure that the hindcasts show the model agreeing with the observations reasonably well, and better than the HadCM3 hindcasts as shown in the 2011 figure." The statement Betts makes here is one that is frequently
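    The spin-up-versus-initialization distinction described in the thread above can be sketched with a toy zero-dimensional energy-balance model. This is only a minimal illustration: the constants are round illustrative numbers, and `step`, `spin_up` and `run` are hypothetical helpers invented for this sketch, not code from any actual GCM or from the Met Office.

```python
# Toy zero-dimensional energy-balance model: spin up from an arbitrary
# cold state to equilibrium ("year zero"), then run projections either
# from equilibrium ("uninitialized") or from an observed anomaly
# ("initialized"). All constants are illustrative only.

SIGMA = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
ALBEDO = 0.3        # planetary albedo (illustrative)
HEAT_CAP = 1e8      # effective heat capacity, J m^-2 K^-1 (illustrative)
DT = 86400.0 * 30   # time step of roughly one month, in seconds

def step(T, tsi):
    """Advance temperature one step: absorbed solar minus emitted longwave."""
    absorbed = tsi * (1.0 - ALBEDO) / 4.0
    emitted = SIGMA * T ** 4
    return T + DT * (absorbed - emitted) / HEAT_CAP

def spin_up(tsi=1360.0, T0=200.0, tol=1e-6, max_steps=200000):
    """Uninitialized start: integrate from an arbitrary cold state until
    the model stops drifting (top-of-atmosphere balance). The returned
    temperature is the model's 'year zero' equilibrium state."""
    T = T0
    for _ in range(max_steps):
        T_new = step(T, tsi)
        if abs(T_new - T) < tol:   # drift below tolerance: equilibrium
            return T_new
        T = T_new
    return T

def run(T_start, forcings):
    """Apply a sequence of TSI forcings from a given starting state."""
    T, traj = T_start, []
    for tsi in forcings:
        T = step(T, tsi)
        traj.append(T)
    return traj

T_eq = spin_up()                                   # equilibrium "year zero"
forcings = [1360.0 + 0.01 * i for i in range(120)] # slowly rising forcing
uninit = run(T_eq, forcings)                       # uninitialized projection
init = run(T_eq + 0.3, forcings)                   # "initialized" near an observed state
```

    The sketch shows the point made in the comment: the initialized run differs from the uninitialized one at first (it starts from the observed anomaly rather than from equilibrium), but the difference decays, so initialization matters mainly for near-term comparisons while the long-run response is set by the forcings.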

    Original URL path: http://climateaudit.org/2013/07/17/more-met-office-hypocrisy/ (2016-02-08)
    Open archived version from archive


