archive-org.com » ORG » C » CLIMATEAUDIT.ORG

Total: 491

  • Post-1980 Proxies « Climate Audit
    used in nearly all the multiproxy studies. I did a standard type chronology, fitting negative exponential curves by... By Steve McIntyre. Also posted in Proxies. Comments: 21

    Cook et al 2004: More Cargo Cult (Mar 14, 2006, 3:09 PM). Reader Bart S has argued that Cook et al (QSR 2004) disposed of the Divergence Problem, the name applied at the NAS panel on March 2-3, 2006 for the problem that, if the proxies do not record late 20th century warming, how can we be sure that they recorded potential earlier warming in the MWP? By Steve McIntyre. Also posted in Divergence, Multiproxy Studies, NAS Panel. Tagged cook 2004, divergence. Comments: 8

    Upside Down Quadratic Proxy Response (Oct 10, 2005, 3:17 PM). David Stockwell has suggested a discussion of nonlinear responses of tree growth to temperature. I've summarized here some observations which I've seen about bristlecones, limber pine, cedars and spruce, all showing an upside-down U-shaped response to temperature. The implications of this type of relationship for the multiproxy project of attempting to reconstruct past temperatures... By Steve McIntyre. Also posted in bristlecones, Divergence, Proxies. Tagged divergence, quadratic, upside down. Comments: 85
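    The upside-down, U-shaped growth response described in the last excerpt is easy to illustrate numerically. The sketch below is a minimal R illustration with made-up numbers (the quadratic form, the optimum temperature and the coefficient are assumptions, not values from the post); it shows why such a response makes ring width ambiguous as a temperature proxy.

    # Minimal sketch (hypothetical numbers): a stylized upside-down-U growth response.
    # Growth peaks at an assumed optimum temperature and falls off on either side,
    # so a single ring-width value can correspond to two different temperatures.
    T_opt  <- 10                                           # assumed optimum temperature (deg C)
    growth <- function(temp) 1 - 0.02 * (temp - T_opt)^2   # stylized quadratic response

    growth(7)    # 0.82
    growth(13)   # 0.82 -- identical growth at 7 C and at 13 C

    # Consequence for reconstruction: inverting ring width back to temperature is not
    # unique once temperatures straddle the optimum, and a linear calibration fitted on
    # the cool side of the optimum will read warm-side years as cool ones.
    temps <- seq(4, 16, by = 0.5)
    plot(temps, growth(temps), type = "b",
         xlab = "Temperature (deg C)", ylab = "Stylized growth index")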

    Original URL path: http://climateaudit.org/category/proxies/post-1980-proxies/ (2016-02-08)
    Open archived version from archive


  • Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates – update « Climate Audit
    the same ocean patch south of Greenland warming, and the opposite pattern of changes off Antarctica. So it would seem a bit surprising if land use forcing were enough to itself initiate glaciation. Maybe it is more likely that LU forcing had enough of an effect in the version of GISS-E2-R used, at least for other CMIP5 runs, with the faulty ocean mixing scheme. Also, Chandler say that GISS-E2-R has a regional cool bias in the upper mid-latitude Atlantic in its preindustrial control run. Whatever the cause, it looks to me as if there is a change in the AMOC involved. As I wrote earlier, whether or not LU run 1 is strictly a rogue, it seems to me that there is a good case for excluding it, since we know the real-world climate system did not behave like this during the 20th century.

opluso | Posted Jan 30, 2016 at 6:59 AM | Permalink | Reply
Has anyone seen the most recent Ganopolski paper that got a big PR push? "Human made climate change suppresses the next ice age": https://www.pik-potsdam.de/news/press-releases/human-made-climate-change-suppresses-the-next-ice-age

opluso | Posted Jan 30, 2016 at 7:01 AM | Permalink | Reply
Oh, my mistake. I didn't notice the url link in your comment. Thanks for the link.

nobodysknowledge | Posted Jan 23, 2016 at 3:37 PM | Permalink | Reply
I can just agree with Marvel in one thing, from her blog: "The climate's sensitivity is hard to nail down, but mine is pretty high." Well, that it is pretty high is an understatement.

Paul Penrose | Posted Jan 23, 2016 at 8:18 PM | Permalink | Reply
When I see some evidence that the software models were written by software experts and have been developed using industry standard best practices, then I will start taking them a bit more seriously. Until then they are about as useful as an uncalibrated piece of lab equipment.

kribaez | Posted Jan 24, 2016 at 1:16 AM | Permalink | Reply
Nic, Re your latest update: Gavin Schmidt noted the heat transport problem in the Russell ocean model in a paper published in March 2014 (http://onlinelibrary.wiley.com/doi/10.1002/2013MS000265/full). It looks like it had not been fixed up to that time. The Miller et al paper was published in June 2014 (http://onlinelibrary.wiley.com/doi/10.1002/2013MS000266/full). As far as I can tell, the Miller paper mentions the existence of the problem but no correction: "Tracer advection is calculated using a linear upstream scheme that updates both the tracer value and its slope within the grid box. The additional calculation of the slope maintains tighter gradients against numerical diffusion. Mesoscale mixing is parameterized according to the Gent-McWilliams scheme, although the along-isopycnal flux was misaligned, resulting in excessive cross-isopycnal diffusion." I think a polite question to the authors is justified. Given a free choice of GCMs, I would not choose to use OHC data from a model with a known ocean heat transport problem. However, it is possible that a corrigendum was issued for the GISS-E2-R results and the data accessible via the CMIP5 portals updated. If so, it would be good to have a pointer to it.

niclewis | Posted Jan 24, 2016 at 4:51 AM | Permalink | Reply
Paul, The Schmidt and Miller papers were submitted at the same time, so I would expect them both to reflect the same position regarding correction or not of the ocean problem in GISS-E2-R. I can find no mention of the problem in the Marvel paper, nor in a paper submitted over a year later about the climate change in GISS ModelE2 under RCP scenarios (http://onlinelibrary.wiley.com/doi/10.1002/2014MS000403/full, 2015). I cannot find signs of any corrigendum for GISS-E2-R results. It is
conceivable that in practice the effects of the ocean problem were small at least in all the main simulation runs I have redownloaded CMIP5 r1i1p1 tas netCDF files for the GISS E2 R Historical simulation They have the same file date 25 March 2011 as those current at the AR5 March 2013 cutoff date kribaez Posted Jan 24 2016 at 2 00 AM Permalink Reply Nic Do you happen to know if and how AIE was included in Miller s All forcings together values for Fi This would not automatically appear in the instantaneous net flux perturbation and would need to be added in either by using the parameterisation algorithm in GISS E2 NINT or by adding in the values calculated from the single forcing case or other I can find no reference in Miller to any such calculation but I may have missed it It must of course be added in for any efficacy calculation to make sense niclewis Posted Jan 24 2016 at 4 08 AM Permalink Reply Paul I don t know for certain that AIE was included in Miller s iRF Fi All forcings together values but I have assumed that it was Miller says the that magnitude of the AIE is tuned using an empirical relation between low cloud cover and the logarithm of aerosol number concentration and that in 2000 the instantaneous AIE at the tropopause is 0 67 W m2 A value for AIE iRF could have been calculated by perturbing the 1850 cloud field used when when computing iRF although there is no mention of doing so in Miller et al Wouldn t adding in values calculated in the single forcing case simply push the question of measuring an iRF for AIE back to that simulation As you will know aerosol indirect effect should not really appear in iRF at all since adjustments by clouds are not instantaneous Hansen 2005 did not show any iRF value for it But there is quite a lot of discussion in Miller et al about aerosol forcing in Miller et al which would all be wrong if the AIE had not been included in their All forcings together measure so Ron Miller seems happy that it was included And my multiple regression results certainly support AIE forcing having been included kribaez Posted Jan 24 2016 at 8 00 AM Permalink Reply Thanks Nic I agree that your multiple regression results support AIE forcing having been included in some guise And certainly what was done in Marvel et al would make no sense if it had not been included so no doubt the co authors believe that it is already included What was going through my mind was the difficulty of assigning any equivalent Fi value to AIE for the historical run Miller makes it clear that he uses pre industrial climate for the evaluation of Fi values This does not require any simulation It just requires activating all of the forcing agents turning on the radiative code at annual intervals and recording the net flux change at the predefined tropopause Because there is no atmospheric simulation involved AIE does not manifest itself in this calculation Strictly speaking it is not a forcing at all but a fast feedback which is unique to tropospheric aerosols Because it is unique to this particular driver as opposed to being a temperature dependent feedback common to all forcing drivers it must be treated as a quasi forcing in order to permit intelligent comparison with other forcings in general and with CO2 forcing in particular Hansen s algorithm for AIE which I described as the parameterisation algorithm in GISS E2 NINT and which you call an empirical relation does permit the indirect effect to be converted into an equivalent forcing So what I suspect was done was that the algorithm was 
switched on together with the radiative code at each time period The problem with this is that the equivalent forcing value is strongly dependent on climate and particularly cloud cover at the time the algorithm is calculating From the above process for abstracting Fi values cloud cover is fixed at pre industrial level The difference in calculated values may be substantial see Hansen 2005 between fixing the cloud cover and allowing it to vary as it did in the actual historic run simulations If this is what Miller did then he should be able to isolate very simply the AIE forcing from the historic run and confirm that it was identical to the single forcing case abstraction of AIE on the same basis of unchanging climate state That then allows a more definitive statement to be made on the difference between the calculated assumed AIE in the historic run and the true AIE which was based on the successively updated climate state and which should be I believe significantly more negative The alternative to which I referred involves analysis of the single forcing run simulation rather than the abstraction of Fi values from the same but I suspect it is not very relevant The indirect forcing can be abstracted by de convolution of the temperature and net flux data since there is only one known direct forcing which is changing I have left a question on realclimate hoping for some clarification of what was actually done kribaez Posted Jan 24 2016 at 8 11 AM Permalink Reply Here is a copy of the comment I left on RealClimate Gavin I would be very grateful if you could respond to the following three questions 1 Do you have available CO2 benchmarking data for GISS E2 R specifically estimates of Fi Fa and ERF for a range of concentrations If not more specifically are you going to support or modify the Fi value of 4 1 which appears in Marvel et al 2 Can you please advise if and how AIE forcing was included in Miller s All forcing together Fi values for the 20th century historic run 3 Can you confirm that the temperature and net flux data for GISS E2 R available via the CMIP5 portals and KNMI Climate Explorer are based on a model corrected to fix the ocean heat transport problem which you identified in the Russell ocean model in your 2014 paper Many Thanks Patrick M Posted Jan 24 2016 at 9 57 AM Permalink Reply If an algorithm can produce results that are clearly rogue then I would imagine it can produce results that are partially rogue as well As a software developer myself I think this creates a situation where a bug becomes a subjective decision In order to make the determination more objective one would need to define rogue more clearly It s sort of a catch 22 when you design code whose purpose is to find out if anomalies will occur in that your code needs to be free to create anomalies which could just as easily be coding logic errors I would think code of this type would always have to have an independent verification method to check predicted anomalies such as reviewing the physical plausibility of the processes involved In other words I think these models should be used to present questions not answers Jit Posted Jan 24 2016 at 11 33 AM Permalink Reply As striking as LU run 1 is fig 4 it looks like it has half the scale bars of fig 1 runs 2 5 2 5 to 2 5 vs 5 to 5 Is this just a matter of the legend not being updated niclewis Posted Jan 24 2016 at 12 02 PM Permalink Reply Fig 4 has the same scale as Fig 3 not as Fig 2 which I assume is what you meant by Fig 1 A version with a 5 C scale is here Jit 
Posted Jan 25 2016 at 4 53 AM Permalink Reply Yes sorry fig 2 is what I meant What I was pointing out was that the scale should be the same on all the single run figures Thank you for the link to the figure with the extended scale This seems to show that a 5 anomaly occurs in the north Atlantic not just a 2 5 one as per fig 4 Of course the blues do not look as dramatic with the colour ramp stretched kenfritsch Posted Jan 26 2016 at 11 17 AM Permalink Reply Going back to Nic s pdf critique of Marvel and then rereading Marvel the criticisms that Nic makes of this paper become clearer to me More importantly in addressing the quality of this paper it is the accumulation of problems that Nic sees in this paper It is that accumulation and not necessarily a single problem pointed out that is the important to judging the validity of the results conclusions of this paper What I have seen in the past with criticism of climate science papers from these blogs like Climate Audit is that an author or defender of the paper will clear up or attempt to clear up a single point and fail to answer acknowledge the many problems We who are critical sometimes concentrate on a single issue without continuing to point to the multitude of issues I would hope that the Marvel authors will address all of Nic s criticisms but if they do not that might well say something also kenfritsch Posted Jan 26 2016 at 11 35 AM Permalink Reply Interesting also that the efficacy measures made by Marvel could in some sense and context be construed as factors required to bring the model sensitivity more in line with the empirical results using mostly observed data and that obtains lower sensitivities Going forward and without knowing the origin for the need of the efficacy measures significantly different than unity one might well conclude that prediction of future temperature increases from AGW would be the same with or without the efficacy measure The Marvel paper gets around this thought by talking about the accident of history and implying that the efficacy measure is very much unique to the recent climate conditions and pointing to the efficacy measure different than unity being related to the non uniformity in the x y and z directions of the global atmosphere of the negatively forcing agents It reminded me of the thought process of some climate scientists implying rather strongly that the divergence of proxy responses in recent years must be related to AGW otherwise of course without an explanation we have to seriously question the proxy responses in past times to temperature never minding that the selection process in most of these temperature reconstructions makes the process flawed from the start drissel Posted Jan 26 2016 at 8 46 PM Permalink Reply As a now retired professional programmer I m astonished that anyone believes that Large opaque computer programs work Large opaque computer programs meet their specifications if any Programs and their specifications accurately represent anything as large complex and poorly understood as world climate Programs and their specifications accurately embody the Physics that we do understand like Conservation of Mass Navier Stokes Equations and on and on Programs and their specifications should ever serve as a basis for public policies that could result in impoverishment starvation etc Several of the computer program output anomalies mentioned by Dr Lewis smell to my practiced nose like program bugs Regards Bill Drissel Frisco TX kenfritsch Posted Jan 27 2016 at 2 26 PM Permalink Reply 
Nic, I have been attempting to find the data for the 6 multiple model runs used to determine the ERF for the individual forcing agents in Marvel. I find only 1 set of data for these forcings for ERF. I was under the impression that ERF and iRF data were both taken from multiple runs.

niclewis | Posted Jan 27, 2016 at 4:35 PM | Permalink | Reply
Ken, Miller says that iRF is determined by measuring the radiative imbalance in the 1850 climate state, as it was before perturbation by any forcing, but with the relevant forcing(s) imposed. That would give the same result for all runs, as the climate state has not changed from preindustrial. So it would just be computed once, I think. For ERF the SST is fixed but the atmosphere is free to evolve. In principle multiple runs would be desirable, but as equilibrium is reached quickly with fixed SST it looks as if they have instead, for each forcing, averaged across 3 decades from the same run. And I don't think they have archived the fixed-SST runs involved; they don't seem to be in the CMIP5 archive.

kenfritsch | Posted Jan 28, 2016 at 3:20 AM | Permalink | Reply
Nic, I was not clear about my confusion with the data used for the ERF and iRF approaches to efficacy determinations in Marvel, but I think I may now have it figured out, if you can verify that my understanding is correct. All the data required for the iRF approach was available to me in the form of annual GMST and OHC for all the model runs and for all the individual forcing agents, and the one set of annual Fi data for each of the forcing agents from Miller 2014. My confusion was with the ERF approach and the source of the GMST and OHC data required to go with the one set of ERF data. As you note, there are ERF data for 3 different decades for all the forcing agents from a single model run. It would have been nice to have data for multiple runs, but it is now my understanding that the same GMST and OHC data used in the iRF approach must have been used in the ERF approach, by using the average delta T and trends for OHC from the decade 1996-2005. That gets me to the multiple runs for the ERF approach and the method used in Marvel to obtain uncertainty for both the iRF and ERF approaches to determining efficacy. Is this understanding correct? I plan to analyze the data using Singular Spectrum Analysis and other analysis approaches.

niclewis | Posted Jan 28, 2016 at 12:32 PM | Permalink
Ken, Yes, you should use all the same separate-run GMST and OHC data with the one set of averaged-over-3-decades ERF data. But as there is only ERF data for year 2000, forcing efficacies have to be calculated from quotients rather than being able to use regression. Marvel's regression-with-intercept over 1906-2005 method is unsatisfactory in any case. You might get better results using data starting in 1850 or 1851 (there is a slight jump) rather than 1900, and TOA radiative imbalance rather than ocean heat content data, for your analysis. I'll try to add such data to that which I have already provided at https://niclewis.wordpress.com/appraising-marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates

kribaez | Posted Jan 28, 2016 at 5:51 AM | Permalink | Reply
I have left another comment on RealClimate for Gavin to mull over (awaiting moderation, copy below), while he is, I trust, assiduously working in the background to answer the previous questions which I have left. Gavin, You wrote: "Dropping outliers just because they don't agree with your preconceived ideas is a classical error in statistics; it's much better to use the spread as a measure of the uncertainty."
Gavin Another classical error in statistics is to attribute the error associated with one property to the wrong variable Work by the RNMI Sybren Drijfhout et al 2015 http www pnas org content 112 43 E5777 abstract confirms that GISS E2 R has the capacity for abrupt climate change in the form of inter alia the local collapse of convection in the North Atlantic In this instance if the results of the rogue run in the single forcing LU cases are due to the abrupt collapse of N Atlantic convection as seems increasingly likely from the data then the dramatically different temperature response in the rogue run has nothing whatsoever to do with the uncertainty in transient efficacy of LU forcing The inclusion of the run leads quite simply to an erroneously inflated calculation of the mean transient efficacy for LU and a misleading confounding of the uncertainty associated with the GCM s internal mechanics with the uncertainty in LU transient efficacy Ultimately the Marvel et al paper seeks to argue that sensitivities estimated from actual observational data are biased low on the grounds that GISS E2 R over the historic period is responding to an overall low weighted average forcing efficacy It then seeks to extend the conclusions drawn from the model to realworld observational studies Since we know from the real observational data that there was not a collapse of N Atlantic convection then quite apart from other methodological questions the inclusion of this run for the LU calculation is impossible to justify and on its own is sufficiently large in its impact to bring the study results into question Applying the same logic any of the 20th Century History runs which exhibited similar abrupt shifts Southern Ocean sea ice Tibetan plateau snow melt and N Atlantic convection which were not observed in the realworld should have also been excluded from the ensemble mean for Marvel et al to have any hope of credibly extending inferences to realworld observational data even if we suspend disbelief with respect to other problems associated with data methods and relevance stevefitzpatrick Posted Jan 28 2016 at 5 20 PM Permalink Reply Paul Another good question for Gavin But I think you are unlikely to get a reply to ANY substantive question about Marvel et al at Real Climate unless it is a question which lends support to the conclusions of Marvel or so silly a question that Gavin can just poke fun Gavin is not going to entertain substantive doubts about Marvel any more than Eric Steig was willing to entertain substantive doubts about continent wide Antarctic warming even after O Donnell et al was published The point of Marvel et al is to raise doubts in a high profile publication about the veracity of the many low empirical estimates of sensitivity so that those empirical estimates can be waved away when public energy policy is discussed Marvel et al is just ammunition in the climate wars IMO its quality and accuracy do matter at all to the authors mpainter Posted Jan 28 2016 at 5 55 PM Permalink Reply I think your last sentence left out the word not and with that I can say ditto to your comment Marvel et al are now at the point where to engage the issue any further only exposes and emphasizes the hollowness of their whole position faulty models and all stevefitzpatrick Posted Jan 29 2016 at 8 38 AM Permalink mpainter Yes I left out the word not stevefitzpatrick Posted Jan 29 2016 at 8 55 AM Permalink Reply Paul Gavin has replied to your comment He completely rejects your suggestion that the single very 
strange land use run is not representative and so should not be included in the analysis. He also challenges you to look at the level of variance in all 200 runs of the study and do your own analysis. Seems to me that a very reasonable argument can be made about the statistical validity of any 5-run ensemble that includes a single strange run, if you know the variability of a much larger group (e.g. Gavin's 200 runs).

AntonyIndia | Posted Jan 29, 2016 at 9:32 AM | Permalink | Reply
Is Gavin admitting something there: the basic result, which is that the historical runs don't have the same forcing response pattern as the response to CO2 alone?

stevefitzpatrick | Posted Jan 29, 2016 at 11:29 AM | Permalink | Reply
Paul K, After thinking a bit more about Gavin's reply, it seems to me it should be possible to show the single strange run for land use (run 1 from figure 5 in Nic's original post) is likely to be a statistical fluke related to model behavior, and not at all representative of the actual effect of land use. If you calculate the slope of each of the five land use runs, and then calculate an unbiased estimate of the standard deviation of the slopes from runs 2 through 5, then the slope of run 1 may very well be outside the 95% inclusive probability window. That is, run 1 is unlikely to be a member of the same normally distributed sample population as runs 2 through 5, and so is more likely due to an unrelated effect which just was not present in the other LU runs. A drastic and wildly unrealistic change in North Atlantic ocean temperature would of course be a plausible unrelated effect. Nic, Can you point to where the data used to generate figure 5 in your original post is located?

niclewis | Posted Jan 29, 2016 at 12:44 PM | Permalink | Reply
steve, Interesting idea. The data is available in a spreadsheet via the link given at the end of my original post: https://niclewis.wordpress.com/appraising-marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates. The graph is in the tas sheet at cell BN100. Right-click in the graph and choose Select data to see which ranges the data comes from.

stevefitzpatrick | Posted Jan 29, 2016 at 2:19 PM | Permalink
Nic, Thanks. The slope statistics are: Relative slope R1: 0.0369, R2: 0.0066, R3: 0.0144, R4: 0.0098, R5: 0.0085. Mean (2-5): 0.00982. Std deviation (2-5): 0.00332. R1: 8.15 standard deviations from the mean. The estimate of the standard deviation is the unbiased estimate, using n-1 in the denominator. So it seems to me unlikely that R1 is in the same population as R2 to R5, especially in light of the peculiar pattern of cooling in Run 1. Of course Gavin, in his best Steigian imitation, is going to discount any slope analysis as irrelevant, or will insist, ignoring any reasonable interpretation of the actual data, that the correct analysis is to include all 5 runs in the estimate of the mean and standard deviation for the slopes. In this case the statistics become: Rel slope R1: 0.0369, R2: 0.0066, R3: 0.0144, R4: 0.0098, R5: 0.0085. Mean: 0.01524. Std deviation: 0.0124. R1: 1.74 standard deviations from the mean. Which puts R1 just inside of the credible range, if you choose to ignore the bizarre pattern of cooling in the North Atlantic, which is absolutely not a credible response to a tiny forcing from land use change. But such things seem to pass as credible when the results match the desired outcome.

dynam01 | Posted Jan 28, 2016 at 3:24 PM | Permalink | Reply
Reblogged this on I Didn't Ask To Be a Blog.

kenfritsch | Posted Jan 30, 2016 at 11:10 AM | Permalink | Reply
My apologies if these graphs that I have linked below have been displayed by
someone previously The graphs represent the regression of the Marvel GMST versus forcings and plotted on a yearly basis For each forcing agent I have graphed together the Ensemble mean and the 5 model runs I think these representations paint a different picture than using decadal averages Notice that using different parts of the forcing range would give very different trends Where the forcing is changing with time in a trending manner then one could state that the trends would be very different depending on the decade used I have also calculated the trend statistics from the yearly results graphed in the links and while the p values can be impressive over the range of forcing as noted above the trends calculated within parts of the range can change dramatically I have not yet applied the auto correlation ar1 to simulations to determine the confidence intervals for these trends but when I finish I will report the results here That model run for land use in question is very different than the other runs not only in the trend value but in the p value of that trend One can use an alternative method to determine statistical differences by using the model run trend values and the confidence intervals derived from Monte Carlo simulations as described above I have not done that yet but I would predict at this point that there would be a significant difference between the run in question if the confidence intervals for the other land use runs are not too wide Link for GHG and O3 Link for Solar and Land Use Link for Volcanic and Anthro Aerosol stevefitzpatrick Posted Jan 30 2016 at 4 52 PM Permalink Reply Ken Fritsch Yes LU Run 1 is wildly different from the others I doubt using decadal averages like Nic did makes much difference in the trends Run 1 is nothing like the other four no matter how you look at it I think the argument needs to be made that inferring anything about the efficacy of LU forcing with Run 1 included will lead to spurious results BTW I gently suggest that you use the same y axis scale when comparing the trends for the five LU runs Using different y axis scales obscures how different Run 1 is from the others kenfritsch Posted Jan 31 2016 at 8 20 AM Permalink Reply Steve Marvel used decadal averages and Nic merely reported the results Decadal averages will graphically paint a very different picture than using the individual variation in individual runs with yearly data points The calculated confidence intervals need to use yearly data Using decadal averages for that purpose would require some adjustments The differing y axis ranges were just to see if you were paying attention I am finishing the CI calculations and will post them here By my methods the land use Run 1 slope trend is statistically very different than the other runs It would appear than some model runs for the various forcing agents have slopes not significantly different than zero Maybe we can get Gavin to argue for the validity of that happening if we had more than one realization of the earth s climate kenfritsch Posted Jan 30 2016 at 1 00 PM Permalink Reply Nic I have a post in moderation that has graphs with regressions of the temperature response to 6 forcing agents from the Marvel data It is on a yearly basis and I think shows the data in a different light than how it was presented in Marvel Steve unmoderated triggered by number of links kenfritsch Posted Jan 30 2016 at 1 15 PM Permalink Reply I should have added that the large p values to which I refer in my moderated post should be large negative values 
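The slope statistics quoted by stevefitzpatrick above are straightforward to check. The R sketch below simply re-computes them from the five relative slopes as printed in the comment (the scrape has dropped any signs, but the standard-deviation distances are unaffected provided all five slopes share the same sign); it checks the quoted arithmetic and is not a reproduction of the underlying spreadsheet calculation.

    # Re-computing the slope statistics quoted in the comment above.
    slopes <- c(R1 = 0.0369, R2 = 0.0066, R3 = 0.0144, R4 = 0.0098, R5 = 0.0085)

    # Runs 2-5 as the reference population; sd() uses the unbiased (n - 1) estimate.
    m25 <- mean(slopes[2:5])               # 0.00982
    s25 <- sd(slopes[2:5])                 # 0.00332
    (slopes["R1"] - m25) / s25             # ~8.15 standard deviations from the mean

    # Including all five runs in the mean and standard deviation instead:
    m5 <- mean(slopes)                     # 0.01524
    s5 <- sd(slopes)                       # 0.0124
    (slopes["R1"] - m5) / s5               # ~1.74 standard deviations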
kenfritsch Posted Jan 31 2016 at 1 35 PM Permalink Reply In the link below is a table with my analysis details of the regression of temperature versus forcing for the 6 forcing agents It shows the trend 95 plus minus confidence intervals CIs the intercept the p values for the trend and intercept unadjusted for auto correlation and the ar1 values used for adjusting the CIs for auto correlation using 10 000 Monte Carlo simulations Notice that slope of the trend in each case can be used to ratio amongst the forcing agents to relate back to the relative efficacy values found in Marvel My results for some of the forcing agents is in general agreement with those from Marvel but not all There is a large difference for anthropogenic aerosols I used the sum of the direct and indirect aerosol values provided by Marvel and that sum when regressed against the aerosol temperature gave very good correlations with very low trend p values My slope values had only one forcing agent with a higher value than GHG and that was Land Use Land Use has slope values for the 5 runs that vary greatly and the CIs for those individual runs are large but show that Run1 is very significantly different than the other runs Run 1 also has a trend p value that is much lower than the other runs and the intercept is significantly different I would judge from the large variations within the Land Use that regression iRF versus temperature for that forcing agent makes little sense GHG and Volcanic were the only forcing agents that had CIs that were a low percentage of the slope values Why my calculations give such relatively low values of forcing agents slopes compared to GHG with the noted exception of Land Use is a puzzle to me If I have not made a mistake here it would also throw huge doubts into the use the instantaneous forcing and regressions to determine efficacy niclewis Posted Jan 31 2016 at 3 22 PM Permalink Ken I agree your slope trends apart from Anthro aerosol where I think you may have made some mistake based on regressing on annual 1900 2005 data Marvel used decadal data which gives somewhat different results If not following their method I think there is merit in using data for the full 1850 2005 simulation runs I ve now uploaded that to my web pages see the link in the update to this article above The low efficacy for volcanic forcing is expected and partly reflects the delay in GMST reponse to a forcing impulse which matters here as volcanic forcing is impulse like With decadal data there is much less distortion as a result of the delay The same applies to an extent to solar forcing kenfritsch Posted Jan 31 2016 at 4 54 PM Permalink Nic your points are well taken and now I will attempt to get my head around what you state here Would a delay in temperature response change a trend measured on an annual versus decadal basis There would be a lag but the response would eventually be manifested in the temperature I think If I used the start and end points only the trend should be near the same Obviously there are differences as you indicate you have made the annual and decadal calculations But is it caused by the lag effect I ll have another look at my AA trend calculations There was a good correlation and I guess that delayed my looking I notice from my plot of temperature versus AA that if I regressed only on the lower levels of forcing I would obtain a much steeper slope and closer to the expected value kenfritsch Posted Jan 31 2016 at 5 28 PM Permalink Nic I may have had a problem with the instantaneous part of 
the forcing in my thinking. If I am measuring temperature response to forcing for a given year, and if all the forcing were to occur and was reported in that year but only part of the temperature response occurs, then that year would show a lower than expected temperature. The next year I would have no forcing but a partial continuation of the temperature response, and that year would have a higher than expected temperature given no forcing is reported. I was unfortunately thinking about an accumulation of forcing and temperature responses for my starting and ending point thought. Maybe that is what Willis was thinking. A sufficiently lagging response might even make a decadal average of instantaneous forcing a poor method of determining efficacy. Even an event that forces in one decade and is mostly measured in another could create a problem. Conclusion: use ERF.

kenfritsch | Posted Jan 31, 2016 at 7:21 PM | Permalink
I found my error with AA and the revised data and graph are in the link below. Considering the CIs for the GHG and AA, the mean trend slopes for GHG and AA are not that different. Next step is to

    Original URL path: http://climateaudit.org/2016/01/21/marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates-an-update/?replytocom=766275 (2016-02-08)
    Open archived version from archive
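    kenfritsch's closing comments in the thread above describe adjusting trend confidence intervals for lag-1 autocorrelation using 10,000 Monte Carlo simulations. His code and data are not shown in the excerpt, so the R sketch below is only a generic illustration of that kind of adjustment on synthetic data (the series length, trend, noise level and AR(1) coefficient are all assumptions, not values from the Marvel runs): fit an OLS trend, estimate the residual lag-1 autocorrelation, and compare the fitted slope against the spread of slopes obtained from simulated trend-free AR(1) noise.

    # Generic sketch of an AR(1) Monte Carlo adjustment to a trend confidence interval.
    # Synthetic data only; none of these numbers come from the Marvel runs.
    set.seed(1)
    n <- 106                                                      # e.g. annual values, 1900-2005
    x <- seq_len(n)
    y <- 0.005 * x + arima.sim(list(ar = 0.5), n = n, sd = 0.1)   # trend plus AR(1) noise

    fit   <- lm(y ~ x)
    slope <- coef(fit)[2]                            # OLS trend estimate
    res   <- residuals(fit)
    ar1   <- acf(res, plot = FALSE)$acf[2]           # lag-1 autocorrelation of residuals
    innov <- sd(res) * sqrt(1 - ar1^2)               # innovation sd matching the residual variance

    # Slopes fitted to 10,000 trend-free AR(1) noise series with the same ar1 and variance
    null_slopes <- replicate(10000, {
      e <- arima.sim(list(ar = ar1), n = n, sd = innov)
      coef(lm(e ~ x))[2]
    })

    quantile(null_slopes, c(0.025, 0.975))           # autocorrelation-adjusted 95% window for a zero trend
    slope                                            # the trend is "significant" if it falls outside that window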

  • Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates – update « Climate Audit
    ocean patch south of Greenland warming and the opposite pattern of changes off Antarctica So it would seem a bit surprising if land use forcing were enough to itself initiate glaciation Maybe it is more likely that LU forcing had enough of an effect in the version of GISS E2 R used at least for other CMIP5 runs with the faulty ocean mixing scheme Also Chandler say that GISS E2 R has a regional cool bias in the upper mid latitude Atlantic in its preindustrial control run Whatever the cause it looks to me as if there is a change in the AMOC involved As I wrote earlier whether or not LU run 1 is strictly a rogue it seems to me that there is a good case for excluding it since we know the real world climate system did not behave like this during the 20th century opluso Posted Jan 30 2016 at 6 59 AM Permalink Reply Has anyone seen the most recent Ganopolski paper that got a big PR push Human made climate change suppresses the next ice age https www pik potsdam de news press releases human made climate change suppresses the next ice age opluso Posted Jan 30 2016 at 7 01 AM Permalink Reply Oh my mistake I didn t notice the url link in your comment Thanks for the link nobodysknowledge Posted Jan 23 2016 at 3 37 PM Permalink Reply I can just agree with Marvel in one thing from her blog The climate s sensitivity is hard to nail down but mine is pretty high Well that it is pretty high is an understatement Paul Penrose Posted Jan 23 2016 at 8 18 PM Permalink Reply When I see some evidence that the software models were written by software experts and have been developed using industry standard best practices then I will start taking them a bit more seriously Until then they are about as useful as an uncalibrated piece of lab equipment kribaez Posted Jan 24 2016 at 1 16 AM Permalink Reply Nic Re your latest update Gavin Schmidt noted the heat transport problem in the Russel ocean model in a paper published in March 2014 http onlinelibrary wiley com doi 10 1002 2013MS000265 full It looks like it had not been fixed up to that time The Miller et al paper was published in June 2014 http onlinelibrary wiley com doi 10 1002 2013MS000266 full As far as I can tell the Miller paper mentions the existence of the problem but no correction Tracer advection is calculated using a linear upstream scheme that updates both the tracer value and its slope within the grid box The additional calculation of the slope maintains tighter gradients against numerical diffusion Mesoscale mixing is parameterized according to the Gent McWilliams scheme although the along isopycnal flux was misaligned resulting in excessive cross isopycnal diffusion I think a polite question to the authors is justified Given a free choice of GCMs I would not choose to use OHC data from a model with a known ocean heat transport problem However it is possible that a corrigendum was issued for the GISS E2 R results and the data accessible via the CMIP5 portals updated If so it would be good to have a pointer to it niclewis Posted Jan 24 2016 at 4 51 AM Permalink Reply Paul The Schmidt and Miller papers were submitted at the same time so I would expect them both to reflect the same position regrading correction or not of the ocean problem in GISS E2 R I can find no mention of the problem in the Marvel paper nor in a paper submitted over a year later about the climate change in GISS ModelE2 under RCP scenarios http onlinelibrary wiley com doi 10 1002 2014MS000403 full 2015 I cannot find signs of any corrigendum for GISS E2 R results It is conceivable 
that in practice the effects of the ocean problem were small at least in all the main simulation runs I have redownloaded CMIP5 r1i1p1 tas netCDF files for the GISS E2 R Historical simulation They have the same file date 25 March 2011 as those current at the AR5 March 2013 cutoff date kribaez Posted Jan 24 2016 at 2 00 AM Permalink Reply Nic Do you happen to know if and how AIE was included in Miller s All forcings together values for Fi This would not automatically appear in the instantaneous net flux perturbation and would need to be added in either by using the parameterisation algorithm in GISS E2 NINT or by adding in the values calculated from the single forcing case or other I can find no reference in Miller to any such calculation but I may have missed it It must of course be added in for any efficacy calculation to make sense niclewis Posted Jan 24 2016 at 4 08 AM Permalink Reply Paul I don t know for certain that AIE was included in Miller s iRF Fi All forcings together values but I have assumed that it was Miller says the that magnitude of the AIE is tuned using an empirical relation between low cloud cover and the logarithm of aerosol number concentration and that in 2000 the instantaneous AIE at the tropopause is 0 67 W m2 A value for AIE iRF could have been calculated by perturbing the 1850 cloud field used when when computing iRF although there is no mention of doing so in Miller et al Wouldn t adding in values calculated in the single forcing case simply push the question of measuring an iRF for AIE back to that simulation As you will know aerosol indirect effect should not really appear in iRF at all since adjustments by clouds are not instantaneous Hansen 2005 did not show any iRF value for it But there is quite a lot of discussion in Miller et al about aerosol forcing in Miller et al which would all be wrong if the AIE had not been included in their All forcings together measure so Ron Miller seems happy that it was included And my multiple regression results certainly support AIE forcing having been included kribaez Posted Jan 24 2016 at 8 00 AM Permalink Reply Thanks Nic I agree that your multiple regression results support AIE forcing having been included in some guise And certainly what was done in Marvel et al would make no sense if it had not been included so no doubt the co authors believe that it is already included What was going through my mind was the difficulty of assigning any equivalent Fi value to AIE for the historical run Miller makes it clear that he uses pre industrial climate for the evaluation of Fi values This does not require any simulation It just requires activating all of the forcing agents turning on the radiative code at annual intervals and recording the net flux change at the predefined tropopause Because there is no atmospheric simulation involved AIE does not manifest itself in this calculation Strictly speaking it is not a forcing at all but a fast feedback which is unique to tropospheric aerosols Because it is unique to this particular driver as opposed to being a temperature dependent feedback common to all forcing drivers it must be treated as a quasi forcing in order to permit intelligent comparison with other forcings in general and with CO2 forcing in particular Hansen s algorithm for AIE which I described as the parameterisation algorithm in GISS E2 NINT and which you call an empirical relation does permit the indirect effect to be converted into an equivalent forcing So what I suspect was done was that the algorithm was switched on 
together with the radiative code at each time period The problem with this is that the equivalent forcing value is strongly dependent on climate and particularly cloud cover at the time the algorithm is calculating From the above process for abstracting Fi values cloud cover is fixed at pre industrial level The difference in calculated values may be substantial see Hansen 2005 between fixing the cloud cover and allowing it to vary as it did in the actual historic run simulations If this is what Miller did then he should be able to isolate very simply the AIE forcing from the historic run and confirm that it was identical to the single forcing case abstraction of AIE on the same basis of unchanging climate state That then allows a more definitive statement to be made on the difference between the calculated assumed AIE in the historic run and the true AIE which was based on the successively updated climate state and which should be I believe significantly more negative The alternative to which I referred involves analysis of the single forcing run simulation rather than the abstraction of Fi values from the same but I suspect it is not very relevant The indirect forcing can be abstracted by de convolution of the temperature and net flux data since there is only one known direct forcing which is changing I have left a question on realclimate hoping for some clarification of what was actually done kribaez Posted Jan 24 2016 at 8 11 AM Permalink Reply Here is a copy of the comment I left on RealClimate Gavin I would be very grateful if you could respond to the following three questions 1 Do you have available CO2 benchmarking data for GISS E2 R specifically estimates of Fi Fa and ERF for a range of concentrations If not more specifically are you going to support or modify the Fi value of 4 1 which appears in Marvel et al 2 Can you please advise if and how AIE forcing was included in Miller s All forcing together Fi values for the 20th century historic run 3 Can you confirm that the temperature and net flux data for GISS E2 R available via the CMIP5 portals and KNMI Climate Explorer are based on a model corrected to fix the ocean heat transport problem which you identified in the Russell ocean model in your 2014 paper Many Thanks Patrick M Posted Jan 24 2016 at 9 57 AM Permalink Reply If an algorithm can produce results that are clearly rogue then I would imagine it can produce results that are partially rogue as well As a software developer myself I think this creates a situation where a bug becomes a subjective decision In order to make the determination more objective one would need to define rogue more clearly It s sort of a catch 22 when you design code whose purpose is to find out if anomalies will occur in that your code needs to be free to create anomalies which could just as easily be coding logic errors I would think code of this type would always have to have an independent verification method to check predicted anomalies such as reviewing the physical plausibility of the processes involved In other words I think these models should be used to present questions not answers Jit Posted Jan 24 2016 at 11 33 AM Permalink Reply As striking as LU run 1 is fig 4 it looks like it has half the scale bars of fig 1 runs 2 5 2 5 to 2 5 vs 5 to 5 Is this just a matter of the legend not being updated niclewis Posted Jan 24 2016 at 12 02 PM Permalink Reply Fig 4 has the same scale as Fig 3 not as Fig 2 which I assume is what you meant by Fig 1 A version with a 5 C scale is here Jit Posted Jan 25 
2016 at 4 53 AM Permalink Reply Yes sorry fig 2 is what I meant What I was pointing out was that the scale should be the same on all the single run figures Thank you for the link to the figure with the extended scale This seems to show that a 5 anomaly occurs in the north Atlantic not just a 2 5 one as per fig 4 Of course the blues do not look as dramatic with the colour ramp stretched kenfritsch Posted Jan 26 2016 at 11 17 AM Permalink Reply Going back to Nic s pdf critique of Marvel and then rereading Marvel the criticisms that Nic makes of this paper become clearer to me More importantly in addressing the quality of this paper it is the accumulation of problems that Nic sees in this paper It is that accumulation and not necessarily a single problem pointed out that is the important to judging the validity of the results conclusions of this paper What I have seen in the past with criticism of climate science papers from these blogs like Climate Audit is that an author or defender of the paper will clear up or attempt to clear up a single point and fail to answer acknowledge the many problems We who are critical sometimes concentrate on a single issue without continuing to point to the multitude of issues I would hope that the Marvel authors will address all of Nic s criticisms but if they do not that might well say something also kenfritsch Posted Jan 26 2016 at 11 35 AM Permalink Reply Interesting also that the efficacy measures made by Marvel could in some sense and context be construed as factors required to bring the model sensitivity more in line with the empirical results using mostly observed data and that obtains lower sensitivities Going forward and without knowing the origin for the need of the efficacy measures significantly different than unity one might well conclude that prediction of future temperature increases from AGW would be the same with or without the efficacy measure The Marvel paper gets around this thought by talking about the accident of history and implying that the efficacy measure is very much unique to the recent climate conditions and pointing to the efficacy measure different than unity being related to the non uniformity in the x y and z directions of the global atmosphere of the negatively forcing agents It reminded me of the thought process of some climate scientists implying rather strongly that the divergence of proxy responses in recent years must be related to AGW otherwise of course without an explanation we have to seriously question the proxy responses in past times to temperature never minding that the selection process in most of these temperature reconstructions makes the process flawed from the start drissel Posted Jan 26 2016 at 8 46 PM Permalink Reply As a now retired professional programmer I m astonished that anyone believes that Large opaque computer programs work Large opaque computer programs meet their specifications if any Programs and their specifications accurately represent anything as large complex and poorly understood as world climate Programs and their specifications accurately embody the Physics that we do understand like Conservation of Mass Navier Stokes Equations and on and on Programs and their specifications should ever serve as a basis for public policies that could result in impoverishment starvation etc Several of the computer program output anomalies mentioned by Dr Lewis smell to my practiced nose like program bugs Regards Bill Drissel Frisco TX kenfritsch Posted Jan 27 2016 at 2 26 PM Permalink Reply Nic I have 
been attempting to find the data for the 6 multiple model runs used to determine the ERF for the individual forcing agents in Marvel I find only 1 set of data for these forcings for ERF I was under the impression that ERF and iRF data were both taken from multiple runs niclewis Posted Jan 27 2016 at 4 35 PM Permalink Reply Ken Miller says that iRF is determined by measuring th eradiative imbalance in the 1850 climate state as it was before perturbation by any forcing but with the relevant forcing s imposed That would give the same result for all runs as the climate state has not changed from preindustial So it would just be computed once I think For ERF the SST is fixed but the atmosphere is free to evolve In principle multiple runs would be desirable but as equilibrium is reached quickly with fixed SST it looks aas if they have instead for each forcing averaged across 3 decades from the same run And I don t think they have archived the fixed SST runs involved they don t seem to be in the CMIP5 archive kenfritsch Posted Jan 28 2016 at 3 20 AM Permalink Reply Nic I was not clear about my confusion with the data used for ERF and iRF approaches to efficacy determinations in Marvel but I think I may now have it figured out if you can verify that my understanding is correct All the data required for the iRF approach was available to me in the form of annual GMST and OHC for all the model runs and for all the individual forcing agents and the one set of annual Fi data for each of the forcing agents from Miller 2014 My confusion was with the ERF approach and the source of the GMST and OHC data required to go with the one set of ERF data As you note there are ERF data for 3 different decades for all the forcing agents from a single model run It would have been nice to have data for multiple runs but it is now my understanding that the same GMST and OHC data used in the the iRF approach must have been used in the ERF approach by using the average delta T and trends for OHC from the decade 1996 2005 That gets me to the multiple runs for the ERF approach and the method used in Marvel to obtain uncertainty for both the iRF and ERF approaches to determining efficacy Is this understanding correct I plan to analyze the data using Singular Spectrum Analysis and other analysis approaches niclewis Posted Jan 28 2016 at 12 32 PM Permalink Ken Yes you should use all the same separate run GMST and OHC data with the one set of averaged over 3 decades ERF data But as there is only ERF data for year 2000 forcing efficacies have to be calculated from quotients rather than being able to use regression Marvel s regression with intercept over 1906 2005 method is unsatisfactory in any case You might get better results using data starting in 1850 or 1851 there is a slight jump rather than 1900 and TOA radiative imbalance rather than ocean heat content data for your analysis I ll try to add such data to that which I have already provided at https niclewis wordpress com appraising marvel et al implications of forcing efficacies for climate sensitivity estimates kribaez Posted Jan 28 2016 at 5 51 AM Permalink Reply I have left another comment on RealClimate for Gavin to mull over awaiting moderation copy below while he is I trust assiduously working in the background to answer the previous questions which I have left Gavin You wrote Dropping outliers just because they don t agree with your preconceived ideas is a classical error in statistics it s much better to use the spread as a measure of the uncertainty Gavin Another 
classical error in statistics is to attribute the error associated with one property to the wrong variable Work by the RNMI Sybren Drijfhout et al 2015 http www pnas org content 112 43 E5777 abstract confirms that GISS E2 R has the capacity for abrupt climate change in the form of inter alia the local collapse of convection in the North Atlantic In this instance if the results of the rogue run in the single forcing LU cases are due to the abrupt collapse of N Atlantic convection as seems increasingly likely from the data then the dramatically different temperature response in the rogue run has nothing whatsoever to do with the uncertainty in transient efficacy of LU forcing The inclusion of the run leads quite simply to an erroneously inflated calculation of the mean transient efficacy for LU and a misleading confounding of the uncertainty associated with the GCM s internal mechanics with the uncertainty in LU transient efficacy Ultimately the Marvel et al paper seeks to argue that sensitivities estimated from actual observational data are biased low on the grounds that GISS E2 R over the historic period is responding to an overall low weighted average forcing efficacy It then seeks to extend the conclusions drawn from the model to realworld observational studies Since we know from the real observational data that there was not a collapse of N Atlantic convection then quite apart from other methodological questions the inclusion of this run for the LU calculation is impossible to justify and on its own is sufficiently large in its impact to bring the study results into question Applying the same logic any of the 20th Century History runs which exhibited similar abrupt shifts Southern Ocean sea ice Tibetan plateau snow melt and N Atlantic convection which were not observed in the realworld should have also been excluded from the ensemble mean for Marvel et al to have any hope of credibly extending inferences to realworld observational data even if we suspend disbelief with respect to other problems associated with data methods and relevance stevefitzpatrick Posted Jan 28 2016 at 5 20 PM Permalink Reply Paul Another good question for Gavin But I think you are unlikely to get a reply to ANY substantive question about Marvel et al at Real Climate unless it is a question which lends support to the conclusions of Marvel or so silly a question that Gavin can just poke fun Gavin is not going to entertain substantive doubts about Marvel any more than Eric Steig was willing to entertain substantive doubts about continent wide Antarctic warming even after O Donnell et al was published The point of Marvel et al is to raise doubts in a high profile publication about the veracity of the many low empirical estimates of sensitivity so that those empirical estimates can be waved away when public energy policy is discussed Marvel et al is just ammunition in the climate wars IMO its quality and accuracy do matter at all to the authors mpainter Posted Jan 28 2016 at 5 55 PM Permalink Reply I think your last sentence left out the word not and with that I can say ditto to your comment Marvel et al are now at the point where to engage the issue any further only exposes and emphasizes the hollowness of their whole position faulty models and all stevefitzpatrick Posted Jan 29 2016 at 8 38 AM Permalink mpainter Yes I left out the word not stevefitzpatrick Posted Jan 29 2016 at 8 55 AM Permalink Reply Paul Gavin has replied to your comment He completely rejects your suggestion that the single very strange land use 
run is not representative and so should not be included in the analysis He also challenges you to look at the level of variance in all 200 runs of the study and do your own analysis Seems to me that a very reasonable argument can be made about the statistical validity of any 5 run ensemble that includes a single strange run if you know the variability of a much larger group eg Gavin s 200 runs AntonyIndia Posted Jan 29 2016 at 9 32 AM Permalink Reply Is Gavin admitting something there the basic result which is that the historical runs don t have the same forcing response pattern as the response to CO2 alone stevefitzpatrick Posted Jan 29 2016 at 11 29 AM Permalink Reply Paul K After thinking a bit more about Gavin s reply it seems to me it should be possible to show the single strange run for land use run 1 from figure 5 in Nic s original post is likely to be a statistical fluke related to model behavior and not at all representative of the actual effect of land use If you calculate the slope of each of the five land use runs and then calculate an unbiased estimate of the standard deviation of the slopes from runs 2 through 5 then the slope of run 1 may very well be outside the 95% inclusive probability window That is run 1 is unlikely to be a member of the same normally distributed sample population as runs 2 through 5 and so is more likely due to an unrelated effect which just was not present in the other LU runs A drastic and wildly unrealistic change in North Atlantic ocean temperature would of course be a plausible unrelated effect Nic Can you point to where the data used to generate figure 5 in your original post is located niclewis Posted Jan 29 2016 at 12 44 PM Permalink Reply steve Interesting idea The data is available in a spreadsheet via the link given at the end of my original post https niclewis wordpress com appraising marvel et al implications of forcing efficacies for climate sensitivity estimates The graph is in the tas sheet at cell BN100 Right click in the graph and choose Select data to see which ranges the data comes from stevefitzpatrick Posted Jan 29 2016 at 2 19 PM Permalink Nic Thanks The slope statistics are Relative slope R1 0.0369 R2 0.0066 R3 0.0144 R4 0.0098 R5 0.0085 Mean (2-5) 0.00982 Std deviation (2-5) 0.00332 R1 Standard Deviations from the mean 8.15 The estimate of the standard deviation is the unbiased estimate using n-1 in the denominator So it seems to me unlikely that R1 is in the same population as R2 to R5 especially in light of the peculiar pattern of cooling in Run 1 Of course Gavin in his best Steigian imitation is going to discount any slope analysis as irrelevant or will insist ignoring any reasonable interpretation of the actual data that the correct analysis is to include all 5 runs in the estimate of the mean and standard deviation for the slopes In this case the statistics become Rel slope R1 0.0369 R2 0.0066 R3 0.0144 R4 0.0098 R5 0.0085 Mean 0.01524 Std deviation 0.0124 R1 Standard Deviations from the mean 1.74 Which puts R1 just inside of the credible range if you choose to ignore the bizarre pattern of cooling in the North Atlantic which is absolutely not a credible response to a tiny forcing from land use change But such things seem to pass as credible when the results match the desired outcome
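The two sets of statistics quoted just above can be checked in a few lines. The Python sketch below simply recomputes the mean, the unbiased n-1 standard deviation and the distance of Run 1 from the mean, first treating runs 2 to 5 as the reference population and then including Run 1, using the relative slope values quoted in the comment (signs omitted since only the spread matters); to rounding it reproduces the roughly 8 standard deviation and 1.74 standard deviation figures.

import numpy as np

# Relative slopes of the five GISS E2 R land use runs, as quoted above (R1..R5).
slopes = np.array([0.0369, 0.0066, 0.0144, 0.0098, 0.0085])

# Case 1: treat runs 2-5 as the reference population and ask how far out Run 1 lies.
mean_2to5 = slopes[1:].mean()                        # about 0.00982
sd_2to5 = slopes[1:].std(ddof=1)                     # about 0.00332 (unbiased, n-1)
z_run1_excl = abs(slopes[0] - mean_2to5) / sd_2to5   # roughly 8 standard deviations

# Case 2: include Run 1 in the mean and standard deviation.
mean_all = slopes.mean()                             # about 0.01524
sd_all = slopes.std(ddof=1)                          # about 0.0124
z_run1_incl = abs(slopes[0] - mean_all) / sd_all     # about 1.74 standard deviations

print(z_run1_excl, z_run1_incl)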
dynam01 Posted Jan 28 2016 at 3 24 PM Permalink Reply Reblogged this on I Didn t Ask To Be a Blog kenfritsch Posted Jan 30 2016 at 11 10 AM Permalink Reply My apologies if these graphs that I have linked below have been displayed by someone previously The graphs represent the regression of the Marvel GMST versus forcings plotted on a yearly basis For each forcing agent I have graphed together the Ensemble mean and the 5 model runs I think these representations paint a different picture than using decadal averages Notice that using different parts of the forcing range would give very different trends Where the forcing is changing with time in a trending manner then one could state that the trends would be very different depending on the decade used I have also calculated the trend statistics from the yearly results graphed in the links and while the p values can be impressive over the range of forcing as noted above the trends calculated within parts of the range can change dramatically I have not yet applied the auto correlation (ar1) adjustment via simulations to determine the confidence intervals for these trends but when I finish I will report the results here That model run for land use in question is very different than the other runs not only in the trend value but in the p value of that trend One can use an alternative method to determine statistical differences by using the model run trend values and the confidence intervals derived from Monte Carlo simulations as described above I have not done that yet but I would predict at this point that there would be a significant difference between the run in question and the other runs if the confidence intervals for the other land use runs are not too wide Link for GHG and O3 Link for Solar and Land Use Link for Volcanic and Anthro Aerosol stevefitzpatrick Posted Jan 30 2016 at 4 52 PM Permalink Reply Ken Fritsch Yes LU Run 1 is wildly different from the others I doubt using decadal averages like Nic did makes much difference in the trends Run 1 is nothing like the other four no matter how you look at it I think the argument needs to be made that inferring anything about the efficacy of LU forcing with Run 1 included will lead to spurious results BTW I gently suggest that you use the same y axis scale when comparing the trends for the five LU runs Using different y axis scales obscures how different Run 1 is from the others kenfritsch Posted Jan 31 2016 at 8 20 AM Permalink Reply Steve Marvel used decadal averages and Nic merely reported the results Decadal averages will graphically paint a very different picture than using the individual variation in individual runs with yearly data points The calculated confidence intervals need to use yearly data Using decadal averages for that purpose would require some adjustments The differing y axis ranges were just to see if you were paying attention I am finishing the CI calculations and will post them here By my methods the land use Run 1 slope trend is statistically very different than the other runs It would appear that some model runs for the various forcing agents have slopes not significantly different than zero Maybe we can get Gavin to argue for the validity of that happening if we had more than one realization of the earth s climate kenfritsch Posted Jan 30 2016 at 1 00 PM Permalink Reply Nic I have a post in moderation that has graphs with regressions of the temperature response to 6 forcing agents from the Marvel data It is on a yearly basis and I think shows the data in a different light than how it was presented in Marvel Steve unmoderated triggered by number of links kenfritsch Posted Jan 30 2016 at 1 15 PM Permalink Reply I should have added that the large p values to which I refer in my moderated post should be large negative values
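For readers wanting to reproduce the kind of autocorrelation adjusted confidence interval kenfritsch describes here and tabulates in the next comment, the following is one minimal sketch assuming a simple AR(1) error model; it is an illustration of the general Monte Carlo approach, not kenfritsch's actual script, and the example series at the bottom is invented. The idea is to fit the trend by least squares, estimate the lag 1 autocorrelation of the residuals, regenerate many synthetic AR(1) noise series around the fitted line, and take percentiles of the refitted slopes.

import numpy as np

def ar1_trend_ci(x, y, n_sims=10_000, alpha=0.05, seed=0):
    """OLS trend of y on x with a Monte Carlo confidence interval
    that allows for AR(1) autocorrelation in the residuals."""
    rng = np.random.default_rng(seed)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]        # lag-1 autocorrelation
    sigma = resid.std(ddof=2) * np.sqrt(1 - rho**2)       # AR(1) innovation std dev
    sim_slopes = np.empty(n_sims)
    for i in range(n_sims):
        noise = np.empty_like(y, dtype=float)
        noise[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))
        for t in range(1, y.size):
            noise[t] = rho * noise[t - 1] + rng.normal(0, sigma)
        sim_slopes[i] = np.polyfit(x, slope * x + intercept + noise, 1)[0]
    lo, hi = np.percentile(sim_slopes, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return slope, lo, hi

# Example with a hypothetical annual forcing/temperature pair (illustration only).
x = np.linspace(0.0, 2.0, 106)                            # forcing, W/m2
y = 0.4 * x + np.random.default_rng(1).normal(0, 0.1, x.size)
print(ar1_trend_ci(x, y))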
kenfritsch Posted Jan 31 2016 at 1 35 PM Permalink Reply In the link below is a table with my analysis details of the regression of temperature versus forcing for the 6 forcing agents It shows the trend 95% plus minus confidence intervals CIs the intercept the p values for the trend and intercept unadjusted for auto correlation and the ar1 values used for adjusting the CIs for auto correlation using 10 000 Monte Carlo simulations Notice that the slope of the trend in each case can be used to ratio amongst the forcing agents to relate back to the relative efficacy values found in Marvel My results for some of the forcing agents are in general agreement with those from Marvel but not all There is a large difference for anthropogenic aerosols I used the sum of the direct and indirect aerosol values provided by Marvel and that sum when regressed against the aerosol temperature gave very good correlations with very low trend p values My slope values had only one forcing agent with a higher value than GHG and that was Land Use Land Use has slope values for the 5 runs that vary greatly and the CIs for those individual runs are large but show that Run 1 is very significantly different than the other runs Run 1 also has a trend p value that is much lower than the other runs and the intercept is significantly different I would judge from the large variations within the Land Use that regression of iRF versus temperature for that forcing agent makes little sense GHG and Volcanic were the only forcing agents that had CIs that were a low percentage of the slope values Why my calculations give such relatively low values of forcing agent slopes compared to GHG with the noted exception of Land Use is a puzzle to me If I have not made a mistake here it would also throw huge doubts into the use of the instantaneous forcing and regressions to determine efficacy niclewis Posted Jan 31 2016 at 3 22 PM Permalink Ken I agree with your slope trends apart from Anthro aerosol where I think you may have made some mistake based on regressing on annual 1900 2005 data Marvel used decadal data which gives somewhat different results If not following their method I think there is merit in using data for the full 1850 2005 simulation runs I ve now uploaded that to my web pages see the link in the update to this article above The low efficacy for volcanic forcing is expected and partly reflects the delay in GMST response to a forcing impulse which matters here as volcanic forcing is impulse like With decadal data there is much less distortion as a result of the delay The same applies to an extent to solar forcing kenfritsch Posted Jan 31 2016 at 4 54 PM Permalink Nic your points are well taken and now I will attempt to get my head around what you state here Would a delay in temperature response change a trend measured on an annual versus decadal basis There would be a lag but the response would eventually be manifested in the temperature I think If I used the start and end points only the trend should be near the same Obviously there are differences as you indicate you have made the annual and decadal calculations But is it caused by the lag effect I ll have another look at my AA trend calculations There was a good correlation and I guess that delayed my looking I notice from my plot of temperature versus AA that if I regressed only on the lower levels of forcing I would obtain a much steeper slope and closer to the expected value
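Nic's point about impulse like forcing and delayed response can be illustrated with a toy lag model. The numbers below are invented and the single time constant response is only a caricature of any GCM, but the sketch shows why regressing annual temperature on annual iRF understates the response to volcanic style impulses, while decadal averaging recovers most of it.

import numpy as np

rng = np.random.default_rng(0)
n = 156                                           # e.g. 1850-2005, annual steps
forcing = np.zeros(n)
forcing[[30, 60, 95, 130]] = -3.0                 # impulse-like (volcanic-style) forcing

# Toy response: temperature relaxes toward 0.4 K per W/m2 with a 5-year e-folding time.
sensitivity, tau = 0.4, 5.0
temp = np.zeros(n)
for t in range(1, n):
    temp[t] = temp[t - 1] + (sensitivity * forcing[t] - temp[t - 1]) / tau
temp += rng.normal(0, 0.02, n)                    # small amount of internal variability

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

annual_slope = slope(forcing, temp)               # biased low: the response lags the impulse
decades = np.arange(n) // 10
dec_forcing = np.array([forcing[decades == d].mean() for d in np.unique(decades)])
dec_temp = np.array([temp[decades == d].mean() for d in np.unique(decades)])
decadal_slope = slope(dec_forcing, dec_temp)      # much closer to the true 0.4

print(annual_slope, decadal_slope)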
kenfritsch Posted Jan 31 2016 at 5 28 PM Permalink Nic I may have had a problem with the instantaneous part of the forcing in my thinking If I am measuring temperature response to forcing for a given year and if all the forcing were to occur and was reported in that year but only part of the temperature response occurs then that year would show a lower than expected temperature The next year I would have no forcing but a partial continuation of the temperature response and that year would have a higher than expected temperature given no forcing is reported I was unfortunately thinking about an accumulation of forcing and temperature responses for my starting and ending point thought Maybe that is what Willis was thinking A sufficiently lagging response might even make a decadal average of instantaneous forcing a poor method of determining efficacy Even an event that forces in one decade and is mostly measured in another could create a problem Conclusion Use ERF kenfritsch Posted Jan 31 2016 at 7 21 PM Permalink I found my error with AA and the revised data and graph are in the link below Considering the CIs for the GHG and AA the mean trend slopes for GHG and AA are not that different Next step is to use the

    Original URL path: http://climateaudit.org/2016/01/21/marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates-an-update/?replytocom=766282 (2016-02-08)
    Open archived version from archive

  • Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates – update « Climate Audit

    Original URL path: http://climateaudit.org/2016/01/21/marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates-an-update/?replytocom=766318 (2016-02-08)
    Open archived version from archive

  • Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates – update « Climate Audit
    patch south of Greenland warming and the opposite pattern of changes off Antarctica So it would seem a bit surprising if land use forcing were enough to itself initiate glaciation Maybe it is more likely that LU forcing had enough of an effect in the version of GISS E2 R used at least for other CMIP5 runs with the faulty ocean mixing scheme Also Chandler say that GISS E2 R has a regional cool bias in the upper mid latitude Atlantic in its preindustrial control run Whatever the cause it looks to me as if there is a change in the AMOC involved As I wrote earlier whether or not LU run 1 is strictly a rogue it seems to me that there is a good case for excluding it since we know the real world climate system did not behave like this during the 20th century opluso Posted Jan 30 2016 at 6 59 AM Permalink Reply Has anyone seen the most recent Ganopolski paper that got a big PR push Human made climate change suppresses the next ice age https www pik potsdam de news press releases human made climate change suppresses the next ice age opluso Posted Jan 30 2016 at 7 01 AM Permalink Reply Oh my mistake I didn t notice the url link in your comment Thanks for the link nobodysknowledge Posted Jan 23 2016 at 3 37 PM Permalink Reply I can just agree with Marvel in one thing from her blog The climate s sensitivity is hard to nail down but mine is pretty high Well that it is pretty high is an understatement Paul Penrose Posted Jan 23 2016 at 8 18 PM Permalink Reply When I see some evidence that the software models were written by software experts and have been developed using industry standard best practices then I will start taking them a bit more seriously Until then they are about as useful as an uncalibrated piece of lab equipment kribaez Posted Jan 24 2016 at 1 16 AM Permalink Reply Nic Re your latest update Gavin Schmidt noted the heat transport problem in the Russel ocean model in a paper published in March 2014 http onlinelibrary wiley com doi 10 1002 2013MS000265 full It looks like it had not been fixed up to that time The Miller et al paper was published in June 2014 http onlinelibrary wiley com doi 10 1002 2013MS000266 full As far as I can tell the Miller paper mentions the existence of the problem but no correction Tracer advection is calculated using a linear upstream scheme that updates both the tracer value and its slope within the grid box The additional calculation of the slope maintains tighter gradients against numerical diffusion Mesoscale mixing is parameterized according to the Gent McWilliams scheme although the along isopycnal flux was misaligned resulting in excessive cross isopycnal diffusion I think a polite question to the authors is justified Given a free choice of GCMs I would not choose to use OHC data from a model with a known ocean heat transport problem However it is possible that a corrigendum was issued for the GISS E2 R results and the data accessible via the CMIP5 portals updated If so it would be good to have a pointer to it niclewis Posted Jan 24 2016 at 4 51 AM Permalink Reply Paul The Schmidt and Miller papers were submitted at the same time so I would expect them both to reflect the same position regrading correction or not of the ocean problem in GISS E2 R I can find no mention of the problem in the Marvel paper nor in a paper submitted over a year later about the climate change in GISS ModelE2 under RCP scenarios http onlinelibrary wiley com doi 10 1002 2014MS000403 full 2015 I cannot find signs of any corrigendum for GISS E2 R results It is conceivable that in 
practice the effects of the ocean problem were small at least in all the main simulation runs I have redownloaded CMIP5 r1i1p1 tas netCDF files for the GISS E2 R Historical simulation They have the same file date 25 March 2011 as those current at the AR5 March 2013 cutoff date kribaez Posted Jan 24 2016 at 2 00 AM Permalink Reply Nic Do you happen to know if and how AIE was included in Miller s All forcings together values for Fi This would not automatically appear in the instantaneous net flux perturbation and would need to be added in either by using the parameterisation algorithm in GISS E2 NINT or by adding in the values calculated from the single forcing case or other I can find no reference in Miller to any such calculation but I may have missed it It must of course be added in for any efficacy calculation to make sense niclewis Posted Jan 24 2016 at 4 08 AM Permalink Reply Paul I don t know for certain that AIE was included in Miller s iRF Fi All forcings together values but I have assumed that it was Miller says the that magnitude of the AIE is tuned using an empirical relation between low cloud cover and the logarithm of aerosol number concentration and that in 2000 the instantaneous AIE at the tropopause is 0 67 W m2 A value for AIE iRF could have been calculated by perturbing the 1850 cloud field used when when computing iRF although there is no mention of doing so in Miller et al Wouldn t adding in values calculated in the single forcing case simply push the question of measuring an iRF for AIE back to that simulation As you will know aerosol indirect effect should not really appear in iRF at all since adjustments by clouds are not instantaneous Hansen 2005 did not show any iRF value for it But there is quite a lot of discussion in Miller et al about aerosol forcing in Miller et al which would all be wrong if the AIE had not been included in their All forcings together measure so Ron Miller seems happy that it was included And my multiple regression results certainly support AIE forcing having been included kribaez Posted Jan 24 2016 at 8 00 AM Permalink Reply Thanks Nic I agree that your multiple regression results support AIE forcing having been included in some guise And certainly what was done in Marvel et al would make no sense if it had not been included so no doubt the co authors believe that it is already included What was going through my mind was the difficulty of assigning any equivalent Fi value to AIE for the historical run Miller makes it clear that he uses pre industrial climate for the evaluation of Fi values This does not require any simulation It just requires activating all of the forcing agents turning on the radiative code at annual intervals and recording the net flux change at the predefined tropopause Because there is no atmospheric simulation involved AIE does not manifest itself in this calculation Strictly speaking it is not a forcing at all but a fast feedback which is unique to tropospheric aerosols Because it is unique to this particular driver as opposed to being a temperature dependent feedback common to all forcing drivers it must be treated as a quasi forcing in order to permit intelligent comparison with other forcings in general and with CO2 forcing in particular Hansen s algorithm for AIE which I described as the parameterisation algorithm in GISS E2 NINT and which you call an empirical relation does permit the indirect effect to be converted into an equivalent forcing So what I suspect was done was that the algorithm was switched on 
together with the radiative code at each time period The problem with this is that the equivalent forcing value is strongly dependent on climate and particularly cloud cover at the time the algorithm is calculating From the above process for abstracting Fi values cloud cover is fixed at pre industrial level The difference in calculated values may be substantial see Hansen 2005 between fixing the cloud cover and allowing it to vary as it did in the actual historic run simulations If this is what Miller did then he should be able to isolate very simply the AIE forcing from the historic run and confirm that it was identical to the single forcing case abstraction of AIE on the same basis of unchanging climate state That then allows a more definitive statement to be made on the difference between the calculated assumed AIE in the historic run and the true AIE which was based on the successively updated climate state and which should be I believe significantly more negative The alternative to which I referred involves analysis of the single forcing run simulation rather than the abstraction of Fi values from the same but I suspect it is not very relevant The indirect forcing can be abstracted by de convolution of the temperature and net flux data since there is only one known direct forcing which is changing I have left a question on realclimate hoping for some clarification of what was actually done kribaez Posted Jan 24 2016 at 8 11 AM Permalink Reply Here is a copy of the comment I left on RealClimate Gavin I would be very grateful if you could respond to the following three questions 1 Do you have available CO2 benchmarking data for GISS E2 R specifically estimates of Fi Fa and ERF for a range of concentrations If not more specifically are you going to support or modify the Fi value of 4 1 which appears in Marvel et al 2 Can you please advise if and how AIE forcing was included in Miller s All forcing together Fi values for the 20th century historic run 3 Can you confirm that the temperature and net flux data for GISS E2 R available via the CMIP5 portals and KNMI Climate Explorer are based on a model corrected to fix the ocean heat transport problem which you identified in the Russell ocean model in your 2014 paper Many Thanks Patrick M Posted Jan 24 2016 at 9 57 AM Permalink Reply If an algorithm can produce results that are clearly rogue then I would imagine it can produce results that are partially rogue as well As a software developer myself I think this creates a situation where a bug becomes a subjective decision In order to make the determination more objective one would need to define rogue more clearly It s sort of a catch 22 when you design code whose purpose is to find out if anomalies will occur in that your code needs to be free to create anomalies which could just as easily be coding logic errors I would think code of this type would always have to have an independent verification method to check predicted anomalies such as reviewing the physical plausibility of the processes involved In other words I think these models should be used to present questions not answers Jit Posted Jan 24 2016 at 11 33 AM Permalink Reply As striking as LU run 1 is fig 4 it looks like it has half the scale bars of fig 1 runs 2 5 2 5 to 2 5 vs 5 to 5 Is this just a matter of the legend not being updated niclewis Posted Jan 24 2016 at 12 02 PM Permalink Reply Fig 4 has the same scale as Fig 3 not as Fig 2 which I assume is what you meant by Fig 1 A version with a 5 C scale is here Jit Posted Jan 25 
2016 at 4 53 AM Permalink Reply Yes sorry fig 2 is what I meant What I was pointing out was that the scale should be the same on all the single run figures Thank you for the link to the figure with the extended scale This seems to show that a 5 anomaly occurs in the north Atlantic not just a 2 5 one as per fig 4 Of course the blues do not look as dramatic with the colour ramp stretched kenfritsch Posted Jan 26 2016 at 11 17 AM Permalink Reply Going back to Nic s pdf critique of Marvel and then rereading Marvel the criticisms that Nic makes of this paper become clearer to me More importantly in addressing the quality of this paper it is the accumulation of problems that Nic sees in this paper It is that accumulation and not necessarily a single problem pointed out that is the important to judging the validity of the results conclusions of this paper What I have seen in the past with criticism of climate science papers from these blogs like Climate Audit is that an author or defender of the paper will clear up or attempt to clear up a single point and fail to answer acknowledge the many problems We who are critical sometimes concentrate on a single issue without continuing to point to the multitude of issues I would hope that the Marvel authors will address all of Nic s criticisms but if they do not that might well say something also kenfritsch Posted Jan 26 2016 at 11 35 AM Permalink Reply Interesting also that the efficacy measures made by Marvel could in some sense and context be construed as factors required to bring the model sensitivity more in line with the empirical results using mostly observed data and that obtains lower sensitivities Going forward and without knowing the origin for the need of the efficacy measures significantly different than unity one might well conclude that prediction of future temperature increases from AGW would be the same with or without the efficacy measure The Marvel paper gets around this thought by talking about the accident of history and implying that the efficacy measure is very much unique to the recent climate conditions and pointing to the efficacy measure different than unity being related to the non uniformity in the x y and z directions of the global atmosphere of the negatively forcing agents It reminded me of the thought process of some climate scientists implying rather strongly that the divergence of proxy responses in recent years must be related to AGW otherwise of course without an explanation we have to seriously question the proxy responses in past times to temperature never minding that the selection process in most of these temperature reconstructions makes the process flawed from the start drissel Posted Jan 26 2016 at 8 46 PM Permalink Reply As a now retired professional programmer I m astonished that anyone believes that Large opaque computer programs work Large opaque computer programs meet their specifications if any Programs and their specifications accurately represent anything as large complex and poorly understood as world climate Programs and their specifications accurately embody the Physics that we do understand like Conservation of Mass Navier Stokes Equations and on and on Programs and their specifications should ever serve as a basis for public policies that could result in impoverishment starvation etc Several of the computer program output anomalies mentioned by Dr Lewis smell to my practiced nose like program bugs Regards Bill Drissel Frisco TX kenfritsch Posted Jan 27 2016 at 2 26 PM Permalink Reply Nic I have 
been attempting to find the data for the 6 multiple model runs used to determine the ERF for the individual forcing agents in Marvel. I find only 1 set of data for these forcings for ERF. I was under the impression that ERF and iRF data were both taken from multiple runs.
niclewis | Posted Jan 27, 2016 at 4:35 PM | Permalink | Reply
Ken, Miller says that iRF is determined by measuring the radiative imbalance in the 1850 climate state, as it was before perturbation by any forcing, but with the relevant forcing(s) imposed. That would give the same result for all runs, as the climate state has not changed from preindustrial, so it would just be computed once, I think. For ERF, the SST is fixed but the atmosphere is free to evolve. In principle multiple runs would be desirable, but as equilibrium is reached quickly with fixed SST, it looks as if they have instead, for each forcing, averaged across 3 decades from the same run. And I don't think they have archived the fixed-SST runs involved; they don't seem to be in the CMIP5 archive.
kenfritsch | Posted Jan 28, 2016 at 3:20 AM | Permalink | Reply
Nic, I was not clear about my confusion with the data used for the ERF and iRF approaches to efficacy determinations in Marvel, but I think I may now have it figured out, if you can verify that my understanding is correct. All the data required for the iRF approach was available to me in the form of annual GMST and OHC for all the model runs and for all the individual forcing agents, and the one set of annual Fi data for each of the forcing agents from Miller 2014. My confusion was with the ERF approach and the source of the GMST and OHC data required to go with the one set of ERF data. As you note, there are ERF data for 3 different decades for all the forcing agents from a single model run. It would have been nice to have data for multiple runs, but it is now my understanding that the same GMST and OHC data used in the iRF approach must have been used in the ERF approach, by using the average delta T and trends for OHC from the decade 1996-2005. That gets me to the multiple runs for the ERF approach and the method used in Marvel to obtain uncertainty for both the iRF and ERF approaches to determining efficacy. Is this understanding correct? I plan to analyze the data using Singular Spectrum Analysis and other analysis approaches.
niclewis | Posted Jan 28, 2016 at 12:32 PM | Permalink
Ken, yes, you should use all the same separate-run GMST and OHC data with the one set of (averaged over 3 decades) ERF data. But as there is only ERF data for year-2000 forcing, efficacies have to be calculated from quotients rather than being able to use regression. Marvel's regression-with-intercept over 1906-2005 method is unsatisfactory in any case. You might get better results using data starting in 1850, or 1851 (there is a slight jump), rather than 1900, and TOA radiative imbalance rather than ocean heat content data for your analysis. I'll try to add such data to that which I have already provided at https://niclewis.wordpress.com/appraising-marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates
kribaez | Posted Jan 28, 2016 at 5:51 AM | Permalink | Reply
I have left another comment on RealClimate for Gavin to mull over (awaiting moderation; copy below), while he is, I trust, assiduously working in the background to answer the previous questions which I have left. Gavin, you wrote: "Dropping outliers just because they don't agree with your preconceived ideas is a classical error in statistics; it's much better to use the spread as a measure of the uncertainty." Gavin, another
classical error in statistics is to attribute the error associated with one property to the wrong variable Work by the RNMI Sybren Drijfhout et al 2015 http www pnas org content 112 43 E5777 abstract confirms that GISS E2 R has the capacity for abrupt climate change in the form of inter alia the local collapse of convection in the North Atlantic In this instance if the results of the rogue run in the single forcing LU cases are due to the abrupt collapse of N Atlantic convection as seems increasingly likely from the data then the dramatically different temperature response in the rogue run has nothing whatsoever to do with the uncertainty in transient efficacy of LU forcing The inclusion of the run leads quite simply to an erroneously inflated calculation of the mean transient efficacy for LU and a misleading confounding of the uncertainty associated with the GCM s internal mechanics with the uncertainty in LU transient efficacy Ultimately the Marvel et al paper seeks to argue that sensitivities estimated from actual observational data are biased low on the grounds that GISS E2 R over the historic period is responding to an overall low weighted average forcing efficacy It then seeks to extend the conclusions drawn from the model to realworld observational studies Since we know from the real observational data that there was not a collapse of N Atlantic convection then quite apart from other methodological questions the inclusion of this run for the LU calculation is impossible to justify and on its own is sufficiently large in its impact to bring the study results into question Applying the same logic any of the 20th Century History runs which exhibited similar abrupt shifts Southern Ocean sea ice Tibetan plateau snow melt and N Atlantic convection which were not observed in the realworld should have also been excluded from the ensemble mean for Marvel et al to have any hope of credibly extending inferences to realworld observational data even if we suspend disbelief with respect to other problems associated with data methods and relevance stevefitzpatrick Posted Jan 28 2016 at 5 20 PM Permalink Reply Paul Another good question for Gavin But I think you are unlikely to get a reply to ANY substantive question about Marvel et al at Real Climate unless it is a question which lends support to the conclusions of Marvel or so silly a question that Gavin can just poke fun Gavin is not going to entertain substantive doubts about Marvel any more than Eric Steig was willing to entertain substantive doubts about continent wide Antarctic warming even after O Donnell et al was published The point of Marvel et al is to raise doubts in a high profile publication about the veracity of the many low empirical estimates of sensitivity so that those empirical estimates can be waved away when public energy policy is discussed Marvel et al is just ammunition in the climate wars IMO its quality and accuracy do matter at all to the authors mpainter Posted Jan 28 2016 at 5 55 PM Permalink Reply I think your last sentence left out the word not and with that I can say ditto to your comment Marvel et al are now at the point where to engage the issue any further only exposes and emphasizes the hollowness of their whole position faulty models and all stevefitzpatrick Posted Jan 29 2016 at 8 38 AM Permalink mpainter Yes I left out the word not stevefitzpatrick Posted Jan 29 2016 at 8 55 AM Permalink Reply Paul Gavin has replied to your comment He completely rejects your suggestion that the single very strange land use 
run is not representative and so should not be included in the analysis He also challenges you to look at the level of variance in all 200 runs of the study and do your own analysis Seems to me that a very reasonable argument can be made about the statistical validity of any 5 run ensemble that includes a single strange run if you know the variability of a much larger group eg Gavin s 200 runs AntonyIndia Posted Jan 29 2016 at 9 32 AM Permalink Reply Is Gavin admitting something there the basic result which that the historical runs don t have the same forcing response pattern as the response to CO2 alone stevefitzpatrick Posted Jan 29 2016 at 11 29 AM Permalink Reply Paul K After thinking a bit more about Gavin s reply it seems to me it should be possible to show the single strange run for land use run 1 from figure 5 in Nic s original post is likely to be a statistical fluke related to model behavior and not at all representative of the actual effect of land use If you calculate the slope of each of the five land use runs and then calculate an unbiased estimate of the standard deviation of the slopes from runs 2 through 5 then the slope of run 1 may very well be outside the 95 inclusive probability window That is run 1 is unlikely to be a member of the same normally distributed sample population as runs 2 through 5 and so is more likely due to an unrelated effect which just was not present in the other LU runs A drastic and wildly unrealistic change in North Atlantic ocean temperature would of course be a plausible unrelated effect Nic Can you point to where the data used to generate figure 5 in your original post is located niclewis Posted Jan 29 2016 at 12 44 PM Permalink Reply steve Interesting idea The data is available in a spreadsheet via the link given at the end of my original post https niclewis wordpress com appraising marvel et al implications of forcing efficacies for climate sensitivity estimates The graph is in the tas sheet at cell BN100 Right click in the graph and choose Select data to see which ranges the data comes from stevefitzpatrick Posted Jan 29 2016 at 2 19 PM Permalink Nic Thanks The slope statistics are Relative slope R1 0 0369 R2 0 0066 R3 0 0144 R4 0 0098 R5 0 0085 Mean 2 5 0 00982 Std deviation 2 5 0 00332 R1 Standard Deviations from the mean 8 15 The estimate of the standard deviation is the unbiased estimate using n 1 in the denominator So it seems to me unlikely that R1 is in the same population as R2 to R5 especially in light of the peculiar pattern of cooling in Run 1 Of course Gavin in his best Steigian imitation is going to discount any slope analysis as irrelevant or will insist ignoring any reasonable interpretation of the actual data that the correct analysis is to include all 5 runs in the estimate of the mean and standard deviation for the slopes In this case the statistics become Rel slope R1 0 0369 R2 0 0066 R3 0 0144 R4 0 0098 R5 0 0085 Mean 0 01524 Std deviation 0 0124 R1 Standard Deviations from the mean 1 74 Which puts R1 just inside of the credible range if you choose to ignore the bizarre pattern of cooling in the North Atlantic which is absolutely not a credible response to a tiny forcing from land use change But such things seem to pass as credible when the results match the desire outcome dynam01 Posted Jan 28 2016 at 3 24 PM Permalink Reply Reblogged this on I Didn t Ask To Be a Blog kenfritsch Posted Jan 30 2016 at 11 10 AM Permalink Reply My apologies if these graphs that I have linked below have been displayed by someone previously 
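For anyone who wants to check the arithmetic in stevefitzpatrick's comment above, here is a minimal sketch in plain Python that reproduces both sets of statistics for the five land-use run slopes. The slope magnitudes are copied from the comment (the stripped signs do not affect the result); everything else is simply the unbiased n−1 standard deviation calculation he describes.

```python
# Reproduce the outlier check on the five GISS-E2-R land-use (LU) run slopes
# quoted in stevefitzpatrick's comment (values copied from the comment;
# the sign convention is immaterial to the standard-deviation calculation).
from statistics import mean, stdev  # stdev uses the unbiased n-1 denominator

slopes = {"R1": 0.0369, "R2": 0.0066, "R3": 0.0144, "R4": 0.0098, "R5": 0.0085}

# Case 1: treat runs 2-5 as the reference population and ask how far out run 1 lies.
ref = [slopes[k] for k in ("R2", "R3", "R4", "R5")]
m, s = mean(ref), stdev(ref)
print(f"Runs 2-5: mean={m:.5f}, sd={s:.5f}, R1 z-score={(slopes['R1'] - m) / s:.2f}")
# -> roughly mean 0.0098, sd 0.0033, R1 about 8.15 standard deviations out

# Case 2: include run 1 in the sample, as the 'use the spread' argument implies.
allruns = list(slopes.values())
m, s = mean(allruns), stdev(allruns)
print(f"All 5 runs: mean={m:.5f}, sd={s:.5f}, R1 z-score={(slopes['R1'] - m) / s:.2f}")
# -> roughly mean 0.0152, sd 0.0124, R1 about 1.74 standard deviations out
```

The two answers differ only in whether run 1 is allowed to inflate the spread it is being tested against: about eight standard deviations out when runs 2-5 define the reference population, but under two when all five runs do.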
The graphs represent the regression of the Marvel GMST versus forcings and plotted on a yearly basis For each forcing agent I have graphed together the Ensemble mean and the 5 model runs I think these representations paint a different picture than using decadal averages Notice that using different parts of the forcing range would give very different trends Where the forcing is changing with time in a trending manner then one could state that the trends would be very different depending on the decade used I have also calculated the trend statistics from the yearly results graphed in the links and while the p values can be impressive over the range of forcing as noted above the trends calculated within parts of the range can change dramatically I have not yet applied the auto correlation ar1 to simulations to determine the confidence intervals for these trends but when I finish I will report the results here That model run for land use in question is very different than the other runs not only in the trend value but in the p value of that trend One can use an alternative method to determine statistical differences by using the model run trend values and the confidence intervals derived from Monte Carlo simulations as described above I have not done that yet but I would predict at this point that there would be a significant difference between the run in question if the confidence intervals for the other land use runs are not too wide Link for GHG and O3 Link for Solar and Land Use Link for Volcanic and Anthro Aerosol stevefitzpatrick Posted Jan 30 2016 at 4 52 PM Permalink Reply Ken Fritsch Yes LU Run 1 is wildly different from the others I doubt using decadal averages like Nic did makes much difference in the trends Run 1 is nothing like the other four no matter how you look at it I think the argument needs to be made that inferring anything about the efficacy of LU forcing with Run 1 included will lead to spurious results BTW I gently suggest that you use the same y axis scale when comparing the trends for the five LU runs Using different y axis scales obscures how different Run 1 is from the others kenfritsch Posted Jan 31 2016 at 8 20 AM Permalink Reply Steve Marvel used decadal averages and Nic merely reported the results Decadal averages will graphically paint a very different picture than using the individual variation in individual runs with yearly data points The calculated confidence intervals need to use yearly data Using decadal averages for that purpose would require some adjustments The differing y axis ranges were just to see if you were paying attention I am finishing the CI calculations and will post them here By my methods the land use Run 1 slope trend is statistically very different than the other runs It would appear than some model runs for the various forcing agents have slopes not significantly different than zero Maybe we can get Gavin to argue for the validity of that happening if we had more than one realization of the earth s climate kenfritsch Posted Jan 30 2016 at 1 00 PM Permalink Reply Nic I have a post in moderation that has graphs with regressions of the temperature response to 6 forcing agents from the Marvel data It is on a yearly basis and I think shows the data in a different light than how it was presented in Marvel Steve unmoderated triggered by number of links kenfritsch Posted Jan 30 2016 at 1 15 PM Permalink Reply I should have added that the large p values to which I refer in my moderated post should be large negative values kenfritsch Posted Jan 
31 2016 at 1 35 PM Permalink Reply In the link below is a table with my analysis details of the regression of temperature versus forcing for the 6 forcing agents It shows the trend 95 plus minus confidence intervals CIs the intercept the p values for the trend and intercept unadjusted for auto correlation and the ar1 values used for adjusting the CIs for auto correlation using 10 000 Monte Carlo simulations Notice that slope of the trend in each case can be used to ratio amongst the forcing agents to relate back to the relative efficacy values found in Marvel My results for some of the forcing agents is in general agreement with those from Marvel but not all There is a large difference for anthropogenic aerosols I used the sum of the direct and indirect aerosol values provided by Marvel and that sum when regressed against the aerosol temperature gave very good correlations with very low trend p values My slope values had only one forcing agent with a higher value than GHG and that was Land Use Land Use has slope values for the 5 runs that vary greatly and the CIs for those individual runs are large but show that Run1 is very significantly different than the other runs Run 1 also has a trend p value that is much lower than the other runs and the intercept is significantly different I would judge from the large variations within the Land Use that regression iRF versus temperature for that forcing agent makes little sense GHG and Volcanic were the only forcing agents that had CIs that were a low percentage of the slope values Why my calculations give such relatively low values of forcing agents slopes compared to GHG with the noted exception of Land Use is a puzzle to me If I have not made a mistake here it would also throw huge doubts into the use the instantaneous forcing and regressions to determine efficacy niclewis Posted Jan 31 2016 at 3 22 PM Permalink Ken I agree your slope trends apart from Anthro aerosol where I think you may have made some mistake based on regressing on annual 1900 2005 data Marvel used decadal data which gives somewhat different results If not following their method I think there is merit in using data for the full 1850 2005 simulation runs I ve now uploaded that to my web pages see the link in the update to this article above The low efficacy for volcanic forcing is expected and partly reflects the delay in GMST reponse to a forcing impulse which matters here as volcanic forcing is impulse like With decadal data there is much less distortion as a result of the delay The same applies to an extent to solar forcing kenfritsch Posted Jan 31 2016 at 4 54 PM Permalink Nic your points are well taken and now I will attempt to get my head around what you state here Would a delay in temperature response change a trend measured on an annual versus decadal basis There would be a lag but the response would eventually be manifested in the temperature I think If I used the start and end points only the trend should be near the same Obviously there are differences as you indicate you have made the annual and decadal calculations But is it caused by the lag effect I ll have another look at my AA trend calculations There was a good correlation and I guess that delayed my looking I notice from my plot of temperature versus AA that if I regressed only on the lower levels of forcing I would obtain a much steeper slope and closer to the expected value kenfritsch Posted Jan 31 2016 at 5 28 PM Permalink Nic I may have had a problem with the instantaneous part of the forcing in my 
thinking. If I am measuring temperature response to forcing for a given year, and if all the forcing were to occur and was reported in that year but only part of the temperature response occurs, then that year would show a lower than expected temperature. The next year I would have no forcing but a partial continuation of the temperature response, and that year would have a higher than expected temperature given that no forcing is reported. I was unfortunately thinking about an accumulation of forcing and temperature responses for my starting-and-ending-point thought. Maybe that is what Willis was thinking. A sufficiently lagging response might even make a decadal average of instantaneous forcing a poor method of determining efficacy. Even an event that forces in one decade and is mostly measured in another could create a problem. Conclusion: use ERF.
kenfritsch | Posted Jan 31, 2016 at 7:21 PM | Permalink
I found my error with AA, and the revised data and graph are in the link below. Considering the CIs for GHG and AA, the mean trend slopes for GHG and AA are not that different. Next step is to use the decadal
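kenfritsch describes, in the comments above, adjusting the confidence intervals on his annual temperature-versus-forcing trends for lag-1 autocorrelation using 10,000 Monte Carlo simulations. His script and data are not posted in this thread, so the following is only a generic sketch of that type of procedure; the function names and the illustrative series at the end are my own placeholders, not his values.

```python
# Generic sketch of an AR(1) Monte Carlo confidence interval for an OLS trend,
# along the lines kenfritsch describes. All inputs here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / (xm * xm).sum())

def ar1_trend_ci(x, y, n_sim=10_000, level=0.95):
    """OLS slope of y~x plus a Monte Carlo half-width allowing for AR(1) residuals."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    slope = ols_slope(x, y)
    resid = y - (y.mean() + slope * (x - x.mean()))
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]      # lag-1 autocorrelation
    sigma = resid.std(ddof=1) * np.sqrt(1 - rho**2)     # AR(1) innovation s.d.
    sim_slopes = np.empty(n_sim)
    for i in range(n_sim):
        noise = np.empty(len(y))
        noise[0] = rng.normal(0, sigma / np.sqrt(1 - rho**2))
        for t in range(1, len(y)):                      # AR(1) recursion
            noise[t] = rho * noise[t - 1] + rng.normal(0, sigma)
        sim_slopes[i] = ols_slope(x, noise)             # trend of pure noise
    half_width = np.quantile(np.abs(sim_slopes), level)
    return slope, half_width

# Illustrative use with made-up annual data (not the Marvel/GISS series):
years = np.arange(1900, 2006)
fake_gmst = 0.005 * (years - 1900) + rng.normal(0, 0.1, years.size)
print(ar1_trend_ci(years, fake_gmst))
```

The half-width returned is the 95th percentile of the spurious trends produced by AR(1) noise alone, which is one common way of widening a trend confidence interval for serial correlation.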
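Nic's remark earlier in the thread that the low regression-based volcanic efficacy "partly reflects the delay in GMST response to a forcing impulse", and Ken's worked reasoning just above about part of the response spilling into later years, can be illustrated with a toy one-box energy-balance model. This is purely illustrative: the heat capacity, feedback parameter and forcing series below are invented round numbers, not GISS-E2-R values.

```python
# Toy illustration of why regressing annual temperature on an impulse-like
# forcing (e.g. volcanic iRF) understates the response per unit forcing.
# One-box model: C dT/dt = F(t) - lam*T. All parameter values are invented.
import numpy as np

lam, C, dt = 1.3, 8.0, 1.0        # W/m2/K, W yr/m2/K, years (illustrative)
years = np.arange(200)

def respond(F):
    """Integrate the one-box model forward in annual Euler steps."""
    T = np.zeros_like(F, dtype=float)
    for t in range(1, len(F)):
        T[t] = T[t - 1] + dt * (F[t] - lam * T[t - 1]) / C
    return T

def slope(x, y):
    """OLS slope (intercept allowed) of y on x."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float((x * y).sum() / (x * x).sum())

F_ramp = 0.02 * years             # smooth ramp, GHG-like
F_pulse = np.zeros(years.size)    # brief pulses, volcanic-like
F_pulse[[40, 90, 150]] = -3.0

print("ramp forcing    : slope =", round(slope(F_ramp, respond(F_ramp)), 3), "K per W/m2")
print("impulse forcing : slope =", round(slope(F_pulse, respond(F_pulse)), 3), "K per W/m2")
print("1/lambda        :", round(1 / lam, 3), "K per W/m2")
```

With a smoothly ramping forcing the regression slope comes out close to 1/λ, but with impulse-like forcing the same regression badly understates the response per unit forcing because most of the temperature response occurs in years when the reported forcing has already returned to zero. That is qualitatively the behaviour being discussed above for the volcanic efficacy estimated by regression.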

    Original URL path: http://climateaudit.org/2016/01/21/marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates-an-update/?replytocom=766331 (2016-02-08)
    Open archived version from archive

  • Appraising Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates « Climate Audit
    which is further adrift 1 Kate Marvel Gavin A Schmidt Ron L Miller and Larissa S Nazarenko et al Implications for climate sensitivity from the response to individual forcings Nature Climate Change DOI 10 1038 NCLIMATE2888 The paper is pay walled but the Supplementary Information SI is not 2 The Historical simulations have an average temperature anomaly of 0 84 C for 1996 2005 relative to 1850 whereas HadCRUT4v4 shows an increase of 0 73 C from 1850 1859 to 1996 2005 and Figure 7 of Miller et al 2014 shows consistently greater warming for GISS E2 R than per GISTEMP since 2000 The same simulations show average ocean heat uptake of 0 84 W m 2 over 1996 2005 mean slope estimate compared to 0 40 W m 2 using AR5 Box 3 1 Figure 1 data or 0 67 W m 2 using NOAA Levitus et al 2012 data 3 Hansen J et al 2005 Efficacy of climate forcings J Geophys Res 110 D18104 doi 101029 2005JD005776 4 Chapter 8 of AR5 is available here 5 See Section 10 8 1 in Chapter 10 of AR5 for a discussion of the use of these equations in estimating TCR and ECS 6 Miller R L et al CMIP5 historical simulations 1850 2012 with GISS ModelE2 J Adv Model Earth Syst 6 441 477 2014 7 Or with climate state but feedbacks vary little with climate state within limits in most GCMs 8 I estimate GISS E2 R s effective climate sensitivity applicable to the historical period as 1 9 C and its ERF F 2xCO2 as 4 5 Wm 2 implying a climate feedback parameter of 2 37 Wm 2 K 1 based on a standard Gregory plot regression of Δ F Δ N on Δ T for 35 years following an abrupt quadrupling of CO 2 concentration The efficacy weighted mean period from the imposition of incremental forcing to the end of the historical period is of this order I also estimate the model s effective climate sensitivity as 2 0 C from regressing the same variables over the first 100 years of its 1 p a CO 2 increase simulation this estimate is little affected by F 2xCO2 value 9 Miller et al 2014 noted a 15 increase in GHG forcing in GISS ModelE2 compared to the CMIP3 version ModelE despite their forcing RF for a doubling of CO 2 being nearly identical but were unable to identify the cause 10 The 0 86 divisor comes from the coefficient on the integral of TOA imbalance anomaly Δ N when regressing the ocean heat content OHC anomaly against both that integral and time thus isolating any fixed offset between Δ Q and Δ N that may exist 11 The 1996 2005 Δ T for the sum of the six single forcing cases is 0 76 C compared to 0 84 C for Historical all forcings For iRF the corresponding Δ F values from the archived data are 2 53 W m 2 and 2 75 W m 2 However the values plotted are 2 74 W m 2 and 3 05 W m 2 respectively For ERF the sum of single forcings and the Historical forcing Δ F values from the data are respectively 2 99 W m 2 and 2 84 W m 2 but the values plotted in Figure 1c are 3 03 W m 2 and 2 93 W m 2 12 Otto et al used regression based estimates of ERF in multiple CMIP5 models Lewis and Curry used estimates from Table AII 1 2 of AR5 which are stated to be ERFs but in most cases aerosol forcing being the most notable exception assessed to be the same as their RFs 13 The AR5 Glossary Annex III states The traditional radiative forcing is computed with all tropospheric properties held fixed at their unperturbed values and after allowing for stratospheric temperatures if perturbed to readjust to radiative dynamical equilibrium Radiative forcing is called instantaneous if no change in stratospheric temperature is accounted for And early in Chapter 8 it says RF is hereafter taken to mean the 
stratospherically adjusted RF 14 However Hansen 2005 found that only in the cases of aerosol and BCsnow forcing was there a major difference between RF and ERF AR5 after surveying a wider range of evidence reached similar conclusions and accordingly in other cases estimated ERF to be the same as RF with an implied efficacy estimate of one but gave wider ranges for ERF to allow for uncertainty in the relationship between ERF and RF 15 AR5 states Section 7 5 1 of Chapter 7 it is inherently difficult to separate RFaci from subsequent rapid cloud adjustments either in observations or model calculations For this reason estimates of RFaci are of limited interest and are not assessed in this report 16 Transient efficacy estimates using iRF based respectively on unconstrained decadal regression from 1906 2015 to 1996 2005 as in Marvel et al changes from 1850 to 1996 2005 and zero intercept regression are LU 3 89 1 64 1 03 Oz 0 60 0 57 0 70 SI 1 53 1 68 1 82 and VI 0 56 26 45 0 31 In principle using changes is preferable to zero intercept regression for transient estimation because of the cold start issue but its superior noise suppression leads to more consistent estimation from zero intercept regression when forcing is small 17 Schmidt G A et al 2014 Configuration and assessment of the GISS ModelE2 contributions to the CMIP5 archive J Adv Model Earth Syst 6 141 184 doi 10 1002 2013MS000265 18 The GHG forcing in 1996 2005 is 10 higher in ERF than in iRF terms GHG forcing in 1996 2005 was dominated by CO 2 and Hansen 2005 found GHG had an efficacy of very close to one both in terms of F s which is very similar to ERF and using iRF 1 02 and 1 04 respectively That suggests scaling the actual F 2xCO2 iRF of 4 1 W m 2 by the ratio of Marvel et al s iRF and ERF values for GHG forcing which implies a 10 higher F 2xCO2 ERF of 4 52 W m 2 That value is also in line with F 2xCO2 of 4 53 W m 2 estimated from a Gregory plot regression over the 35 years following an abrupt quadrupling of CO 2 19 There were no material differences between the digitised and data values for Δ T so I used only the data values which were more precise Note that Marvel et al do not specify whether for ERF efficacy estimates ensemble means are taken before or after calculating quotients As only a single forcing value is given and ensmble means were taken before regressing in the iRF case I have assumed the former which also seems more appropriate 20 Lewis N Curry JA 2014 The implications for climate sensitivity of AR5 forcing and heat uptake estimates Clim Dyn DOI 10 1007 s00382 014 2342 y Non typeset version available here 21 Shindell D T et al 2013 Interactive ozone and methane chemistry in GISS E2 historical and future climate simulations Atmos Chem Phys 13 2653 2689 This study found that iRF ozone forcing from 1850 to 2000 was 0 28 W m 2 when the climate state was allowed to evolve in line with the Historical simulation and 0 22 W m 2 when a fixed present day climate was used and ERF was calculated as 0 22 W m 2 These values are substantially below those used in Marvel et al of 0 45 W m 2 iRF and 0 38 W m 2 ERF Substituting Shindell et al s values for Marvel et al s would raise the ozone iRF and ERF transient efficacies values to respectively 0 92 and 1 18 22 If one excludes LU run 1 no individual run for any forcing including Historical produces a 1950 2005 mean GMST response that differs by more than 0 031 C from the ensemble mean response for that forcing But for LU run 1 the difference is 0 134 C and would be 0 168 C were run 1 
excluded from the ensemble mean 23 Chapter 8 of AR5 referring to a seven model study states that There is no agreement on the sign of the temperature change induced by anthropogenic land use change and concludes that a net cooling of the surface accounting for processes that are not limited to the albedo is about as likely as not 24 Schmidt H et al 2012 Solar irradiance reduction to counteract radiative forcing from a quadrupling of CO2 climate responses simulated by four earth system models Earth Syst Dynam 3 63 78 25 The GISS E2 R increase in GHG ERF is 3 39 W m2 The 1850 2000 increase in GHG RF and ERF per AR5 Table AII 1 2 is 2 25 W m2 but I use the higher 1842 2000 increase of 2 30 W m2 since the 1850 CO 2 concentration in GISS ModelE2 was first reached in 1842 according to the AR5 data 26 I calculate TCR and ECS values as shown in the below table from the efficacies stated in Marvel et al s SI Table 1 digitising from their Figure 1 for GHG E 1 means assuming all efficacies are one Median estimates Shindell 2014 Lewis and Curry 2014 Otto et al 2013 E 1 iRF ERF E 1 iRF ERF E 1 iRF ERF TCR As stated in SI Table 3 1 4 2 0 1 9 1 3 1 6 1 7 1 3 1 8 1 8 From SI Table 1 GHG from Fig 1 1 98 1 58 1 92 1 60 1 92 1 69 ECS As stated in SI Table 3 2 1 4 0 3 6 1 5 2 0 2 3 2 0 2 9 3 4 From SI Table 1 GHG from Fig 1 3 88 3 48 2 77 2 73 3 90 3 78 27 Sokolov A P 2005 Does model sensitivity to changes in CO2 provide a measure of sensitivity to other forcings J Climate 19 3294 3305 28 Shindell DT 2014 Inhomogeneous forcing and transient climate sensitivity Nature Clim Chg DOI 10 1038 NCLIMATE2136 29 Ocko IB V Ramaswamy and Y Ming 2014 Contrasting climate responses to the scattering and absorbing features of anthropogenic aerosol forcings J Climate 27 5329 5345 30 Kummer J R and A E Dessler 2014 The impact of forcing efficacy on the equilibrium climate sensitivity GRL 10 1002 2014GL060046 Update Data and calculations are available here in Excel form Like this Like Loading Related This entry was written by niclewis posted on Jan 8 2016 at 4 42 PM filed under Uncategorized and tagged Climate sensitivity Efficacy Bookmark the permalink Follow any comments here with the RSS feed for this post Post a comment or leave a trackback Trackback URL Update of Model Observation Comparisons Bob Carter 83 Comments michael hart Posted Jan 8 2016 at 6 21 PM Permalink Reply The efficacy of a forcing is defined as its effect on GMST relative to that of the same amount of forcing by CO2 Notwithstanding those who like to count joules in the deep oceans if that definition is reasonable then what does it say about whether the feedbacks should have un equal efficacies In other words if forcings are not all equal then it seems reasonable to ask if feedbacks are not equal either Steve McIntyre Posted Jan 8 2016 at 6 24 PM Permalink Reply Nic thanks for this impressive discussion Michael Jankowski Posted Jan 8 2016 at 6 28 PM Permalink Reply Why did they stop in 2005 Is that the last year common year in Otto et al 2013 Lewis and Curry 2014 and Shindell 2014 ristvan Posted Jan 8 2016 at 8 18 PM Permalink Reply No it is not Cherrypick thomaswfuller2 Posted Jan 8 2016 at 7 46 PM Permalink Reply In your introduction if you change assert to contend it would make the beginning of your paper sound less a bit less charged What s remarkable about your piece here is the clarity of the English writing I was able to follow it all despite being a non scientist Thanks for the hard work My only other suggestion would be a quick section on how you 
would recommend Marvel et al proceed to improve their work niclewis Posted Jan 9 2016 at 6 29 AM Permalink Reply thomasfuller2 thanks for your comment There was no intention on my part to use a charged term I consider assert to be a more neutral term than contend See http the difference between com contend assert the difference between assert and contend is that assert is to declare with assurance or plainly and strongly to state positively while contend is to strive in opposition to contest to dispute to vie to quarrel to fight Marvel et al could withdraw their paper and submit a new one using more satisfactory methodology and providing more detail after performing a set of simulations that showed how the GISS model responded to each type of forcing as the climate state evolved during the historical period Preferably extended to 2012 to match the simulation results in Miller et al 2014 which is a much higher quality paper But I see very little chance of that happening There is in any case a question mark over how suitable a model GISS E2 is for this purpose As I indicate in the article GISS E2 seems to have amazingly high forcing from non CO2 long lived greenhouse gases methane nitrous oxide CFCs etc and a remarkably strong GMST response to them if the forcing from a doubling of CO2 in the model is as taken in Marvel at al Richard Drake Posted Jan 9 2016 at 6 52 AM Permalink Reply Couple of questions on the last para 1 Which GCM would in your view have been better How easy is that to even evaluate 2 How long elapsed would it have taken to run the various simulations for the different forcings on GISS E2 leading to the write up in Marvel et al On the standard GISS supercomputer under standard loading or whatever they would have had available I realise it may only be the authors who can give any idea of the second but it would be interesting to get a feel for how easy it would be for others to play around with this stuff I m still after six years struggling with what openness if climate software even means compared to areas with which I m much more familar niclewis Posted Jan 9 2016 at 10 48 AM Permalink RichardDrake 1 Probably best to use a number of unconnected AOGCMs from different groups Perhaps focussing on those from grous in western Europe and North America judging from the views of the modellers I have met 2 I suspect a fair while See my response to Alberto Zaragoza Comendador below sue Posted Jan 10 2016 at 2 45 AM Permalink Reply Nic GISS E2 seems to have amazingly high forcing from non CO2 long lived greenhouse gases methane Very interesting since Gavin discourages ppl from worrying about methane even got into a row w Wadhoms sp over it How different are their scenarios I assume very different niclewis Posted Jan 10 2016 at 10 03 AM Permalink Sue As I say in note 25 The GISS E2 R increase in GHG ERF is 3 39 W m2 The 1850 2000 increase in GHG RF and ERF per AR5 Table AII 1 2 is 2 25 W m2 but I use the higher 1842 2000 increase of 2 30 W m2 since the 1850 CO2 concentration in GISS ModelE2 was first reached in 1842 If one strips out the CO2 contributions of 1 38 W m2 for AR5 based on an F2xCO2 of 3 71 W m2 and of 1 53 W m2 for GISS E2 R based on an ERF F2xCO2 of 4 1 W m2 the the contribution of the other long lived GHG is 0 92 W m2 per AR5 and 1 86 W m2 for GISS E2 R That is methane nitrous oxide CFCs and minor GHGs add TWICE as much forcing in GISS E2 R as per the AR5 best estimate As I wrote it looks as if GISS E2 R radiative transfer computation in GISS E2 may be inaccurate 
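Nic's back-of-envelope comparison just above is easy to verify. The sketch below reproduces it, using the standard logarithmic CO2 forcing expression F = F2xCO2·ln(C/C0)/ln(2) to recover the quoted CO2 contributions of 1.38 and 1.53 W/m²; the 1850 and 2000 CO2 concentrations (roughly 285 and 369 ppm) are approximate values I have assumed, not figures given in the comment.

```python
# Check of the non-CO2 greenhouse-gas forcing comparison in the reply above.
# CO2 concentrations are my own approximate values (not from the post);
# the other figures are quoted directly from the comment / footnote 25.
from math import log

def co2_forcing(f2x, c, c0):
    """Standard logarithmic CO2 forcing: F = F2xCO2 * ln(C/C0) / ln(2)."""
    return f2x * log(c / c0) / log(2.0)

c0, c2000 = 285.0, 369.0                                   # ppm, assumed values

ghg_total = {"AR5 (1842-2000)": 2.30, "GISS-E2-R (ERF)": 3.39}   # W/m2, from the comment
f2x       = {"AR5 (1842-2000)": 3.71, "GISS-E2-R (ERF)": 4.10}   # W/m2 per CO2 doubling

for name in ghg_total:
    co2_part = co2_forcing(f2x[name], c2000, c0)
    other = ghg_total[name] - co2_part
    print(f"{name}: CO2 ~ {co2_part:.2f} W/m2, other long-lived GHGs ~ {other:.2f} W/m2")
# AR5 gives roughly 1.38 and 0.92 W/m2; GISS-E2-R roughly 1.53 and 1.86 W/m2,
# i.e. about twice as much non-CO2 LLGHG forcing, as stated in the comment.
```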
Although methane is classed as a long lived GHG its lifetime is only of the order of a decade so it presents much less of a long term problem than CO2 part of which is expected to remain in the atmosphere for 1000 years On the other hand as well as being a powerful GHG it is a source of tropospheric ozone and stratospheric water vapour both of which add to the basic forcing from methane Brandon Shollenberger Posted Jan 11 2016 at 2 29 PM Permalink niclewis Although methane is classed as a long lived GHG its lifetime is only of the order of a decade so it presents much less of a long term problem than CO2 part of which is expected to remain in the atmosphere for 1000 years On the other hand as well as being a powerful GHG it is a source of tropospheric ozone and stratospheric water vapour both of which add to the basic forcing from methane Another interesting feature of methane is when it breaks down it largely breaks down into C02 There is far less methane in the atmosphere than C02 but that effect may well have contributed a couple percent to the observed rise in C02 levels wkernkamp Posted Jan 22 2016 at 1 36 AM Permalink There is no reason to believe that excess CO2 will remain in the atmosphere very long Already only about half of the human produced CO2 in any given year as can be calculated from the atmospheric CO2 increase The other half is immediately removed by nature This indicates that a 33 increase in CO2 causes the natural removal processes to increase by this amount Therefore if we stopped all emissions we should cause CO2 to decline at about the same rate as it now is increasing This is so because the increased absorption persists until CO2 is lower At that rate it would not take thousands of years to remove all the CO2 from fossil fuels but less than one hundred years This timescale is also in agreement with the rapid decline of the C14 spike due to atmospheric nuclear explosions in the fifties mpainter Posted Jan 8 2016 at 9 02 PM Permalink Reply Nic This definition is reasonable CO2 is the dominant greenhouse gas Next to water vapor you must mean niclewis Posted Jan 9 2016 at 4 29 AM Permalink Reply GHG here means long lived greenhouse gases which excludes ozone as well as water vapour But I am afraid the definition appears after the term GHG has already been used Geoff Sherrington Posted Jan 8 2016 at 9 08 PM Permalink Reply Thank you Nic for yet another detailed study There is a matter arising from observations about relations between land temperature and local rainfall For example at several Australian weather stations studied in detail with statistics recorder local rainfall correlates with recorded temperatures quite significantly That is GHG are not the only driver of temperature changes as recorded Wetter is cooler Rainfall does not seem to sit within the 7 individual forcings you have studied it might pls correct if I am wrong Given that local rainfall statistically can account for 30 50 of the variation in local temperatures and given that simple physics help explain this I am left wondering where the effect of rainfall on local temperatures in inserted into sensitivity studies if indeed it needs to be As studies become more detailed it is likely that many odd questions of this type will emerge Another is from the Dec 2015 Schmidtusen GRL paper claiming a cooling over the Antarctic as atmospheric CO2 increases IR emissions to space do not come from the ground surface there because it is too cold so the use of land surface as a reference layer elsewhere might be 
compromised While models might gather up local effects like these they can be hard to track down Even if they are incorporated one wonders if the mathematics in the models are set to sum or integrate only positive values of sensitivities at defined locations not ECS or TCR as globally defined but locally I hope I am not wasting your time here There are bigger problems for us at home preventing some detailed digging niclewis Posted Jan 9 2016 at 5 55 AM Permalink Reply Thanks Geoff Forcings generally have in GCMs similar global effects even if they are concentrated in particular regions or differ between the hemispheres Figure 24 of the Hansen 2005 paper that I provided a link to shows this very well But variation in local feedbacks and hence in local climate sensitivity does seem to have more local effects GCMs do incorporate this although their simulations of feedbacks and their effects may not be correct The models don t distinguish between positive and negative local sensitivities In many GCMs sensitivity is negative in the deep tropics net outgoing radiation goes down when surface temperature increases because water vapour and cloud feedbacks are so strongly positive there That means there would be runaway warming there if heat from the deep tropics couldn t be exported to higher latitudes Maybe not hte sort of negative sensitivity you had in mind but it proves the point Models generally aren t very good at simulating changes in rainfall patterns to increasing GHG and resulting global warming But they do all agree that total rainfall will increase In fact the lower climate sensitivity is the faster must total precipitation increase with GMST or the atmosphere would heat up too much But where the extra rain falls is a different question it could almost all be over the oceans Bishop Hill Posted Jan 9 2016 at 6 25 AM Permalink Reply Does that mean that recent flooding in the UK is evidence for low climate sensitivity Richard Drake Posted Jan 9 2016 at 6 35 AM Permalink Got there before me Bish But this last paragraph plugged an important gap in my understanding thank you Nic There may be others AntonyIndia Posted Jan 9 2016 at 9 22 PM Permalink Reply I asked Gavin Schmidt s comment on your review on his co article on Realclimate and he answered Mostly confused but there are a couple of points worth following up on Should have the relevant sensitivity tests available next week gavin http www realclimate org index php archives 2016 01 marvel et al 2015 part 2 media responses comment 640742 sue Posted Jan 10 2016 at 2 49 AM Permalink 1 Looking forward to his follow up gymnosperm Posted Jan 8 2016 at 11 48 PM Permalink Reply 1 00 for CO2 forcing C mon Water is what 1 9 The feedbacks are entirely hypothetical The radiative forcing of CO2 is expressed as unity entirely ignoring its saturation While the trends of temperature and Co2 are mysteriously different the variability of CO2 is substantially captured by temperature even in the last 35 years FerdiEgb Posted Jan 9 2016 at 3 50 AM Permalink Reply Gymnosperm As intensively discussed here http wattsupwiththat com 2015 11 25 about spurious correlations and causation of the co2 increase 2 Most of the variability of the CO2 rate of change is caused by the influence of temperature variability Pinatubo El Niño on tropical variation That is proven by the opposite CO2 and δ13C changes Vegetation is not the cause of the trend in CO2 it is an increasing sink for CO2 at least since 1990 http www sciencemag org content 287 5462 2467 short and http www 
bowdoin edu mbattle papers posters and talks BenderGBC2005 pdf Variability and trend of CO2 have nothing in common they are driven by different processes opluso Posted Jan 9 2016 at 8 00 AM Permalink Reply It would seem that their methodology single forcing model runs would be most valuable in identifying areas for improvement in the GISS E2 R model I am at a loss to see how that methodology would be superior to estimating TCR ECS directly from observational data sets Perhaps the answer lies behind the Marvel et al paywall but did they calculate the relative contribution from each single forcing estimate to the ultimate increase in their respective TCR ECS estimates kribaez Posted Jan 9 2016 at 8 56 AM Permalink Reply Observational based studies must make some estimate of the forcing which gave rise to the observed temperature If a large forcing is assumed estimated then this implies a low climate sensitivity Conversely a low forcing giving rise to the same observed temperature gain implies a high climate sensitivity By definition TCR and ECS relate only to CO2 forcing It is known that in the models at least not all forcings produce identical temperature responses some higher than expected from a CO2 equivalent forcing and some lower than expected Marvel et al argue that by an accident of history the apparent summed forcings are higher than they would be if all of the forcings were expressed in terms of their equivalence to CO2 forcings By so doing they argue that the total forcings used as input into observational studies are too high relative to CO2 equivalence and hence climate sensitivities which again have to be CO2 specific are therefore biased low Hope this helps niclewis Posted Jan 9 2016 at 9 46 AM Permalink Reply opluso Their work certainly highlights some peculiarities in the GISS model Your question is a good one Marvel don t give any results for the relative contributions of diferent forcings to their increases in observational TCR ECS estimates I have worked them out for TCR using ERF forcings this is the only case for which their methodology doesn t need changing the true ERF F2xCO2 value is unknown but varying it would change all contributions in the same direction Their very high efficacy for land use is the biggest contributor closely followed by the slightly sub unity efficacy of GHG and then by the pretty low efficacy for ozone broadly half as important as LU Aerosols and solar have similarly small but opposing effects Volcanic should be small but I think they ve got the wrong VI forcing for the Otto and Shindell studies they made their own estimates of this forcing I believe kribaez Posted Jan 9 2016 at 8 25 AM Permalink Reply Nic Thank you for the detailed and thoughtful input to this problem Before making a comment on the results I would like to underline that outwith the gross methodological errors in Marvel et al there are two elements which I find bizarre Firstly to do efficacy comparisons meaningfully requires carrying at least 3sf accuracy through the calculations of derivative data One piece of fundamental input is the evolution of net flux or at a minimum an accurate estimate of the change in net flux over a pre specified period In this context Marvel s choice of using OHC data rather than making direct use of the available net flux data from the model runs seems absurd In observation based estimates of CS and feedbacks researchers are forced to use OHC data as a means of accessing net flux estimates over the longer term This requires some fairly coarse 
assumptions to be made including what percentage of any net flux imbalance is converted to sensible heat in the ocean as you point out Going from model calculated OHC back to net flux imbalance in the model with any accuracy is extremely difficult since as well as the natural net flux variation in the pre industrial control which is integrated in some guise into the GCM s energy accumulation and needs to be discounted there is also conversion of radiative input into sensible heat and latent heat conversion to momentum flux and distribution of sensible heat between land sea and atmosphere In addition there is energy leakage from the model climate system it is not fully conserved All of these elements are model specific I can quite honestly think of no excuse for the use of OHC data in this context when the net flux data should be available to the GISS researchers Its sole consequence in the efficacy calculations is the introduction of unnecessary error and uncertainty Secondly engineers would recognise an efficacy calculation as a benchmark calibration study A fundamental requirement for such a study is to have the benchmark measurements available Hansen 2005 recognised this and took great pains to measure the forcing data for the CO2 cases which form the benchmark against which all other responses are calibrated He provided estimates of Fi iRF Fa RF and Fs ERF across a range of concentrations of CO2 Commendable For Marvel et al on the other hand we have a statement in Miller saying However forcing associated with a doubling of CO2 is nearly identical between the CMIP3 and CMIP5 models Hansen et al 2005 Schmidt et al 2014a This is then contradicted by the iRF value cited in Marvel et al and the Fa values in Schmidt 2014 No reference is provided at all for ERF values This is a dog s dinner a benchmark study without benchmarks The above two elements strongly suggest to me that this did not start as an efficacy study My speculation is that it started as a study to show that by applying the same methods used in observation studies to the GISS ER 2 data you got the wrong answer for sensitivity They then found that you actually got very compatible answers if done with reasonable estimates of historical forcing and a sensible treatment of OHC The study then morphed into one which had to show that the historical forcing had an overall weighted average efficacy less than unity I can think of no other explanation for carrying out an efficacy study which uses OHC instead of net flux and which is based on a woefully inadequate definition of the benchmark data on the CO2 cases stevefitzpatrick Posted Jan 9 2016 at 9 33 AM Permalink Reply kribaez The paper is clearly an effort to discount the lower sensitivity estimates from empirical studies how the Marvel et al work evolved is speculative but the overall objective is obvious discount all low empirical estimates of TCR and ECS There have been several other papers from GISS where GCM behavior was used to discount lower empirical estimates of sensitivity one paper critical Stephen Schwartz s temperature autocorrelation based estimate of sensitivity immediately comes to mind The general class of paper can be described as you can t ever show the GCM projections are too high by using actual data In other fields efforts to discredit empirical data rather than improve a model would be laughed at but is oddly enough taken very seriously in climate science I will go out on a limb and predict GISS will produce similar critiques of other empirical estimates in the 
future kribaez Posted Jan 9 2016 at 9 55 AM Permalink Reply Yes the rebuttal to Schwartz is a very pertinent analogue In that instance the GISS team argued that the Schwartz method could not be sound because when applied to GISS data it gave the wrong answer for climate sensitivity The reality was that it actually gave the correct answer for GISS climate sensitivity over the temperature interval tested The error in the rebuttal was the failure to recognise the difference between the effective equilibrium temperature and the model reported ECS Because of the curvature in the net flux vs temperature relationship for a step forcing which GISS exhibits like most GCMs the latter is not tested The identical error among others is being made in the Marvel et al study JamesG Posted Jan 12 2016 at 7 33 AM Permalink Reply And of course Giss have the unique advantage of adjusting their own empirical data to fit what their model predicts For a while the satellite data was a minor constraint on doing that but since Best sic avoided reconciling satellite data with a sideways swipe it seems Giss felt ok to follow suit The next step will be twisting Carl Mears arm to apply an upwards adjustment to RSS and leave UAH as the lone outlier run by easily dismissible skeptics It all plays like a handbook of how to distort research in support of a predetermined agenda niclewis Posted Jan 9 2016 at 10 06 AM Permalink Reply kribaez Thank you for your comment I compltely agree with you about the use of OHC rather than TOA radiative imbalance data and the lack of benchmark values for the forcing from a doubling of CO2 Using the OHC slope rather than TOA radiative imbalance N seems bizarre and scientifically indefensible It does of course produce biased low estimates of the model ECS from historical period forcings or indeed from any type of forcing Schmidt 2014 states GISS E2 R has a stratospherically adjusted Fa F2xCO2 value of 4 1 W m2 which is in line with the 4 08 4 12 W m2 for Fa in GISS E per Hansen 2005 But Hansen gives the iRF Fi value as 4 52 W m2 whereas Marvel uses 4 1 W m2 stevefitzpatrick Posted Jan 9 2016 at 9 04 AM Permalink Reply Nic Thanks for this clearly written post Two questions 1 Since you were a coauthor of two of the three empirical estimate papers which Marvel et al claim to be inaccurate it seems to me that the journal editor should have considered you as a reviewer Were you asked to review the paper 2 Are you and or others considering submitting a comment on Marvel et al to the journal niclewis Posted Jan 9 2016 at 10 30 AM Permalink Reply stevefitzpatrick Thanks I answer to your questions 1 No I wasn t 2 I shall reserve my position on that but I am aware that journals often seek to avoid publishing comments I suspect Nature CC may be worse than most in this regard Comments also have very tight length restrictions climategrog Posted Jan 11 2016 at 3 29 AM Permalink Reply IFAIK Nature not Nature CC has something very restrictive like 500 word limit and a 6mo shut out mpainter Posted Jan 11 2016 at 11 31 AM Permalink 500 words No problem give an abstract on each point and links to Climate Audit Alberto Zaragoza Comendador Posted Jan 9 2016 at 9 46 AM Permalink Reply Marvel et al say near the end that the historicalMisc archive is sparse and these experiments were a low priority in CMIP5 so very few groups performed comparable calculations of radiative forcings associated with each forcing agent Yeah I m using the free version and cannot copy paste But their point is clear replication will be 
difficult. The thing is, at least one paper mentioned by Nic (Ocko et al 2014, reference 29) had done the same kind of experiments and arrived at different conclusions. But Marvel et al don't cite Ocko, either in the paper itself or in the SI. My question to Nic would be: are these problems with historicalMisc (whatever that may be) real? Or is the lack of single-forcing experiments more due to plain lack of interest from researchers? Non-paywalled version here: http www nature com articles
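Since the argument running through this thread is about how forcing efficacies feed into observational energy-budget estimates, it may help to set the arithmetic out explicitly. The sketch below implements the standard energy-budget relations used in studies such as Otto et al. and Lewis & Curry (TCR ≈ F2xCO2·ΔT/ΔF, ECS ≈ F2xCO2·ΔT/(ΔF − ΔN)) and shows how expressing each forcing as its CO2 equivalent (forcing times efficacy), as kribaez describes above, changes the totals. All the numbers are illustrative placeholders, loosely echoing the magnitudes quoted in the post's footnote 16 but not taken from Marvel et al. or any of the cited studies.

```python
# Minimal sketch of the energy-budget TCR/ECS relations discussed in the post,
# and of how a set of forcing efficacies would rescale them. All numbers below
# are illustrative placeholders, not the values used in any of the cited studies.

def tcr(f2x, dT, dF):
    """Transient climate response: TCR = F2xCO2 * dT / dF."""
    return f2x * dT / dF

def ecs(f2x, dT, dF, dN):
    """Effective/equilibrium sensitivity: ECS = F2xCO2 * dT / (dF - dN)."""
    return f2x * dT / (dF - dN)

F2X = 3.71      # W/m2 per CO2 doubling (AR5-style value)
dT = 0.75       # K, illustrative change in GMST
dN = 0.50       # W/m2, illustrative planetary heat uptake

# Illustrative forcing breakdown (W/m2) and made-up efficacies (dimensionless):
forcings   = {"GHG": 2.8, "aerosol": -0.9, "ozone": 0.35, "land use": -0.15, "solar": 0.05}
efficacies = {"GHG": 1.0, "aerosol": 1.0,  "ozone": 0.6,  "land use": 4.0,   "solar": 1.5}

dF_raw = sum(forcings.values())
# Efficacy-adjusted forcing: each component expressed as its CO2 equivalent.
dF_adj = sum(f * efficacies[k] for k, f in forcings.items())

print(f"raw      : dF={dF_raw:.2f}  TCR={tcr(F2X, dT, dF_raw):.2f}  ECS={ecs(F2X, dT, dF_raw, dN):.2f}")
print(f"adjusted : dF={dF_adj:.2f}  TCR={tcr(F2X, dT, dF_adj):.2f}  ECS={ecs(F2X, dT, dF_adj, dN):.2f}")
```

The direction of the effect is the point of contention: a strongly negative forcing assigned a large efficacy (land use in Marvel et al.'s iRF case) or a positive forcing assigned a low efficacy (ozone) shrinks the efficacy-weighted forcing total and so pushes the inferred TCR and ECS upward, which is why the treatment of the anomalous LU run matters so much to the paper's conclusions.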

    Original URL path: http://climateaudit.org/2016/01/08/appraising-marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates/?replytocom=765914 (2016-02-08)
    Open archived version from archive

  • Appraising Marvel et al.: Implications of forcing efficacies for climate sensitivity estimates « Climate Audit
    further adrift 1 Kate Marvel Gavin A Schmidt Ron L Miller and Larissa S Nazarenko et al Implications for climate sensitivity from the response to individual forcings Nature Climate Change DOI 10 1038 NCLIMATE2888 The paper is pay walled but the Supplementary Information SI is not 2 The Historical simulations have an average temperature anomaly of 0 84 C for 1996 2005 relative to 1850 whereas HadCRUT4v4 shows an increase of 0 73 C from 1850 1859 to 1996 2005 and Figure 7 of Miller et al 2014 shows consistently greater warming for GISS E2 R than per GISTEMP since 2000 The same simulations show average ocean heat uptake of 0 84 W m 2 over 1996 2005 mean slope estimate compared to 0 40 W m 2 using AR5 Box 3 1 Figure 1 data or 0 67 W m 2 using NOAA Levitus et al 2012 data 3 Hansen J et al 2005 Efficacy of climate forcings J Geophys Res 110 D18104 doi 101029 2005JD005776 4 Chapter 8 of AR5 is available here 5 See Section 10 8 1 in Chapter 10 of AR5 for a discussion of the use of these equations in estimating TCR and ECS 6 Miller R L et al CMIP5 historical simulations 1850 2012 with GISS ModelE2 J Adv Model Earth Syst 6 441 477 2014 7 Or with climate state but feedbacks vary little with climate state within limits in most GCMs 8 I estimate GISS E2 R s effective climate sensitivity applicable to the historical period as 1 9 C and its ERF F 2xCO2 as 4 5 Wm 2 implying a climate feedback parameter of 2 37 Wm 2 K 1 based on a standard Gregory plot regression of Δ F Δ N on Δ T for 35 years following an abrupt quadrupling of CO 2 concentration The efficacy weighted mean period from the imposition of incremental forcing to the end of the historical period is of this order I also estimate the model s effective climate sensitivity as 2 0 C from regressing the same variables over the first 100 years of its 1 p a CO 2 increase simulation this estimate is little affected by F 2xCO2 value 9 Miller et al 2014 noted a 15 increase in GHG forcing in GISS ModelE2 compared to the CMIP3 version ModelE despite their forcing RF for a doubling of CO 2 being nearly identical but were unable to identify the cause 10 The 0 86 divisor comes from the coefficient on the integral of TOA imbalance anomaly Δ N when regressing the ocean heat content OHC anomaly against both that integral and time thus isolating any fixed offset between Δ Q and Δ N that may exist 11 The 1996 2005 Δ T for the sum of the six single forcing cases is 0 76 C compared to 0 84 C for Historical all forcings For iRF the corresponding Δ F values from the archived data are 2 53 W m 2 and 2 75 W m 2 However the values plotted are 2 74 W m 2 and 3 05 W m 2 respectively For ERF the sum of single forcings and the Historical forcing Δ F values from the data are respectively 2 99 W m 2 and 2 84 W m 2 but the values plotted in Figure 1c are 3 03 W m 2 and 2 93 W m 2 12 Otto et al used regression based estimates of ERF in multiple CMIP5 models Lewis and Curry used estimates from Table AII 1 2 of AR5 which are stated to be ERFs but in most cases aerosol forcing being the most notable exception assessed to be the same as their RFs 13 The AR5 Glossary Annex III states The traditional radiative forcing is computed with all tropospheric properties held fixed at their unperturbed values and after allowing for stratospheric temperatures if perturbed to readjust to radiative dynamical equilibrium Radiative forcing is called instantaneous if no change in stratospheric temperature is accounted for And early in Chapter 8 it says RF is hereafter taken to mean the 
stratospherically adjusted RF 14 However Hansen 2005 found that only in the cases of aerosol and BCsnow forcing was there a major difference between RF and ERF AR5 after surveying a wider range of evidence reached similar conclusions and accordingly in other cases estimated ERF to be the same as RF with an implied efficacy estimate of one but gave wider ranges for ERF to allow for uncertainty in the relationship between ERF and RF 15 AR5 states Section 7 5 1 of Chapter 7 it is inherently difficult to separate RFaci from subsequent rapid cloud adjustments either in observations or model calculations For this reason estimates of RFaci are of limited interest and are not assessed in this report 16 Transient efficacy estimates using iRF based respectively on unconstrained decadal regression from 1906 2015 to 1996 2005 as in Marvel et al changes from 1850 to 1996 2005 and zero intercept regression are LU 3 89 1 64 1 03 Oz 0 60 0 57 0 70 SI 1 53 1 68 1 82 and VI 0 56 26 45 0 31 In principle using changes is preferable to zero intercept regression for transient estimation because of the cold start issue but its superior noise suppression leads to more consistent estimation from zero intercept regression when forcing is small 17 Schmidt G A et al 2014 Configuration and assessment of the GISS ModelE2 contributions to the CMIP5 archive J Adv Model Earth Syst 6 141 184 doi 10 1002 2013MS000265 18 The GHG forcing in 1996 2005 is 10 higher in ERF than in iRF terms GHG forcing in 1996 2005 was dominated by CO 2 and Hansen 2005 found GHG had an efficacy of very close to one both in terms of F s which is very similar to ERF and using iRF 1 02 and 1 04 respectively That suggests scaling the actual F 2xCO2 iRF of 4 1 W m 2 by the ratio of Marvel et al s iRF and ERF values for GHG forcing which implies a 10 higher F 2xCO2 ERF of 4 52 W m 2 That value is also in line with F 2xCO2 of 4 53 W m 2 estimated from a Gregory plot regression over the 35 years following an abrupt quadrupling of CO 2 19 There were no material differences between the digitised and data values for Δ T so I used only the data values which were more precise Note that Marvel et al do not specify whether for ERF efficacy estimates ensemble means are taken before or after calculating quotients As only a single forcing value is given and ensmble means were taken before regressing in the iRF case I have assumed the former which also seems more appropriate 20 Lewis N Curry JA 2014 The implications for climate sensitivity of AR5 forcing and heat uptake estimates Clim Dyn DOI 10 1007 s00382 014 2342 y Non typeset version available here 21 Shindell D T et al 2013 Interactive ozone and methane chemistry in GISS E2 historical and future climate simulations Atmos Chem Phys 13 2653 2689 This study found that iRF ozone forcing from 1850 to 2000 was 0 28 W m 2 when the climate state was allowed to evolve in line with the Historical simulation and 0 22 W m 2 when a fixed present day climate was used and ERF was calculated as 0 22 W m 2 These values are substantially below those used in Marvel et al of 0 45 W m 2 iRF and 0 38 W m 2 ERF Substituting Shindell et al s values for Marvel et al s would raise the ozone iRF and ERF transient efficacies values to respectively 0 92 and 1 18 22 If one excludes LU run 1 no individual run for any forcing including Historical produces a 1950 2005 mean GMST response that differs by more than 0 031 C from the ensemble mean response for that forcing But for LU run 1 the difference is 0 134 C and would be 0 168 C were run 1 
Update: Data and calculations are available here in Excel form.

This entry was written by niclewis, posted on Jan 8, 2016 at 4:42 PM, filed under Uncategorized and tagged Climate sensitivity, Efficacy.

83 Comments

michael hart | Posted Jan 8, 2016 at 6:21 PM
"The efficacy of a forcing is defined as its effect on GMST relative to that of the same amount of forcing by CO2." Notwithstanding those who like to count joules in the deep oceans, if that definition is reasonable, then what does it say about whether the feedbacks should have (un)equal efficacies? In other words, if forcings are not all equal, then it seems reasonable to ask if feedbacks are not equal either.

Steve McIntyre | Posted Jan 8, 2016 at 6:24 PM
Nic, thanks for this impressive discussion.

Michael Jankowski | Posted Jan 8, 2016 at 6:28 PM
Why did they stop in 2005? Is that the last common year in Otto et al 2013, Lewis and Curry 2014 and Shindell 2014?

ristvan | Posted Jan 8, 2016 at 8:18 PM
No, it is not. Cherrypick.

thomaswfuller2 | Posted Jan 8, 2016 at 7:46 PM
In your introduction, if you change "assert" to "contend" it would make the beginning of your paper sound a bit less charged. What's remarkable about your piece here is the clarity of the English writing; I was able to follow it all despite being a non-scientist. Thanks for the hard work. My only other suggestion would be a quick section on how you
would recommend Marvel et al. proceed to improve their work.

niclewis | Posted Jan 9, 2016 at 6:29 AM
thomasfuller2, thanks for your comment. There was no intention on my part to use a charged term; I consider "assert" to be a more neutral term than "contend". See http://the-difference-between.com/contend/assert : "the difference between assert and contend is that assert is to declare with assurance or plainly and strongly, to state positively, while contend is to strive in opposition, to contest, to dispute, to vie, to quarrel, to fight."
Marvel et al. could withdraw their paper and submit a new one using more satisfactory methodology and providing more detail, after performing a set of simulations that showed how the GISS model responded to each type of forcing as the climate state evolved during the historical period, preferably extended to 2012 to match the simulation results in Miller et al. 2014, which is a much higher quality paper. But I see very little chance of that happening.
There is in any case a question mark over how suitable a model GISS-E2 is for this purpose. As I indicate in the article, GISS-E2 seems to have amazingly high forcing from non-CO2 long-lived greenhouse gases (methane, nitrous oxide, CFCs, etc.) and a remarkably strong GMST response to them, if the forcing from a doubling of CO2 in the model is as taken in Marvel et al.

Richard Drake | Posted Jan 9, 2016 at 6:52 AM
Couple of questions on the last para:
1. Which GCM would in your view have been better? How easy is that to even evaluate?
2. How long elapsed would it have taken to run the various simulations for the different forcings on GISS-E2, leading to the write-up in Marvel et al.? On the standard GISS supercomputer, under standard loading, or whatever they would have had available.
I realise it may only be the authors who can give any idea of the second, but it would be interesting to get a feel for how easy it would be for others to play around with this stuff. I'm still, after six years, struggling with what openness of climate software even means compared to areas with which I'm much more familiar.

niclewis | Posted Jan 9, 2016 at 10:48 AM
RichardDrake,
1. Probably best to use a number of unconnected AOGCMs from different groups, perhaps focussing on those from groups in western Europe and North America, judging from the views of the modellers I have met.
2. I suspect a fair while. See my response to Alberto Zaragoza Comendador below.

sue | Posted Jan 10, 2016 at 2:45 AM
Nic, "GISS E2 seems to have amazingly high forcing from non CO2 long lived greenhouse gases methane..." Very interesting, since Gavin discourages people from worrying about methane; he even got into a row with Wadhams (sp?) over it. How different are their scenarios? I assume very different.

niclewis | Posted Jan 10, 2016 at 10:03 AM
Sue, as I say in note 25, the GISS-E2-R increase in GHG ERF is 3.39 W/m2. The 1850–2000 increase in GHG RF and ERF per AR5 Table AII.1.2 is 2.25 W/m2, but I use the higher 1842–2000 increase of 2.30 W/m2, since the 1850 CO2 concentration in GISS ModelE2 was first reached in 1842. If one strips out the CO2 contributions (1.38 W/m2 for AR5, based on an F2xCO2 of 3.71 W/m2, and 1.53 W/m2 for GISS-E2-R, based on an ERF F2xCO2 of 4.1 W/m2), the contribution of the other long-lived GHG is 0.92 W/m2 per AR5 and 1.86 W/m2 for GISS-E2-R. That is, methane, nitrous oxide, CFCs and minor GHGs add TWICE as much forcing in GISS-E2-R as per the AR5 best estimate. As I wrote, it looks as if the radiative transfer computation in GISS-E2 may be inaccurate.
Although methane is classed as a long-lived GHG, its lifetime is only of the order of a decade, so it presents much less of a long-term problem than CO2, part of which is expected to remain in the atmosphere for 1000 years. On the other hand, as well as being a powerful GHG, it is a source of tropospheric ozone and stratospheric water vapour, both of which add to the basic forcing from methane.
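The subtraction in the forcing comparison above is easy to verify. The R lines below simply transcribe the figures quoted there; the final line is an inference (not stated explicitly in the reply) that the GISS CO2 contribution of 1.53 W/m2 corresponds to the AR5 value rescaled by the ratio of the two F2xCO2 values.

# Forcing figures (W/m2) as quoted above: 1850 (1842)-2000 GHG increase.
ghg_total <- c(AR5 = 2.30, GISS_E2_R = 3.39)   # total long-lived GHG forcing increase
co2_only  <- c(AR5 = 1.38, GISS_E2_R = 1.53)   # CO2 contribution
non_co2   <- ghg_total - co2_only              # methane, N2O, CFCs and minor GHGs
non_co2                                        # AR5 ~0.92, GISS-E2-R ~1.86
unname(non_co2["GISS_E2_R"] / non_co2["AR5"])  # roughly a factor of two

# Inferred consistency check: 1.38 W/m2 rescaled from F2xCO2 = 3.71 to 4.1 W/m2
1.38 * 4.1 / 3.71                              # ~1.53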
Brandon Shollenberger | Posted Jan 11, 2016 at 2:29 PM
niclewis: "Although methane is classed as a long-lived GHG, its lifetime is only of the order of a decade, so it presents much less of a long-term problem than CO2, part of which is expected to remain in the atmosphere for 1000 years. On the other hand, as well as being a powerful GHG, it is a source of tropospheric ozone and stratospheric water vapour, both of which add to the basic forcing from methane."
Another interesting feature of methane is that when it breaks down, it largely breaks down into CO2. There is far less methane in the atmosphere than CO2, but that effect may well have contributed a couple of percent to the observed rise in CO2 levels.

wkernkamp | Posted Jan 22, 2016 at 1:36 AM
There is no reason to believe that excess CO2 will remain in the atmosphere very long. Already, only about half of the human-produced CO2 remains in any given year, as can be calculated from the atmospheric CO2 increase; the other half is immediately removed by nature. This indicates that a 33% increase in CO2 causes the natural removal processes to increase by this amount. Therefore, if we stopped all emissions, we should cause CO2 to decline at about the same rate as it now is increasing. This is so because the increased absorption persists until CO2 is lower. At that rate it would not take thousands of years to remove all the CO2 from fossil fuels, but less than one hundred years. This timescale is also in agreement with the rapid decline of the C14 spike due to atmospheric nuclear explosions in the fifties.

mpainter | Posted Jan 8, 2016 at 9:02 PM
Nic, "This definition is reasonable. CO2 is the dominant greenhouse gas." Next to water vapor, you must mean.

niclewis | Posted Jan 9, 2016 at 4:29 AM
GHG here means long-lived greenhouse gases, which excludes ozone as well as water vapour. But I am afraid the definition appears after the term GHG has already been used.

Geoff Sherrington | Posted Jan 8, 2016 at 9:08 PM
Thank you, Nic, for yet another detailed study.
There is a matter arising from observations about relations between land temperature and local rainfall. For example, at several Australian weather stations studied in detail with statistics, recorded local rainfall correlates with recorded temperatures quite significantly. That is, GHG are not the only driver of temperature changes as recorded: wetter is cooler. Rainfall does not seem to sit within the 7 individual forcings you have studied (it might; please correct me if I am wrong). Given that local rainfall statistically can account for 30–50% of the variation in local temperatures, and given that simple physics helps explain this, I am left wondering where the effect of rainfall on local temperatures is inserted into sensitivity studies, if indeed it needs to be. As studies become more detailed, it is likely that many odd questions of this type will emerge. Another is from the Dec 2015 Schmithüsen GRL paper claiming a cooling over the Antarctic as atmospheric CO2 increases: IR emissions to space do not come from the ground surface there because it is too cold, so the use of the land surface as a reference layer elsewhere might be
compromised. While models might gather up local effects like these, they can be hard to track down. Even if they are incorporated, one wonders if the mathematics in the models are set to sum or integrate only positive values of sensitivities at defined locations, not ECS or TCR as globally defined, but locally. I hope I am not wasting your time here; there are bigger problems for us at home preventing some detailed digging.

niclewis | Posted Jan 9, 2016 at 5:55 AM
Thanks, Geoff. Forcings generally have similar global effects in GCMs, even if they are concentrated in particular regions or differ between the hemispheres; Figure 24 of the Hansen 2005 paper that I provided a link to shows this very well. But variation in local feedbacks, and hence in local climate sensitivity, does seem to have more local effects. GCMs do incorporate this, although their simulations of feedbacks and their effects may not be correct. The models don't distinguish between positive and negative local sensitivities. In many GCMs sensitivity is negative in the deep tropics (net outgoing radiation goes down when surface temperature increases) because water vapour and cloud feedbacks are so strongly positive there. That means there would be runaway warming there if heat from the deep tropics couldn't be exported to higher latitudes. Maybe not the sort of negative sensitivity you had in mind, but it proves the point.
Models generally aren't very good at simulating changes in rainfall patterns in response to increasing GHG and the resulting global warming. But they do all agree that total rainfall will increase. In fact, the lower climate sensitivity is, the faster total precipitation must increase with GMST, or the atmosphere would heat up too much. But where the extra rain falls is a different question; it could almost all be over the oceans.

Bishop Hill | Posted Jan 9, 2016 at 6:25 AM
Does that mean that recent flooding in the UK is evidence for low climate sensitivity?

Richard Drake | Posted Jan 9, 2016 at 6:35 AM
Got there before me, Bish. But this last paragraph plugged an important gap in my understanding, thank you Nic. There may be others.

AntonyIndia | Posted Jan 9, 2016 at 9:22 PM
I asked for Gavin Schmidt's comment on your review of his co-authored article at RealClimate, and he answered: "Mostly confused, but there are a couple of points worth following up on. Should have the relevant sensitivity tests available next week. – gavin" http://www.realclimate.org/index.php/archives/2016/01/marvel-et-al-2015-part-2-media-responses/#comment-640742

sue | Posted Jan 10, 2016 at 2:49 AM
+1. Looking forward to his follow-up.

gymnosperm | Posted Jan 8, 2016 at 11:48 PM
1.00 for CO2 forcing? C'mon. Water is what, 1.9? The feedbacks are entirely hypothetical. The radiative forcing of CO2 is expressed as unity, entirely ignoring its saturation. While the trends of temperature and CO2 are mysteriously different, the variability of CO2 is substantially captured by temperature, even in the last 35 years.

FerdiEgb | Posted Jan 9, 2016 at 3:50 AM
Gymnosperm, as intensively discussed here: http://wattsupwiththat.com/2015/11/25/about-spurious-correlations-and-causation-of-the-co2-increase-2/
Most of the variability of the CO2 rate of change is caused by the influence of temperature variability (Pinatubo, El Niño) on tropical variation. That is proven by the opposite CO2 and δ13C changes. Vegetation is not the cause of the trend in CO2; it is an increasing sink for CO2, at least since 1990: http://www.sciencemag.org/content/287/5462/2467.short and
http://www.bowdoin.edu/~mbattle/papers_posters_and_talks/BenderGBC2005.pdf
Variability and trend of CO2 have nothing in common; they are driven by different processes.

opluso | Posted Jan 9, 2016 at 8:00 AM
It would seem that their methodology (single-forcing model runs) would be most valuable in identifying areas for improvement in the GISS-E2-R model. I am at a loss to see how that methodology would be superior to estimating TCR/ECS directly from observational data sets. Perhaps the answer lies behind the Marvel et al. paywall, but did they calculate the relative contribution from each single-forcing estimate to the ultimate increase in their respective TCR/ECS estimates?

kribaez | Posted Jan 9, 2016 at 8:56 AM
Observational-based studies must make some estimate of the forcing which gave rise to the observed temperature. If a large forcing is assumed (estimated), then this implies a low climate sensitivity. Conversely, a low forcing giving rise to the same observed temperature gain implies a high climate sensitivity. By definition, TCR and ECS relate only to CO2 forcing. It is known that, in the models at least, not all forcings produce identical temperature responses, some higher than expected from a CO2-equivalent forcing and some lower than expected. Marvel et al. argue that, by an accident of history, the apparent summed forcings are higher than they would be if all of the forcings were expressed in terms of their equivalence to CO2 forcings. By so doing, they argue that the total forcings used as input into observational studies are too high relative to CO2 equivalence, and hence climate sensitivities, which again have to be CO2-specific, are therefore biased low. Hope this helps.
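The forcing-to-sensitivity trade-off described just above can be written down explicitly using energy-budget relations of the form used in Otto et al. 2013 and Lewis and Curry 2014. The R sketch below uses round placeholder numbers, not values from any of those studies; the efficacy adjustment at the end simply down-weights part of the forcing, to show the direction of the effect Marvel et al. argue for.

# Energy-budget estimators: TCR = F2x * dT / dF ; ECS = F2x * dT / (dF - dN).
# All numbers are round placeholders, not values from Otto et al. or Lewis & Curry.
F2x <- 3.7    # W/m2, forcing from a doubling of CO2
dT  <- 0.8    # K, change in GMST
dF  <- 2.0    # W/m2, change in total forcing
dN  <- 0.5    # W/m2, change in planetary heat uptake (TOA imbalance)

c(TCR = F2x * dT / dF, ECS = F2x * dT / (dF - dN))

# If, say, 0.4 W/m2 of that forcing is given an efficacy of 0.6, the effective
# forcing falls and both estimates rise: a lower assumed forcing implies a
# higher inferred sensitivity, and vice versa.
dF_eff <- dF - 0.4 * (1 - 0.6)
c(TCR = F2x * dT / dF_eff, ECS = F2x * dT / (dF_eff - dN))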
niclewis | Posted Jan 9, 2016 at 9:46 AM
opluso, their work certainly highlights some peculiarities in the GISS model. Your question is a good one. Marvel et al. don't give any results for the relative contributions of different forcings to their increases in observational TCR/ECS estimates. I have worked them out for TCR using ERF forcings; this is the only case for which their methodology doesn't need changing (the true ERF F2xCO2 value is unknown, but varying it would change all contributions in the same direction). Their very high efficacy for land use is the biggest contributor, closely followed by the slightly sub-unity efficacy of GHG, and then by the pretty low efficacy for ozone, broadly half as important as LU. Aerosols and solar have similarly small but opposing effects. Volcanic should be small, but I think they've got the wrong VI forcing for the Otto and Shindell studies; they made their own estimates of this forcing, I believe.

kribaez | Posted Jan 9, 2016 at 8:25 AM
Nic, thank you for the detailed and thoughtful input to this problem. Before making a comment on the results, I would like to underline that, outwith the gross methodological errors in Marvel et al., there are two elements which I find bizarre.
Firstly, to do efficacy comparisons meaningfully requires carrying at least 3 s.f. accuracy through the calculations of derivative data. One piece of fundamental input is the evolution of net flux, or at a minimum an accurate estimate of the change in net flux over a pre-specified period. In this context, Marvel's choice of using OHC data, rather than making direct use of the available net flux data from the model runs, seems absurd. In observation-based estimates of CS and feedbacks, researchers are forced to use OHC data as a means of accessing net flux estimates over the longer term. This requires some fairly coarse assumptions to be made, including what percentage of any net flux imbalance is converted to sensible heat in the ocean, as you point out. Going from model-calculated OHC back to the net flux imbalance in the model with any accuracy is extremely difficult since, as well as the natural net flux variation in the pre-industrial control (which is integrated in some guise into the GCM's energy accumulation and needs to be discounted), there is also conversion of radiative input into sensible heat and latent heat, conversion to momentum flux, and distribution of sensible heat between land, sea and atmosphere. In addition, there is energy leakage from the model climate system; it is not fully conserved. All of these elements are model-specific. I can quite honestly think of no excuse for the use of OHC data in this context when the net flux data should be available to the GISS researchers. Its sole consequence in the efficacy calculations is the introduction of unnecessary error and uncertainty.
Secondly, engineers would recognise an efficacy calculation as a benchmark (calibration) study. A fundamental requirement for such a study is to have the benchmark measurements available. Hansen 2005 recognised this and took great pains to measure the forcing data for the CO2 cases which form the benchmark against which all other responses are calibrated. He provided estimates of Fi (iRF), Fa (RF) and Fs (ERF) across a range of concentrations of CO2. Commendable. For Marvel et al., on the other hand, we have a statement in Miller saying "However, forcing associated with a doubling of CO2 is nearly identical between the CMIP3 and CMIP5 models (Hansen et al 2005; Schmidt et al 2014a)". This is then contradicted by the iRF value cited in Marvel et al. and the Fa values in Schmidt 2014. No reference is provided at all for ERF values. This is a dog's dinner: a benchmark study without benchmarks.
The above two elements strongly suggest to me that this did not start as an efficacy study. My speculation is that it started as a study to show that, by applying the same methods used in observation studies to the GISS-E2-R data, you got the wrong answer for sensitivity. They then found that you actually got very compatible answers if done with reasonable estimates of historical forcing and a sensible treatment of OHC. The study then morphed into one which had to show that the historical forcing had an overall weighted-average efficacy less than unity. I can think of no other explanation for carrying out an efficacy study which uses OHC instead of net flux and which is based on a woefully inadequate definition of the benchmark data on the CO2 cases.
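The first point above, about using OHC data in place of the model's top-of-atmosphere net flux, can be illustrated with the same energy-budget form as in the earlier sketch. The ocean takes up only part of the total imbalance (a share of around 90 percent on multi-decadal averages is often quoted, though the exact fraction is model-specific and is an assumption here), so an OHC-derived uptake rate understates N and pushes the ECS estimate down.

# Toy illustration of the low bias from using an OHC slope instead of the TOA
# imbalance N in an energy-budget ECS estimate. Placeholder numbers throughout.
F2x <- 3.7; dT <- 0.8; dF <- 2.0     # as in the sketch above
N          <- 0.6                    # W/m2, change in TOA imbalance
ocean_frac <- 0.9                    # assumed share of the imbalance stored in the ocean
dOHC_rate  <- ocean_frac * N         # what an OHC slope alone would capture

c(ECS_using_N         = F2x * dT / (dF - N),
  ECS_using_OHC_slope = F2x * dT / (dF - dOHC_rate))   # the second is lower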
stevefitzpatrick | Posted Jan 9, 2016 at 9:33 AM
kribaez, the paper is clearly an effort to discount the lower sensitivity estimates from empirical studies; how the Marvel et al. work evolved is speculative, but the overall objective is obvious: discount all low empirical estimates of TCR and ECS. There have been several other papers from GISS where GCM behavior was used to discount lower empirical estimates of sensitivity; one paper critical of Stephen Schwartz's temperature-autocorrelation-based estimate of sensitivity immediately comes to mind. The general class of paper can be described as "you can't ever show the GCM projections are too high by using actual data". In other fields, efforts to discredit empirical data rather than improve a model would be laughed at, but it is, oddly enough, taken very seriously in climate science. I will go out on a limb and predict GISS will produce similar critiques of other empirical estimates in the future.

kribaez | Posted Jan 9, 2016 at 9:55 AM
Yes, the rebuttal to Schwartz is a very pertinent analogue. In that instance, the GISS team argued that the Schwartz method could not be sound because, when applied to GISS data, it gave the wrong answer for climate sensitivity. The reality was that it actually gave the correct answer for GISS climate sensitivity over the temperature interval tested. The error in the rebuttal was the failure to recognise the difference between the effective equilibrium temperature and the model-reported ECS. Because of the curvature in the net flux vs temperature relationship for a step forcing, which GISS exhibits like most GCMs, the latter is not tested. The identical error, among others, is being made in the Marvel et al. study.
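The curvature point above can be reproduced with a toy two-box model that includes an ocean heat-uptake efficacy factor greater than one, a common way of generating such curvature; all parameter values below are arbitrary placeholders, not GISS values. Regressing the first few decades of the step response, as in a Gregory plot, then yields an "effective" equilibrium warming smaller than the model's true equilibrium response.

# Toy two-box step-forcing experiment with heat-uptake efficacy eps > 1, purely
# to illustrate why an early-years Gregory regression understates the equilibrium
# response when the N-vs-T relation is curved.
F0  <- 3.7      # W/m2, step forcing
lam <- 1.2      # W/m2/K, equilibrium feedback parameter
gam <- 0.7      # W/m2/K, deep-ocean exchange coefficient
eps <- 1.5      # heat-uptake efficacy (> 1 produces curvature)
Cu  <- 8; Cd <- 100            # heat capacities, W yr m-2 K-1
dt  <- 0.1; n <- 150 / dt      # 150 years in 0.1-year steps
Ts  <- Td <- numeric(n)
for (i in 2:n) {
  H     <- gam * (Ts[i-1] - Td[i-1])
  Ts[i] <- Ts[i-1] + dt * (F0 - lam * Ts[i-1] - eps * H) / Cu
  Td[i] <- Td[i-1] + dt * H / Cd
}
N   <- F0 - lam * Ts - (eps - 1) * gam * (Ts - Td)   # TOA imbalance
yrs <- seq(dt, 150, by = dt)
fit <- lm(N[yrs <= 35] ~ Ts[yrs <= 35])              # early-years Gregory regression
c(effective   = unname(-coef(fit)[1] / coef(fit)[2]),  # x-intercept of the fit
  equilibrium = F0 / lam)                              # true equilibrium warming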
JamesG | Posted Jan 12, 2016 at 7:33 AM
And of course GISS have the unique advantage of adjusting their own empirical data to fit what their model predicts. For a while the satellite data was a minor constraint on doing that, but since BEST (sic) avoided reconciling satellite data with a sideways swipe, it seems GISS felt ok to follow suit. The next step will be twisting Carl Mears' arm to apply an upwards adjustment to RSS and leave UAH as the lone outlier, run by easily dismissible skeptics. It all plays like a handbook of how to distort research in support of a predetermined agenda.

niclewis | Posted Jan 9, 2016 at 10:06 AM
kribaez, thank you for your comment. I completely agree with you about the use of OHC rather than TOA radiative imbalance data, and about the lack of benchmark values for the forcing from a doubling of CO2. Using the OHC slope rather than the TOA radiative imbalance N seems bizarre and scientifically indefensible. It does of course produce biased-low estimates of the model ECS from historical-period forcings, or indeed from any type of forcing. Schmidt 2014 states GISS-E2-R has a stratospherically adjusted Fa F2xCO2 value of 4.1 W/m2, which is in line with the 4.08–4.12 W/m2 for Fa in GISS-E per Hansen 2005. But Hansen gives the iRF (Fi) value as 4.52 W/m2, whereas Marvel uses 4.1 W/m2.

stevefitzpatrick | Posted Jan 9, 2016 at 9:04 AM
Nic, thanks for this clearly written post. Two questions:
1. Since you were a coauthor of two of the three empirical-estimate papers which Marvel et al. claim to be inaccurate, it seems to me that the journal editor should have considered you as a reviewer. Were you asked to review the paper?
2. Are you and/or others considering submitting a comment on Marvel et al. to the journal?

niclewis | Posted Jan 9, 2016 at 10:30 AM
stevefitzpatrick, thanks. In answer to your questions:
1. No, I wasn't.
2. I shall reserve my position on that, but I am aware that journals often seek to avoid publishing comments; I suspect Nature CC may be worse than most in this regard. Comments also have very tight length restrictions.

climategrog | Posted Jan 11, 2016 at 3:29 AM
AFAIK Nature (not Nature CC) has something very restrictive, like a 500-word limit and a 6-month shut-out.

mpainter | Posted Jan 11, 2016 at 11:31 AM
500 words? No problem: give an abstract on each point and links to Climate Audit.

Alberto Zaragoza Comendador | Posted Jan 9, 2016 at 9:46 AM
Marvel et al. say near the end that the historicalMisc archive is sparse and these experiments were a low priority in CMIP5, so very few groups performed comparable calculations of radiative forcings associated with each forcing agent. (Yeah, I'm using the free version and cannot copy-paste, but their point is clear: replication will be difficult.) The thing is, at least one paper mentioned by Nic, Ocko et al. 2014 (reference 29), had done the same kind of experiments and arrived at different conclusions. But Marvel et al. don't cite Ocko, either in the paper itself or in the SI. My question to Nic would be: are these problems with historicalMisc (whatever that may be) real? Or is the lack of single-forcing experiments more due to plain lack of interest from researchers?
Non-paywalled version here: http://www.nature.com/articles/nclimate2888.epdf

    Original URL path: http://climateaudit.org/2016/01/08/appraising-marvel-et-al-implications-of-forcing-efficacies-for-climate-sensitivity-estimates/?replytocom=765916 (2016-02-08)
    Open archived version from archive

  • niclewis « Climate Audit
cautionary tale about a mystery that had an unexpected explanation. It's not intended as a criticism of the scientists involved, and the problem involved, although potentially serious, actually had little impact on the results of the study concerned. However, I am hopeful that mathematically and computing-orientated readers will find it of… Posted in Modeling, Uncategorized. Tagged lewis, sensitivity. Comments 34

    Original URL path: http://climateaudit.org/author/niclewis/page/2/ (2016-02-08)
    Open archived version from archive


