Friday, December 23, 2011


From: Ben Santer <>
To: Peter Thorne <>
Subject: Re: [Fwd: sorry to take your time up, but really do need a scrub of this singer/christy/etc effort]
Date: Wed, 05 Dec 2007 13:04:05 -0800
Cc: Carl Mears <>, Leopold Haimberger <>, Karl Taylor <>, Tom Wigley <>, Phil Jones <>, Steve Sherwood <>, John Lanzante <>, Dian Seidel <>, Melissa Free <>, Frank Wentz <>, Steve Klein <>

Dear folks,

Thank you very much for all of your emails, and my apologies for the
delay in replying - I've been on travel for much of the past week.

Peter, I think you've done a nice job in capturing some of my concerns
about the Douglass et al. paper. Our CCSP Report helped to illustrate
that there were large structural uncertainties in both the radiosonde-
and MSU-based estimates of tropospheric temperature change. The
scientific evidence available at the time we were finalizing the CCSP
Report - from Sherwood et al. (2005) and the (then-unpublished) Randel
and Wu paper - strongly suggested that a residual cooling bias existed
in the sonde-based estimates of tropospheric temperature change.
As you may recall, we showed results from both the RATPAC and HadAT2
radiosonde datasets in the CCSP Report and the Santer et al. (2005)
Science paper. From the latter (see, e.g., our Figure 3B and Figures
4C,D), it was clear that there were physically-significant differences
between the simulated temperature trends in the tropical lower
troposphere (over 1979 to 1999) and the trends estimated from RATPAC,
HadAT2, and UAH data. In both the Science paper and the CCSP Report, we
judged that residual biases in the observations provided the most likely
explanation for these model-versus-data trend discrepancies.

Douglass et al. come to a fundamentally different conclusion, and
ascribe model-versus-data differences to model error. They are not
really basing this conclusion on new model data or on new observational
data. The only "new" observational dataset that they use is an early
version of Leo Haimberger's radiosonde dataset (RAOBCORE v1.2). Leo's
dataset was under development at the time all of us were working on the
CCSP Report and the Santer et al. Science paper. It was not available
for our assessment in 2005. As Leo has already shared with you, newer
versions of RAOBCORE (v1.3 and v1.4) show amplification of surface
warming in the tropical troposphere, in reasonable agreement with the
model results that we presented in Fig. 3B of our Science paper.
Douglass et al. did not use these newer versions of RAOBCORE. Nor
did Douglass et al. use any "inconvenient" observational datasets (such
as the NESDIS-based MSU T2 dataset of Zou et al., or the MSU T2 product
of Vinnikov and Grody) showing pronounced tropospheric warming over the
satellite era. Nor did Douglass et al. discuss the "two timescale issue"
that formed an important part of our Science paper (i.e., how could
models and multiple observational datasets show amplification behavior
that was consistent in terms of monthly variability but inconsistent in
terms of decadal trends?) Nor did Douglass et al. fairly portray results
from Peter's 2007 GRL paper. In my personal opinion, Douglass et al.
have ignored all scientific evidence that is in disagreement with their
view of how the real world should be behaving.
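To make the "two timescale issue" concrete, here is a toy numerical sketch (the synthetic series, the 1.4 amplification factor, and the imposed cooling drift are all invented for illustration; they are not taken from any model or observational dataset). It shows how tropospheric data can amplify surface variability month-to-month while the ratio of decadal trends is pulled well below the monthly amplification:

```python
import numpy as np

def trend_per_decade(y):
    """Least-squares linear trend of a monthly series, in K/decade."""
    t = np.arange(len(y))
    slope = np.polyfit(t, y, 1)[0]   # K per month
    return slope * 120.0             # K per decade

# Synthetic monthly anomalies, 1979-1999 (252 months).
rng = np.random.default_rng(0)
n = 252
t = np.arange(n)
surf = 0.15 / 120.0 * t + 0.2 * rng.standard_normal(n)  # ~0.15 K/decade
trop = 1.4 * surf - 0.10 / 120.0 * t  # amplified month-to-month, but a
                                      # spurious cooling drift damps the trend

# (1) amplification of monthly variability: regression of trop on surf
amp_monthly = np.polyfit(surf, trop, 1)[0]
# (2) amplification of the decadal trends: ratio of the two trends
amp_trend = trend_per_decade(trop) / trend_per_decade(surf)
# In this toy setup the monthly amplification stays well above the trend
# ratio - the same qualitative mismatch at issue in the Douglass et al. debate.
```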

I don't think it's a good strategy to submit a response to the Douglass
et al. paper to the International Journal of Climatology (IJC). As Phil
pointed out, IJC has a large backlog, so it might take some time to get
a response published. Furthermore, Douglass et al. probably would be
given the final word.

My suggestion is to submit (to Science) a short "update" of our 2005
paper. This update would only be submitted AFTER publication of the four
new radiosonde-based temperature datasets mentioned by Peter. The update
would involve:

1) Use of all four new radiosonde datasets.

2) Use of the latest versions of the UAH and RSS TLT data, and the
latest versions of the T2 data from UAH, RSS, UMD (Vinnikov and Grody),
and NESDIS (Zou et al.).

3) Use of the T2 data in 2) above AND the UAH and RSS T4 data to
calculate tropical "TFu" temperatures, with all possible combinations of
T4 and T2 datasets (e.g., RSS T4 and UMD T2, UAH T4 and UMD T2, etc.)

4) Calculating synthetic MSU temperatures from all model 20c3m runs
currently available in the IPCC AR4 database. Calculation of synthetic
MSU temperatures would rely on a method suggested by Carl (using
weighting functions that depend on both the surface type [land, ocean]
and the surface pressure at each grid-point) rather than on the static
global-mean weighting function that we used previously. This is probably
several months of work - but at least it will keep me off the streets
and out of trouble.

5) Formal determination of statistical significance of
model-versus-observed trend differences.

6) Brief examination of timescale-dependence of amplification factors.

7) As both Peter and Melissa suggested, brief examination of the
sensitivity of estimated trends to the selected analysis period (e.g.,
use of 1979 to 1999; use of 1979 to 2001 or 2003 [for the small number
of model 20c3m runs ending after 1999]; use of data for the post-NOAA-9
period).
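For item 4, the flavour of the weighted-average calculation can be sketched roughly as below. To be clear, the Gaussian weight profile and the land/ocean tweak here are placeholders I have made up for illustration; they are not Carl's actual method or the real MSU T2 weighting function, which depends on the instrument:

```python
import numpy as np

def synthetic_t2(temps, p_levels, p_surf, is_land):
    """Weighted vertical average mimicking a synthetic MSU T2 temperature.

    temps    : temperatures (K) at each pressure level
    p_levels : pressure levels (hPa)
    p_surf   : surface pressure at this grid point (hPa)
    is_land  : surface type flag

    NOTE: the weight profile below is an invented placeholder,
    NOT the real MSU T2 weighting function.
    """
    temps = np.asarray(temps, float)
    p = np.asarray(p_levels, float)
    w = np.exp(-((p - 550.0) / 250.0) ** 2)   # placeholder mid-troposphere peak
    w = np.where(p > p_surf, 0.0, w)          # mask levels below the surface
    if is_land:
        # crude stand-in for a surface-type dependence near the ground
        w = w * np.where(p > 0.8 * p_surf, 1.1, 1.0)
    return float(np.sum(w * temps) / np.sum(w))

# Example: a sea-level ocean point vs. a high-terrain land point,
# same temperature profile, different surface pressure and type.
levels = [1000, 850, 700, 500, 300, 200, 100]
profile = [288, 281, 272, 255, 230, 218, 205]
t2_ocean = synthetic_t2(profile, levels, 1013.0, False)
t2_mountain = synthetic_t2(profile, levels, 700.0, True)
```

The point of the grid-point-dependent weights is visible in the example: over high terrain the lowest levels contribute nothing, so the same profile yields a different synthetic brightness temperature than a static global-mean weighting would give.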

This will be a fair bit of effort, but I think it's worth it. Douglass
et al. will try to make maximum political hay out of their IJC paper -
which has already been sent to Andy Revkin at the New York Times. You
can bet they've sent it elsewhere, too. I'm pretty sure that our
colleague JC will portray Douglass et al. as definitive "proof" that all
climate models are fundamentally flawed, UAH data are in amazing
agreement with sonde-based estimates of tropospheric temperature change,
global warming is not a serious problem, etc.

One of the most disturbing aspects of Douglass et al. is its abrupt
dismissal of the finding (by Sherwood et al. and Randel and Wu) of a
residual tropospheric cooling bias in the sonde data. Douglass et al.
base this dismissal on the Christy et al. (2007) JGR paper, and on
Christy's finding of biases in the night-time sonde data that magically
offset the biases in the day-time data. Does that sound familiar? When
did we last hear about new biases magically offsetting the effect of
recently-discovered biases? As Yogi Berra would say, this is deja vu all
over again....

I hope that one of the papers on the new sonde-based datasets directly
addresses the subject of 'error compensation' in the day-time and
night-time sonde data. This would be important to do.

It's unfortunate that Douglass et al. will probably be published well
before the appearance of the papers on the new radiosonde datasets, and
before an updated comparison of modeled and observed tropospheric
temperature trends.

I'd be grateful if you could let me know whether you are in agreement
with the response strategy I've outlined above, and would like to be
involved with an update of our 2005 Science paper.

With best regards,

Peter Thorne wrote:
> All,
> There are several additional reasons why we may not expect perfect
> agreement between models and obs that are outlined in the attached
> paper.
> It speaks in part to the trend uncertainty that Carl alluded to - taking
> differences between linear trend estimates is hard when the underlying
> series is noisy and perhaps non-linear. Work that John and Dian have
> done also shows this. Taking the ratio between two such estimates is
> always going to produce noisy results over relatively short trend
> periods when the signal is small relative to the natural variability.
> Also, 1979 as a start date may bias those estimates towards a "bias", I
> believe (this is unproven) because of endpoint effects due to natural
> variability that tend to damp the ratio of Trop/Surf trends (ENSO
> phasing and El Chichon) for any trend period with this start date. Given
> the N-9 uncertainty a reasonable case could be made for an evaluation of
> the obs that started only after N-9 and this may yield a very different
> picture.
> It also shows that the model result really is constrained to perturbed
> physics, at least for HadCM3. Unsurprising as convective adjustment is
> at the heart of most models. Certainly ours anyway. This result was
> cherry-picked and the rest of the paper discarded by Douglass et al.
> In addition to this, the state of play on the radiosondes has moved on
> substantially with RAOBCORE 1.4 (accepted I believe, Leo Haimberger
> should be in this - I'm adding him) which shows warming intermediate
> between UAH and RSS and I know of three additional efforts on
> radiosondes all of which strongly imply that the raobs datasets used in
> this paper are substantially under-estimating the warming rate (Steve
> Sherwood x2 and our automated system). So, there's going to be a whole
> suite of papers hopefully coming out within the next year or so that
> imply we at least cannot rule out from the radiosonde data warming
> consistent even with the absurd "mean of the model runs" criteria that
> is used in this paper.
> For info, our latest results imply a true raobs trend for 2LT in the
> tropics somewhere >0.08K/decade (we cannot place a defensible upper
> limit) ruling out most of the datasets used in the Douglass paper and
> ruling in possibility of consistency with models.
> Douglass et al also omit the newer MSU studies from the NESDIS group
> which in the absence of a reasonable criterion (a criterion I think we
> are still some way away from) to weed out bad obs datasets should be
> considered. Including all obs datasets and the likely new raobs datasets
> would pretty much destroy this paper's main point. There's been a fair
> bit of cherry picking on the obs side which needs correcting here.
> Peter
> On Tue, 2007-12-04 at 15:40 -0800, carl mears wrote:
>> Karl -- thanks for clarifying what I was trying to say
>> Some further comments.....
>> At 02:53 PM 12/4/2007, Karl Taylor wrote:
>>> Dear all,
>>> 2) unforced variability hasn't dominated the observations.
>> But on this short time scale, we strongly suspect that it has
>> dominated. For example, the
>> 2 sigma error bars from table 3.4, CCSP for satellite TLT are 0.18 (UAH) or
>> 0.19 (RSS), larger
>> than either group's trends (0.05, 0.15) for 1979-2004. These were
>> calculated using a "goodness
>> of linear fit" criterion, corrected for autocorrelation. This is
>> probably a reasonable
>> estimate of the contribution of unforced variability to trend uncertainty.
>>> Douglass et al. have *not* shown that every individual model is in fact
>>> inconsistent with the observations. If the spread of individual model
>>> results is large enough and at least 1 model overlaps the observations,
>>> then one cannot claim that all models are wrong, just that the mean is biased.
>> Given the magnitude of the unforced variability, I would say "the mean
>> *may* be biased." You can't prove this
>> with only one universe, as Tom alluded. All we can say is that the
>> observed trend cannot be proven to
>> be inconsistent with the model results, since it is inside their range.
>> It will be interesting to see if we can say anything more, when we start culling
>> out the less realistic models,
>> as Ben has suggested.
>> -Carl
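For anyone who wants the flavour of Carl's calculation, here is a minimal sketch of a linear trend with an autocorrelation-corrected 2-sigma error bar, in the style of the adjustment used for the CCSP Table 3.4 estimates (the lag-1 correction of Santer et al. 2000). The synthetic red-noise series and the variable names are illustrative only, not the CCSP calculation itself:

```python
import numpy as np

def trend_2sigma(y):
    """Linear trend and its 2-sigma uncertainty per time step, with the
    usual lag-1 autocorrelation correction: the residual lag-1
    autocorrelation r1 shrinks the effective sample size to
    n_eff = n * (1 - r1) / (1 + r1) before computing the slope's
    standard error (Santer et al. 2000-style adjustment)."""
    y = np.asarray(y, float)
    n = len(y)
    t = np.arange(n, dtype=float)
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    n_eff = n * (1.0 - r1) / (1.0 + r1)
    s2 = np.sum(resid ** 2) / (n_eff - 2.0)          # residual variance, n_eff dof
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))   # slope standard error
    return b, 2.0 * se

# Red-noise (AR(1)) series with a small imposed trend: the corrected
# error bar comes out wider than a naive white-noise estimate would.
rng = np.random.default_rng(1)
n = 312  # 26 years of monthly data, roughly the 1979-2004 span
eps = np.zeros(n)
for i in range(1, n):
    eps[i] = 0.7 * eps[i - 1] + rng.standard_normal()
y = 0.01 * np.arange(n) + eps
b, ci = trend_2sigma(y)
```

This is why short, noisy series with strong month-to-month persistence can have 2-sigma trend error bars larger than the trends themselves, as in the UAH/RSS TLT numbers Carl quotes.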

Benjamin D. Santer
Program for Climate Model Diagnosis and Intercomparison
Lawrence Livermore National Laboratory
P.O. Box 808, Mail Stop L-103
Livermore, CA 94550, U.S.A.
Tel: (925) 422-2486
FAX: (925) 422-7675
