To: Leopold Haimberger <email@example.com>
Subject: Re: Update on response to Douglass et al., Dian, something like this?
Date: Thu, 10 Jan 2008 19:07:03 -0800
Cc: Peter Thorne <firstname.lastname@example.org>, Dian Seidel <email@example.com>, Tom Wigley <firstname.lastname@example.org>, Karl Taylor <email@example.com>, Thomas R Karl <Thomas.R.Karl@noaa.gov>, John Lanzante <John.Lanzante@noaa.gov>, Carl Mears <firstname.lastname@example.org>, "David C. Bader" <email@example.com>, "'Francis W. Zwiers'" <firstname.lastname@example.org>, Frank Wentz <email@example.com>, Melissa Free <firstname.lastname@example.org>, "Michael C. MacCracken" <email@example.com>, Phil Jones <firstname.lastname@example.org>, Steve Sherwood <Steven.Sherwood@yale.edu>, Steve Klein <email@example.com>, 'Susan Solomon' <firstname.lastname@example.org>, Tim Osborn <email@example.com>, Gavin Schmidt <firstname.lastname@example.org>, "Hack, James J." <email@example.com>
Thanks very much for your email. I can easily make the observations a
bit more prominent in Figure 1. As you can see from today's
(voluminous!) email traffic, I've received lots of helpful suggestions
regarding improvements to the Figures. I'll try to produce revised
versions of the Figures tomorrow.
On the autocorrelation issue: The models have a much larger range of
lag-1 autocorrelation coefficients (0.66 to 0.95 for T2LT, and 0.69 to
0.95 for T2) than the UAH or RSS data (which range from 0.87 to 0.89). I
was concerned that if we used the model lag-1 autocorrelations to guide
the choice of AR-1 parameter in the synthetic data analysis, Douglass
and colleagues would have an easy opening for criticizing us ("Aha!
Santer et al. are using model results to guide them in their selection
of the coefficients for their AR-1 model!") I felt that it was much more
difficult for Douglass et al. to criticize what we've done if we used
UAH data to dictate our choice of the AR-1 parameter and the "scaling
factor" for the amplitude of the temporal variability.
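For reference, the AR-1 synthetic-data approach described above amounts to something like the following sketch (a minimal illustration only, not the actual analysis code; the lag-1 coefficient of 0.87 and the target standard deviation are stand-in values, not the exact UAH-derived parameters):

```python
import random
import statistics

def ar1_series(n, phi, target_std, seed=0):
    """Generate a synthetic AR(1) series x[t] = phi * x[t-1] + eps[t].

    The innovation standard deviation is chosen so that the stationary
    standard deviation of the series matches target_std, using
    Var(x) = sigma_eps**2 / (1 - phi**2).
    """
    rng = random.Random(seed)
    sigma_eps = target_std * (1.0 - phi ** 2) ** 0.5
    x = [rng.gauss(0.0, sigma_eps)]
    for _ in range(1, n):
        x.append(phi * x[-1] + rng.gauss(0.0, sigma_eps))
    return x

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient of a series."""
    mean = statistics.fmean(x)
    num = sum((a - mean) * (b - mean) for a, b in zip(x, x[1:]))
    den = sum((a - mean) ** 2 for a in x)
    return num / den

# With a long enough series, the sample lag-1 autocorrelation should
# land close to the prescribed phi (0.87 here, a stand-in value).
series = ar1_series(n=5000, phi=0.87, target_std=0.1)
print(lag1_autocorr(series))
```

The point of prescribing phi and the variability amplitude from observations rather than from the models is exactly the one made above: the synthetic null distribution then cannot be accused of being model-tuned.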
As you know, my personal preference would be to include in our response
to Douglass et al. something like the Figure 4 that Peter has produced.
While inclusion of a Figure 4 is not essential for the purpose of
illuminating the statistical flaws in the Douglass et al. "consistency
test", such a Figure would clearly show the (currently large) structural
uncertainties in radiosonde-based estimates of the vertical profile of
atmospheric temperature changes. I think this is an important point,
particularly in view of the fact that Douglass et al. failed to discuss
versions 1.3 and 1.4 of your RAOBCORE data - even though they had
information from those datasets in their possession.
However, I fully agree with Tom's comment that we don't want to do
anything to "steal the thunder" from ongoing efforts to improve
sonde-based estimates of atmospheric temperature change, and to better
quantify structural uncertainties in those estimates. Your group,
together with the groups at the Hadley Centre, Yale, NOAA ARL and NOAA
GFDL, deserves great credit for making significant progress on a
difficult, time-consuming, yet important problem.
I guess the best solution is to leave this decision up to all of you
(the radiosonde dataset developers). I'm perfectly happy to include a
version of Figure 4 in our response to Douglass et al. If we do go with
inclusion of a Figure 4, you, Peter, Dian, Melissa, Steve Sherwood and
John should decide whether you feel comfortable providing radiosonde
data for such a Figure. I will gladly abide by your decisions. As you
note in your email, our use of a Figure 4 would not preclude a more
detailed and thorough comparison of simulated and observed amplification
in some later publication.
Once again, thanks for all your help with this project, Leo.
With best regards,
Leopold Haimberger wrote:
> These three figures are really very clear and leave no doubts that the
> Douglass et al analysis is flawed. This is true especially for Fig. 1.
> In Fig. 1 one has to look carefully to find the RSS and UAH "observed"
> trends to the right of all the model trends. Maybe one can make their
> symbols more prominent.
> Concerning Fig. 3, I wonder whether the UAH autocorrelation is the lowest
> of all available data. 0.86 is quite a substantial autocorrelation. Maybe
> it would be a good idea to be on the safe side and use the lowest
> autocorrelation of all datasets (models, RSS, UAH) for this analysis.
> Concerning Fig. 4, I like Peter's and Dian's idea to include RAOBCORE,
> HadAT2, RATPAC and Steve's data and compare it in one plot with model
> output. While I agree that the first three figures and the corresponding
> text are already sufficient for the reply, they mainly target the
> right panel of Fig. 1 in Douglass et al.'s paper. The trend profile plot
> of Fig. 4 is complementary, as a counterpart to the left panel of their
> plot. Seeing the trend amplification in some of the vertical profiles
> is much more suggestive than seeing the LT trends being larger than
> surface trends, at least for me. Showing all available profiles adds
> value beyond the RAOBCORE v1.2 vs RAOBCORE v1.4 issue. Yes, it is work
> in progress and such a plot as drafted by Peter makes that very clear.
> In this paper it is sufficient to show that the uncertainty of
> radiosonde trends is much larger than suggested by Douglass et al. and
> we do not need to have the final answer yet. I have nothing against
> Peter doing the drawing of the figure, since he has most of the
> necessary data. The plot would be needed for 1979-1999, however. Peter,
> I will send you the trend profiles for this period a bit later.
> Publishing the reply in either IJC or GRL, including Fig. 4, is fine with me.
> When we first discussed a follow-up of the Santer et al paper in
> October, we had in mind to publish post-FAR climate model data up to
> present (not just 1999) and also new radiosonde data up to present in a
> highest-ranking journal. I am confident that this is still possible even
> if some of the new material planned for such a paper is already
> submitted now. What do you think?
> With best regards,
> Peter Thorne wrote:
>> As it happens, I am preparing a figure precisely as Dian suggested. This
>> has only been possible due to substantial efforts by Leo in particular,
>> but all the other dataset providers also. I wanted to give a feel for
>> where we are at although I want to tidy this substantially if we were to
>> use it. To do this I've taken every single scrap of info I have in my
>> possession that has a status of at least submitted to a journal. I have
>> considered the common period of 1979-2004. So, assuming you are all
>> sitting comfortably:
>> Grey shading is a little cheat from Santer et al. using a trusty ruler.
>> See Figure 3.B in that paper: take the absolute range of model scaling
>> factors at each of the heights on the y-axis and apply this scaling to
>> the HadCRUT3 tropical mean trend, denoted by the star at the surface. So,
>> if we assume HadCRUT3 is correct, then we are aiming for the grey shading
>> or not, depending upon one's preconceived notion as to whether the models
>> are correct.
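The grey-shading construction Peter describes can be written down in a few lines. The numbers below are purely illustrative placeholders, not values read off the real figure or datasets:

```python
# Sketch of the shading construction: for each pressure level, take the
# min/max model scaling factors and multiply them by a surface trend to
# get an envelope of model-consistent trend profiles.

surface_trend = 0.12  # K/decade, stand-in for the HadCRUT3 tropical mean trend

# (pressure level in hPa, min scaling factor, max scaling factor)
# -- illustrative values only
model_scaling = [
    (850, 0.9, 1.3),
    (700, 1.0, 1.5),
    (500, 1.1, 1.8),
    (300, 1.3, 2.2),
]

# Envelope of trends implied by the models at each level, anchored to
# the (assumed-correct) surface trend.
envelope = [(p, lo * surface_trend, hi * surface_trend)
            for p, lo, hi in model_scaling]
for p, lo, hi in envelope:
    print(f"{p} hPa: {lo:.3f} to {hi:.3f} K/decade")
```

An observed sonde profile "aims for the grey shading" if it falls inside this envelope at each level, conditional on the surface trend and the model scaling range both being trusted.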
>> Red is the HadAT2 dataset.
>> Black dashed is the raw data used in Titchner et al. (submitted; all
>> tropical stations with a 1981-2000 climatology).
>> Black whiskers are median, inter-quartile range and max / min from
>> Titchner et al. submission. We know, from complex error-world
>> assessments, that the median under-cooks the required adjustment here
>> and that the truth may conceivably lie (well) outside the upper limit.
>> Bright green is RATPAC
>> Then, one caveat: the averaging and trend calculation have been done by
>> Leo here, not by me, so for any final version I'd want to get the raw
>> gridded data and do it exactly the same way. But for the raw raobs data
>> that Leo provided as a sanity check, it seems to make a minuscule
>> (<0.05 K/decade, even at height) difference:
>> Lime green: RICH (RAOBCORE 1.4 breaks, neighbour-based adjustment)
>> Solid purple: RAOBCORE 1.2
>> Dotted purple: RAOBCORE 1.3
>> Dashed purple: RAOBCORE 1.4
>> I am also in possession of Steve's submitted IUK dataset and will be
>> adding this trend line shortly.
>> I'll be adding a legend in the large white space bottom left.
>> My take home is that all datasets are heading the right way and that
>> this reduces the probability of a discrepancy. Compare this with Santer
>> et al. Figure 3.B.
>> I'll be using this in an internal report anyway but am quite happy for
>> it to be used in this context too if that is the general feeling. Or for
>> Leo's to be used. Whatever people prefer.
Benjamin D. Santer
Program for Climate Model Diagnosis and Intercomparison
Lawrence Livermore National Laboratory
P.O. Box 808, Mail Stop L-103
Livermore, CA 94550, U.S.A.
Tel: (925) 422-2486
FAX: (925) 422-7675