Based on my own experience with large thermal models, I have severe doubts about the reliability of the global circulation models used to predict cataclysmic global warming. When the models were first discussed, it was acknowledged that they did not accurately reproduce the past, which casts doubt on their accuracy. Later it was said that the models had been refined so that they did reproduce the alleged past temperature history of the earth. I suspected that the modellers had put in some sort of fudge factors, which may enable the models to match the past, since that is known, but which do not ensure that they correctly predict the future. I say this based on my own modelling experience. The great physicist Freeman Dyson has suggested the same thing: that models tuned with fudge factors are not reliable. The reason is that an important process may be missing from the model. That omission makes no difference while fudge factors are being adjusted to match past temperatures, but it means the model will not predict the future. It is also known that many investigators use different models, all of which now reproduce the past but which do not agree about the future. (Some models even predict a temperature decline, but the IPCC rejected those results.) Here is a piece from Climate-Skeptic on this issue:
Climate Models Match History Because They are Fudged
When catastrophist climate models were first run against history, they did not even come close to matching it. Over the last several years, after a lot of time under the hood, climate models have been tweaked and forced to match historic warming observations pretty closely. A prominent catastrophist and climate modeller finally asks the logical question:
One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy? Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at www.nature.com/reports/climatechange, 2007) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the "widely circulated analysis" referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.
One wonders why it took so long for supposedly trained climate scientists right in the middle of the modelling action to ask an obvious question that skeptics have been asking for years (though this particular guy will probably have his climate decoder ring confiscated for bringing this up). The answer seems to be that rather than deriving it from observational data, modellers simply make man-made forcing a plug figure: they set the historic man-made forcing to whatever value it takes to make the output match history.
Gee, who would have guessed? Well, actually, I did, though I guessed the wrong plug figure. I did, however, guess that one of the key numbers had to be a plug for all the models to match history so well:
I am willing to make a bet based on my long, long history of modeling (computers, not fashion). My guess is that the blue band, representing climate without man-made effects, was not based on any real science but was instead a plug. In other words, they took their models and actual temperatures and then asked, "What would the climate without man have to look like for our models to be correct?" There are at least four reasons I strongly suspect this to be true:
Every computer modeler in history has tried this trick to make their models of the future seem more credible. I don't think the climate guys are immune.
There is no way their models, with our current state of knowledge about the climate, match reality that well.
The first time they ran their models vs. history, they did not match at all. This current close match is the result of a bunch of tweaking that has little impact on the model's predictive ability but forces it to match history better. For example, early runs had the forecast run right up from the 1940 peak to temperatures way above what we see today.
The blue line totally ignores any of our other understanding of the changing climate, including the changing intensity of the sun. It is conveniently exactly what is necessary to make the pink line match history. In fact, against all evidence, note that the blue band falls over the century. This is because the models were pushing temperatures up faster than we have seen them rise historically, so the modelers needed a negative plug to make the numbers look nice; the sketch below shows how that kind of plug-fitting works.
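To make the plug idea concrete, here is a minimal toy sketch in Python. It is emphatically not any real GCM, nor any modelling group's actual tuning procedure; every input (the CO2 path, the forcing shapes, the "observed" temperature series) is invented for illustration. Two zero-dimensional models with sensitivities at the two ends of the quoted 1.5 to 4.5 degree range each get a single plug scaling on a negative aerosol-style forcing, fitted by least squares to the same made-up historical record. Both match "history" about equally well, yet their projections for 2100 differ by more than a factor of two.

```python
# Toy illustration only: not a real climate model, and all inputs are invented.
import numpy as np

years = np.arange(1900, 2001)
future = np.arange(2001, 2101)

F2X = 3.7  # W/m^2 of forcing per CO2 doubling (commonly cited value)

# Illustrative CO2 path and the corresponding (logarithmic) GHG forcing.
co2_hist = 295.0 * np.exp(0.0025 * (years - 1900))
co2_future = co2_hist[-1] * np.exp(0.005 * (future - 2000))
f_ghg_hist = F2X * np.log2(co2_hist / co2_hist[0])
f_ghg_future = F2X * np.log2(co2_future / co2_hist[0])

# A negative aerosol-style forcing that grows over the 20th century and then
# flattens out, so it cannot keep offsetting CO2 in the future.
f_aer_hist = -1.0 * (years - 1900) / 100.0
f_aer_future = np.full(future.size, f_aer_hist[-1])

# Pretend "observed" record: 0.6 C of warming over the century plus noise.
rng = np.random.default_rng(0)
t_obs = 0.6 * (years - 1900) / 100.0 + rng.normal(0.0, 0.05, years.size)

def temperature(sensitivity, plug, f_ghg, f_aer):
    """Equilibrium response of a zero-dimensional toy model."""
    return (sensitivity / F2X) * (f_ghg + plug * f_aer)

for sensitivity in (1.5, 4.5):  # the two ends of the quoted range
    # Fit the single "plug" scaling on the aerosol forcing to history
    # by ordinary least squares: the only tunable knob in this toy.
    resid = t_obs - temperature(sensitivity, 0.0, f_ghg_hist, f_aer_hist)
    x = (sensitivity / F2X) * f_aer_hist
    plug = float(np.dot(x, resid) / np.dot(x, x))

    fit = temperature(sensitivity, plug, f_ghg_hist, f_aer_hist)
    rmse = float(np.sqrt(np.mean((fit - t_obs) ** 2)))
    t2100 = temperature(sensitivity, plug, f_ghg_future, f_aer_future)[-1]
    print(f"sensitivity={sensitivity:.1f} C  plug={plug:+.2f}  "
          f"historical fit RMSE={rmse:.3f} C  warming by 2100={t2100:.2f} C")
```

The high-sensitivity toy needs a large offsetting plug to stay on the historical track, and the low-sensitivity one needs almost none, yet both fit the record to within the noise. The plug tells you nothing about which sensitivity is right, and the projections diverge as soon as the tuned forcing stops tracking CO2.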
Here is one other reason I know the models to be wrong: the climate sensitivities quoted above of 1.5 to 4.5 degrees C are unsupportable by history. In fact, this analysis shows pretty clearly that about 1.2 degrees C is the most one can derive for sensitivity from our past 120 years of experience, and even that requires the unreasonable assumption that all of the past century's warming was due to CO2.
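For what it is worth, here is a back-of-the-envelope version of that kind of calculation. The inputs are illustrative round numbers (roughly 0.6 C of warming and CO2 rising from about 280 to 385 ppm), not the figures from the linked analysis, and it assumes logarithmic CO2 forcing with every bit of the warming attributed to CO2.

```python
import math

# Illustrative round numbers, not the linked analysis's own figures.
delta_t = 0.6                      # observed warming over the period, deg C
co2_start, co2_end = 280.0, 385.0  # ppm, roughly pre-industrial to recent

# Fraction of one CO2 doubling realized so far, assuming logarithmic forcing.
doubling_fraction = math.log2(co2_end / co2_start)

# Implied equilibrium sensitivity if all of the warming were CO2-driven.
sensitivity = delta_t / doubling_fraction
print(f"{doubling_fraction:.2f} of a doubling observed -> "
      f"implied sensitivity of about {sensitivity:.1f} C per doubling")
```

With those numbers the implied sensitivity comes out below the bottom of the 1.5 to 4.5 range quoted above, and it only drops further if any of the century's warming is attributed to something other than CO2.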