Tuesday, June 10, 2008

The "butterfly effect" is always fun to consider. That is the proposition that a butterfly flapping its wings in Siberia can cause a hurricane in Florida. The fact is that weather is a non-linear, chaotic dynamic system. Modelling weather well enough to predict future climate accurately is difficult: Al Gore thinks people can do it, but I don't think it is likely that the modellers have it nailed yet. One of the problems is error growth in the numerical solutions; I have had some experience with that in large models of heat transfer systems. Here is an interesting discussion that the IPCC should consider, and about which it should exhibit a bit more humility.

Error Growth Beyond The Hapless Butterfly
Filed under: Guest Weblogs — Roger Pielke Sr. @ 7:00 am
Climate Science is fortunate to have another guest weblog by the internationally respected scientist Professor Hendrik Tennekes [see also his excellent earlier guest weblogs].

Weblog: Error Growth Beyond The Hapless Butterfly by Henk Tennekes

In the minds of the general public, the sensitive dependence on initial conditions that many nonlinear systems exhibit is expressed vividly by Ed Lorenz’ description of a butterfly which, merely by flapping its wings, might cause a tornado far away. It is unfortunate that Lorenz’ poetry has been taken too literally, even by scientists. As far as I have been able to determine, Lorenz meant to illustrate error growth caused by data assimilation and initialization errors, not the possible upscale propagation of errors. In my mind, an undetected small-scale disturbance cannot cause an unexpected large-scale event. Even a million Monarch butterflies taking off from their winter roost in Mexico cannot cause a tornado in Kansas. Also, it takes considerable time for small-scale calculation errors to propagate toward the large-scale end of the energy spectrum, especially when, as in all turbulence, the flow is strongly dissipative. Errors that creep in through subtle deficiencies in the codes employed are most effective when they invade the large scales of motion directly. Aliasing between neighboring wave numbers is a good example. Also, the upscale transfer of error “energy” in the subgridscale realm is ruled out by parameterization. Whenever individual eddies are replaced by a parameterized estimate of the subgrid scale motion, the issue of sensitive dependence on small-scale errors in initial conditions is moot. Lorenz’ butterfly deserves a more intelligent treatment.

The matter of sensitive dependence on initial conditions addresses what might happen to individual realizations. Once ensembles are formed, as by averaging over time and/or space, one encounters the core of the turbulence problem: the dynamical properties of averages differ substantially from those of individual events. Error growth in ensembles is unlikely to parallel error growth in individual realizations. Turbulent pipe flow, for example, is stable in the mean, even if the individual eddies are not. Much further study is needed, but progress in this area is impeded by the conveniences offered by General Circulation Models as applied to climate projections. These are run in a quasi-forecasting mode and imitate features like cyclogenesis on average rather well, even if the timing and pathways of individual storms are poorly represented. It is unfortunate that no robust theory exists of the dynamics of the general circulation. Such a theory would offer a conceptual framework for the study of the many varieties of error growth in GCMs. Climate forecasting is far from being mature. No systematic work on the admittedly very complicated dynamics of error growth has been done. Even the relatively straightforward matter of estimating the prediction horizon of climate models has received no attention to speak of. If a reliable method for calculating the effective prediction horizon exists anywhere, it must have slipped past me unawares, though I have been anxiously waiting for it these past twenty years.
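
A minimal Python sketch of this point, using the quadratic map discussed further below (the starting point, perturbation size, and ensemble settings are arbitrary choices for illustration): two individual trajectories started a hair apart end up completely decorrelated, while the long-run ensemble average barely moves when the whole initial cloud is shifted.

import random

def step(x, c=1.8):
    # one iteration of the quadratic map x -> x^2 - 1.8 used further below
    return x * x - c

# Individual realizations: a difference of 1e-6 in the initial
# condition grows to order one within a few dozen steps.
a, b = 0.3, 0.3 + 1e-6
for _ in range(40):
    a, b = step(a), step(b)
print("separation of individual trajectories:", abs(a - b))

# Ensembles: the long-run mean over many perturbed members is far less
# sensitive, because every member samples the same attractor statistics.
def ensemble_mean(center, members=2000, steps=200):
    total = 0.0
    for _ in range(members):
        x = center + random.uniform(-1e-3, 1e-3)
        for _ in range(steps):
            x = step(x)
        total += x
    return total / members

random.seed(0)
print(ensemble_mean(0.30), ensemble_mean(0.31))  # agree to within sampling noise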

In view of the manifestly chaotic behavior of the weather, one should be suspicious of claims about the stability of the climate system. The idea that the climate might be well-behaved, even if the weather is not, is not supported by any investigations that I am aware of. The very claim that there exist no processes in the climate system that may exhibit sensitive dependence on initial conditions, or on misrepresentations of the large-scale environment in which these processes occur, is ludicrous. Just think of the many factors that promote the birth of a hurricane. It is not just the sea water temperature that may trip such an event, but also the presence or absence of wind shear, the upper atmosphere temperature field, and so on. In short, the climate could be stable only if there were not a single potential “tipping point” anywhere in the system. I consider that inconceivable.

In the absence of a theoretical framework, one has to investigate all possible causes of error growth. Data assimilation and initialization errors are but one source of trouble. What should we make of errors caused by the unavoidable shortcomings in the parameterization of the “physics”? Parameterization always involves simplification and smoothing; in a complex nonlinear system like the climate, one cannot assume offhand that these tricks will not lead to unexpected kinds of error growth. Also, any error in this category is not triggered by a single impulse at startup time. Instead, it is aggravated by new impulses at each time step in the calculations.

Let me illustrate this with the simple model Ed Lorenz used to popularize nonlinear behavior. The repeated iteration

x(n + 1) = x(n)^2 – 1.8

is sensitive to initial errors, but it is also sensitive to other kinds of mistakes. One might imagine that the exact value of the coefficient in front of x-squared is unknown, or that the additive term 1.8 is subject to a small parameterization defect, so that it is taken to be 1.82, a mere 1% off the “true” value 1.8. Now let us determine what happens. If the iteration is started with x(0) = 1 and the additive constant equals 1.8, we obtain the sequence

1, -0.8, -1.16, -0.4544, -1.59352, 0.73931, and so on.

But if the additive constant is 1% off, we get

1, -0.82, -1.1476, -0.50301, -1.56698, 0.63542, and so on.

In just five steps, the 1% “parameterization error” has grown by a factor of sixteen!
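
For readers who want to check the arithmetic, a few lines of Python rerun both iterations side by side:

def iterate(c, x0=1.0, steps=5):
    # trajectory of the map x -> x^2 - c, starting from x0
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] ** 2 - c)
    return xs

exact = iterate(1.80)   # 1, -0.8, -1.16, -0.4544, -1.59352, 0.73931
biased = iterate(1.82)  # 1, -0.82, -1.1476, -0.50301, -1.56698, 0.63542

for n, (u, v) in enumerate(zip(exact, biased)):
    print("step %d: %+.5f  %+.5f  diff %.5f" % (n, u, v, abs(u - v)))
# By step 5 the runs differ by about 0.104; relative to the values
# themselves, the 1% defect in the constant has grown to roughly 16%.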

One can vary this theme in many ways. Imagine, for example, that one cannot be sure of the exponent in the algorithm. It is taken as two, but what would happen if one has to accept a 10% uncertainty because of inadequate knowledge of the “physics”? In climate modeling, several processes are modeled with parameterizations of questionable accuracy. The difference between clouds in the atmosphere and cloudiness in a model involves several conceptual simplifications of dubious reliability, including the lack of attention to the difference between the behavior of ensembles (“cloudiness” is an ensemble) and that of the clouds that pass my window at this moment. The standard trick of making models behave “realistically” by adding an overdose of numerical viscosity is, to put it mildly, unprofessional. The viscosity dampens unwanted behavior, but decisions as to what is wanted and what is not are made subjectively. If such choices are not open to public scrutiny, the science involved is probably substandard. I maintain, as I have for many years, that it is up to climate modelers to demonstrate by which methods the accuracy, reliability, and forecast horizons of their model runs can be assessed. Good intentions aren’t good enough.
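
That variation is easy to try as well. One caveat: a non-integer power of a negative number is not real, so the sketch below adopts |x|^p as one plausible reading of an uncertain exponent (an assumption of this illustration, identical to x^2 when p = 2):

def iterate_pow(p, c=1.8, x0=1.0, steps=6):
    # trajectory of x -> |x|^p - c; reduces to x -> x^2 - c when p == 2
    xs = [x0]
    for _ in range(steps):
        xs.append(abs(xs[-1]) ** p - c)
    return xs

print([round(x, 5) for x in iterate_pow(2.0)])  # the exact sequence above
print([round(x, 5) for x in iterate_pow(2.2)])  # exponent 10% off
# The two sequences part company within a handful of steps.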

Ed Lorenz is also famous for the attractor in his three-variable model for deterministic, nonperiodic flow (1963). That attractor has a shape vaguely reminiscent of butterfly wings, which was of great help in spreading the butterfly fairytale. In the youthful enthusiasm of the early years of chaos theory, many people were hunting for the dimension of the climate attractor. Numbers around nine were mentioned with some frequency. These days we know better. The climate attractor is incredibly complex; its multidimensional landscape of hills, valleys and “tipping points” has not yet been charted with any accuracy. Future generations of climate scientists will have to study the possible sensitive dependence of each feature in that landscape on assimilation, initialization, and parameterization errors. I dare to venture that they will find so many conceivable “tipping points” that they may decide to throw in the towel and give up on the idea of climate forecasting altogether. I did so many years ago, when I realized that sensitive dependence on initial conditions is not nearly as dangerous as the unwillingness to explore possible sensitive dependence on shortcomings in the codes employed and in the data assimilation software.
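
For reference, the 1963 model in question is the familiar three-variable system dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y, dz/dt = x*y - beta*z. A minimal sketch with the classical parameter values (sigma = 10, rho = 28, beta = 8/3) and crude forward-Euler stepping, good enough only to exhibit the behavior, shows two runs differing in the ninth decimal ending up far apart on the same butterfly-shaped attractor:

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # one forward-Euler step of Lorenz' 1963 system
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)         # reference run
b = (1.0 + 1e-9, 1.0, 1.0)  # perturbed in the ninth decimal
for _ in range(5000):       # 50 time units
    a, b = lorenz_step(a), lorenz_step(b)
print(a)
print(b)  # same attractor, entirely different state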

Let me conclude. I adhere to the Lorenz paradigm because I do not want to forget for a moment that small mistakes of whatever kind on occasion have large consequences. As far as I am concerned, the climate of our planet continuously balances on the verge of chaos. In my opinion, optimistic pronouncements about the stability of the climate system are unwarranted and unprofessional. I prefer modesty.
