
Modeling of Climatic Processes: Climatology, Climate Change Prediction, General Circulation Models


Introduction

1. Global simulation

Literature

Introduction

The current stage of scientific and technological progress, associated with awareness of the global ecological situation on Earth and its characteristic limits on energy, geological, biogeocenotic and other resources, brings to the fore the problem of the information resource related to global environmental knowledge, that is, knowledge of the conditions for the evolution of man and nature. For millennia the level of this resource was determined by the weakly correlated total activity of Homo sapiens and remained relatively small until the beginning of the industrial era. Then, as the commercial attitude toward the biosphere rather quickly became the defining strategy of mankind and the ecological impasse became visible, the information resource rose on the scale of significance to values close to the limiting ones.

Any environmental problem is "open": it is embedded in the system of global problems of our time, the chief of which is the preservation of the homeostasis of mankind (Kondratiev, 2000). This means that the "thunderstorm over the biosphere" that arose and was recognized at the end of the twentieth century confronted the civilized world with the problem of survival of the species Homo sapiens and, consequently, with the problem of a responsible attitude to nature. Ecological and moral problems thereby came into interaction.

1. Global simulation

At the present stage of scientific and technological progress, intensive developments are under way in the field of environmental protection; their analysis makes it possible to identify the characteristic features of environmental knowledge and of the applied methods, and thus to establish the basic requirements for an effective information technology. One of the prerequisites for the creation of environmental monitoring systems was the availability of data of differing quality and the many mathematical models of various types generated from them (balance, optimization, evolutionary, statistical, etc.). Synthesized on the basis of parametrization and, as a rule, linearization of the regularities of natural phenomena, these models include a wide range of deterministic and probabilistic descriptions of geological, ecological, oceanological, biogeochemical and biogeocenotic processes of global, regional and local character. The vast majority of them are aimed at a theoretical understanding of the features of living systems of a high level on the basis of existing knowledge, and only a small part is directed at the first steps toward an objective assessment of the current global environmental situation. Differing in their goals and in the mathematical apparatus of description, many models turn out to be forcedly crude because of the limitedness, incompleteness and underdetermination of the information base, as well as because of the lack of modern instrumental systems for simulation experiments. Increasing the number of biospheric components considered, in order to make the models more adequate, leads, as is well known, to their becoming multiparametric, i.e. to the problem of the "curse of dimensionality".

As the main tool for resolving these difficulties, a number of authors reasonably propose the method of simulation modeling, which makes it possible to "join" data of differing quality belonging to different mathematical formalisms and to remove the multiparametricity. In this case the desired model is built on the basis of empirical information and is not constrained in advance by the framework of any particular mathematical apparatus, which determines the "softness" of the formalization, inevitable in cases where the essential regularities of the phenomena are unknown.

The development of simulation modeling by expanding the information base, by combining formal and informal methods in the step-by-step synthesis of the required model and, finally, by actively involving a person in a dialogue with the computer will, according to many researchers, provide an effective technology for system-ecological modeling. Even now, however, the situation turns out to be not so clear-cut. Indeed, if we compare the existing information demands in the field of environmental problems with the existing information support for their solution (various mathematical and simulation models, principles of processing environmental information), it is easy to see that not all levels of natural and anthropogenic complexes have a developed apparatus for their description, let alone for the design of effective information technologies for obtaining the necessary assessments of problem situations. The difficulties that arise here are not only, and not so much, of the technical nature of accumulating models of various types. These features are most clearly manifested in global modeling, whose experience has shown a significant and fundamentally irremovable incompleteness of knowledge about the processes occurring in nature, manifested both in the fragmentation of empirical data and in the absence of adequate ideas about the regularities of the evolution of natural processes. It is already clear that a mechanical collection of model hierarchies and the accumulation of banks of empirical data amount to an attempt to revive primitive schemes of reasoning about a holistic picture of the development of biospheric processes, without hope of success, without the possibility of explaining the capacity of living systems for permanent self-organization, and without significant progress toward understanding the mechanisms of functioning of the Nature-Society system. The situation is such that it is necessary to use computer technologies that combine the methods of evolutionary and simulation modeling. This makes it possible to take into account the internal dynamics (evolution) of the structure of the processes being modeled and to synthesize models adaptively under conditions of incomplete and only partially reliable data.

Traditional approaches to the construction of a global model encounter difficulties in the algorithmic description of many socio-economic and climatic processes, so that one inevitably has to deal with information uncertainty. Established approaches to global modeling simply ignore this uncertainty, with the result that the structure of the models does not adequately capture the real processes. The joint use of evolutionary and simulation modeling makes it possible to eliminate this disadvantage by synthesizing a combined model whose structure is adapted on the basis of the prehistory of the complex of biospheric and climatic components. The implementation of the model can likewise be combined across different classes of models, using software tools on conventional computers and special processors of an evolutionary type. The form of such a combination varies and depends on the space-time completeness of the global databases.

The existing experience of global modeling is replete with examples of insurmountable difficulties in attempting to describe scientific and technological progress and human activity in its various manifestations. No less difficult is modeling a climate characterized by a superposition of processes with different temporal rates of variability. As for the completeness of description in a global model, it is impossible here to outline clearly the limits of information support and the boundaries of the necessary spatial and structural detail. Therefore, without delving into a natural-philosophical analysis of global problems and without trying to give an exhaustive recipe for global modeling, we discuss only one of the possible ways of showing how evolutionary modeling, in a special-processor implementation, allows the above-mentioned difficulties to be overcome.

Adjusting the evolutionary model to the prehistory of natural rhythms yields a model that implicitly tracks the various regularities of the dynamics of the Nature-Society system in the past and makes it possible to predict in the same temporal rhythm. The special-processor version of the model removes the algorithmic and computational difficulties that arise from the large dimension of the global model and the presence of many parametric uncertainties.

2. Modeling of climatic processes

The climatic component of the Nature-Society system presents the greatest difficulty in the synthesis of the global model, since it is characterized by a large number of feedbacks, many of which are unstable. Among them are the ice-albedo, water vapor-radiation, cloudiness-radiation and aerosol-radiation feedbacks, and many others, including those involving anthropogenic structures. The construction of a climate model therefore requires taking into account numerous factors whose role in climate formation is in many cases poorly understood. Attempts to describe the Earth's climate system comprehensively by mathematical methods have not yet yielded results that could be used in the global simulation model.

There are two approaches to the synthesis of the global model. One approach is based on including biospheric components in existing or newly developed climate models. The other is to develop, within the framework of a mathematical model of the biosphere, a block that simulates the dependence of biospheric components on climatic parameters. In the first case there are problems with the instability of the solutions of the corresponding systems of differential equations, which makes it difficult to obtain predictive estimates of global changes in the environment. In the second case it is possible to obtain stable forecasts of environmental changes, but their reliability depends on the accuracy of the parameterization of the correlations between climate and biospheric elements. The second approach has the advantage of allowing climate models, described at the scenario level, to be connected to the mathematical model of the biosphere. A detailed analysis of climate modeling issues and an assessment of the state of the art can be found in Marchuk and Kondratiev (1992), Kondratiev (1999), and Kondratiev and Johannessen (1993). Here we discuss a number of models of individual components of the Nature-Society system that correspond to the second approach. Among them are models of the general circulation of the atmosphere, of the interaction of the atmosphere and the ocean, of the sensitivity of climatic parameters to boundary conditions at the Earth's surface, of the relationship between biogeochemical and climatic processes, etc.

The climate system is a physical-chemical-biological system with a practically unlimited number of degrees of freedom. Any attempt to model such a complex system is therefore fraught with serious difficulties, which explains the variety of parametric descriptions of individual processes in this system. For a global model with a time discretization step of up to one year, two options are acceptable. The first is to apply correlations between particular climate-forming processes in a given territory jointly with climate scenarios. The second is based on the use of global monitoring data, which form the basis for series of climatic parameters with territorial and temporal referencing and are used to reconstruct a complete picture of their spatial distribution. One of the common correlation functions is the dependence of the variation ΔT_g of the mean atmospheric temperature on its CO2 content:

ΔT_g = …25 σ,  σ ≥ 1,
ΔT_g = 5.25 σ^2 + 12.55 σ − 7.3,  σ < 1,    (1)

where σ is the ratio of the current CO2 content of the atmosphere, C_a(t), to its pre-industrial level C_a(1850).

From (1) it can be seen that ΔT_g is an increasing function of the amount of atmospheric CO2. An increase in the amount of CO2 in the atmosphere by 20% leads to an increase in temperature by 0.3 °C, and a doubling of atmospheric CO2 causes an increase in ΔT_g of 1.3 °C. A detailed analysis of function (1) and a comparison of the observed joint variations of ΔT_g and σ show that applying model (1) makes it possible to simplify the climatic block of the Nature-Society model. In particular, if (1) is used to calculate (ΔT_g)_2×CO2 for a doubling of the atmospheric CO2 concentration, then the current trend in ΔT_g can be estimated from the formula:

ΔT_g = (ΔT_g)_2×CO2 · ln σ / ln 2,    (2)

where, according to accepted estimates, the pre-industrial value is C_a(1850) = 270 ppm.

Formula (2) approximates the known data only to within an error of about 50%. Indeed, from (2) at C_a(1980) = 338 ppm it follows that ΔT_g = 1.3 K, while the actual warming is estimated by many authors at about 0.6 K.

Undoubtedly, the discussions of recent years about the greenhouse effect caused by the growth of the partial pressure of CO2 in the Earth's atmosphere should be reflected in the global simulation model. Formula (1) takes into account only the influence of CO2. Following Mintzer (1987), the temperature effect of other greenhouse gases can also be taken into account:

ΔT_Σ = ΔT_CO2 + ΔT_N2O + ΔT_CH4 + ΔT_O3 + ΔT_CFC11 + ΔT_CFC12, where

ΔT_CO2 = −0.677 + 3.019 ln[C_a(t)/C_a(t_0)],

ΔT_N2O = 0.057([N2O(t)]^1/2 − [N2O(t_0)]^1/2),

ΔT_CH4 = 0.19([CH4(t)]^1/2 − [CH4(t_0)]^1/2),

ΔT_O3 = 0.7/15,

ΔT_CFC11 = 0.14[CFC11(t) − CFC11(t_0)],

ΔT_CFC12 = 0.16[CFC12(t) − CFC12(t_0)].

The time t_0 is identified with 1980, for which the concentrations of the greenhouse gases are assumed to be known.
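As an illustration, here is a minimal Python sketch of the additive scheme above. The coefficients are those quoted in the text; the gas concentrations in the example call are hypothetical placeholders (only the CO2 values come from this document), since the source merely states that the 1980 values are assumed known.

```python
import math

def delta_T_sum(conc, conc_1980):
    """Additive greenhouse temperature increment (after Mintzer, 1987),
    using the coefficients quoted in the text. `conc` and `conc_1980` are
    dicts with keys 'CO2', 'N2O', 'CH4', 'CFC11', 'CFC12' giving the
    concentrations at time t and at t0 = 1980 in consistent units."""
    dT_CO2 = -0.677 + 3.019 * math.log(conc['CO2'] / conc_1980['CO2'])
    dT_N2O = 0.057 * (conc['N2O'] ** 0.5 - conc_1980['N2O'] ** 0.5)
    dT_CH4 = 0.19 * (conc['CH4'] ** 0.5 - conc_1980['CH4'] ** 0.5)
    dT_O3 = 0.7 / 15                      # constant term as given in the text
    dT_CFC11 = 0.14 * (conc['CFC11'] - conc_1980['CFC11'])
    dT_CFC12 = 0.16 * (conc['CFC12'] - conc_1980['CFC12'])
    return dT_CO2 + dT_N2O + dT_CH4 + dT_O3 + dT_CFC11 + dT_CFC12

# Example with illustrative concentrations (CO2 values from the text, the rest are placeholders)
c_1980 = {'CO2': 338.0, 'N2O': 0.30, 'CH4': 1.6, 'CFC11': 0.17e-3, 'CFC12': 0.28e-3}
c_now  = {'CO2': 385.0, 'N2O': 0.32, 'CH4': 1.8, 'CFC11': 0.25e-3, 'CFC12': 0.53e-3}
print(delta_T_sum(c_now, c_1980))
```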

Among the simple formulas for calculating the latitudinal distribution of the mean global temperature, one can cite the scheme proposed by Sergin (1974):

T(φ) = T_g + γ(sin^2 φ_T − sin^2 φ),    (3)

where φ is the latitude in radians, γ is the temperature difference between the pole and the equator, and φ_T is the latitude at which T(φ) = T_g. Latitudinal temperature variations during the year are satisfactorily described by the model (Sergin, 1974):

T(φ) = T_e − 2φ(T_e − T_N)/π for the northern hemisphere,
T(φ) = T_e − 2φ(T_e − T_S)/π for the southern hemisphere,

where the annual course of the polar temperatures T_N and T_S is given by

T_N = T_N,min + 2t(T_N,max − T_N,min)/t_D for t ∈ [0, t_D/2],
T_N = T_N,min + 2(t_D − t)(T_N,max − T_N,min)/t_D for t ∈ [t_D/2, t_D],
T_S = T_S,max + 2t(T_S,min − T_S,max)/t_D for t ∈ [0, t_D/2],
T_S = T_S,max + 2(t_D − t)(T_S,min − T_S,max)/t_D for t ∈ [t_D/2, t_D].

Here T_N,min (T_S,min) and T_N,max (T_S,max) are the minimum and maximum temperatures at the north (south) pole, °C; t_D is the length of the year in the time units adopted in the model; T_e is the temperature of the atmosphere at the equator, °C. Many authors use estimates such as T_N,min = −30 °C, T_N,max = 0 °C, T_S,min = −50 °C, T_S,max = −10 °C, T_e = 28 °C.
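A minimal Python sketch of this zonal scheme is given below; it assumes the piecewise-linear annual course of the polar temperatures reconstructed above and uses the illustrative values just quoted, with the year length t_D normalized to 1.

```python
import math

# Illustrative values quoted in the text
T_e = 28.0                       # equatorial air temperature, deg C
T_N_min, T_N_max = -30.0, 0.0    # north pole annual extremes, deg C
T_S_min, T_S_max = -50.0, -10.0  # south pole annual extremes, deg C
t_D = 1.0                        # length of the year (normalized)

def pole_temperature(t, T_min, T_max, start_at_min=True):
    """Piecewise-linear annual course between the two polar extremes."""
    a, b = (T_min, T_max) if start_at_min else (T_max, T_min)
    if t <= t_D / 2:
        return a + 2 * t * (b - a) / t_D
    return a + 2 * (t_D - t) * (b - a) / t_D

def zonal_temperature(phi, t):
    """Mean temperature at latitude phi (radians, positive north) and time t."""
    if phi >= 0:  # northern hemisphere
        T_pole = pole_temperature(t, T_N_min, T_N_max, start_at_min=True)
        return T_e - 2 * phi * (T_e - T_pole) / math.pi
    # southern hemisphere: the model starts the year at the austral maximum
    T_pole = pole_temperature(t, T_S_min, T_S_max, start_at_min=False)
    return T_e - 2 * abs(phi) * (T_e - T_pole) / math.pi

# Example: mid-latitude (45 deg N) temperature at mid-year
print(zonal_temperature(math.radians(45.0), 0.5 * t_D))
```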

Of course, such zone-averaged temperatures have dispersions that lead to significant errors. To reflect more accurately the role of various factors in changing the main climatic parameter, temperature, the contribution of each factor must be calculated separately. This can be done on the assumption that the roles of the feedbacks are additive:

ΔT_a,final = ΔT_a + ΔT_a,feedback,

ΔT_a,final = b · ΔT_a.

The feedback factor b is expressed in terms of the gain g: b = 1/(1 − g). The gain g is determined by the albedo α, which on a global scale is a function of T_a. A rough approximation of this dependence can be represented in the following form:

α(T_a) = α_ice                        at T_a ≤ T_ice,
α(T_a) = α_free                       at T_a ≥ T_free,
α(T_a) = α_free + β(T_free − T_a)     at T_ice < T_a < T_free.

Here T_ice and T_free are the mean planetary temperatures at which the entire surface of the Earth is covered with ice or free of it, respectively, and β is the coefficient of transition between these critical states of the Earth's albedo. A fixed value of T_ice (in kelvins) is usually adopted.
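The sketch below, a rough illustration only, codes this piecewise albedo and the feedback factor b = 1/(1 − g); the numerical values of T_ice, T_free, α_ice and α_free are placeholders, since the source does not preserve them.

```python
def albedo(T_a, T_ice=263.0, T_free=293.0, a_ice=0.6, a_free=0.3):
    """Piecewise planetary albedo as a function of mean temperature T_a (K).
    All four default parameter values are illustrative placeholders."""
    if T_a <= T_ice:
        return a_ice
    if T_a >= T_free:
        return a_free
    # linear transition; beta is chosen so the branches join continuously
    beta = (a_ice - a_free) / (T_free - T_ice)
    return a_free + beta * (T_free - T_a)

def feedback_factor(g):
    """Feedback factor b = 1/(1 - g) for gain g < 1."""
    return 1.0 / (1.0 - g)

# Example: amplified temperature increment for an illustrative gain of 0.3
dT_no_feedback = 1.0
print(feedback_factor(0.3) * dT_no_feedback, albedo(280.0))
```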

The use of simple and rather crude climate models can be refined by taking into account the characteristic times of the feedbacks. Some estimates of the time required to establish equilibrium in the interaction of climatic subsystems are given in Table 1. It can be seen that the range of response delays within the Nature-Society system is wide, and it must be taken into account when assessing the consequences of changes in one or several climatic subsystems. In particular, the reserves of cold in the Antarctic ice sheet are so enormous that raising its temperature to 0 °C would require lowering the mean temperature of the World Ocean by 2 °C, i.e. transferring it from the state T_0 = 5.7 °C to the state T_0 = 3.7 °C. Given the data of Table 1, the inertia of such a process would be hundreds of years. The observed rate of anthropogenic climate warming does not yet involve such energy expenditures.

Table 1
Times to reach equilibrium for some subsystems of the Earth's climate system

Subsystem of the climate system | Time to reach equilibrium
Atmosphere: free atmosphere | …
Atmosphere: boundary layer | …
World Ocean: mixed layer | …
World Ocean: deep ocean | …
World Ocean: sea ice | from days to 100 years
Continents: lakes and rivers | …
Continents: soil-vegetation formations | …
Continents: snow cover and surface ice | …
Continents: mountain glaciers | …
Continents: ice sheets | …
Mantle of the Earth | 30 Ma

The mechanism of anthropogenic impact on the climate system manifests itself through greenhouse gas emissions and albedo changes caused by the transformation of land cover, interference in the water cycle and atmospheric pollution. Aerosol particles with radii of 10^-7 to 10^-2 cm are found at almost all altitudes in the atmosphere. Particles of non-anthropogenic origin enter the atmosphere from the surface of the land or ocean and are also formed as a result of chemical reactions between gases. Particles of anthropogenic origin arise mainly from fuel combustion. Table 2 gives an idea of the relative magnitudes of these particle fluxes into the atmosphere.

Table 2
Estimates of fluxes of particles with radii of less than 20 microns emitted into the atmosphere or formed in it (Butcher and Charlson, 1977)

Kind of particles | Flux, 10^6 t/year
Particles of natural origin (weathering, erosion, etc.) | …
Particles from forest fires and burning of forestry waste | …
Sea salt | …
Volcanic dust | …
Particles formed during the release of gases, natural processes:
  sulfates from H2S | …
  ammonium salts from NH3 | …
  nitrates from NOx | …
  bicarbonates from plant compounds | …
Particles formed during the release of gases, anthropogenic processes:
  sulfates from SO2 | …
  nitrates from NOx | …
  bicarbonates | …
Total particles emitted into the atmosphere for natural reasons | …
Total particles emitted into the atmosphere for anthropogenic reasons | …
Total particle flux into the atmosphere | …

The mechanism by which particles influence atmospheric temperature is that the solar radiation falling on the Earth, mainly in the range 0.4-4 µm, is partially reflected and absorbed by them. The global albedo of the "Earth's surface - atmosphere" system thereby changes. In addition, particles affect the processes of moisture condensation in the atmosphere, since clouds, rain and snow form with their participation. Let us use the heat balance equation of the "Earth's surface - atmosphere" system:

(1 − α)E_0* + E_a − σT_S^4 = 0,    (4)

where T_S is the mean effective radiation temperature of the system, close to the temperature of the mean energy level of the atmosphere near the 400 mb surface; E_0* = 0.487 cal cm^-2 min^-1 is the mean intensity of incoming solar radiation for the hemisphere; α is the albedo; σ = 8.14×10^-11 cal cm^-2 min^-1 K^-4 is the Stefan-Boltzmann constant; and E_a is the total intensity of anthropogenic energy sources per unit surface.

Let the albedo be α = α_0 − Δα, where α_0 = 0.35 is the albedo under modern conditions and Δα is the small change in albedo caused by anthropogenic aerosols. From equation (4) we obtain an expression for the temperature:

T_S = [E_0*(1 − α)/σ]^1/4 [1 + E_a/(E_0*(1 − α))]^1/4.    (5)

Assuming that Δα << 1 and E_a/E_0* << 1, we expand the right-hand side of equation (5) in a Taylor series in powers of Δα and E_a/E_0* and retain the first terms of the series:

T_S = [E_0*(1 − α_0)/σ]^1/4 [1 + 0.25 Δα/(1 − α_0) + 0.25 E_a/(E_0*(1 − α_0))].    (6)

It follows from (6) that, for not too strong anthropogenic impacts, the temperature is the sum of a term describing the "Earth surface - atmosphere" system without anthropogenic factors and the terms T_1 and T_2 expressing the contributions of heat and aerosol emissions, respectively:

T_1 = 0.25(1 − α_0)^-1 [E_0*(1 − α_0)/σ]^1/4 E_a/E_0* = 96.046 E_a/E_0*,

T_2 = 0.25(1 − α_0)^-1 [E_0*(1 − α_0)/σ]^1/4 Δα = 96.046 Δα.

Note that the contribution T_1 is very small under modern conditions. If we take E_a = 4×10^-5 cal cm^-2 min^-1 and, consequently, E_a/E_0* = 8.21×10^-5, then T_1 = 0.0079 °C. Thus the direct impact of world energy production on the mean temperature of the atmosphere is currently insignificant. It follows from the expression for T_1 that to raise the atmospheric temperature by 0.5 °C through heat emissions, the condition E_a/E_0* = 0.0052 would have to be satisfied, i.e. anthropogenic heat fluxes into the environment would have to increase by a factor of 63.4. This is equivalent to the release of energy from burning 570×10^9 tons of standard fuel per year.

If energy production is assumed proportional to the population, then T_1 = 96.046 k_TG G S / E_0*, where G is the population density, people/km^2; S is the land area, km^2; and k_TG is the amount of heat energy produced per person, cal/min.

If the effect of the aerosol on the thermal regime of the atmosphere is considered only through the attenuation of direct radiation, then the direct radiation E, its change dE and the change dB in the anthropogenic aerosol load are related by dE/E = −k_B dB, where k_B = 0.1154 km^2/t is a proportionality coefficient and B is the amount of aerosol of anthropogenic origin, t/km^2. Integrating this equation gives E = E_0*(1 − α_0) exp(−k_B B). On the other hand, by the definition of the albedo, E = E_0*(1 − α) = E_0*(1 − α_0 + Δα). Equating these expressions for E, we obtain Δα = −(1 − α_0)[1 − exp(−k_B B)]. Hence the temperature change associated with anthropogenic aerosol pollution of the atmosphere is:

T_2 = −0.25[E_0*(1 − α_0)/σ]^1/4 [1 − exp(−k_B B)] = −62.43[1 − exp(−k_B B)].

Since the mean release of aerosols of anthropogenic origin is, according to many authors, 300×10^6 t/year, and the mean residence time of aerosols in the atmosphere is estimated at 3 weeks, on average about 17.262×10^6 t of particles are present in the atmosphere. From the formula for T_2 it then follows that the temperature of the atmosphere should decrease by about 0.84 °C.
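The short Python sketch below reproduces these balance estimates from equations (4)-(6); the only quantity not stated in the text is the area over which the standing aerosol mass is spread, which is assumed here to be the land area (about 149×10^6 km^2).

```python
import math

# Constants quoted in the text (cal, cm, min units)
E0 = 0.487            # mean incoming solar radiation, cal cm^-2 min^-1
sigma = 8.14e-11      # Stefan-Boltzmann constant, cal cm^-2 min^-1 K^-4
alpha0 = 0.35         # present-day planetary albedo
kB = 0.1154           # aerosol attenuation coefficient, km^2/t

T_base = (E0 * (1 - alpha0) / sigma) ** 0.25   # unperturbed T_S, K
coef = 0.25 * T_base / (1 - alpha0)            # the factor 96.046 quoted in the text

# Heat-emission term T1 for E_a = 4e-5 cal cm^-2 min^-1
Ea = 4e-5
T1 = coef * Ea / E0                            # about 0.0079 deg C

# Aerosol term T2: 300e6 t/yr emitted, ~3 weeks residence time
standing_mass = 300e6 * (21.0 / 365.0)         # about 17.26e6 t in the air
land_area = 149e6                              # km^2 (assumption, not from the text)
B = standing_mass / land_area                  # aerosol load, t/km^2
T2 = -0.25 * T_base * (1 - math.exp(-kB * B))  # close to the -0.84 deg C quoted in the text

print(round(T_base, 1), round(coef, 2), round(T1, 4), round(T2, 2))
```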

Many authors consider, instead of the indicator Δα, the atmospheric turbidity factor B_T, defining it as the ratio of the coefficient b_r of attenuation of solar radiation energy in the real atmosphere to the attenuation coefficient b_I in an ideal atmosphere:

B_T = b_r/b_I = (b_I + b_W + b_A)/b_I, where b_W and b_A are the attenuation coefficients due to water vapor and aerosols, respectively. The global simulation model adopts the following estimates:

B_T = 3 in middle latitudes,
B_T = 3.5 in tropical latitudes,
B_T = 2 for a reduced content of dust and water vapor.

The experience of modeling the Earth's climate shows that the desire of many authors to take into account all possible feedbacks and elements of the climate system as accurately and completely as possible leads to complex mathematical problems whose solution requires an enormous amount of data, and in most cases the solutions of the corresponding equations turn out to be unstable. The use of such complex models as a block of the global model of the Nature-Society system therefore inevitably leads to a negative result, i.e. to the impossibility of synthesizing an efficient model. By far the most promising approach is to combine climate models with global monitoring data. The scheme of such a combination is very simple. The existing ground-based and satellite systems for monitoring climate-forming processes cover a certain fraction of the cells S_ij of the Earth's surface. Temperature, cloudiness, the content of water vapor, aerosols and gases, albedo and many other parameters of the energy fluxes are measured over these cells. The use of simple climate models, together with methods of spatio-temporal interpolation, makes it possible to reconstruct from these measurements a complete picture of the distribution of the climatic parameters over the entire territory of the globe.
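As a toy illustration of this scheme, the following sketch fills unmeasured grid cells by inverse-distance weighting of the measured ones; the grid size, the monitored cells and their values are made-up examples, not data from the source, and the interpolation method itself is only a crude stand-in for the procedures mentioned above.

```python
import numpy as np

def fill_grid(n_lat, n_lon, measured):
    """Reconstruct a full n_lat x n_lon field from sparse measurements.

    `measured` maps (i, j) grid indices to observed values; unobserved
    cells are filled by inverse-distance weighting."""
    field = np.full((n_lat, n_lon), np.nan)
    pts = np.array(list(measured.keys()), dtype=float)
    vals = np.array(list(measured.values()), dtype=float)
    for i in range(n_lat):
        for j in range(n_lon):
            if (i, j) in measured:
                field[i, j] = measured[(i, j)]
                continue
            d2 = (pts[:, 0] - i) ** 2 + (pts[:, 1] - j) ** 2
            w = 1.0 / d2
            field[i, j] = np.sum(w * vals) / np.sum(w)
    return field

# Hypothetical example: a 4 x 8 grid with three monitored cells (values in deg C)
obs = {(0, 1): -12.0, (2, 4): 15.0, (3, 7): 27.0}
print(fill_grid(4, 8, obs).round(1))
```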

The social aspect has thus entered into interaction with the problem of harmony in the relationship between society and nature. The fate of the biosphere will depend on how quickly the population of the Earth solves the problem of finding the optimal balance between a "reasonable" and an "unreasonable" attitude toward the environment. Moreover, as model estimates have shown, about 90% of all mankind should accept this. It is unlikely, however, that at this stage of history such a proportion of the population is able, consciously and in accordance with its moral and ethical principles, to switch painlessly and voluntarily from the position of conquering nature to the position of developing new, harmonious relations between nature and society. To move toward global harmony it is necessary to focus attention on negative environmental and socio-economic changes so that environmental knowledge is put into practice, i.e. brought to the stage of constructive applications in the form of specific technologies that ensure high-quality decision-making in the field of environmental protection.

Literature

1. Krapivin V.F., Kondratiev K.Ya. Global Environmental Changes: Ecoinformatics. St. Petersburg, 2002.

2. http://climate2008.igce.ru/v2008/htm/1.htm - Assessment Report on Climate Change and Its Impacts on the Territory of the Russian Federation.


A marked increase in interest in climate change has been noted since the end of the last century. This is due to the growing changes in nature, which are already obvious even to the lay observer. To what extent are these changes due to natural processes, and to what extent are they related to human activity? A conversation with experts, leading researchers of the Institute of Computational Mathematics of the Russian Academy of Sciences, will help us sort this out. Evgeny Volodin and Nikolai Diansky, with whom we are talking today, are engaged in climate modeling at the institute and are Russian participants in the Intergovernmental Panel on Climate Change (IPCC).

— What facts of global climate change are reflected in the studies and included in the fourth assessment report?

— We all feel the consequences of global warming even at the everyday level: winters, for example, have become warmer. If we turn to the scientific data, they show that 11 of the last 12 years are the warmest for the entire period of instrumental observations of global temperature (since 1850). Over the past century the global mean air temperature has risen by 0.74 °C, and the linear temperature trend over the past 50 years is almost twice the corresponding value for the century as a whole. As for Russia, the winter months in most of the country over the past 20 years have been on average 1-3 degrees warmer than the winters of the preceding twenty years.

Climate change does not mean a simple increase in temperature. The well-established term "global climate change" means the restructuring of all geosystems, and warming is regarded as only one aspect of it. Observational data indicate a rise in the level of the World Ocean, melting of glaciers and permafrost, increasing unevenness of precipitation, changes in river flow and other global changes associated with climate instability.

Significant changes have taken place not only in average climatic characteristics but also in climate variability and extremes. Paleoclimatic data confirm the unusual nature of the ongoing climatic changes, at least for the last 1300 years.

How is a scientific climate forecast made? How are climate models built?

— One of the most important tasks of modern climatology is to predict climate change in the coming centuries. The complexity of the processes occurring in the climate system does not allow extrapolation of past trends or statistical and other purely empirical methods to be used for obtaining projections. Complex climate models have to be built to obtain such estimates. In such models experts try to take into account all the processes affecting weather and climate as completely and accurately as possible. The objectivity of the forecasts improves when several different models are used, since each model has its own peculiarities. An international program is therefore under way to compare climate change forecasts obtained with different climate models under the scenarios, proposed by the IPCC, of possible future changes in the content of greenhouse gases, aerosols and other pollutants in the atmosphere. The Institute of Computational Mathematics of the Russian Academy of Sciences (INM RAS) participates in this program. In total it covers about two dozen models from countries where the fields of science needed to create such models are sufficiently developed: the USA, Germany, France, Great Britain, Russia, Australia, Canada, China and others.

The main components of an Earth climate model are general circulation models of the atmosphere and the ocean, so-called coupled models. The atmosphere serves as the main "generator" of climate change, and the ocean as the main "accumulator" of these changes. The climate model created at INM RAS reproduces the large-scale circulation of the atmosphere and the World Ocean in good agreement with observational data and with a quality not inferior to that of modern climate models. This was achieved mainly because, in creating and tuning the models of the general circulation of the atmosphere and ocean, it proved possible to ensure that these models (in stand-alone mode) reproduce the climatic states of the atmosphere and ocean quite well. Moreover, before being used to predict future climate changes, our climate model, like the others, was verified (in other words, tested) against the reproduction of past climate changes from the end of the 19th century to the present.

And what are the simulation results?

— We have conducted several experiments under the IPCC scenarios. The three most important are, roughly speaking, a pessimistic scenario (A2), in which the human community develops without paying attention to the environment; a moderate one (A1B), in which restrictions such as the Kyoto Protocol are imposed; and an optimistic one (B1), with still stronger restrictions on anthropogenic impact. Under all three scenarios it is assumed that the volume of fuel combustion (and, consequently, of carbon emissions into the atmosphere) will grow, only at a faster or slower pace.

According to the pessimistic, "warmest" scenario, the average warming near the surface in 2151-2200 relative to 1951-2000 will be about 5 degrees. Under the more moderate scenario it will be about 3 degrees.

Significant climate warming will also occur in the Arctic. Even under the more optimistic scenario, in the second half of the 21st century the temperature in the Arctic will increase by about 10 degrees compared with the second half of the 20th century. It is possible that in less than 100 years polar sea ice will persist only in winter and melt in summer.

At the same time, according to our model and others, there will be no intensive rise in sea level in the next century. The melting of the continental ice of Antarctica and Greenland will be largely compensated by an increase in snowfall in these regions associated with increased precipitation under warming. The main contribution to the rise in ocean level should come from the thermal expansion of water as the temperature increases.

The results of experiments with the INM RAS climate system model on predicting climate change, together with the results of other, foreign models, were included in the IPCC report, which, together with A. Gore, was awarded the Nobel Peace Prize in 2007.

It should be noted that, to date, only the results obtained with the INM climate model represent Russia in the fourth IPCC report.

They say that European weather is born in the Atlantic - is it really true?

— Weather events over the North Atlantic certainly have a strong impact on Europe. In temperate latitudes, from the Earth's surface up to 15-20 km, the wind blows mainly from west to east, i.e. air masses most often come to Europe from the west, from the Atlantic. But this is not always the case, and in general it is impossible to single out any one place where European weather is entirely formed.

European weather as a large-scale phenomenon is formed by the general state of the atmosphere of the Northern Hemisphere. Naturally, the Atlantic occupies a significant place in this process. What matters here, however, is not so much the intrinsic variability (the deviation from the annual course) of the oceanic circulation processes in the North Atlantic as the fact that the atmosphere, being a much more variable medium, uses the North Atlantic as an energy reservoir for generating its own variability.

Here we pass from climate prediction and modeling to weather prediction and modeling. These two problems need to be distinguished. In principle, roughly the same models describing the dynamics of the atmosphere are used for both. The difference is that the initial conditions of the model are very important for weather prediction: their quality largely determines the quality of the forecast.

When modeling climate change over periods from several decades to several centuries and millennia, the initial data do not play such an important role; what matters is taking into account the external forcings on the atmosphere that cause climate change. Such forcings can be changes in the concentration of greenhouse gases, the injection of volcanic aerosols into the atmosphere, changes in the parameters of the Earth's orbit, and so on. Our institute is developing one such model for Roshydromet.

What can be said about climate change in Russia? What should be especially feared?

- In general, as a result of warming, the climate of central Russia will even improve to some extent, but in the south of Russia it will worsen because of increased aridity. A big problem will arise from the thawing of permafrost, which occupies vast territories.

In Russia, under any warming scenario, the temperature will rise approximately twice as fast as the average for the Earth, which is also confirmed by the data of other models. In addition, according to our model data, warming in Russia will be stronger in winter than in summer. For example, with a mean global warming of 3 degrees, warming in Russia will average 4-7 degrees over the year; summers will warm by 3-4 degrees and winters by 5-10 degrees. Winter warming in Russia will be due, among other things, to a slight change in the atmospheric circulation: the intensification of westerly winds will bring more warm Atlantic air masses.

— What is the conclusion of the IPCC and, in particular, domestic scientists regarding the anthropogenic contribution to climate change?

- Historical experience shows that any intervention in nature does not go unpunished.

The IPCC report emphasizes that the warming observed in recent decades is mainly a consequence of human influence and cannot be explained by natural causes alone. The anthropogenic factor is at least five times greater than the effect of fluctuations in solar activity. The reliability of these conclusions, based on the latest results of the analysis of observational data, is assessed as very high.

The results of our modeling also convincingly demonstrate the dominant role of the anthropogenic contribution. Climate models reproduce observed warming well if they take into account emissions of greenhouse gases and other gases due to human activities, and do not reproduce warming if only natural factors are taken into account. In other words, model experiments demonstrate that without the "contribution" of man, the climate would not have changed to today's values.

Let us clarify that modern climate models also include the calculation of CO2 concentration. Such models show that natural fluctuations of the CO2 concentration in the climate system on time scales of centuries or less do not exceed a few percent. The existing reconstructions also indicate this. In the last few thousand years of the pre-industrial era, the CO2 concentration in the atmosphere was stable, ranging from 270 to 285 ppm (parts per million). Now it is about 385 ppm. Calculations with the models, as well as estimates from measurement data, show that the climate system tends, on the contrary, to compensate for CO2 emissions: only about half or slightly more of all emissions go to increasing the CO2 concentration in the atmosphere, while the remainder dissolves in the ocean and goes to increase the mass of carbon in plants and soils.

How do you think climate forecasts will evolve?

- The climate system is very complex, and humanity needs a reliable forecast. All the models developed so far have their drawbacks. The international scientific community has selected about two dozen of the most successful of the existing models, and a generalized forecast is issued by comparing them; it is believed that the errors of the various models compensate each other in this case.

Modeling is a difficult task and a great deal of work. Many parameters enter the calculations, taking into account transport processes and the interaction of the atmosphere and the ocean. A new version of the model is now being built at our institute. For example, there is a problem near the pole, where, because of the convergence of the meridians, the longitudinal grid steps shrink, which leads to unjustified "noise" in the model solution. The new model will use higher spatial resolution in the atmospheric and oceanic components and more advanced parameterizations of physical processes. This will increase the accuracy of the simulation, and a new forecast will be made with this model of a new level.

For some reason, much less attention is paid to modeling problems in our country than in the West, where significant financial and scientific resources are devoted precisely to the creation of numerical models of the circulation of the atmosphere and ocean. These tasks require high-performance multiprocessor computing systems (the INM supercomputer used for climate prediction is included in the TOP-50 rating of the CIS countries). Our work has been supported only by some programs of the Russian Academy of Sciences and by projects of the Russian Foundation for Basic Research.

In the near future a new stage of experiments with coupled models under the IPCC program will begin. This phase will involve updated models of the Earth's climate with higher spatial resolution and a wider range of simulated physical processes. Climate models are gradually evolving into models of the Earth system as a whole, which no longer only calculate the dynamics of the atmosphere and ocean but also include detailed submodels of atmospheric chemistry, vegetation, soil, marine chemistry and biology, and other processes and phenomena that affect climate.

Introduction

The central problem of modern climate theory is that of predicting the climate changes caused by anthropogenic activity. Owing to the specific features of the climate system, discussed below, this problem cannot be solved by the traditional methods repeatedly tested in the natural sciences. The main methodological basis for its solution at present is numerical modeling of the climate system using global climate models, which are based on global models of the general circulation of the atmosphere and ocean. Naturally, the formulation of climate models requires field experiments, whose analysis makes it possible to construct ever more accurate models of the specific physical processes that determine the dynamics of the climate system. Such experiments, however, do not solve the main problem: determining the sensitivity of the real climate system to small external forcings.

Climate system and climate

Climate is understood as the set of weather features that are repeated most often in a given area and create a typical regime of temperature, moisture and atmospheric circulation. "Typical" refers to those features that remain practically unchanged over roughly one generation, i.e. 30-40 years. These features include not only mean values but also indicators of variability, such as the amplitude of temperature fluctuations. When dealing with such long-term processes, the climate of any area cannot be considered in isolation: owing to heat exchange and air circulation, the whole planet takes part in its formation. It is therefore natural to speak of the climate of the planet Earth. The peculiarities of the climate of individual regions are the refraction of general regularities in a particular situation, so that the global climate is not so much made up of local climates as local climates are determined by the global one. Moreover, weather and climate change are determined by phenomena occurring not only in the atmosphere but also in other geospheres: the ocean, vegetation, snow and ice cover, soil and, further, human activity not only influence the atmosphere but also depend on it. Thus the climate system includes the atmosphere together with the processes and properties of the other elements of the geographic envelope that affect the atmosphere and depend on it. External factors, unlike internal ones, affect the atmosphere but do not depend on it; such, for example, is the radiation coming from outer space.



Features of the climate system as a physical object

The climate system as a physical object has a number of specific features.

1. The main components of the system, the atmosphere and the ocean, can be regarded geometrically as thin films, since the ratio of the vertical to the horizontal scale is about 0.01-0.001. Thus the system is quasi-two-dimensional; nevertheless, vertical density stratification is very important, and large-scale vertical motions are responsible for baroclinic energy transformations. The characteristic time scales of the energetically significant physical processes range from one hour to tens and hundreds of years. All this makes laboratory modeling of such a system extremely difficult, to put it mildly.

2. It is impossible to perform a purposeful physical experiment with the climate system. Indeed, we cannot, for example, load the climate system with carbon dioxide and, keeping all other conditions fixed, measure the resulting effect.

3. We have at our disposal only short series of observational data, and even these concern only individual components of the climate system. There are, of course, many other important features of the climate system that should be considered, but even those listed above allow us to conclude that the main means of studying the climate system is mathematical modeling. The experience of recent years shows that the main results of climate theory have been obtained through the construction and use of global climate models.

Mathematical models of the climate system

In this section we briefly discuss the main assumptions on which the construction of modern climate models is based. Modern climate models are built around models of the general circulation of the atmosphere and ocean, and the central direction of their development is an ever more accurate description of all the physical processes involved in climate formation. Their construction rests on a number of principles. It is assumed that the equations of classical equilibrium thermodynamics are locally valid, and that the Navier-Stokes equations for a compressible fluid are valid for describing the dynamics of the atmosphere and ocean. Since modern models, mainly because of computational constraints, use the Reynolds equations, i.e. the Navier-Stokes equations averaged over certain spatial and temporal scales, it is assumed that there is a fundamental possibility of closing them. The closure procedure assumes that the effects of subgrid-scale processes (scales smaller than the averaging scale) can be expressed in terms of the characteristics of the large-scale processes. These processes include:

1) radiation transfer (shortwave and longwave radiation);

2) phase transitions of moisture and local precipitation formation;

3) convection;

4) boundary and internal turbulent layers (some characteristics of these layers are described explicitly);

5) small-scale orography;

6) wave resistance (interaction of small-scale gravity waves with the main flow);

7) small-scale dissipation and diffusion;

8) small-scale processes in the active layer of the land.

Finally, to describe large-scale atmospheric and oceanic motions the hydrostatic approximation is adopted: the vertical pressure gradient is balanced by gravity. The use of this approximation requires additional simplifications (a constant radius of the Earth, neglect of the Coriolis force components involving the vertical velocity) so that the energy conservation law holds in the system of equations in the absence of external energy sources and dissipation. The equations of atmospheric and oceanic hydrothermodynamics, together with the closure of subgrid-scale processes and the boundary conditions, constitute the climate model, for which the following mathematical questions arise.
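For reference, the hydrostatic approximation mentioned above can be written as a single balance relation (a standard form, quoted here for illustration rather than taken from the source text):

```latex
% Hydrostatic balance: the vertical pressure gradient is balanced by gravity
\frac{\partial p}{\partial z} = -\rho g,
\qquad\text{or, in pressure coordinates,}\qquad
\frac{\partial \Phi}{\partial p} = -\frac{RT}{p},
```

where p is pressure, ρ density, g the acceleration of gravity, Φ the geopotential, R the gas constant and T temperature.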

I. Global solvability theorem on any, arbitrarily large, time interval t.

Unfortunately, no such theorem currently exists for a spherical coordinate system with "correct" boundary conditions, and this does not simply follow from the absence of such theorems for the three-dimensional Navier-Stokes equations: the equations of modern climate models have dimension "2.5", because the hydrostatic equation is used instead of the full third equation of motion.

II. Existence of a global attractor.

This assertion is proved under the condition that S is a strictly positive-definite operator:

(Sφ, φ) ≥ μ(φ, φ),  μ > 0.

The problem is that in the general case this cannot be asserted, since the continuity equation for a compressible fluid is not dissipative.

III. Dimension of the attractor.

Constructive estimates of the dimension of attractors for models of this class are very crude. They are upper bounds and are generally unsuitable for the theory considered in the previous section.

MINISTRY OF EDUCATION AND SCIENCE OF UKRAINE

ODESSA STATE ENVIRONMENTAL UNIVERSITY

At the student scientific conference OGECU

"Analysis of climate models using physical methods"

Prepared by a student of group VB-11

Smokova V.D.

Scientific adviser: d.t.s.

Romanova R.I.

Odessa-2015

Bibliography:

http://umeda.ru/concept_climate

http://www.inm.ras.ru/vtm/lection/direct2/direct2.pdf

Volodin E.M., Diansky N.A. Response of a coupled atmospheric-ocean general circulation model to an increase in carbon dioxide.

Volodin E.M., Diansky N.A. Simulation of climate change in the 20th - 22nd centuries using a joint model of the general circulation of the atmosphere and the ocean.

Gritsun A.S., Dymnikov V.P. Response of the barotropic atmosphere to small external influences. Theory and numerical experiments.

Dymnikov V.P., Lykosov V.N., Volodin E.M., Galin V.Ya., Glazunov A.V., Gritsun A.S., Diansky N.A., Tolstykh M.A., Chavro A.I. Modeling of climate and its changes. In: "Modern Problems of Computational Mathematics and Mathematical Modeling".

To provide a better understanding of the complex climate system, computer programs must describe how the components of the climate interact. These general circulation models (GCMs) are widely used to understand past climate changes and to try to identify possible future responses of the climate system to changing conditions. Can change occur over a short period, such as a decade or a century? Will changes be preceded by phenomena such as an increase in the frequency of El Niño events, with their intrusion of warm western Pacific waters toward South America? What different mechanisms of heat transport toward the poles could sustain other states of the climate? These and many other questions point to the complexity of modern climate research. Simple causal explanations usually fail in this arena. Sophisticated computer models are practically the only tools available, so they are commonly used to substantiate claims about climate and global dynamics.

For about 20 years climate modelers have been using some version of the Community Climate Model (CCM1) of the National Center for Atmospheric Research (NCAR). CCM1, released in 1987, was run on large serial supercomputers. Many of these researchers now use CCM2, a step forward whose importance has been likened to moving from some other planet to the Earth. This step roughly coincided with the arrival of large shared-memory parallel vector computers such as the Cray Y-MP. Parallel computers allow more detailed climate modeling. A detailed study of the balance of physical processes in the models approaches the observed state as more details are modeled and as confidence is gained in the physics being described.

Modern atmospheric climate models describe the qualitative structure of the global circulation very well. The transfer of energy from the warm equatorial regions to the cold poles and the division of the general circulation into cells are reproduced in the simulations both qualitatively and quantitatively. The tropical Hadley cell, the mid-latitude Ferrel cell and the jet streams are in good agreement with observations. So are the main patterns of the atmospheric circulation felt at the Earth's surface, such as the equatorial calms, the trade winds, the mid-latitude westerlies and the polar highs.

The ability of models to reproduce the current climate builds confidence in their physical validity. This by itself, however, is not sufficient grounds for using models to predict the future climate. Another important line of evidence has been their application to past climate regimes. The NCAR CCM was used to simulate the climatic effects of increased summer solar radiation in the Northern Hemisphere caused by changes in the Earth's orbit. One of the consequences was a warming of land temperatures, which produced more intense monsoons. The increase or decrease in solar radiation caused by changes in the Earth's orbit is the proposed cause of the conditions that shaped the climates of past periods. According to Stephen Schneider of NCAR, "The ability of computer models to reproduce local climate responses to changes in solar radiation brought about by variations in the Earth's orbit provides a basis for confidence in the reliability of these models as tools for predicting the future climatic impacts of an increased greenhouse effect."

CCM2, the latest code in the series of climate models developed by NCAR, captures the complex interplay of the physical processes described above. Suitable for university and industrial research users, this climate model simulates the time-varying response of the climate system to daily and seasonal changes in solar heating and sea surface temperatures. Over the past 10 years and for the foreseeable future, such models form the basis of a wide variety of climate studies and scenario tests used in national energy and environmental policy decisions.

Parallel Computing Used in Global Circulation Models

Advances in computer technology have been welcomed by climate researchers, because long-term climate simulations can take months of computing time to complete. The latest generation of supercomputers is based on the idea of parallelism. The Intel Paragon XP/S 150 can attack a single difficult task with the combined speed of 2048 processors. This computer differs from other supercomputers in that the memory of each processor is not accessible to the other processors; such a system is called a distributed-memory rather than a shared-memory computer. This design allows enormous parallelism to be applied to a task, but it complicates the organization of the computations.

CCM2 is used almost exclusively on parallel supercomputers. The large computational requirements and the huge volume of output data generated by the model preclude its efficient use on workstation-class systems. The dynamics algorithm in CCM2 is based on spherical harmonics, functions long favored by mathematicians and physicists for representing fields on the surface of a sphere. The spectral transform converts gridded data on the sphere into a compact, accurate representation: data on a 128x64 point grid over the Earth's surface can be represented by as few as 882 numbers (coefficients) instead of 8192. Because of the accuracy of the spherical harmonic representation and the efficiency of the algorithms used to compute the transform, this method long dominated the choice of numerical schemes for weather and climate models. The transform is a "global" method in the sense that it requires data from the whole globe to compute each harmonic coefficient. On distributed-memory parallel computers these calculations require communication among all processors, and because communication is expensive on a parallel computer, many thought that the transform method had outlived its day.
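The compactness argument can be illustrated with a one-dimensional Fourier analogue (an illustration only, not the actual spectral transform used in CCM2): a smooth field sampled at 128 longitudes is represented to high accuracy by a handful of spectral coefficients.

```python
import numpy as np

n_lon = 128
lon = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)

# A smooth "field" along a latitude circle (illustrative, not model data)
field = 15.0 + 5.0 * np.cos(lon) + 2.0 * np.sin(3.0 * lon) + 0.5 * np.cos(6.0 * lon)

# Forward transform: 128 grid values -> spectral coefficients
coeffs = np.fft.rfft(field)

# Keep only the first few wavenumbers (spectral truncation)
truncated = np.zeros_like(coeffs)
truncated[:8] = coeffs[:8]

# Inverse transform back to the grid
reconstructed = np.fft.irfft(truncated, n=n_lon)

# The truncated representation uses 8 complex coefficients instead of 128 reals,
# yet reproduces the smooth field almost exactly
print(np.max(np.abs(reconstructed - field)))
```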


Before the ORNL researchers became involved, parallelism in the models was limited to the shared-memory paradigm and used only a few processors, from 1 to 16. Because of the global coupling required by the spectral transform, distributed-memory parallel computers did not look promising. Research at ORNL, however, found ways of organizing the computations that completely changed this view and made it possible to implement CCM2 on massively parallel computers.

Our research has identified several parallel algorithms that keep the transform method competitive even when many processors are used, as on ORNL's Intel Paragon XP/S 150. This powerful machine has 1024 node boards, each with two compute processors and a communications processor. The full parallel CCM2 climate model was developed for this computer through a collaboration of researchers from ORNL, Argonne National Laboratory, and NCAR. It is currently being used by the Computer Science and Mathematics Division at ORNL as the basis for developing a coupled ocean-atmosphere climate model under the sponsorship of the Office of Health and Environmental Research.


With the increase in computing power offered by the new generation of parallel computers, many researchers are looking to improve the models by coupling the ocean and the atmosphere. This remarkable advance brings us one step closer to a complete model of the climate system, and with such a coupled model many new areas of climate study open up. First, it offers an improved way of simulating the Earth's carbon cycle: ocean and land processes (for example, forests and soils) act as sources and sinks for atmospheric carbon. Second, coupling atmospheric models to high-resolution, eddy-resolving ocean models will let scientists examine hitherto inaccessible questions of climate prediction. The coupled models will exhibit the characteristic modes of ocean-atmosphere interaction, of which El Niño is just one; detecting and recognizing these modes will help provide a key to the problem of climate prediction.

Our models could be used to predict the overall effect on climate of opposing atmospheric influences, both artificial and natural: warming due to the greenhouse effect and cooling due to sulfate aerosols. Using the increased computing power of the Intel Paragon, the IBM SP2, or the Cray Research T3D, researchers can advance step by step in understanding the complex interdependencies between natural processes, human activities such as the burning of fossil fuels, and the climate of our earthly home.

A climate model is a mathematical model of the climate system.

A model of the climate system should include a formalized description of all its elements and of the relationships between them. Its basis is a thermodynamic construction built on mathematical expressions of the conservation laws (for momentum, energy, and mass, as well as for water vapor in the atmosphere and fresh water in the ocean and on land). This macroblock of the climate model makes it possible to account for the energy arriving from outside and to calculate the resulting state of the planet's climate.
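Each of these conservation laws can be written, schematically, in the same generic budget form; the following is a sketch in LaTeX notation, not the specific equation set of any particular model:

\frac{\partial (\rho \varphi)}{\partial t} + \nabla \cdot (\rho \varphi \mathbf{v}) = S_{\varphi},

where \varphi is the conserved quantity per unit mass (a velocity component, enthalpy, water vapor, fresh water, and so on), \mathbf{v} is the velocity field, \rho the density, and S_{\varphi} collects the sources and sinks (radiative heating, phase transitions, precipitation, etc.).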

Modeling the thermodynamic processes is a necessary but not a sufficient condition for a complete reproduction of the climatic regime. Certain chemical processes and geochemical exchanges between the elements of the climate system also play an important role. Here one speaks of cycles: the carbon cycle in the ocean, the ozone cycles in the stratosphere (involving oxygen and also chlorine, bromine, fluorine, and hydrogen), the sulfur cycle, and so on. An important place in a climate model should therefore be occupied by a macroblock of climatically significant chemical processes.

The third macroblock of the climate system should cover the climate-forming processes driven by the activity of living organisms on land and in the ocean. The synthesis of these main links would constitute an ideal climate model.

Models should be built with regard to the characteristic time scales of the processes involved in climate formation. Creating a unified model capable of working on any time scale is, if not impossible, then at least inexpedient in terms of computational cost. The accepted practice is therefore to build models that describe climatic processes of one specific scale. Outside the chosen scale, on the side of slower processes, constant boundary conditions and parameters are used (the changes are assumed to be too slow compared with those under study). On the side of smaller scales, it is assumed that "fast" random fluctuations occur whose detailed description can be replaced by a statistical account of their net effects (for example, through gradients of mean states, as is customary in the semi-empirical theory of turbulence).
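The "statistical account through gradients of mean states" mentioned here is, in its simplest form, the flux-gradient (eddy-diffusion) closure of semi-empirical turbulence theory; schematically, in LaTeX notation:

\overline{w'\varphi'} = -K_{\varphi}\,\frac{\partial \overline{\varphi}}{\partial z},

where \overline{w'\varphi'} is the unresolved turbulent vertical flux of some quantity \varphi, \overline{\varphi} is its mean (resolved) value, and K_{\varphi} is an empirical eddy-exchange coefficient.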

The general principles underlying the ideal model can be implemented with varying degrees of completeness. Thus, in modern models the biological and chemical processes are treated only fragmentarily. This is partly because the models have been developed with a focus on studying short-term climate change, for which long-term (for example, geochemical) effects can be characterized by a set of constants. Modern climate models are therefore, first of all, thermodynamic models; in some cases chemical or biological blocks with a limited set of feedbacks are added to them.

Thermodynamic models, in turn, differ greatly in the degree of detail with which they describe processes. Some are based on simplified expressions, others on "complete" mathematical statements of the basic physical laws. Accordingly, each model can be viewed as a set of algorithms, some of which have a clear mathematical and physical justification (and are in that sense rigorous), while the rest are phenomenological and imitative in nature. The latter are the so-called parameterizations.

The difference between "complete" and simplified models is that the former have a richer physical content, so the range of feedbacks realized automatically within the full system is wider. In simplified models the necessary feedbacks have to be "inserted by hand": dependencies are added to the equations forcibly, often without deep justification. Procedures of this kind reduce the value of the modeling, since an artificially imposed feedback in effect predetermines the result a priori. In addition, the prescribed relationship is always based, in one form or another, on information about the current state of the climate, and when moving to other climatic conditions there is no guarantee that such a construction will give reliable results. The refinement of models is therefore not an end in itself but a way of reproducing the operating mechanisms more completely in physical terms.

However, prescribing effects by hand can be abandoned completely only in an ideal model. Modern models do not resolve a number of important biological and chemical effects, which therefore have to be parameterized.

Despite the seemingly clear advantage of "complete" models, simplified models continue to be used and developed, for the following reasons. First, the so-called "complete" models are in fact, as already noted, far from complete; some of the parameterizations included in them are very rough, and the imperfection of individual blocks determines the imperfection of the model as a whole. Second, simplified models are fundamentally easier to implement in practice: they require orders of magnitude less computing power, which makes it possible to run long computer experiments, perform preliminary calculations, and test new parameterization schemes. Third, simplified models give much clearer, more easily interpreted results than "complete" ones. This "transparency" of the results sometimes makes it possible to study an individual effect with a simplified model, for example to isolate the direct and feedback relationships between the thermal regime and surface albedo, or to study in detail the radiative effects of minor gaseous constituents.

If climate models are ranked by their degree of physical completeness and, at the same time, by their complexity and by the growing demands they place on computer resources (speed, rate of exchange with external devices), then the simplest are the so-called Budyko-Sellers energy balance models, followed by models of "intermediate complexity", and finally the complete climate models based on general circulation.

Before being used to diagnose or predict climate change, every model goes through a validation stage. This consists in checking whether the model, given a set of parameters corresponding to the current state of the climate-forming factors, is able to adequately reproduce the climate actually observed today. If this succeeds well enough, the argument runs as follows: if the model responds correctly to one given (generally speaking, arbitrary) set of external conditions, then it will just as successfully reproduce the conditions corresponding to another set of parameters. Naturally, this argument is plausible only if the model can be assumed to be complete, that is, free of tuning parameters and artificially imposed relationships.
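As a toy illustration of one quantitative step in such a check (the fields below are synthetic, and the whole example is a sketch rather than any real validation protocol), a simulated climatological field can be compared with an observed one using an area-weighted error measure:

# Toy validation sketch: area-weighted root-mean-square error between a
# simulated and an "observed" surface temperature climatology on a lat-lon grid.
import numpy as np

n_lat, n_lon = 64, 128
lats = np.linspace(-90.0, 90.0, n_lat)

# Hypothetical fields (kelvin); in a real validation these would come from a
# model run and from an observational climatology.
rng = np.random.default_rng(0)
t_observed = 288.0 - 30.0 * np.sin(np.deg2rad(lats))[:, None] ** 2 + np.zeros((n_lat, n_lon))
t_model = t_observed + rng.normal(0.0, 1.5, size=(n_lat, n_lon))   # synthetic model "errors"

# Grid cells shrink toward the poles, so weight each latitude row by cos(lat).
weights = np.cos(np.deg2rad(lats))[:, None] * np.ones((n_lat, n_lon))
rmse = np.sqrt(np.average((t_model - t_observed) ** 2, weights=weights))
print(f"area-weighted RMSE: {rmse:.2f} K")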

Energy balance models (models of the Budyko-Sellers type) are based on a simplified form of the energy budget equation of the climate system, in which a single quantity, temperature, acts as the unknown. It was with models of this type that the effectiveness of the feedback between the thermal regime and surface albedo was first demonstrated. One-dimensional (temperature as a function of latitude) and two-dimensional (latitude and longitude) versions of such models exist.
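A minimal sketch of this class of model is given below: a zero-dimensional caricature with a crude temperature-dependent albedo. The parameter values are common textbook choices used here for illustration, not the coefficients of Budyko or Sellers.

# Zero-dimensional energy balance model with a crude ice-albedo feedback:
#   C dT/dt = S0/4 * (1 - albedo(T)) - (A + B*T)
# The outgoing long-wave term A + B*T (T in deg C) is the linearization used
# in Budyko-type models; all numbers below are illustrative.
S0 = 1361.0         # solar constant, W m^-2
A, B = 203.3, 2.09  # outgoing long-wave radiation coefficients, W m^-2 and W m^-2 K^-1
C = 2.0e8           # effective heat capacity, J m^-2 K^-1

def albedo(t_celsius: float) -> float:
    """A warm planet is dark (0.3); a frozen planet is bright (0.6)."""
    if t_celsius < -10.0:
        return 0.6
    if t_celsius > 0.0:
        return 0.3
    return 0.6 - 0.3 * (t_celsius + 10.0) / 10.0   # linear ramp between regimes

def integrate(t0: float, years: int = 200) -> float:
    dt = 86400.0 * 365.0 / 100.0      # about a 3.65-day time step, in seconds
    t = t0
    for _ in range(years * 100):
        absorbed = S0 / 4.0 * (1.0 - albedo(t))
        emitted = A + B * t
        t += dt * (absorbed - emitted) / C
    return t

print("equilibrium from +15 C start:", round(integrate(15.0), 1), "C")
print("equilibrium from -30 C start:", round(integrate(-30.0), 1), "C")

Run from a warm start, this sketch settles near a present-day-like temperature, while from a sufficiently cold start it settles into a "frozen" state: the two stable equilibria produced by the ice-albedo feedback that such models were first used to demonstrate.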

The positive aspects of models of intermediate complexity are obvious. They place no special demands on computer resources and can therefore be used for long experiments, and their results, as with any "simple" model, are clear enough to interpret. The shortcomings are also clear; the fundamental one is that there is no certainty that simplified models are capable of reproducing the climate under conditions of climate formation different from the present ones.

The next step up is the so-called general circulation models of the atmosphere (AGCMs). This name is given to global three-dimensional models based on the full equations of thermohydrodynamics. The spatial resolution of AGCMs ranges from roughly 200x200 km in latitude and longitude with about 20 vertical levels to about 30x30 km with 60 levels in the atmosphere. Already in the 1990s an understanding was reached of the optimal structure of a coupled atmosphere-ocean model (AOGCM) that would balance the goals of the modeling against the available computing resources.
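For a feel of what such a change of resolution costs, the back-of-the-envelope sketch below compares the two resolutions just quoted. It is illustrative scaling only, under the crude assumptions that cost is proportional to the number of grid cells times the number of time steps and that the time step must shrink in proportion to the horizontal grid spacing.

# Rough cost scaling of an atmospheric GCM with horizontal resolution.
EARTH_CIRCUMFERENCE_KM = 40_000.0

def relative_cost(dx_km: float, levels: int) -> float:
    n_lon = EARTH_CIRCUMFERENCE_KM / dx_km   # cells around a great circle
    n_lat = n_lon / 2.0
    cells = n_lon * n_lat * levels
    steps = 1.0 / dx_km                      # time steps scale like 1/dx (CFL-type limit)
    return cells * steps

coarse = relative_cost(dx_km=200.0, levels=20)
fine = relative_cost(dx_km=30.0, levels=60)
print(f"fine/coarse cost ratio: {fine / coarse:.0f}x")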

The improvement of climate models follows the path of better ocean modeling. Models with a resolution of a few tens of kilometers and several tens of vertical levels are already appearing; they possess the property most important for such models: ocean eddies, the main circulation- and energy-bearing structures, are reproduced in them automatically, without parameterization.

The development of the land block follows the path of a detailed description of hydrological processes and of heat and moisture exchange between land and atmosphere, taking into account the role of the vegetation cover. In some cases, depending on the purpose of the model, blocks describing the dynamics of continental ice sheets are attached to the AGCM.

Further development of the models involves a progressive increase in the detail of the simulated fields. This requires the joint efforts of physicists, mathematicians, and specialists in the architecture of modern computers. Generally speaking, it is not clear whether this will lead to the desired physical "completeness" of the model and bring it closer to the ideal, since each deeper treatment of the processes immediately raises new problems, including the insufficient network of observational data. Thus, a fundamental transition from the Reynolds equations, which are used to describe the large-scale dynamics, to the Navier-Stokes equations would give rise to problems of its own; in particular, detailed information on the spatial distribution of the molecular viscosity coefficient would be required.
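For reference, the distinction is between the Navier-Stokes equations for the full velocity field and the Reynolds equations obtained by splitting each variable into a mean and a fluctuation; schematically, for an incompressible fluid, in LaTeX notation:

u_i = \overline{u_i} + u_i', \qquad
\frac{\partial \overline{u_i}}{\partial t} + \overline{u_j}\,\frac{\partial \overline{u_i}}{\partial x_j}
= -\frac{1}{\rho}\,\frac{\partial \overline{p}}{\partial x_i}
+ \nu\,\nabla^{2}\overline{u_i}
- \frac{\partial \overline{u_i' u_j'}}{\partial x_j},

where the last term, the divergence of the Reynolds stresses \overline{u_i' u_j'}, represents the unresolved turbulent motions and is exactly the kind of quantity that must be parameterized (for example, with the flux-gradient closure sketched earlier).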
