Angus Ferraro

A tiny soapbox for a climate researcher.



Can stratospheric aerosols directly affect global precipitation?

What is the effect of stratospheric aerosol geoengineering on global precipitation? If we were to inject sulphate aerosol into the stratosphere it would reflect some sunlight and cool the Earth, but the atmosphere’s CO2 levels would remain high. This is important, because CO2 actually has an effect on precipitation even when it doesn’t affect surface temperature. In a recent paper with a summer student, I’ve shown that the aerosols themselves can contribute a similar effect.

Three climate models (CanESM2, HadGEM2-ES, MPI-ESM-LR) did simulations of the future with and without geoengineering. The simulations with stratospheric aerosols (G3 and G4) show greater temperature-independent precipitation reductions than the simulations without them (RCP4.5 and G3S).


Precipitation as energy flow

Precipitation transfers energy from the Earth’s surface to its atmosphere. It takes energy to evaporate water from the surface. Just as evaporating sweat cools you off by taking up heat from your skin, evaporation from the Earth’s surface cools it through energy transfer. Precipitation occurs when this water condenses out in the atmosphere. Condensation releases the heat energy stored when the water evaporated, warming the atmosphere. Globally, precipitation transfers about 78 Watts per square metre of energy from the surface to the atmosphere. Multiplied over the Earth’s surface area, that’s a total energy transfer of about 40 petajoules (that’s 40 with 15 zeros after it) every second! To put that in a bit of context, it’s about 40% of the amount of energy the Sun transfers to the Earth’s surface.
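For the curious, the arithmetic is easy to check. A minimal sketch in Python, using the global-mean latent heat flux quoted above and the standard Earth radius:

```python
# Back-of-envelope: global energy transfer by precipitation.
import math

latent_heat_flux = 78.0        # W m^-2, global-mean surface latent heat flux
earth_radius = 6.371e6         # m
surface_area = 4 * math.pi * earth_radius**2   # ~5.1e14 m^2

total_power = latent_heat_flux * surface_area  # W, i.e. joules per second
print(f"Total: {total_power:.1e} W")           # ~4.0e16 W = ~40 PJ per second
```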

If precipitation changes, that’s the same as saying the atmospheric energy balance changes. If we warm the atmosphere up, it is able to radiate more energy (following the Stefan-Boltzmann law). To balance that, more energy needs to go into the atmosphere. This happens through precipitation changes.
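In symbols (my own shorthand, not notation from the paper), the atmospheric energy budget balances latent heating from precipitation, plus sensible heating, against net radiative cooling:

$$L_v \bar{P} + SH = Q_{\mathrm{cool}},$$

where $L_v$ is the latent heat of vaporisation, $\bar{P}$ is global-mean precipitation, $SH$ is the surface sensible heat flux and $Q_{\mathrm{cool}}$ is the net radiative cooling of the atmosphere. Perturb the cooling, and precipitation must adjust: $L_v\,\Delta P \approx \Delta Q_{\mathrm{cool}} - \Delta SH$.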

Direct effects of gases on precipitation

Now imagine we change the amount of CO2 in the atmosphere. This decreases the amount of energy the atmosphere emits to space, meaning the atmosphere has more energy coming in than out. To restore balance the atmospheric heating from precipitation goes down. This means that the global precipitation response to global warming from increasing CO2 has two opposing components: a temperature-independent effect of the CO2, which decreases precipitation, and a temperature-dependent effect which arises from the warming the CO2 subsequently causes. In the long run the temperature-dependent effect is larger. Global warming will increase global precipitation – although there could be local increases or decreases.
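Schematically (again, my shorthand), the global precipitation response splits as

$$\Delta P = \underbrace{\eta\,\Delta T}_{\text{temperature-dependent}} + \underbrace{\Delta P_{\mathrm{fast}}}_{\text{temperature-independent}},$$

where $\eta$ is the hydrological sensitivity (positive: a warmer atmosphere radiates more, so precipitation increases) and $\Delta P_{\mathrm{fast}}$ is the direct adjustment to the forcing agent, which is negative for CO2 because CO2 reduces the atmosphere’s radiative cooling.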

But what happens if we do geoengineering? Say we get rid of the temperature-dependent part using aerosols to reduce incoming solar radiation. The temperature-independent effect of CO2 remains and global precipitation will go down.

Detecting the effect of stratospheric aerosol

CO2 isn’t the only thing that has a temperature-independent effect. Any substance that modifies the energy balance of the atmosphere has one. In our new study, we ask whether stratospheric sulphate aerosol has a detectable effect on global precipitation. Theoretically it makes sense, but it is difficult to detect because usually there are temperature-dependent effects obscuring it.

We used a common method to remove the temperature-dependent effect. We calculated the precipitation change for a given surface temperature change from a separate simulation, then used this to remove the temperature-dependent effect in climate model simulations of the future. We did this for future scenarios with and without geoengineering.
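A minimal sketch of that procedure (hypothetical variable names; the paper applies this to specific RCP4.5 and GeoMIP simulations):

```python
import numpy as np

# Hypothetical inputs: global-mean annual anomaly time series from a model.
#   dT_cal, dP_cal -- temperature (K) and precipitation (%) from a separate
#                     calibration run dominated by temperature-driven change
#   dT_scn, dP_scn -- the same quantities from the scenario of interest
#                     (e.g. RCP4.5, G3, G4 or G3S)

def temperature_independent_precip(dT_cal, dP_cal, dT_scn, dP_scn):
    # Estimate the hydrological sensitivity (% per K) by least squares
    # from the calibration simulation...
    eta = np.polyfit(dT_cal, dP_cal, 1)[0]
    # ...then remove the temperature-dependent part from the scenario,
    # leaving the temperature-independent residual.
    return dP_scn - eta * dT_scn
```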

As expected, we found a temperature-independent influence which reduced precipitation. Importantly, this effect was bigger when geoengineering aerosols were present in the stratosphere. This was detectable in three different climate models. The figure above shows this. The non-geoengineered ‘RCP4.5’ simulation shows a precipitation decline when the temperature effect is removed. This comes mainly from the CO2. The ‘G3’ and ‘G4’ geoengineering simulations (blue and green lines) have an even greater decline. The aerosol is acting to decrease precipitation further.

How does aerosol affect precipitation?

The temperature-independent effect wasn’t present when geoengineering was done by ‘dimming the Sun’. The ‘G3S’ simulation (orange lines in the figure) does this, and it has a similar precipitation change to RCP4.5. So what causes the precipitation reduction when stratospheric aerosols are used? We calculated the effect of the aerosol on the energy budget of the troposphere (where the precipitation occurs). We separated this into two parts: the aerosol itself, and the stratospheric warming that occurs because of the effect of the aerosol on the stratosphere’s energy budget.

Black bars show the temperature-independent precipitation changes simulated by the models. Orange bars show our calculation of the effect of the stratospheric warming. Green bars show our calculation of the effect of the aerosol itself. Grey bars show our calculation of the total effect, which is very close to the actual simulated result.


We found the main effect was from the aerosol itself. The aerosol’s main effect is to reduce incoming solar radiation and cool the surface. But we showed it also interferes a little with the radiation escaping to space, and this alters the energy balance of the troposphere. The precipitation has to respond to these energy balance changes.

This effect is not huge. We had to use many model simulations of the 21st Century to detect it above the ‘noise’ of internal variability. In the real world we only have one ‘simulation’, so this implies the temperature-independent effect of stratospheric aerosol on precipitation would not be detectable in a real-world moderate geoengineering scenario. It also means climate model simulations that do not include the effects of the aerosol could still capture much of the effect of geoengineering on the global hydrological cycle.

This effect could be more important under certain circumstances. If geoengineering was more extreme, with more aerosol injected for longer, precipitation would decrease more. But, based on these results, the main effect of geoengineering on precipitation is that the temperature-dependent changes are minimised. This means the temperature-independent effect of increasing CO2 concentrations is unmasked, reducing precipitation.

Take a look at the paper for more details – it’s open access!

Ferraro, A. J., & Griffiths, H. G. (2016). Quantifying the temperature-independent effect of stratospheric aerosol geoengineering on global-mean precipitation in a multi-model ensemble. Environmental Research Letters, 11, 034012. doi:10.1088/1748-9326/11/3/034012.


On a personal note, this paper is significant because it is the culmination of the first research project I truly led.  Of course I managed my own research as a PhD student and post-doc, but my supervisors secured the funding. They also acted as collaborators. Here I came up with the idea, applied for funding, supervised Hannah (the excellent student who did much of the analysis) and wrote up the results. It’s a milestone on the way to becoming an independent scientific researcher. For this reason this work will always be special to me. Thanks also to Hannah for being such a good student!



Stratospheric aerosol geoengineering and the polar vortex

Geoengineering by reducing the amount of solar radiation the Earth absorbs has become a hot topic in the last few years. Of all the impacts geoengineering might have on our climate, why on earth should we care about what goes on in the stratosphere, 10 kilometres above our heads? It turns out what goes on up there has a substantial impact on what goes on down here.

This is the subject of the final paper (open access!) from my PhD work with Andrew Charlton-Perez and Ellie Highwood, at the University of Reading. In it we ask what effect stratospheric aerosol geoengineering might have on the stratosphere, and how those effects might be communicated to the troposphere below.

We used some idealised simulations with a climate model to investigate, placing a layer of aerosol in the model’s stratosphere. Since we don’t know exactly how geoengineering might turn out, we had to make some simplifying assumptions about the size of the aerosol particles and the shape of the aerosol cloud. Not all of these were realistic, so it’s important to think about how our results might be affected if these assumptions changed. That’s a rule that holds true for all science, of course.


Strength of the polar vortex as measured by winds at 60N, 10 hPa. Each grey line shows the wind speed over 1 year. The mean of the Control simulation is shown by the dashed black lines. The means from the other simulations are shown by solid black lines.

In our model simulations we compared three different potential deployments of geoengineering. One used sulphate aerosol, mimicking the effect of natural sulphate aerosols produced by volcanic eruptions. Another used titania (titanium dioxide) aerosol, which is much more reflective than sulphate and may do less damage to the ozone layer. Finally, we looked at the case where geoengineering was represented by simply dimming the Sun. In practice this could only be achieved using mirrors placed in space, but it has also been used as a representation of geoengineering with stratospheric aerosols.

We found that the aerosols intensified the stratospheric polar vortex by warming the tropical stratosphere. The polar vortex is linked to the midlatitude jet streams in the troposphere, which act as guides for weather systems. As the polar vortex gets stronger the jet streams tend to shift further poleward. This would obviously have an effect on the meteorology of a geoengineered world. The jet streams would still wobble and meander about all over the place, but on average they would be located closer to the poles, changing which regions experience the strongest storms and most rainfall.

The link between the stratospheric polar vortex and the jet streams is extremely well documented, and reproduced by models. There is, however, still quite a lot of debate over exactly how the two are linked, and the extent to which models get it right. For example, the polar vortex intensifies in response to volcanic eruptions, just as it does in simulations of geoengineering, but climate models don’t simulate the associated shifting of the jet streams very well.


Changes in probability density function of North Atlantic jet latitude in (a) December-January-February, (b) March-April-May, (c) June-July-August, and (d) September-October-November. Grey shading shows the interquartile range of the Control simulation with the median marked with a white bar.

That said, the shifting of the jet streams under stratospheric aerosol geoengineering should be fairly robust. Stratospheric aerosols are known to intensify the polar vortex. This is because they absorb thermal radiation in the tropics (where they get energy from the warm troposphere below) more than they do at the poles (where the underlying troposphere is colder). This temperature gradient sets up a pressure gradient, intensifying the westerly winds of the polar vortex.
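This is just thermal-wind balance, a textbook relation rather than anything specific to our study:

$$\frac{\partial u}{\partial z} \approx -\frac{g}{fT}\,\frac{\partial T}{\partial y}.$$

When aerosol heating makes the meridional temperature gradient $\partial T/\partial y$ more strongly negative (warm tropics, cold pole), the westerly wind $u$ must increase with height $z$, which is exactly an intensified vortex.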

The jet streams will shift in response to this, although exactly how, and how much, is open to question. Those are the more important questions to answer.

Unfortunately, our study can’t really help with that, for two main reasons.

The first is that we used a single climate model, which means we can’t generalise our results. In order to test the robustness of our results, we would need to look at a number of different models, with different representations of the dynamics of the atmosphere. We also didn’t delve deeply into the theory behind the linkage between the polar vortex and the jets. This is because the science of stratosphere-troposphere coupling is still rather mysterious, and attempting to come up with a theory explaining it is a huge task.

The second reason we can’t use our results to make predictions is that our representation of geoengineering wasn’t particularly realistic. We placed a huge amount of aerosol into the model. In our setup we could put in as much as we wanted because the aerosol particles don’t interact with the atmospheric circulation, or with each other. In model simulations where these interactions are allowed, large aerosol injections cause the aerosols to stick together, grow, and fall out of the stratosphere rather quickly. This means it might not even be possible to put such huge amounts of aerosol into the stratosphere.

Whether it is possible would depend on the degree to which the aerosols stick together. This process would occur differently for different aerosols. For example, sulphate aerosols are liquid and coagulate quite easily. Titania is a solid ‘dust’-type aerosol, which might be more resistant to this. More research is needed on this, though. As far as I am aware, no one has yet simulated how titania would actually behave in the stratosphere.

Another important caveat to our results is that our model didn’t include the effects of the aerosol on stratospheric ozone. As well as its important role in blocking UV radiation, ozone affects stratospheric temperatures. Other studies have shown stratospheric aerosol geoengineering would reduce ozone at higher latitudes, cooling the polar stratosphere. This effect would further enhance the intensification of the polar vortices.

So there are a number of reasons we should take care in interpreting our results. The central message, though, is that stratospheric aerosols influence the midlatitude jets, and they do this via polar vortex changes caused by absorption of radiation by the aerosol particles. If an aerosol that didn’t absorb as much was used these effects could be reduced. This is one of the reasons titania is being investigated as a geoengineering aerosol. Titania reflects more radiation than sulphate and absorbs less, meaning one could accomplish the same surface cooling with less aerosol, and have a smaller impact on the midlatitude jets. If we found an aerosol that didn’t absorb radiation at all (not really likely) we would essentially have a very similar case to our solar dimming simulation, which shows very minimal jet shifts.

Finally, it’s important to emphasise this is all hypothetical. I see research like this as part of an effort to understand what stratospheric aerosol geoengineering is. What are the potential risks as well as the potential benefits? This is the first step in understanding geoengineering as a policy option, but it is not the last. There are plenty of potential problems with geoengineering to do with issues of justice, conflict and ultimately, the human relationship with the natural world.



EUMETSAT Conference 2014: Final highlights


Headquarters of the WMO, which we visited during the conference for a discussion on the socioeconomic benefits of meteorological satellites.

The EUMETSAT Meteorological Satellites Conference 2014 featured a lot of new science. Two points in particular stood out to me: the assimilation of new products into numerical weather forecasting systems, and the use of satellite data to improve our conceptual understanding of weather systems.

Until this conference I was not aware how new it was to incorporate soil moisture into numerical weather forecasting systems. Such forecasting systems spend a good deal of resources on assimilating observational data to initialise the forecast. This is very important because, as pioneering work by Ed Lorenz showed back in the 1960s, tiny differences in the initial state of an atmospheric model (and of course the real atmosphere) can lead to huge differences in the resulting forecast, even for relatively short-range forecasts.
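Lorenz’s point is easy to demonstrate. Here’s a toy sketch using his famous 1963 system, nothing like an operational forecast model, just an illustration of sensitive dependence on initial conditions:

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz (1963) system.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])  # initial states differ by one part in 10^8

for _ in range(2500):               # integrate for ~25 model time units
    a, b = lorenz63_step(a), lorenz63_step(b)

print(np.abs(a - b))                # the two 'forecasts' have completely diverged
```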

Soil moisture is clearly a useful thing to know about in our forecasts. For weather forecasts it mainly plays a role in supplying water for weather systems. Wet surfaces supply water to the atmosphere, causing or intensifying rainfall.

A few years ago soil moisture satellite products were not considered mature enough to assimilate into weather forecast systems. This is partly because our measurements were quite uncertain (we couldn’t attach very accurate numbers to them), but also because our uncertainty was poorly characterised (we didn’t know how accurate our measurements were). In a sense, the latter is more important. Like much of science, the point is not always knowing things exactly, but accepting that it’s impossible to achieve perfect accuracy and to at least know exactly how certain we are about a measurement (Tamsin Edwards has a related blog post focusing on climate rather than weather).

After some experimental studies showed the potential for soil moisture data to improve weather forecasts, operational forecasting centres across the world began to adopt this extra data source – the ones I heard about at the conference were ECMWF and the UK Met Office, but there are probably others.

Now let’s move to something less mathematical, but equally important and exciting. On Thursday I listened to two excellent presentations on the Conceptual Models for the Southern Hemisphere (CM4SH) project. The rationale behind CM4SH is that the vast majority of weather forecasting ‘wisdom’ is derived from Northern Hemisphere perspectives, through an accident of history. But understanding the weather of the Southern Hemisphere isn’t as simple as flipping everything upside down. Although the physics of the weather is clearly the same, the actual meteorological situation in Southern Hemisphere countries is different. For example, South Africa lies in the midlatitude belt like Europe does, but it sits rather closer to the Equator, so the same weather system could have different effects. The configuration of Southern Hemisphere land masses is very different, and that leads to rather different weather behaviour.

CM4SH is a collaboration between the national meteorological services of South Africa, Argentina, Australia and Brazil. The work focused on building up a catalogue of typical meteorological situations in different regions of the Southern Hemisphere, analysing similarities and differences. The international CM4SH team used Google Drive to build a catalogue of these situations, their typical causes, behaviour and effects. Satellite imagery is obviously a major part of the catalogue, as it allows forecasters to track the flow of moisture, presence of clouds, direction and strength of winds. The resulting catalogue allows Southern Hemisphere forecasters to classify meteorological situations and quickly find out the typical effects of different systems. For example, if a forecaster sees a particular meteorological configuration, they can quickly check the catalogue for the effects of similar situations in the past, and see that they need to assess the risk of, say, flooding, in a vulnerable region.

I think projects like this reflect the power of the Internet to supercharge our science. Earlier this week I wrote about how the data from the new GPM mission were available and easily accessible within weeks. GPM is a huge international collaboration combining the resources of a whole constellation of satellites. CM4SH is a project which makes use of expertise from four national meteorological services to create an unprecedented collaborative resource for forecaster training and education, freely available. The CM4SH catalogue will grow over time and become more refined – the beauty of collaborative projects like this is that, as long as someone does a little pruning now and then, they can only ever become more useful.

EUMETSAT Post 1: Challenges and advances in satellite measurement

EUMETSAT Post 2: Socioeconomic benefits of meteorological satellites



EUMETSAT Conference 2014: Socioeconomic benefits of meteorological satellites

Globally, governments spend about $10 billion on meteorological satellites every year. That’s a lot of money. How do we know it’s worth it?

Last night the EUMETSAT conference branched off to the WMO for a side-event asking that very question. I was impressed by the rigour of their calculations, but also by the thoughtful responses to the question of how this information should – and should not – be used.

Alain Ratier, Director of EUMETSAT, presented the results of a comprehensive exercise aimed at calculating the benefit-cost ratio to the EU of polar-orbiting meteorological satellites. The cost of these things is relatively easy to estimate, but the benefits are a little more difficult. They approached the problem in two steps: first, what is the economic benefit derived from weather forecasts? Second, what impact do meteorological satellites have on weather forecast skill?

The resulting report contains some fascinating facts and figures. It has been estimated that as much as one third of the EU’s GDP is ‘weather-sensitive’. Of course, this isn’t the same as ‘weather forecast sensitive’, but it at least gives a sense of potential vulnerability. The report concluded that the total benefit of weather forecasts to the EU was just over €60 billion per year. Most of that comes in the form of ‘added value to the European economy’ (broadly, use of weather information to help manage transport networks, electricity generation, agricultural activities, and so on), but there are also contributions from protection of property and the value of private use by citizens.

Compared to the calculation of the economic benefits of weather forecasts, the calculation of the effects of satellite data on those forecasts is quite straightforward. One can assess this by ‘suppressing’ a source of data in our weather forecasts. Forecasts proceed by using a numerical model of atmospheric physics to predict the future atmospheric state. Since weather prediction is a chaotic problem, it’s important we start the forecast from as close as possible a representation of the real atmospheric state. This is called initialisation and it’s absolutely crucial to weather forecasting. In order to calculate the effects of satellite information, we can simply exclude satellite data from the initialisation phase of the weather prediction.
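As a toy illustration of the principle (a deliberately crude sketch; real assimilation systems weight thousands of observation types by their error characteristics, they don’t simply average them):

```python
import numpy as np

rng = np.random.default_rng(42)
truth = 15.0  # the 'true' state, reduced to a single number for this toy

def analysis_error(n_obs, obs_sigma=1.0, n_trials=10_000):
    # Initialise from the average of n_obs noisy observations and measure
    # the typical (root-mean-square) error of that initial state.
    obs = truth + obs_sigma * rng.standard_normal((n_trials, n_obs))
    return np.sqrt(np.mean((obs.mean(axis=1) - truth) ** 2))

print(analysis_error(n_obs=4))   # 'in-situ network only': RMS error ~0.50
print(analysis_error(n_obs=10))  # 'plus satellite data': RMS error ~0.32
```

Denying the extra observations degrades the initial state, and in a chaotic system a worse initial state means a much worse forecast.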

(left) 5-day forecast for Superstorm Sandy, (middle) the forecast without polar-orbiting satellite data and (right) the actual conditions that occurred. Credit: ECMWF.

The results are quite astounding. Satellite data contributes 64% of the effect of initialisation in improving 24-hour forecasts (the other 36% comes from in-situ observations). This approach reveals that measurements from a single satellite, EUMETSAT’s MetOp-A, account for nearly 25% of all the improvement in 24-hour forecast accuracy derived from observations. MetOp-A is a relatively new platform, indicating that recent advances are providing huge benefits to weather forecasts.

The impact of satellite observations is vividly illustrated by considering 5-day forecasts of the track of Superstorm Sandy made with and without satellite initialisation. Without the use of polar-orbiting satellites, forecasters would not have predicted that the storm would make landfall on the US East Coast. As it was, the 5-day forecast of the storm track was remarkably close to reality, allowing forecasters to issue warnings of imminent risk of high winds and flooding.

The conclusion is that meteorological satellites provide benefits that outweigh their costs by a factor of 20. This is a conservative estimate in which high-end cost estimates have been compared with low-end benefit estimates. One reason we might expect benefit estimates to be low is that private companies are often reluctant to reveal how they use weather forecasts, either because this information is commercially sensitive or because they risk being charged more for the forecast data they receive!

It’s important to consider the limits of this approach. The obvious one is that cost-benefit estimates do not include the number of human lives that have been saved by weather forecasts. Not only is this difficult to calculate, it’s also impossible to put an economic value on. It would be very interesting to see if the toolbox of social science research has some methods to assess the ‘social’ part of the ‘socioeconomic’ benefits, moving away from attaching monetary value to things and considering those benefits which aren’t as easy to quantify. This doesn’t have to mean human life; any non-monetary social benefit of weather forecasting could be considered.

I think this is especially valuable because it’s questionable whether the cost-benefit approach is truly appropriate. Cost-benefit analyses frame things in a certain way; the WMO and EUMETSAT representatives at the meeting were well aware of this. They may imply greater certainty than is appropriate, and they may encourage a naively quantitative approach to what is fundamentally a qualitative problem: is it for better or for worse that we have meteorological satellites? Answering such a question involves value judgements that a simple quantitative approach can gloss over. As LP Riishøjgaard pointed out, although we can make this kind of cost-benefit estimate ‘frighteningly’ easily, it’s not obvious that we should.

EUMETSAT Post 1: Challenges and advances in satellite measurement.

EUMETSAT Post 3: Final highlights.



Transformational Climate Science – the future of climate research

On 15-16 May a diverse group of climate researchers gathered at the University of Exeter to discuss the state of climate change following the publication of the IPCC Fifth Assessment Report and the future of the field. In a previous post I discussed some of the key themes. Here I’m going to summarise some of what went on at the conference in terms of how we should proceed with climate research in the future. It will be biased towards physical science, since that’s my personal area of interest.

What are the outstanding challenges in climate research? What are the areas that need further investigation? Should the IPCC process function as a driver for new research efforts?

Science & policy panel (left to right): Thomas Stocker, Saffron O’Neill, Georgina Mace, Andrea Tilche, Asuncion St Clair, Chris Field. Credit: University of Exeter via Flickr.

I think the final question there is an especially interesting one. The role of the IPCC is to bring together diverse research findings and assess our state of knowledge. And yet, sometimes it is seen as an end in itself. One of the speakers at the conference noted he sometimes sees research justified as ‘important for the IPCC assessment’, and that this is a big turn-off. If that’s the best thing the researcher can say about their work it’s probably not going to be that interesting. Of course, it might be that the research is fascinating and yields new insight into some of the big challenges of contemporary climate science. In that case the authors should say so. The challenges of contemporary climate science are not challenges because the IPCC says so; they are challenges because there are scientific and policy questions that need answering. Thomas Stocker, in his remarks, noted that one of the most important things to do in future climate research is to continue with ‘curiosity-driven research’. There are many examples of pure research that did not have any obvious application spawning major advances, often with great commercial success.

I’m no science policy scholar, so I won’t discuss where the balance should lie between ‘pure’ and ‘applied’ research, but this conference provided some food for thought. Some speakers emphasised both equally, generating a tension which isn’t easily resolved. Indeed, the majority of the ‘challenges’ identified at the meeting fell on the ‘applied’ side in the sense that they were suggestions to make climate research more policy-relevant. Perhaps that is unsurprising at a meeting structured around the IPCC, with its strong emphasis on policy-relevance.

One of the main challenges identified during the meeting was moving from the robust aspects of climate theory to those phenomena which actually matter to people on the ground. Robust aspects of climate theory are largely thermodynamically driven, argued Stephen Belcher. We understand that the accumulating energy balance of the Earth will lead to warming, and that the land will warm faster than the ocean. We understand that surface warming leads to greater evaporation and consequently, on average, greater precipitation. But the things we really care about are rather smaller in scale. We experience climate through weather events, and these are influenced as much by dynamic as thermodynamic factors. Unfortunately, we have much less confidence in our understanding of these dynamical processes. They have smaller spatial scales and shorter temporal scales, and so they are much more computationally demanding to model. They involve processes which are not well understood. Ted Shepherd has spoken similarly about the need to focus on the climate dynamics of global warming. It certainly seems like a fertile area for future research, though also a very challenging one.

On the subject of things that people actually care about, Mat Collins and David Stephenson both discussed moving from simplistic averages to the broader statistics of climate. We experience climate through weather, and we care about it most of all when it’s extreme. It’s the ‘tails’ of the probability distribution of weather events that we care about. Unfortunately, said Mat Collins, we don’t really have a good idea of how to assess this. Our current batch of climate model simulations is a statistically questionable sample – the models have known deficiencies, biases and interdependencies. We need to address this or develop techniques to deal with it.

On the theme of translating our physical understanding into more relevant information, there was also some discussion of modelling politico-economic systems. Integrated Assessment Models attempt to do this, but there is no coordinated intercomparison of these models like there is for climate models. Some at the meeting objected, saying we don’t have good enough theory to be able to credibly model economics. Perhaps that’s true, but just because something is complicated and uncertain doesn’t mean we shouldn’t try to model it; in fact, perhaps it means we should! An intercomparison would at least help us know where we stand.

A final note: this continued emphasis on relevance seems to me to require a greater role for values in presenting stories about what humans care about. Simon Caney spoke about the major breakthrough of including ethicists and philosophers in WG3. More broadly, I think a move to greater policy-relevance would need everyone involved to be crystal clear about what is factual and what is normative (value-based). People were mostly good at that in this meeting. A productive discussion on climate change needs a good-quality factual basis and a wide range of normative viewpoints. There was even some discussion about how it might require new forms of collaborative decision-making.

Regardless, the very necessary shift towards policy relevance will mean the potential for even greater controversies. Sam Fankhauser spoke about the need to develop very clear channels for communication to help get around this: ‘whatever we say will be used in that very emotional debate’. It’s difficult and sometimes downright unpleasant, but I think ultimately we have to embrace that.




Paper review: ‘Tuning the climate of a global model’

Citation: Mauritsen, T., et al. (2012), Tuning the climate of a global model, J. Adv. Model. Earth Syst., 4, M00A01, doi:10.1029/2012MS000154.

The other day I read a really excellent paper in the open-access model development journal JAMES (Journal of Advances in Modeling Earth Systems).

The unique aspect of this paper is its frankness. In it the authors speak in a clear and honest way about how they tuned the new model from the Max Planck Institute, MPI-ESM. They discuss the goals of tuning and the methods used to accomplish them. Previously, model development, seen as an unglamorous subject, was not deemed worthy of publication. Although they write from the point of view of MPI-ESM, the concepts are relevant to all models.

Evaluating models based on their ability to represent the TOA [top of the atmosphere] radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models’ intrinsic qualities.

…[W]e both document and reflect on the model tuning that accompanied the preparation of a new version of our model system…Through the course of preparation we took note of the decision-making process applied in selecting and adjusting parameters, and these notes are elaborated upon…

The language here is remarkably self-aware. Previous generations of climate models were relatively poorly documented and contained plenty of mysteries, even to those who developed them. It is unlikely comprehensive notes on the decision-making process (not just the outcome) were recorded in the development of the CMIP3 models. Developers of the CMIP5 models are required to publish more details of their model formulation. This is a good thing, but this paper goes beyond that. It gives us an insight into the actual process by which the model was developed, not just the end result.

In this paper the authors go on to alter parameters to produce three alternative ‘worlds’. Whereas a perturbed-physics ensemble systematically varies all parameters within preset bounds and runs with a huge number of combinations, here the focus is on finding equally-valid tuned sets of parameters. The model developers recognise that some choices in the tuning process are somewhat subjective and that other equally-defensible choices could be made. They then look at the differences between the ‘official’ MPI-ESM and the three alternative ‘worlds’. They find some intriguing differences in the way the models represent smaller-scale features like the land/ocean contrast in tropical rainfall; improving such features usually weakens other aspects of the model’s climate, introducing an interesting trade-off.

Another interesting point is the extent to which models’ ability to reproduce the 20th Century temperature record is the result of tuning. The authors squash this idea:

The MPI-ESM was not tuned to better fit the 20th Century. In fact, we only had the capability to run the full 20th Century simulation…after…the model was frozen.

Thus the tuning was based more on physical metrics like the radiation balance at the top of the atmosphere, cloud and water vapour amounts. The emphasis was to produce a model which fits with our physical understanding rather than simply producing a simulator which reproduced observed temperatures.

For me, one of the implications of this paper was the importance of maintaining numerous independent models. As the authors here so candidly explain, model tuning is a subjective process. Choices are made to improve the representation of some aspects of the climate system which may degrade performance in other areas. Which areas are considered more important depends on the opinions of the modelling group and their research focus. By having numerous models with different strengths and weaknesses (partially a result of the choices made during the tuning process) and considering results from the models together (this is the goal of the CMIPs), we can hope to remove any bias introduced by these subjective choices.