Angus Ferraro

A tiny soapbox for a climate researcher.



RMetS meeting report: Quantifying Uncertainties in Climate Science

A man with a watch knows what time it is. A man with two watches is never sure.

Segal’s law

Scientific results are uncertain. That is, we can’t have complete confidence in them. Science requires that we have very high confidence in our results, but it is difficult to demonstrate scientifically that something is 100% true. This applies to all science, and climate science is no different. ‘Quantifying Uncertainties in Climate Science’ was the topic of last Wednesday’s meeting of the Royal Meteorological Society.

What is uncertainty?

Uncertainty tells us how sure we are that a result lies in a particular range. Uncertainty doesn’t imply that we don’t know anything; it implies that we know something with a certain amount of confidence. For example, if I were to ask you what time it is now without letting you look at the clock, you are unlikely to estimate that it is 1:57 pm (which in this example is the true time). Without a clock to look at, you must estimate based on what the time was when you last looked and your perception of how much time has passed since then. Since you are unlikely to be able to judge the passage of time that accurately, you might estimate somewhere between 1:45 and 2 pm. You can be uncertain about what time it is and yet know that it’s certainly not 9 am. Uncertainty tells us how confident we are and helps us understand exactly what we know and what we don’t (e.g. we know it’s the afternoon, but we don’t know whether it’s before or after 2 pm).

We can think about uncertainty in terms of probability. For example, we can say there is a probability of 0.95 that it is between 1:45 and 2 pm. If we were certain that the true time was between 1:45 and 2 pm the probability would be 1.00.
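The watch example can be written as a small calculation. As a sketch (the distribution and all the numbers here are invented for illustration, not from the meeting), suppose your internal estimate of the time behaves like a normal distribution centred between 1:45 and 2 pm:

```python
from statistics import NormalDist

# Illustrative assumption: your guess of the current time, in minutes
# after 1 pm, is normally distributed around 1:52:30 pm with a spread
# of about 3.8 minutes. These numbers are made up for the example.
guess = NormalDist(mu=52.5, sigma=3.8)

# Probability that the true time lies between 1:45 and 2:00 pm:
p = guess.cdf(60) - guess.cdf(45)
print(round(p, 2))  # → 0.95
```

The probability of 0.95 comes entirely from the assumed spread: shrink `sigma` (a better internal clock) and the same interval becomes more certain.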

Why should we think about it?

Jonty Rougier of the University of Bristol gave a concise and eloquent explanation of the power of thinking in terms of uncertainty. It is extremely hard to make good policy decisions without it. He used the example of sea-level rise. If sea level rises by a small amount it would be cheaper to do nothing than to take action by building flood defences. If it rises a lot it would be cheaper to act than to face the costs of the sea-level rise. To decide which option is best we must estimate how much sea level is going to rise.
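The structure of that decision can be made concrete with a toy expected-cost calculation. Everything below — the two scenarios, the probabilities, the cost figures — is invented for illustration; the point is only that once you attach probabilities to the outcomes, the comparison becomes arithmetic:

```python
# Toy decision problem (all numbers invented for illustration).
# Two possible sea-level futures, with assumed probabilities:
scenarios = {
    "small rise": {"prob": 0.6, "damage_if_no_defences": 10},   # £ million
    "large rise": {"prob": 0.4, "damage_if_no_defences": 200},  # £ million
}
defence_cost = 50  # £ million, paid whatever happens

# Expected cost of doing nothing: probability-weighted sum of damages.
do_nothing = sum(s["prob"] * s["damage_if_no_defences"]
                 for s in scenarios.values())

# Simplifying assumption: defences prevent the damage entirely.
build = defence_cost

best = "build defences" if build < do_nothing else "do nothing"
print(do_nothing, build, best)  # → 86.0 50 build defences
```

Here doing nothing costs 86 in expectation against 50 for building, so acting wins — but change the probabilities and the answer flips, which is exactly why quantifying the uncertainty matters.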

Projections of climate change are always uncertain. There are three main sources:

  • Model uncertainty. Since we are going into the future we must use models. Models are not perfect, so they introduce uncertainty.
  • Scenario uncertainty. This is the future. We don’t know what’s going to happen. We don’t know how future population and energy systems will change, for example.
  • Natural variability. This is uncertainty that comes from ‘random’ variation in weather conditions. Even if the globe warms there will be periods of cold weather. This means that, although the global temperature is expected to be warmer than today in a decade or two, any single year might be colder. If you count up the warmer years and the colder years, there will be more warmer ones. Natural variability means the precise temperature of a given year is quite uncertain.
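Natural variability is easy to illustrate with a quick simulation (the trend and noise sizes below are made-up numbers, not real climate statistics): impose a steady warming trend, add random year-to-year noise, and count how many future years still come out colder than today.

```python
import random

random.seed(0)  # reproducible illustration

trend_per_year = 0.03  # assumed warming, deg C per year (illustrative)
noise_sigma = 0.15     # assumed year-to-year variability, deg C (illustrative)

today = 0.0  # temperature anomaly now
# Anomalies for the next two decades: trend plus random noise.
temps = [today + trend_per_year * year + random.gauss(0, noise_sigma)
         for year in range(1, 21)]

warmer = sum(t > today for t in temps)
colder = len(temps) - warmer
print(warmer, colder)
```

Running this shows the pattern described above: the warmer years outnumber the colder ones, but individual cold years still appear, so the temperature of any single year remains uncertain even when the trend is not.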

Sources of uncertainty in future global temperatures, from Ed Hawkins (University of Reading).

So we don’t know exactly how much sea level will rise. It depends on a wide range of things. But if we work out how uncertain we are we can say how probable it is that a certain amount of sea-level rise actually happens. Then we can work out, based on the range of possible sea-level rises, which is the most cost-effective course of action.

How to calculate it and how to reduce it

David Sexton (Met Office) spoke about the UK’s climate prediction programme, UKCP09 (lots of information is available on their website), which goes to a great deal of trouble to come up with useful uncertainty estimates. Lindsay Lee (University of Leeds) spoke about sources of uncertainty in the effect of aerosols (tiny particles of dust, soot, sea salt, sulphuric acid and other substances) on the climate system. Aerosols generally cool the climate, but specific types can in fact produce warming. The processes that affect aerosols are very difficult to model because they happen on such tiny scales. Imagine trying to track millions of tiny particles, each a small fraction of a millimetre across – impossible! Our models can only approximate their effects, and that introduces uncertainty.

Tamsin Edwards (University of Bristol) spoke about using information from the distant past (thousands of years ago) to work out how much the climate changes for a given change in carbon dioxide. Her approach uses both observations and models. Here’s an interesting point: observations are also uncertain! Observations of the distant past must be inferred through ‘proxies’ – for example, by looking at the types of shells from ocean-dwelling creatures found in sediments on the sea bed. Even observations of the present day are uncertain (although much less so). This is annoying, because it means there is no single agreed ‘true’ state of the climate system, even today! One of the great skills of a good scientist is deriving useful information from a fusion of a variety of uncertain sources of data.

Finally, Paul Williams (University of Reading) showed us the power of random noise. Technically, random processes are called ‘stochastic’ processes. This basically means adding some random variation into models. Think back to the aerosol example. Aerosols are tiny. Imagine trying to calculate how many aerosol particles fall out of the atmosphere per second. That’s an impossibly complicated calculation. Climate models calculate things on scales of hundreds of kilometres, so they can’t tell us much about what happens to particles thousands of times smaller unless we make some assumptions about how they behave. The atmosphere is very turbulent, so upward and downward motions can happen on small scales. For example, at one point a building might be forcing upward motion, but move 10 metres away and the motion might be downward. Climate models can’t capture this. But what we can do is try to represent it. We can think: ‘Turbulence looks pretty random. What if we just take the vertical velocity calculated by our model and add a number to it – a random number?’. It turns out that helps our models look like the real world.

Of course, we have to tell the model what range to pick its random numbers from, otherwise it could change things too much. But we could let it alter the vertical motion by, say, a few millimetres per second. Paul’s work shows this kind of approach very much helps. Even when processes in the atmosphere aren’t actually random, they behave as if they are, so we can pretend they are for the purposes of our prediction.
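A minimal sketch of the idea (the velocities, grid and noise amplitude are invented for illustration — real stochastic schemes in weather and climate models are far more sophisticated than this): take the vertical velocities a model has computed and nudge each one by a small bounded random number.

```python
import random

random.seed(42)  # reproducible illustration

# Vertical velocities (m/s) a model might compute at a few grid points.
# These values are invented for the example.
w = [0.010, -0.004, 0.002, 0.007, -0.001]

# Stochastic perturbation: add random noise of at most a few mm/s.
# The amplitude must stay small, or the noise overwhelms the physics.
amplitude = 0.003  # 3 mm/s
w_perturbed = [v + random.uniform(-amplitude, amplitude) for v in w]

# Each point is nudged, but never by more than the chosen amplitude.
for before, after in zip(w, w_perturbed):
    assert abs(after - before) <= amplitude
```

The only design decision here is the one the paragraph above describes: the range the random numbers are drawn from, fixed by `amplitude`.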

Paul’s work shows how counter-intuitive some climate science is. Who would have thought making models a bit more random would help them become more realistic? It’s some out-of-the-box thinking which might help us understand and use uncertainty better in the future.
