Investigating rainfall models: Why a comprehensive and systematic approach is essential

In this article we take a look at why we need comprehensive and systematic evaluation of rainfall models. We also examine a new model evaluation framework, with examples of the framework in action.


Imagine a typical scene from a detective novel:

Sirens scream past – like every Tuesday in this forsaken town. I was about to close up shop for the night when a worried young man stepped sheepishly into my office. I couldn’t understand him at first. But through his mumblings it became clear that something was wrong. Something was wrong with … The Rain.

In fact, the work of a hydrologist is much like that of a detective.

Rainfall models and the character of rain

We rely on models of ‘fake’ rain. These rainfall models are used to assess the hydrological impacts of droughts, floods, land-use change and climate change. For example, to evaluate flood risk you can select a spatial rainfall model capable of generating long sequences of rainfall for the catchment.


Daily spatial rainfall field simulation over the Onkaparinga catchment

But to provide robust assessments, the simulated rainfall must reproduce observed rainfall characteristics in space and time. And across a wide range of scales.

This is not a simple task. Many potential issues can arise – insufficient data, an overly simple model, and so on.

When people think of ‘rain’ they think of one character, when actually there is a whole family. Some of the main characters include:

  • Daily rainfall amounts
  • Total annual rainfall
  • Inter-annual variability (i.e. year-to-year variability in the rainfall)
  • Wet/dry spell distributions
  • Seasonality
  • Spatial variability
  • Extremes

This Family of Rain Characters is complex. Each has its own personality. The problem is that when there is trouble reproducing an ‘observed rainfall’ characteristic, any one of these characters (or perhaps all of them) could be the culprit.

It is challenging because they are all interlinked. When we try to isolate ‘who’ caused what effect, they can provide alibis for each other! For example, an issue with low variability between years could actually be an issue with seasonality instead.

Imagine how our detective would tackle this challenge:

It smelt fishy to me. When dealing with a bunch of low lives like the Rainfall family you need to be thorough. It may be tempting to only interrogate a few key players and repeat offenders (inter-annual variability and wet-dry pattern come to mind). But going with your gut won’t cut it in this case. It occurred to me that any analysis of these slippery characters needs to be comprehensive and systematic. They need to be lined up side-by-side and interrogated to figure out who is pulling the strings and who is in cahoots.

In reality, past evaluations of rainfall models have presented performance in descriptive terms (e.g. words like ‘satisfactory’ or ‘well’), often using only a selected set of statistics, sites or time periods. This is neither comprehensive nor systematic.

A new model evaluation framework

To address these issues, members of this research group have developed a new framework for evaluating rainfall model performance. The framework uses quantitative criteria to assess model performance across a comprehensive range of observed statistics of interest.

The framework is comprehensive. It plainly summarises performance across a range of time scales (years/months/days), and spatial scales (sites/fields). By using quantitative criteria (defined a priori) the evaluation is made transparent and avoids the need to frame performance results in purely descriptive terms.

These features of the framework help to identify model strengths and weaknesses, and to untangle the origin of deficiencies.

The framework in action

Let’s look at applying the framework to evaluate the performance of a rainfall model in simulating 100 realisations of daily rainfall for 73 years. We’ll look at this rainfall across 19 sites for a range of statistics, scales and seasons. The problem has many dimensions and needs to be tackled in a comprehensive and systematic fashion.

The performance criteria of the framework are used first, to assess each individual statistic of interest for each site and scale. The individual analyses can then be summarised to provide an overview of model performance across a range of model properties.

A short summary table is presented below to illustrate this concept. In the table, ‘Good’ performance is displayed in green, ‘Fair’ in yellow, and ‘Poor’ in red, according to the applied quantitative performance criteria. Figure 1 below illustrates that the majority of sites and months are categorised as ‘Good’ in simulating:

  • mean wet day amounts
  • standard deviation of wet day amounts
  • the mean number of wet days
  • the mean total monthly rainfall
  • the standard deviation of monthly total rainfall.

Table 1 – Comparison of performance (adapted from Bennett et al. 2016).

Figure 1 – Comparison of performance (adapted from Bennett et al. 2016). The quantitative performance criteria for each individual statistic are: • Good – less than 10% of observations fall outside the simulation’s 90% probability limits (indicated using green) • Fair – the observed statistic lies within the 99.7% limits, or the absolute relative difference between the simulated and observed means is 5% or less (indicated using yellow) • Poor – otherwise (indicated using red)
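To make the criteria concrete, here is a minimal sketch of how a single statistic could be classified against an ensemble of simulated realisations. The function and variable names are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def classify_performance(obs, sims):
    """Classify one statistic as 'Good', 'Fair' or 'Poor'.

    obs  : 1-D array of observed values of the statistic (e.g. one per month)
    sims : 2-D array of simulated values, shape (n_realisations, len(obs))
    """
    lo90, hi90 = np.percentile(sims, [5, 95], axis=0)          # 90% probability limits
    lo997, hi997 = np.percentile(sims, [0.15, 99.85], axis=0)  # 99.7% limits
    frac_outside = np.mean((obs < lo90) | (obs > hi90))
    if frac_outside < 0.10:                                    # 'Good'
        return "Good"
    within_997 = np.all((obs >= lo997) & (obs <= hi997))
    rel_diff = abs(sims.mean() - obs.mean()) / abs(obs.mean())
    if within_997 or rel_diff <= 0.05:                         # 'Fair'
        return "Fair"
    return "Poor"
```

One such call per statistic, site and month, summarised as colours, yields a table like Figure 1.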


However, looking at the annual scale, the majority of sites are categorised as ‘Fair’ or ‘Poor’ in simulating the lower tail of the total annual rainfall distribution and variability in annual totals.

The ‘Poor’ performance is due to an over-estimation of the annual total rainfall in the lower tail, by 15% on average (see Figure 2).

This under-prediction of variability in aggregate totals is a known issue for many rainfall simulators [2], [3]. It is often attributed to a lack of model persistence between months or years. However, in this case, the comprehensive evaluation framework demonstrated that model performance for year-to-year and month-to-month persistence was categorised as ‘Good’. Instead, the lack of variability in the number of wet days simulated annually (see the last row of Figure 1) was identified as the likely cause of the ‘Poor’ performance in simulating variability in total annual rainfall.


Figure 2 – At-site annual totals for all sites: (left) standard deviations and (right) lower tail (5th percentile), with 90% probability limits shown. Bar charts indicate performance as a percentage of sites. Adapted from Bennett et al. 2016.

This ability to identify model strengths and weakness via systematic and comprehensive evaluation is the key advantage of the framework.

For more on applying the full framework, read the full journal article [1].


[1] Bennett, B., Thyer, M., Leonard, M., Lambert, M. & Bates, B. (2016). A comprehensive and systematic evaluation framework for a parsimonious daily rainfall field model. Journal of Hydrology. Available online 27 December 2016.

[2] Mehrotra, R. & Sharma, A. (2007). A semi-parametric model for stochastic generation of multi-site daily rainfall exhibiting low-frequency variability. Journal of Hydrology, 335(1), 180-193.

[3] Wilks, D. S. (1999). Interannual variability and extreme-value characteristics of several stochastic daily precipitation models. Agricultural and Forest Meteorology, 93(3), 153-169. DOI: 10.1016/S0168-1923(98)00125-7.

Sizing up extreme storms in a future climate

Storms are becoming more intense as the climate changes. This is just one reason why the signing of the Paris Agreement by 170 nations in New York last week is great news. Finally, there is a sense of momentum; a shared purpose worldwide that the rate of climate change urgently needs to be slowed down. The next—and most important—step is for countries to ratify the treaty and enshrine the agreement in national legislation. Let’s all hope this progresses smoothly and quickly.


This collective enthusiasm to reduce humanity’s greenhouse gas emissions cannot come fast enough. The first three months of this year have been record-breakers in terms of global temperature, causing heatwaves, extensive coral bleaching, and continued declines in the Arctic sea ice.

Implications of climate change on rainfall intensity

Potentially equally important to these direct effects of increasing temperature are the indirect effects of greenhouse gas emissions on the world’s weather patterns. And of these, changes to extreme storms are particularly important. Alarmingly, the evidence suggests that storms have already started becoming much more intense.

The basic logic is that a warmer atmosphere can hold more water. If this seems unexpected, then consider why we use warm air for hand-driers, or why on warm humid days you can get water droplets forming on the outside of cold water bottles. Or why storms in the tropics are often much more intense than those in the higher latitudes.


Warm air can hold more moisture. This helps explain processes such as evaporation (which occurs more quickly when the air is hot) and condensation (which occurs when warm air is cooled down, forcing the air to release some of its moisture). Image source: Wikipedia.

Because of this, we have good reason to expect that storms should become more intense as the climate warms. In fact, scientists who have looked into this expect the rainfall intensity from storms to increase by between 7% and 14% for every degree of global temperature increase.
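The roughly 7% per degree figure follows from the Clausius–Clapeyron relation. As a rough check, the Magnus formula (a common approximation of saturation vapour pressure; the constants below are the widely used ones, quoted from standard meteorological references) can estimate how much more water air can hold per degree of warming:

```python
import math

def saturation_vapour_pressure(temp_c):
    """Magnus approximation of saturation vapour pressure (hPa)
    for air temperature in degrees Celsius."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

# Fractional increase in moisture-holding capacity for one extra degree
increase = saturation_vapour_pressure(21.0) / saturation_vapour_pressure(20.0) - 1
print(f"{increase:.1%}")  # around 6-7% per degree near 20 degrees Celsius
```

The observed 7–14% increases in storm rainfall intensity are at or above this thermodynamic baseline, partly because storm dynamics can amplify the effect.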

This is not just some theoretical concept about what scientists expect in the future; it’s something we’ve already started to observe.

The changing size of storms

But is a change in the intensity of storms the only thing we can expect? This was the question we asked when I began collaborating with PhD student Conrad Wasko and his supervisor Professor Ashish Sharma at the University of New South Wales. Here, we did a comprehensive analysis of data from 1300 rain gauges and 1700 temperature stations across Australia to see how air temperature affects the spatial organisation of storms.

What we found was surprising: not only do storms intensify with temperature, but they also become more concentrated over a smaller area. This is because as the storm cells intensify, they also become more effective at drawing in moisture toward the storm centre.

This is illustrated in the image below, where we looked at the 1000 most intense storms that occurred when the atmosphere was relatively cool (about 18 degrees Celsius or below, shown as the blue curves), and compared them to the 1000 most intense storms that occurred when the atmosphere was relatively warm (above 25 degrees Celsius).

Warmer temperatures can mean more intense rainfall at the storm core, and less rainfall further away from the storm centre.

But what was really surprising was how uniform the changes were around Australia. In fact, regardless of whether we were looking at storms in temperate Tasmania or the tropical north of Australia, the humid east coast or the arid interior, the results were pretty much the same. A warmer atmosphere is associated with more intense rainfall, occurring over a smaller geographic area.

If we combine this with the large number of other studies that have recently been published on changes to extreme rainfall under climate change, it is clear that we are beginning to have a much better understanding of these changes. But there is still much to learn about what climate change will mean for changes in storms, and how this is related to possible changes in flooding, hail damage and other meteorological hazards.

Nevertheless, the sorts of changes our team and others around the world have been documenting are cause for concern. So let’s hope that Australia joins the chorus of international support for stronger action on climate change and takes the steps needed to ratify the Paris Agreement as soon as possible.

Further reading

There are literally hundreds of scientific papers that have been published describing historical and expected future changes in extreme rainfall. I’ve provided a selection of papers from our own group on the topic below. The first paper provides a review of nearly 250 scientific papers on the topic, and can be used as an entry point to the broader literature.


The most important questions to ask about climate change

In this article we look at the most important questions to ask about climate change, so we can make better decisions. Those questions are: Under what circumstances will a system fail? And can we extend a system’s breaking point?


Thinking forward prompts the questions of when things will change, and by how much. Answering these questions often involves the use of climate projections, which frequently disagree and don’t provide much certainty on timing. This can be an issue when making decisions for systems that are vulnerable to climate change – like open water storages.

Climate change question 1: Under what circumstances will a system fail?

Climate change is gradually impacting our water resource systems. It is changing both demand and supply. Our systems are robust enough to resist some changes, including those seen in the last few decades. But while the effects of climate change will continue to be gradual, uninterrupted change will eventually cause our systems to fail, like the proverbial boiled frog.

This is why we need ‘tipping points’.

A ‘tipping point’ is the last point at which you would still consider performance satisfactory. This point has a corresponding climate described by measurements of variables like temperature, precipitation and evaporation. And, importantly, you can identify this climate without the use of climate change projections.

The previous blog post by Danlu Guo is a great example of how to do just that. Taking historical climate conditions – under which a system is performing well – and changing them can allow you to see the point at which the system breaks. Changes might include higher temperatures or different rainfall combinations.

To demonstrate this idea, we considered a simplified model of Lake Como, Italy. The lake regulator has to protect the city of Como from flooding, and ensure there is enough water for irrigating one of the largest agricultural regions in Europe. This creates challenging decisions, because the two are competing interests.

After defining some ‘tipping points’ (for both the allowable flooded area and irrigation deficit) we varied annual temperatures and precipitation to see what would cause the system to fail.

Figure 1. Tipping points for Lake Como.

This figure shows changes in rainfall (per cent) combined with changes in temperature (degrees Celsius). The system’s success region trends upward across the chart: as temperature rises, the system only remains successful in climate change scenarios where precipitation also increases, and then only within a small band of variation.


The figure shows that when the amount of rain increases the system floods, and when temperature increases, the system doesn’t have enough water for irrigation. But we also found some robustness when temperature and precipitation increase together. The system remains in a healthy balance as more rain occurs with more evaporation. Importantly, we can quantify the changes in climate that result in failure.
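The scan behind Figure 1 can be sketched with a toy annual water balance. All numbers below are purely illustrative stand-ins; the real Lake Como model is far more detailed:

```python
def system_ok(temp_change, precip_change,
              base_inflow=100.0, demand=95.0, flood_cap=115.0):
    """Toy test of whether a reservoir both meets irrigation demand and
    avoids flooding, given a temperature change (deg C) and a precipitation
    change (%). All parameters are illustrative, not calibrated."""
    inflow = base_inflow * (1 + precip_change / 100.0)  # more rain -> more inflow
    evap_loss = 2.0 * temp_change                       # warmer -> more evaporation
    supply = inflow - evap_loss
    return demand <= supply <= flood_cap                # fail if too dry or too full

# Scan a grid of climate changes; '.' marks success, 'x' marks failure
for dt in range(0, 5):
    row = "".join("." if system_ok(dt, dp) else "x"
                  for dp in range(-20, 25, 5))
    print(f"dT={dt}: {row}")
```

The band of ‘.’ cells shifts toward wetter climates as temperature rises, mirroring the upward-trending success region in Figure 1.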

The next question to consider is: How far can this breaking point extend?

Climate change question 2: Can we extend the breaking point of the system?

The answer to this question depends on what a decision maker controls. Each of the climate scenarios in the above figure are fixed. The impact is a result of the physical system and its management.

For example, raising the reservoir walls would decrease the danger of flooding, and decrease the number of failure scenarios in Figure 1. These large infrastructure actions are quite costly, and are not easily reversible. This could lead to regret if the climate were to change in a way that instead threatened irrigation.

Most water storage systems require a daily release decision. Changes to this operation are low cost, fast to implement and are quite flexible. The flexibility allows decision makers to tailor a response to a particular climate. This can be beneficial as the climate changes in unpredictable ways.

To illustrate, we took the same climate scenarios for Lake Como as above, and instead of using the current operation model, designed a new operation as a response to each individual climate. Figure 2 shows this. The green scenarios represent how far the failure boundary extended.

This ‘adaptive capacity’ approach helped Lake Como perform successfully in three times as many scenarios. Compared to Figure 1, it appears to be easier to adapt to irrigation deficit than to flooding.

Figure 2. Upper limit adaptive capacity.

Figure 2 has green (adaptation), blue (success) and red (failure) in a chart that shows change to rainfall by percent, and change to temperature by degrees celsius. It shows that the ‘adaptive capacity’ approach helped Lake Como perform successfully in three times as many climate change scenarios as those from the previous figure.


When will these tipping points occur?

Climate projection models are still the only way of answering this question.

Below are two projections for the Lake Como scenario. They use the same categories as the previous figures. For example, in 2025, 3 of 22 models predict failure, and 6 of 22 predict a climate that can be adapted to.

Looking at the projections alone doesn’t give you much information about the system. It is difficult to tell how close some of these projections are to failing. That information is important, given the errors they contain.

Figure 3. Climate projections for the Lake Como reservoir.

The figures show two separate climate change projections: One for 2025, one for 2050.


Identifying the above tipping points can be an important context for looking at projections.

Figure 4 (below) contains 22 climate model projections of the climate of Lake Como in 2025 and 2050. The figure shows that, while the projections change, how the system performs under specific climates does not.

This is where such an approach is useful. You can be more confident in the performance under some projections than others, based on how close they are to the tipping points.

Figure 4. Tipping point approach.

This figure shows the two climate change projections overlaid over one of the previous charts. It shows that while the climate may change, the system understanding does not – and how this might aid decision making.


Water utilities adapt through data-driven decisions

The important decisions made by water utilities need to include considerations of an uncertain future. This may mean that they will need to adapt their thinking.


This article outlines a survey of water utilities that is part of a current Water Research Australia project, titled Better data-driven decision making under future climate uncertainty.

Adapting to a changing climate

Australian water utilities have dealt with extreme events and changes in their operating environments for a long time. Their resilience is tested by disruptions like water scarcity, floods, power outages and pipe failures. While these disruptions may or may not be linked to climate change, they are an indication of utilities’ ability to cope with future challenges.
Adapting to Australia’s variable climate and extreme weather events has already cost the urban water industry millions. In some cases, the responses to these events by governments and water utilities have been heavily criticised.
This is why water industry decision makers need appropriate techniques that they can use with suitable climate data. This will help them to make robust business, planning and operational decisions for an uncertain future.

How is the future changing thinking?

We know water utilities appreciate they are exposed to climate-related risks. Such risks include:
  • water security (e.g. higher demand and reduced rainfall)
  • infrastructure (e.g. elevated sea level and associated increase in flooding)
  • safety (e.g. exposure of personnel to extreme heat and fire danger)
  • inter-dependencies (e.g. power or communications failure leading to service disruption).
What we don’t know is how water utilities are making decisions to address these risks. That is what the research project wants to discover.
The project is led by SA Water and the University of Adelaide. It’s funded by the Australian water industry through contributions to Water Research Australia.

Survey of water utilities: How are important decisions being made?

The first stage of the research involves experts in a range of areas. Those areas include decision-making, climate change science, climate change impacts and climate change adaptation.
Participants are drawn from 17 industry, university and consulting partners. The group of experts includes personnel from 11 Australian water utilities.
The survey is currently underway.
By surveying water utility executives, management and staff, we want to find out how important decisions are being made.

How will the survey help utilities adapt?

The survey will provide insight into:
  • Climate data and information that is used to inform decision-making
  • How climate data are analysed, and how the resulting information is used
  • Rules of thumb and decision processes applied in decision-making
  • Formal option evaluation methods to adapt solutions
  • How externalities – such as customers, the urban environment, and businesses – are exposed to climate change.

Who is the survey for?

The survey targets decision-makers in key areas exposed to climate-related risk. Those areas include:
  • Strategy and planning
  • Asset management
  • Operations
  • Communications
  • Finance.
Survey results will benchmark current and best practices for climate-related decision-making in water utilities. The results will inform an online framework that will connect decision-makers to appropriate tools and resources.

What’s in the survey?

The survey encourages decision-makers to list 5-10 key decisions they make in their roles. Questions then help us analyse the features for each decision. Such questions may include:
  • What climate data supports the decision?
  • What planning horizons are used?
  • Are future scenarios or extreme events considered?
  • Are inter-dependencies with other utilities considered (e.g. power, transport)?

Preliminary results

Early results from the survey show that organisations understand and appreciate areas exposed to risk, but mostly at a high, strategic level. The strategies adopted must also give decision-makers in asset management and operations the right tools.
Several participants have taken the survey in groups. This has led to great discussion about ways to embed climate adaptation strategy. Group surveys have helped us identify water utilities’ inter-dependencies, as well as climate risks in asset management and operations.

5 Things Farmers Should Know About Farm Dams

A farmer must be cautious before installing a farm dam. This type of dam ends up holding not only water, but also rocks, silt and dirt, because flood runoff carries sediments into it as it fills. These sediments are unhealthy, and a bad smell is often associated with them. For this reason, a farmer may instead prefer a water tank, which has low evaporation and no seepage.

While it may seem like a good idea to build a dam to protect the water supply, the process is not easy. First of all, dams must be built with enough land to prevent flooding and erosion. Secondly, dams can be constructed to provide electricity to farms. Lastly, farm dams must be built and maintained appropriately. Some farmers may find that the cost of constructing a dam outweighs the benefits of having one.

Farmers should also consider the environmental impact of dams. While some people tout the use of dams as a renewable energy source, the reality is that they block water and have detrimental effects on ecosystems and people downstream. For example, the Grand Ethiopian Renaissance Dam in Ethiopia is filling, and is on track to be Africa’s largest hydroelectric source. Egypt, on the other hand, is concerned about the reduced water for agriculture.

As a farmer, you should take precautions to protect your farm dam from excessive algal growth. These organisms can cause serious health risks, so it’s important to prevent them. Luckily, there are several ways to reduce the growth of algae in your farm dam. Using flocculation and pumps to remove suspended matter are two methods to make your water clear. And if you’ve already installed a dam, you can simply fill it up with water and use it for livestock.

A farm dam’s integrity is important. A poorly built wall can compromise the whole dam, so a farmer should not neglect repairs. A good dam is one that is designed to last for generations. If you plan to use it for farming, you should consider the longevity of its wall, and it should be able to handle the high volumes of water that farmers need. A wall made of dry, poorly compacted earth will compromise the integrity of the dam.

There are a few other factors to consider before building a farm dam. The condition of the soil surrounding the dam matters, both for limiting the formation of algae and for the stability of the structure. Hence, it is essential to plan for a well-maintained structure to protect the dam’s integrity. It is also essential to plan for a disaster before deciding on the best method of constructing a dam.

Besides preserving the soil around a farm dam, it is essential to pay special attention to its health. A dirty dam can cause problems by accumulating dirt and organic materials. It also impedes water flow. It’s essential to have regular checks on the condition of your farm dam. In addition to the above, it is also necessary to check the integrity of the wall. If the wall is not in good condition, it could lead to collapse during a flood.

Dams are essential for water management. The water in a dam is vital to farming. If it isn’t functioning properly, it will be useless. Its quality depends on how well it is constructed. A dam is a valuable asset for agricultural production, so it must be safe for livestock, the environment, and human life. Ensure that it is durable by taking proper care of it. Its integrity should be assured to ensure the safety of the water in your farm.

A dam should be inspected periodically to ensure it is in good condition. A leaking dam is a risky proposition and a liability; it should be checked and, if necessary, repaired by an expert. You must also check for cracks and other signs of structural damage.

Practical advice to reduce uncertainty in hydrological predictions

In this article researchers and industry will learn how to reduce uncertainty in hydrological predictions. It presents the recommendations of a recent study published in the Water Resources Research journal [1].

For the first time, we have identified the best error model to use for representing uncertainty in predictions for hydrological modelling applications. So you can use the recommendations most effectively, we begin by explaining the importance of estimating uncertainty in hydrological predictions.

This was an outcome of a long-term collaboration between the University of Adelaide, University of Newcastle and the seasonal streamflow forecasting team at the Bureau of Meteorology.

The long-term goal of this research is to improve streamflow forecasts around Australia (see impact).


Why should we quantify the uncertainty in hydrological predictions?

Rainfall-runoff models predict the response of flow in streams and rivers to rainfall (referred to as hydrological predictions).

The hydrological predictions from these models are widely used to inform decisions by a range of authorities. They include:

  • flood warning services
  • water supply authorities
  • environmental managers
  • irrigators
  • hydroelectricity generators.

Given the reliance on these hydrological predictions, it is important to understand the uncertainty in these predictions.

Hydrological predictions are not perfect and can have large errors.

These errors are typically in the order of 40–50% [2]. There are errors between observed and predicted streamflow, because

  • Catchments are complex, and hydrological models are simplified representations of complicated catchment physics
  • Catchment processes are hard to measure: Rainfall varies in space and time. Streamflow is not measured directly. This produces observation errors in the rainfall and streamflow data used to develop and test these models.

Quantifying the errors in predictions allows us to estimate uncertainty in predictions.

Uncertainty estimation is essential for quantifying risk

If we do not account for uncertainty we can under-estimate the risk of failure.

Figure 1: Hypothetical example showing predicted system performance (e.g. flood mitigation, drought security, stream health) resulting from Actions A and B. In this case we do not consider uncertainty in system performance for each action.

Consider the hypothetical example introduced in Figure 1.

You are given the task of choosing between Action A and Action B to improve system performance and avoid system failure. This could be for flood mitigation, drought security or stream health. If you ignore the uncertainty in performance (as in Figure 1), you would choose Action B, since it has the highest performance.

Figure 2: The same as Figure 1, but now considering uncertainty in system performance for each action.

But, you might make a different decision if you were to quantify the uncertainty (see Figure 2). Action A has a much lower probability of failure and would be the preferred action if you want to reduce risk.

Water management is all about balancing risks, e.g. risk of floods, risk of water shortage. So it is clear that quantifying uncertainty is essential for quantifying risk. If we ignore it, we are under-estimating the risk of unwanted outcomes.
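The reversal between Figures 1 and 2 can be sketched with made-up numbers. The threshold and the two performance distributions below are entirely hypothetical, chosen only to illustrate the effect:

```python
import numpy as np

rng = np.random.default_rng(42)
threshold = 50.0  # performance below this counts as system failure (hypothetical)

# Hypothetical performance distributions: A is modest but reliable,
# B has a higher mean but much wider uncertainty
action_a = rng.normal(loc=60.0, scale=4.0, size=100_000)
action_b = rng.normal(loc=70.0, scale=15.0, size=100_000)

p_fail_a = np.mean(action_a < threshold)
p_fail_b = np.mean(action_b < threshold)
print(f"P(fail | A) = {p_fail_a:.3f}, P(fail | B) = {p_fail_b:.3f}")
```

Despite B’s higher expected performance, its probability of failure is far larger, which is exactly the reversal illustrated in Figure 2.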

It’s not that difficult

There is a perception that uncertainty analysis is hard. Our research shows that you can get robust uncertainty estimates using simple approaches. See recommendations.

What are the challenges with estimating uncertainty?


Figure 3: Observed streamflow data from the Cotter River (ACT, Australia) compared with predictions from the GR4J hydrological model. The size of the errors between observations and predictions is larger for higher streamflow predictions. The ovals highlight periods where this is evident.

Uncertainty estimation is based on the statistical modelling of the errors between hydrological predictions and observations.

There are many challenges in modelling these errors. For example, higher streamflow predictions have larger errors. This is seen in Figure 3. This is known as “heteroscedasticity” in errors (non-constant variance). Errors are also persistent (e.g. large errors typically follow large errors) and skewed.

Appropriate modelling of errors needs to account for these properties.

How can we estimate uncertainty in predictions?

Many approaches are used to estimate uncertainty in hydrological predictions. Methods such as Bayesian Total Error Analysis (BATEA) [3] disaggregate uncertainty into different components. These components include input data errors, model structure errors, and output data errors.

But, in many practical applications, an error model that aggregates all errors is preferable because we are primarily interested in the uncertainty in predictions. These are referred to as “residual error models” in the literature, but here we are going to simplify this to “error models”.

In both operational and research settings, a wide range of different error models are used. These include

  1. weighted least squares (WLS) approaches, and
  2. approaches based on transformations of the data (e.g. Log and Box Cox transformations).
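To see why transformation approaches help with heteroscedasticity, consider this sketch on synthetic data (the flow distribution and error size are invented): applying a Box Cox transformation before computing residuals makes the error spread far more uniform across low and high flows.

```python
import numpy as np

def box_cox(q, lam):
    """Box-Cox transform; lam=0 reduces to the log transform."""
    return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

rng = np.random.default_rng(2)
predicted = rng.gamma(shape=2.0, scale=20.0, size=5000)
observed = predicted * np.exp(rng.normal(0.0, 0.3, size=5000))  # multiplicative errors

# Compare the spread of errors for low vs high flows, in raw and transformed space
ratios = {}
for name, err in [
    ("raw", observed - predicted),
    ("BC0.2", box_cox(observed, 0.2) - box_cox(predicted, 0.2)),
]:
    low = err[predicted < np.median(predicted)].std()
    high = err[predicted >= np.median(predicted)].std()
    ratios[name] = high / low
    print(f"{name}: error std ratio (high/low flows) = {ratios[name]:.2f}")
```

A ratio near 1 means the transformed errors have roughly constant variance, which is what standard error models assume.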

But until now, no one has evaluated which error models work best over a diverse range of catchments.

So how big a difference can the choice of error model make?

Within the hydrological community, both WLS and transformation approaches are widely used to account for heteroscedasticity in errors. Thus, we might expect that the specific choice of error model would not make a big difference to predictions.

Surprisingly, it can make a very big difference.

Probability limits for streamflow predictions in Cotter River, based on WLS and BC0.2 error models. The width of the 90% confidence intervals are much larger for WLS than BC0.2.


Figure 4: Uncertainty estimates for hydrological predictions in the Cotter River (ACT, Australia) based on the Weighted Least Squares (WLS) error model and the Box Cox error model with fixed parameter (BC0.2).

Figure 4 shows uncertainty estimates for streamflow in the Cotter River, based on the widely used GR4J hydrological model. In the top panel, a weighted least squares (WLS) error model is used. In the bottom panel, the Box Cox transformation is used, with a transformation parameter lambda=0.2 (BC0.2).

We see that WLS over-estimates high flows, and the uncertainty in the WLS predictions is greater than in the BC0.2 predictions.

The BC0.2 error model produces predictions that are more precise and more consistent with the observed data. Thus they would be far more useful for management.

How do we identify robust error models for multiple catchments?

The results in Figure 4 highlight the importance of the error model in predicting uncertainty. But are these results consistent over multiple catchments, with different physical characteristics? How do other error models compare with the two models considered in Figure 4? And why do some error models perform better than others?

To identify robust error models for practical purposes we performed a wide range of empirical case studies based on

  • 8 common error models
  • 23 catchments from Australia and the USA
  • 2 hydrological models.

We strengthened the robustness of our findings using theoretical analyses to understand when and why error models performed the way they did.

The findings of this study have recently been published in the Water Resources Research journal.

How do we work out which error model is best?

To estimate uncertainty and describe risk, we want predictions that are

  • Reliable: probabilistic predictions are statistically consistent with observed data. For example, 5% of the observed data should lie outside the 95% confidence limits. If 20% of the observed data lie outside the 95% limits, then the probabilistic predictions are not reliable.
  • Precise: small uncertainty in predictions. For example, we want the 95% confidence intervals to be narrow, as with BC0.2 in Figure 4, and not unnecessarily wide, as with WLS.
  • Unbiased: total volume matches observations.

To compare performance across 368 case studies (23 catchments x 2 hydrological models x 8 error models), we summarise these aspects of predictive performance using metrics.
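As a rough sketch of how such metrics might be computed from an ensemble of probabilistic predictions (these are simplified, illustrative versions on synthetic data; see the paper for the exact definitions):

```python
import numpy as np

def predictive_metrics(obs, pred_samples):
    """Simplified reliability/precision/bias metrics for an ensemble of predictions.

    pred_samples has one row per ensemble member. These are illustrative
    metric definitions only, not those used in the paper.
    """
    lo, hi = np.percentile(pred_samples, [2.5, 97.5], axis=0)
    coverage = np.mean((obs >= lo) & (obs <= hi))        # reliability: want ~0.95
    width = np.mean(hi - lo) / np.mean(obs)              # precision: smaller is better
    bias = abs(pred_samples.mean(axis=0).sum() - obs.sum()) / obs.sum()
    return coverage, width, bias

# Synthetic example: a "true" flow series, noisy observations of it, and a
# 500-member predictive ensemble with the same error distribution.
rng = np.random.default_rng(3)
truth = rng.gamma(2.0, 20.0, size=365)
obs = truth * np.exp(rng.normal(0.0, 0.2, size=365))
pred_samples = truth * np.exp(rng.normal(0.0, 0.2, size=(500, 365)))

cov, width, bias = predictive_metrics(obs, pred_samples)
print(f"coverage={cov:.2f}, relative width={width:.2f}, volume bias={bias:.2%}")
```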

Ideally we would use probabilistic predictions that are reliable, precise and unbiased. But in practice this is hard to achieve.

So which error model should I use?

Based on empirical case studies and theoretical analysis we came to the following four conclusions.

1. Error models that transform the data produce more reliable predictions

Error models based on the Log transformation and Box Cox transformation with fixed parameter are better than the weighted least squares (WLS) error model. This is because transformation approaches capture the real skew in residuals.

2. Choosing the best error model depends on the type of flow regime in the catchment.

For perennial catchments (that always flow), the log and log-sinh error models produce reliable and precise predictions.

In ephemeral catchments (with a large number of zero flow days) these error models produce very imprecise predictions. The BC transformation with lambda=0.2 or lambda=0.5 is better in these catchments.

3. More complex error models do not necessarily produce the best predictions.

Calibrating the transformation parameter in the Box Cox error model produces predictions that are reliable, but often extremely imprecise. This method improves estimates of low flows at the expense of high flows.

The two-parameter log-sinh transformation error model produced similar predictions to the simpler log transformation error model in perennial catchments. In ephemeral catchments it produced predictions with poor precision.

4. No single error model performs best in all aspects of predictive performance.

In other words, there is a trade-off between different aspects of performance.

In perennially flowing catchments, we found that the Log transformation error model produced best reliability. But in these same catchments, the Box Cox transformation with lambda=0.2 produced predictions with the best precision.

This means that your choice of error model will depend on:

  • what you will use predictions for, and thus which metrics are most important to you, and
  • the resources available for trialling different error models

Broad Recommendations 

If you’re after a simple choice of a single error model and don’t want to undertake an in-depth analysis of performance trade-offs, we make the following broad recommendations.

Perennial catchments

In perennial catchments, use:

  • Log error model if reliability is important
  • Box Cox transformation with lambda=0.2 if precision is important
  • Box Cox transformation with lambda=0.5 if low bias is important.

Ephemeral catchments

In ephemeral catchments, use:

  • Box Cox transformation with lambda=0.2 if reliability is important
  • Box Cox transformation with lambda=0.5 if precision or bias is important.
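For convenience, the broad recommendations above can be encoded as a simple lookup:

```python
def recommend_error_model(regime, priority):
    """Return the broadly recommended error model from the bullet points above.

    regime: 'perennial' or 'ephemeral'
    priority: 'reliability', 'precision' or 'bias'
    """
    table = {
        ("perennial", "reliability"): "Log",
        ("perennial", "precision"): "BC0.2",
        ("perennial", "bias"): "BC0.5",
        ("ephemeral", "reliability"): "BC0.2",
        ("ephemeral", "precision"): "BC0.5",
        ("ephemeral", "bias"): "BC0.5",
    }
    return table[(regime, priority)]

print(recommend_error_model("ephemeral", "precision"))
```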

See the “Recommendations” section of our paper for further details.

What is the impact of these findings on improving predictive uncertainty?

If you follow these broad recommendations, you can expect to reduce your predictive uncertainty (the width of the predictive limits) from approximately 105% to 40% of observed streamflow, and decrease the bias in total volume from 25% to 4% (based on median metrics across the 46 case studies), without major compromises in reliability.

The Bureau of Meteorology (BOM) is currently testing the recommended error model to improve their Seasonal Streamflow Forecasting Service. The recommendations are being trialled to improve the post-processing of monthly and seasonal forecasts.

Initial results are promising, with a significant increase in forecast performance over a large number of sites across Australia. We plan to publish this in an upcoming article in the Hydrology and Earth System Sciences journal [4].

For more information on heteroscedastic residual error models, please check out our recent article in Water Resources Research.

We will also be presenting this work at European Geophysical Union General Assembly 2017 on 23-28th April in Vienna, Austria (abstract). We look forward to seeing you there.

For a sneak peek, see the seminar we recently presented at the Bureau of Meteorology, “Advances in improving streamflow predictions, with application in forecasting”, available on figshare.


The outcomes of this research represent the combined work of a great team of researchers and operational personnel at the University of Adelaide (UoA), University of Newcastle (UoN) and the Bureau of Meteorology (BoM).

This includes intellectual contributions from Associate Professor Mark Thyer (UoA), Professor Dmitri Kavetski (UoA), Professor George Kuczera (UoN), Narendra Tuteja (BoM), Julien Lerat (BoM), Daehyok Shin (BoM) and Fitsum Woldemsekel (BoM), and the financial support of the Australian Research Council (through ARC Linkage Grant LP140100978), the Bureau of Meteorology and South East Queensland Water. The opinions expressed in this article are the authors’ own and do not reflect the view of the University of Adelaide, the University of Newcastle, the Bureau of Meteorology or South East Queensland Water.


1. McInerney, D., Thyer, M., Kavetski, D., Lerat, J. and Kuczera, G. (2017), Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors. Water Resources Research. doi:10.1002/2016WR019168

2. Evin, G., M. Thyer, D. Kavetski, D. McInerney, and G. Kuczera (2014), Comparison of joint versus postprocessor approaches for hydrological uncertainty estimation accounting for error autocorrelation and heteroscedasticity, Water Resources Research, 50(3).

3. Kavetski D, Kuczera G and Franks, SW (2006) Bayesian analysis of input uncertainty in hydrological modelling: 1. Theory, Water Resources Research, 42, W03407.

4. Woldemsekel F., Lerat, J., Tuteja, N., Shin, D.H., Thyer, M., McInerney, D., Kavetski, D., Kuczera, G. (2017) Evaluating residual error approaches to post-processing monthly and seasonal streamflow forecasts, Hydrology and Earth System Sciences, Special Issue on Sub-seasonal to seasonal hydrological forecasting (in preparation).

Does it need to be there? Remembering exposure in risk

In the engineering world we are often met with questions around risk mitigation, especially the rising risk of flooding and coastal inundation. The response is often a large concrete structure or building hardening, designed to withstand the elements. But we can take a more holistic approach to risk management and reduction.


Hazard, Exposure & Vulnerability

Risk is composed of three distinct elements: hazard, exposure and vulnerability. These elements allow us to understand and reduce risk in different and complementary ways. Each is critical, and each demands a specific management approach.

To manage the hazard, we try to hold back the water using levees and sea walls.

To decrease our vulnerability, we construct higher buildings, with stronger walls and flexible connections.

But to remove the need for either of these we can shift our exposure. In other words, leave nothing valuable to be flooded.

A holistic approach considers all three elements and finds the appropriate balance of measures dependent on level of risk, cost to mitigate, and socioeconomic benefits of the asset in question.

New Questions are Needed in Flood-Prone Areas

Recent flood events around the world highlight the importance of asking, does it really need to be there?

If we look at the Mississippi River, it has been engineered to such an extreme level that today it barely resembles the natural and changing flow channel it once was. This, coupled with residential and commercial development in its floodplains, made disaster all but inevitable. Three days of rain starting on 26 December 2015 caused 25 deaths, forced thousands to evacuate, and resulted in huge rebuild costs (The Economist, 2016). So should we have continued to develop in its floodplains?

Floods over Christmas 2015 in the UK similarly highlighted the need to consider development approvals in flood-prone areas. It’s expected that 20,000 homes will be built in flood-prone areas across the UK (The Telegraph, 2015).

Although these new developments may be behind existing protection measures, the ever-changing nature of the hazard, driven by climate change, means the water is only getting higher. The story is no different in Australia with the Productivity Commission last year calling land use planning perhaps the ‘most potent policy lever’ for influencing the level of future natural disaster risk.

Understanding Exposure

The emphasis on land use planning and consideration of exposure in disaster risk reduction often focuses on restricting new development. But it can (and should) be more subtle than that.

When managing exposure to any natural hazard, consideration of supply chains, critical infrastructure, essential services and network redundancy is equally important. When we broaden our thinking and delve into the factors that allow society to evolve, our management approaches broaden equally. This provides decision makers with many more ways to deal with the hazards societies face, beyond a yes or no development approval.

Modelling Exposure

As we broaden our thinking in terms of risk, to manage and reduce it we need to model all of its components. The modelling of exposure is particularly challenging. It’s a challenge that relates to some of this group’s (iWade’s) work.

Modelling exposure into the future requires an understanding of demographic and economic drivers for new investments and developments. The uncertainty involved in this can also be staggering. Methods need to be developed to ensure risk reduction options are robust or can adapt to future hazards and societal needs.

An approach this research group is taking is to model land use change, driven by the need to meet the State’s population and economic projections. We are also overlaying flood modelling (along with other disasters) to understand the changing risk due to climate change, economic development and population change. These, coupled with scenarios for the future of cities, allow us to capture uncertainties and test policies to assess their future effectiveness.

Research Report on Modelling, Understanding and Reducing Exposure

Members of this research group are currently developing decision support systems which include the land use change and hazard models for government departments in South Australia, Victoria and Tasmania.

The research includes developing software packages and running workshops to ensure the models are designed to be as relevant as possible to assist decision makers to make better long-term decisions for risk reduction.

Optimise pump controls automatically to save time and money

This article looks at how to optimise rule-based pump controls using tools such as the EPANET2 Toolkit. Until now, this has not been possible; we modified the toolkit so that it is.


Every time we open a tap, water comes out. I had never asked myself why, before starting my civil engineering degree. It was only after I did that I realised that there is a lot of work behind water distribution systems (a water distribution system is the system of pipes, valves, tanks and pumps that deliver us water). Before that, I thought it was kind of magic.

Now, about 15 years later, I think it is still a kind of magic. Not only because the design of these systems is complex, but also because their operation needs to take into account a lot of constraints. In particular, the pump operation.

How do you operate a pump?

The easy answer is to switch it on or off depending on whether the tank is empty or full, respectively. But, the small example in Figure 1 will show you that it is not that easy after all.

The pump in Figure 1 fills a tank that is used to provide water to the users. We can use the tank level to decide when to switch on or off a pump.

Figure 2 shows the tank level and the pump operation if we decide to switch the pump on when the tank level reaches 7.9 m (this is the lower trigger level) and if we decide to switch the pump off when the tank level reaches 9.7 m (this is the upper trigger level).

This figure shows a pumping system. On the far right are some houses, labelled “users”. Above them is a tank, labelled “tank”. To the left and below the tank is an item labelled “pump” and next to it is an item labelled “water source”. It shows that the pump moves water from the water source to the tank, and then the tank serves the users.

Figure 1: simple example of pumping system

The pump controls in Figure 2 are not bad after all:

  • the tank is never empty, so users can have water the whole time
  • the tank is refilled after 24 hours
  • the number of pump switches is not excessive.

    This diagram shows a comparison between pump flow and tank levels. Between 1 pm and 1 am, at around 7 pm, a label says “some of this pumping could have been delayed to the off-peak tariff period!”. That off-peak tariff period is shown above it. The label says this because the pump flow is above 100 Litres per second, and yet the tank level is low at that time.

    Figure 2: example of pump operation with one set of tank trigger levels
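The trigger-level logic illustrated in Figure 2 amounts to a simple hysteresis rule on a tank mass balance. The sketch below simulates it over 24 hours; the tank area, pump flow and hourly demand pattern are invented for illustration, while the trigger levels match the example above.

```python
# Minimal sketch of trigger-level pump control via a tank mass balance.
# Tank area, pump flow and the demand pattern are invented numbers.
TANK_AREA = 400.0        # m^2
PUMP_FLOW = 120.0        # L/s while the pump runs
LOWER, UPPER = 7.9, 9.7  # lower/upper trigger levels (m)

def simulate(level, demands_ls, dt_s=3600):
    """Step through hourly demands (L/s), switching the pump at the trigger levels."""
    pump_on = False
    levels = []
    for demand in demands_ls:
        if level <= LOWER:
            pump_on = True       # tank low: switch the pump on
        elif level >= UPPER:
            pump_on = False      # tank full: switch the pump off
        inflow = PUMP_FLOW if pump_on else 0.0
        # net volume change (L) converted to tank level change (m)
        level += (inflow - demand) * dt_s / 1000.0 / TANK_AREA
        levels.append(level)
    return levels

demands = [40, 30, 30, 40, 60, 90, 130, 150, 120, 100, 90, 80,
           80, 90, 100, 120, 150, 160, 140, 110, 80, 60, 50, 40]
levels = simulate(8.5, demands)
print(f"min level: {min(levels):.2f} m, final level: {levels[-1]:.2f} m")
```

Between the two triggers the pump simply keeps its previous state; this hysteresis is what keeps the number of pump switches down.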

However, we could have done better and saved a bit of money if we had pumped more in the off-peak tariff period, when energy is cheaper!

Who cares?

Maybe, at this point, you are already wondering ‘who cares?’ Well, the water utility, and all the people involved in pump operations, do. Researchers have also cared about this problem for a long time (e.g. Lingireddy and Wood, 1998; van Zyl et al. 2004; López-Ibáñez et al. 2008).

On some level, you should care too. Here’s why: Pumps use energy, which costs money. No matter where you are, you pay for water (and the electricity used to move it) directly or indirectly (e.g. through taxes).

It makes sense to switch the pumps on when the energy is cheaper (i.e. in the off-peak tariff period), but that is not easy. We could define the pump operation based on the time of the day (i.e. using scheduling), so that we are sure that we pump as much as we can when energy is cheaper. Figure 3 shows an example where we decide to switch off the pump at 8 am and to switch it on again at 4 pm. Now we exploit the off-peak tariff period as much as we can!  Perfect! Or, at least, it seems perfect. But what if the demands were bigger than expected and the tank runs empty before the off-peak tariff period starts? You cannot let this happen.

The problem is that we don’t know the water demands ahead of time. This makes it difficult to predict when to switch a pump on or off.

Figure 3: the pump is switched on or off according to the time of the day

Rule-based controls help deal with uncertainty

One way to take into account the uncertainty in water demands is to control the pumps based on multiple conditions.

For example, if we define a different set of tank trigger levels (when to switch a pump on or off) for peak and off-peak tariff periods, we can achieve the pump operations shown in Figure 4. The figure shows that we don’t pump more than necessary in the peak-tariff period, but we can also make sure that the pump will be switched on before the tank runs empty.

Figure 4: example of pump operation with two sets of tank trigger levels (one for the peak and one for the off-peak tariff period) using rule-based controls

We can implement this type of pump control in the hydraulic simulator EPANET2 (Rossman, 2000).

A rule-based control in EPANET2 looks like the following (the pump name and clock times shown here are illustrative):

RULE 1
IF SYSTEM CLOCKTIME >= 7 AM
AND SYSTEM CLOCKTIME < 11 PM
AND TANK t6 LEVEL < 8.5000
THEN PUMP p1 STATUS IS OPEN

You can see that the status of the pump depends both on the time of the day and on the tank level.

Typically, the tank trigger levels in the peak tariff period are lower than those in the off-peak tariff period. This way, the tank is not completely refilled during the expensive period of the day, and is kept as full as possible in the off-peak tariff period.

Optimising rule-based controls

The hydraulic simulator EPANET2 can easily be linked to optimisation algorithms using the EPANET2 toolkit. By doing this, the optimisation algorithm can find the best solution (or solutions) for you. For example, optimisation can find the set of tank trigger levels that minimises costs and/or energy consumption. Using trial and error instead would take far more time.
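Conceptually, the optimisation loop is simple: propose trigger levels, simulate, keep the cheapest. The sketch below uses random search, with `simulate_cost` as a hypothetical stand-in for a call to the hydraulic simulator; its toy cost surface is invented purely for illustration.

```python
import random

# Sketch of linking a simulator to an optimiser: random search over trigger
# levels. `simulate_cost` is a hypothetical stand-in for a hydraulic simulation.
def simulate_cost(lower, upper):
    cost = (lower - 7.5) ** 2 + (upper - 9.5) ** 2  # toy surface, cheapest near (7.5, 9.5)
    if upper - lower < 0.5:
        cost += 10.0                                # penalise excessive pump switching
    return cost

random.seed(0)
best = None
for _ in range(2000):
    lower = random.uniform(7.0, 9.0)
    upper = random.uniform(lower + 0.1, 10.0)
    cost = simulate_cost(lower, upper)
    if best is None or cost < best[0]:
        best = (cost, lower, upper)

print(f"best cost {best[0]:.3f} with triggers {best[1]:.2f} m / {best[2]:.2f} m")
```

In a real study the objective would come from a full hydraulic simulation (pumping cost under the tariff structure, plus constraint penalties), and an evolutionary algorithm would usually replace the random search.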

EPANET2 toolkit modification allows automation

Until now, the toolkit did not allow the automatic modification of rule-based controls during optimisation. Now, we have adjusted the EPANET2 toolkit (see Marchi et al. (2016) and the ETTAR toolkit), so that we can optimise rule-based controls automatically.

This means that we can have an optimisation algorithm that optimises the tank trigger levels taking into account peak and off-peak tariff periods.

Automatic optimisation saves you time and money

But what if you want to operate a pump based on the level of multiple tanks? Real systems usually have more than one pump and one tank! Now we can optimise multiple conditions at the same time.

In Marchi et al. (2016) we also tried to let the algorithm optimise the entire set of rules. That is, the algorithm is deciding every word and value in a rule (for example RULE 1 above).

We showed that the algorithm was able to find less expensive solutions for the 24-hour period tested!

There are many other possibilities

Maybe having the algorithm decide everything seems a bit too futuristic (even for me!). There is still a long way to go. There are a lot of considerations to take into account before the algorithm can really decide everything. But, this opens up a lot of interesting possibilities.

My hope is that the ETTAR toolkit can be used to find more cost-effective and reliable pump controls.

I hope you enjoyed this blog! If you are interested in this topic too, please leave a comment here or contact me.



Lingireddy, S. and Wood, D. (1998). “Improved Operation of Water Distribution Systems Using Variable-Speed Pumps.” J. Energy Eng., 10.1061/(ASCE)0733-9402(1998)124:3(90), 90-103.

López-Ibáñez, M., Prasad, T., and Paechter, B. (2008). “Ant Colony Optimization for Optimal Control of Pumps in Water Distribution Networks.” J. Water Resour. Plann. Manage., 10.1061/(ASCE)0733-9496(2008)134:4(337), 337-346.

Marchi, A., Simpson, A., and Lambert, M. (2016). “Optimization of Pump Operation Using Rule-Based Controls in EPANET2: New ETTAR Toolkit and Correction of Energy Computation.” J. Water Resour. Plann. Manage., 10.1061/(ASCE)WR.1943-5452.0000637, 04016012.

Rossman L.A., “EPANET2 user’s manual”, National Risk Management Research Laboratory, United States Environmental Protection Agency, Cincinnati, OH, 2000.

van Zyl, J. E., Savic, D. A., and Walters, G. A. (2004). “Operational optimization of water distribution systems using a hybrid genetic algorithm.” J. Water Resour. Plann. Manage., 130, 160–170.

The importance of multi-disciplinary research for alternative water sources

As researchers, we need a range of expertise to fully understand complex water supply systems. In this article I demonstrate what this means in the real world. Read on to find out how multi-disciplinary teams can be so important.


Water supply systems are complex. As we begin using more alternative sources of water, such as harvested stormwater and groundwater, these systems only become more complex. In a traditional water supply system, we rely on surface water from less developed or natural catchments. Climate change, population growth and overuse of these sources have put stress on our water resources. To ease the pressure, water utilities, local councils, developers and other water system managers have turned to alternative water supplies.

Analysing systems that use alternative sources from a hydraulic engineering perspective alone does not provide all the answers. We need different expertise to fully understand these systems.

How the hydraulic analysis approach works, in a traditional water supply system

A hydraulic analysis of a water supply/distribution system typically considers the system all the way from a supply reservoir to the consumers. It includes the pumps, tanks, pipes and valves along the way (Fig. 1). When we design and analyse such a system from a hydraulic perspective, our main considerations are:

  • Sizing of tanks: Taking into account the amount of water required by consumers and supplying water in emergencies (such as fires or pump outages).
  • Sizing of gravity pipelines: To provide adequate pressure for consumers, but also avoid high velocities that may damage pipelines.
  • Sizing of pumps and pressure pipelines together: Considering pressure and velocity constraints on the pipe and energy losses due to friction.
  • Adding valves where required: To sustain pressure, reduce pressure, or isolate sections of the network.
  • Determining operating rules for the system that ensure tanks always have enough water to supply demands and, where possible, defer pumping to off-peak (cheaper) electricity tariff periods.
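For example, the pipeline-sizing step often comes down to a head-loss calculation. The sketch below uses the Darcy-Weisbach equation with an assumed constant friction factor (a simplification; in practice the factor depends on Reynolds number and pipe roughness). The flow, length and candidate diameters are invented for illustration.

```python
import math

def darcy_weisbach_headloss(q_m3s, d_m, length_m, f=0.02):
    """Head loss (m) and mean velocity (m/s) in a pipe via Darcy-Weisbach.

    Uses a fixed friction factor f for simplicity; a real design would
    estimate f from Reynolds number and roughness (e.g. Colebrook-White).
    """
    area = math.pi * d_m ** 2 / 4.0
    v = q_m3s / area                                 # mean velocity (m/s)
    h = f * (length_m / d_m) * v ** 2 / (2 * 9.81)   # h_f = f (L/D) V^2 / (2g)
    return h, v

# Compare two candidate diameters for a 2 km pipeline carrying 100 L/s
for d in (0.25, 0.30):
    h, v = darcy_weisbach_headloss(0.1, d, 2000.0)
    print(f"D={d:.2f} m: velocity={v:.2f} m/s, head loss={h:.1f} m")
```

Even a modest increase in diameter cuts both velocity and friction losses substantially, which is the trade-off (pipe cost vs pumping energy) at the heart of sizing decisions.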

This approach assumes that water is always available in the water supply reservoir at the start of the system. We may need some assistance from hydrologists to ensure that this is a reasonable assumption.

This diagram shows the path of supply and distribution. The model runs left to right, as follows: Reservoir, pump, through a pressure pipeline and up to a tank, then through a gravity pipeline downwards to consumers.

Figure 1: A simple example of a traditional water supply system


Harvested stormwater systems need additional expertise

Harvested stormwater is run-off collected from urban areas. It is often used for non-potable supplies such as irrigation of open green spaces.

The Ridge Park Managed Aquifer Recharge Project in the City of Unley, South Australia, which is at the edge of Adelaide (Fig. 2), is a harvested stormwater system. In winter, this system:

  • collects water from Glen Osmond Creek (run-off comes from urbanised areas around the bottom of the South Eastern Freeway)
  • treats the water through biofiltration and a small treatment plant
  • injects the water into an aquifer for storage.

In summer, water is extracted from the aquifer and used to irrigate parks and reserves in the City of Unley area (Fig. 3).

This image shows a map of South Eastern metropolitan Adelaide in South Australia. Highlighted is the Glen Osmond Creek. Circled is the approximate catchment area upstream of the harvest point.

Ridge Park Managed Aquifer Recharge Project in the City of Unley, South Australia

This is a diagram of the Ridge Park Stormwater Harvesting and Aquifer Recharge System. It shows why additional expertise is needed. This diagram has a harvest pond that Glen Osmond Creek flows into. Water is pumped from the pond into a bioretention basin, and from there up into a storage tank via a treatment plant. Water is also pumped into the tank from the aquifer, but that water does not go via the treatment plant. From the tank water is then distributed.

Figure 3: The Ridge Park Stormwater Harvesting and Aquifer Recharge System

Additional information needed to analyse the system

We need to consider the hydrology of Glen Osmond Creek and its catchment to know how much water is available to be harvested. This is in addition to a typical hydraulic understanding of the pumps, pipes and valves.

When analysing this system in our research, we have come across several problems that require the expertise of other disciplines, notably hydrogeologists and electrical engineers. For example:

  • How much pressure is required to pump water into and out of the aquifer?
  • How do the aquifer properties affect the flow rates that can be achieved?
  • How much energy does it take to pump water through the treatment plant?
  • How much water can be held by the biofiltration basin and how long does it take to filter through?

In order to solve these problems, we reached out to people in our networks who have different expertise.

Non-technical expertise can be important

Input from non-technical areas, such as economic and social considerations, can also be important.

The economic analysis of a system is particularly important in the concept or proposal phase. It can help to justify the benefits of going ahead with a project.

A social analysis of a system is also important. It helps us consider how alternative water source systems affect people’s use of water and public land. It also helps in considering the amenity of the land used for the system’s infrastructure. For example, building a dam on a creek to harvest stormwater may take land away from public use. But, if the water is used to irrigate other open green spaces, the project may be beneficial overall.

Networks and co-operative research centres improve research

Researchers, particularly PhD students, often work on very specific topics. They have very deep but not necessarily broad knowledge in their respective technical areas. In order to solve the problems identified above, we need to talk to people with different technical backgrounds and learn from them.

PhD students do not often have a broad network of people that they can go to for help on issues outside their fields. Our academic supervisors can be great resources in this respect. They help us to expand our networks, and show us where to start looking for answers.

I have found that being part of the Cooperative Research Centre (CRC) for Water Sensitive Cities has also proved useful. This CRC is a group of researchers from several different universities and disciplines. We collaborate with industry and government partners. Our outcomes are urban water management solutions, education and industry engagement, with the goal of making towns and cities more water sensitive. This large group of researchers and industry partners helps me to better understand my research. It also improves the final results of my work.


This research is part of the CRC for Water Sensitive Cities Project C5.1 (Intelligent Urban Water Networks). It is supported by funding for post-doctoral research and a PhD top-up scholarship. The support of the Commonwealth of Australia through the Cooperative Research Centre program is acknowledged.