ANUDEM is a program that calculates regular grid digital elevation models (DEMs) with sensible shape and drainage structure from arbitrarily large topographic data sets. It has been used to develop DEMs ranging from fine scale experimental catchments to continental scale.
Digital elevation models (DEMs) underpin an extensive range of research and applications in natural resource analysis and assessment (Hutchinson 2008, Hutchinson and Gallant 1999, 2000). They are very commonly used to support hydrological applications that depend on accurate representation of surface drainage structure.
ANUDEM has been used to develop the Nine-second Australian Digital Elevation Model (ANU Fenner School of Environment and Society and Geoscience Australia 2008), which has in turn been used to derive nested catchments and sub-catchments for the Australian continent (Stein 2006).
Input data to ANUDEM may include point elevations, elevation contours, streamlines, sink data points, cliff lines, boundary polygons, lake boundaries and data mask polygons.
ANUDEM ensures good shape and drainage structure in the calculated DEMs in five main ways:
- Imposing a drainage enforcement condition on the fitted grid values that automatically removes spurious sinks or pits. This eliminates one of the main weaknesses of elevation grids produced by general purpose interpolation techniques. It greatly improves the utility of the DEM for hydrological applications. It can also aid in the efficient detection of data errors.
- Incorporating surface drainage constraints directly from input streamline data.
- Delineating ridges and streams automatically from input contour line data. This is achieved by inserting curvilinear ridge lines and streamlines at the corners of contour lines, since these corners indicate where ridges and streams cross the elevation contours.
- Breaking the continuity of the DEM over data cliff lines.
- Ensuring compatibility of lake boundaries with the elevations of connecting streamlines and neighbouring DEM points.
The drainage enforcement algorithm is one of the principal innovations of ANUDEM. It has been found in practice to be a powerful condition that can significantly increase the accuracy, especially in terms of drainage properties, of digital elevation models interpolated from both sparse and dense elevation data.
The drainage enforcement algorithm acts conservatively when attempting to remove sinks and does not impose drainage conditions that would plainly contradict the neighbouring elevation data. As a consequence, errors in both the elevation and position of input elevation data can often be indicated by sinks in the final fitted grid, especially when the input data include at least the principal streamline network. This is particularly useful when processing very large data sets. The program can write a file with the locations of the remaining spurious sinks to assist in the correction of data errors; the number of such sinks is usually quite small. The conservative nature of the program-imposed drainage conditions also makes the program quite robust to moderate errors in the positions of input streamline data, and capable of producing generalised (coarse resolution) elevation models with appropriately generalised drainage properties.
ANUDEM has a comprehensive set of procedures for assessing the quality of the fitted DEM, for optimising DEM resolution and for detecting data errors. In addition to flagging remaining spurious sinks and circular data stream networks, the program can write a file of largest scaled residuals. The largest of these residuals indicate large elevation errors and locations where elevation data are inconsistent with streamline data. Where there are inconsistencies between elevation data and streamline data, these can be due to small but significant errors in input elevation data or errors in location or direction of input streamline data.
Recent improvements
- Major revision of the method for incorporating and applying drainage enforcement to elevation contour data. The new method ensures more faithful representation of elevation contours and significantly improves the performance of the drainage enforcement algorithm on DEMs derived from elevation contours.
- Major revision of the method for incorporating lakes and interconnecting streamlines. The new method refines the positioning of lake boundaries as lines separating grid points (Hutchinson et al. 2011) and significantly improves the automated determination of the elevations of lake boundaries.
- Major revision of the method for incorporating streamline data. The new method refines the positioning of streamlines and upgrades the representation of streamline side conditions to better represent associated catchments.
- Upgraded drainage enforcement algorithm to prevent modification of drainage conditions associated with streamline data.
Main data flows
The flow chart below shows the main data flows through the ANUDEM program. Two point data types and six line data types are supported. Detection and correction of data errors is a very important part of quality DEM production. The point and line diagnostic files facilitate rapid and reliable detection of data errors. In particular, output sinks are a key indicator of the drainage properties of the DEM and of its overall quality. The diagnostic files are designed for ready plotting by a GIS.
Drainage enforcement algorithm
Drainage enforcement is achieved by attempting to remove all sink points that have not been identified as such in input sink data files. The essence of the drainage enforcement algorithm is to find, for each sink point, the lowest adjacent saddle point that leads to a lower data point, sink or edge, and to enforce a descending chain condition from the sink, via the intervening saddle, to the lower data point, sink or edge (Hutchinson 1989). This action is not executed if a conflicting elevation data point has been allocated to the saddle. The action of the drainage enforcement algorithm is modified by the systematic application of two user-supplied elevation tolerances. The program also enforces drainage by using streamline data.
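The descending chain condition can be sketched as follows. This is an illustrative simplification, not ANUDEM's implementation: given the elevations along the path from a sink, through its clearing saddle, to the lower destination, grid values are lowered just enough that the path descends monotonically (the function name and the `eps` step are hypothetical choices).

```python
def enforce_descending_chain(path_elevs, eps=0.01):
    """Lower elevations along a sink-to-destination path so that the
    path descends strictly, sketching the chain condition enforced
    from a sink via its clearing saddle (illustrative only)."""
    out = list(path_elevs)
    for i in range(1, len(out)):
        # Any point at or above its upstream neighbour blocks drainage,
        # so it is pushed just below that neighbour.
        if out[i] >= out[i - 1]:
            out[i] = out[i - 1] - eps
    return out
```

In the real algorithm this lowering is only performed when no conflicting elevation data point has been allocated to the saddle, and only within the user-supplied tolerances described below.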
The first elevation tolerance allows the user to adjust the strength of drainage enforcement in relation to both the accuracy and density of the input elevation data. The detailed action of this tolerance has undergone considerable development and testing with data sets of varying densities and accuracies at a variety of scales. The aim has been to achieve the strongest possible drainage enforcement without making serious errors in automated placement of drainage lines, particularly when the input data are limited in terms of accuracy or density. The action of the tolerance naturally becomes less critical as the accuracy and density of the input data improve. When the tolerance has been set appropriately, the sink points not cleared by the program are those associated with significant errors in elevation data or streamline data or with areas where the input data are not of sufficient density to reliably resolve the drainage characteristics of the fitted grid.
The first tolerance should principally reflect the elevation accuracy of the input data points but it can also reflect the density of the input elevation data. Elevation differences between data points not exceeding the first tolerance are judged to be insignificant with respect to drainage. Thus data points that block drainage by no more than the first tolerance are removed. When data points are not sufficiently dense to accurately resolve drainage, the first tolerance may be increased somewhat to yield a more generalised drainage pattern at the expense of fidelity to the elevation data. This is especially useful when working at broader scales (coarser than say 1:100,000). When gridding contour data the first elevation tolerance should be set to half the data contour interval.
The first tolerance is also used when searching for possible clearances of remaining sinks to favour adjacent saddles that lead to destinations significantly lower in elevation than the remaining sink over saddles that lead to sinks at similar elevations to the remaining sink. This is particularly important in identifying connected drainage structure in areas with low elevation relief. This tolerance is also used to slightly favour saddle points that are not associated with elevation data points over saddle points that are associated with elevation data points. The tolerance is also used to slightly favour saddles associated with drainage constraints that are consistent with the intended drainage enforcement over saddles associated with drainage constraints that are inconsistent with the intended drainage enforcement. The drainage enforcement algorithm does not reverse constraints associated with input streamline data. The sinks that remain because of this are often good indicators of errors in the direction of input streamline data.
The second elevation tolerance is used to prevent drainage enforcement through unrealistically high barriers, whether or not supported by elevation data. Drainage is not enforced through saddle points that are more than this tolerance above the associated sink. This tolerance is rarely active and its size is not critical. The program provided default value is six times the first elevation tolerance. On rare occasions, when analysing difficult data sets with large variation in local relief, the user may increase this tolerance. The second elevation tolerance is likely to be inactive when source data are reasonably dense or mainly consist of elevation contours.
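A minimal sketch of how the two tolerances could gate a clearing decision follows. The function name, return values and the three-way outcome are illustrative; ANUDEM's actual logic is considerably more involved. The default of six times the first tolerance matches the program-provided default described above.

```python
def clearing_decision(sink_z, saddle_z, tol1, tol2=None):
    """Decide whether a saddle may be used to clear a sink, using two
    ANUDEM-style elevation tolerances (illustrative interface).
    - Barriers no higher than tol1 are insignificant for drainage.
    - Saddles more than tol2 above the sink are too high to breach."""
    if tol2 is None:
        tol2 = 6 * tol1          # default: six times the first tolerance
    rise = saddle_z - sink_z
    if rise > tol2:
        return "blocked"         # unrealistically high barrier
    if rise <= tol1:
        return "clear"           # insignificant barrier: remove it
    return "review"              # significant barrier: needs data support
```

For contour data, setting `tol1` to half the contour interval (as recommended above) would make barriers smaller than half an interval clearable.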
Drainage enforcement is particularly effective when used in conjunction with input streamline data. This is useful when more accurate placement of streams is required than can be calculated automatically by the program. Input streamline data can also be used to remove sinks that would not otherwise be removed by the automatic drainage enforcement algorithm. This is in fact the recommended way to correct drainage anomalies in elevation grids when there are no errors in the input topographic data. Input streamlines must be directed in the direction of elevation descent. All downstream elevation data points that conflict with strict descent down each streamline are removed. The program removes closed loops from input streamlines and writes the locations of such loops to the output stream error file.
ANUDEM permits modelling of stream distributaries by allowing each grid point to have up to two downstream directions. Elevations along all streams, including all distributaries, are initialised using a recursive procedure that uses all elevation data points that lie on streamlines. The output stream error file includes a flag for all distributary points to permit checking for possible streamline direction errors.
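The strict-descent rule for elevation data points allocated to a streamline can be sketched as follows; the function name and the `(id, elevation)` pair representation are illustrative, not ANUDEM's data format.

```python
def filter_stream_points(stream_points):
    """Walk a directed streamline (ordered from source to outlet) and
    drop any elevation data point that would violate strict descent,
    in the spirit of ANUDEM's treatment of points on streamlines."""
    kept = []
    last_z = float("inf")
    for pid, z in stream_points:
        if z < last_z:
            kept.append((pid, z))   # strictly descending: keep
            last_z = z
        # else: a downstream point at or above the last kept point
        # conflicts with descent and is discarded
    return kept
```

For example, a point higher than its upstream neighbour on the same streamline would be removed, leaving only a strictly descending sequence.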
Side conditions are also set for each data streamline. These ensure that the streamline acts as a breakline for the interpolation conditions across the streamline so that each streamline lies at the bottom of its accompanying valley. Side conditions are not set for data points beside streams whose elevations are more than the second elevation tolerance below the height of the stream. Remaining sinks associated with such points are a good indicator of elevation errors and streamline direction errors.
New data types
Three new data types have been introduced with ANUDEM to further improve its locally adaptive capacity to model the shape and drainage structure of the landscape and to take advantage of existing source data.
Cliff line data
A capacity to process cliff line data was first introduced to allow for broad scale breaks in elevation values in selected areas of the Australian continent (ANU Fenner School of Environment and Society and Geoscience Australia, 2008). Cliff lines permit a complete break in continuity between neighbouring grid elevation values each side of the data cliff lines, as they are encoded into the grid. Further details of this algorithm will be described in a forthcoming publication. Cliff lines must be supplied to ANUDEM as directed lines, with the low side of each cliff line on the left and the high side of the cliff line on the right. This permits removal of elevation data points that lie on the wrong side of the cliffs, as they are encoded onto the grid, and enables better placement of cliffs in relation to streamlines.
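Determining which side of a directed cliff line a data point lies on is a standard orientation test; the sketch below uses the 2-D cross product and is not taken from ANUDEM's source, but it illustrates how "low side on the left, high side on the right" can be checked for a single segment.

```python
def side_of_segment(a, b, p):
    """Return which side of the directed segment a->b the point p lies
    on: 'left' (the low side of a cliff line, per the convention above),
    'right' (the high side), or 'on'. Points are (x, y) tuples."""
    cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    if cross > 0:
        return "left"
    if cross < 0:
        return "right"
    return "on"
```

An elevation data point that tests 'left' of a cliff segment but is higher than points on the right would then be a candidate for removal as lying on the wrong side of the cliff.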
The initial method for encoding cliffs permitted accurate breaking of continuity of the fitted DEM over data cliff lines, provided these lines were not within two grid cells of each other. This was unnecessarily restrictive since cliffs in general can be arbitrarily close to each other. The method has been redesigned to completely remove this restriction. The efficiency in coding of the revised method has permitted better processing of cliffs in terms of the quality of the output DEM and in terms of computational efficiency.
It has also been found that the minor shifts in position that are imposed on streams and cliffs as they are incorporated into the grid can lead to spurious interactions between these data. An automated method has therefore been developed to make small adjustments in the placement of both streams and cliff lines in the grid to minimise these spurious interactions. The magnitudes of these adjustments are normally less than the width of one grid cell but the adjustments can make a significant improvement in the quality of DEMs that depend on both stream and cliff line data. The maximum adjustments in cliffs and streamlines can be set by the user to reflect different positional accuracies of each data type.
Lake boundary data
Lake polygons were initially incorporated in ANUDEM as simple masks to set the elevation of each lake surface to the minimum elevation of all DEM values immediately neighbouring the gridded lake. This simple algorithm is not sufficient to accurately model landscapes with many lakes with interconnecting streams. The method for incorporating lakes has been completely revised to make full use of the information implicit in such lakes.
The revised method treats each lake boundary as a contour with unknown elevation and iteratively estimates the elevation of this contour from the grid points on the lake boundary. At the same time, the elevation of each lake boundary is made to conform with the elevations of any upstream and downstream lakes. The elevation of each lake boundary is also made to be consistent with the neighbouring DEM values. Grid points immediately outside the lake are made to lie above the elevation of the lake boundary and grid points on the interior of the lake are made to lie below the elevation of the lake boundary. These conditions are satisfied using an iterative procedure to be described in a forthcoming publication. The method also flags errors in connecting stream line networks, including circular stream networks and lakes with multiple outflows.
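A single step of such an estimate might look like the sketch below. This is a deliberately simplified illustration of the idea, with hypothetical names: the boundary elevation is estimated from the boundary grid values and then clamped so it sits below any upstream (inflow) lake and above any downstream (outflow) lake. The actual procedure iterates this jointly with the DEM solution.

```python
def estimate_lake_level(boundary_z, inflow_z=None, outflow_z=None):
    """Illustrative one-step estimate of a lake surface elevation:
    average the grid values on the lake boundary, then enforce
    consistency with connected upstream/downstream lake levels."""
    level = sum(boundary_z) / len(boundary_z)   # initial estimate
    if inflow_z is not None:
        level = min(level, inflow_z)    # cannot exceed the upstream lake
    if outflow_z is not None:
        level = max(level, outflow_z)   # cannot fall below the downstream lake
    return level
```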
Data mask polygon data
It is sometimes convenient to remove certain elevation data from the interpolation process without explicitly removing them from the elevation data files. This is particularly the case when there are many large data files. Data typically removed are those associated with features on the actual land surface that can interrupt accurate representation of shape and drainage structure of the true land surface. The underlying aim of ANUDEM is to represent the true ground surface. Unwanted data typically include dam walls and bridges over streams. They can also include ill-defined lake heights from remotely sensed elevation data sets, although in this case it may be preferable to remove the offending lake height data from the data files completely using standard GIS techniques. Data masks are enacted by digitising closed polygons around the unwanted features and submitting the polygons to ANUDEM as data mask polygons.
DEM quality assessment
The quality of a derived DEM can vary greatly depending on the data source and the interpolation technique. The desired quality depends on the application for which the DEM is to be used, but a DEM created for one application is often used for other purposes. Any DEM should therefore be created with care, using the best available data sources and processing techniques. Efficient detection of spurious features in DEMs can lead to improvements in DEM generation techniques, as well as detection of errors in source data as indicated above.
Since most applications of DEMs depend on representations of surface shape and drainage structure, absolute measures of elevation error do not provide a complete assessment of DEM quality (Hutchinson and Gallant 2000). A number of graphical techniques for assessing data quality have been developed. These are non-classical measures of data quality that offer means of confirmatory data analysis without the use of an accurate reference DEM. Assessment of DEMs in terms of their representation of surface aspect has been examined by Wise (1998).
Spurious sinks or local depressions in DEMs are frequently encountered and are a significant source of problems in hydrological applications. Sinks may be caused by incorrect or insufficient data, or by an interpolation technique that does not enforce surface drainage. They are easily detected by comparing elevations with surrounding neighbours. Hutchinson and Dowling (1991) noted the sensitivity of this method in detecting elevation errors as small as 20 metres in source data used to interpolate a continent wide DEM with a horizontal resolution of 2.5 kilometres. More subtle drainage artefacts in a DEM can be detected by performing a full drainage analysis to derive catchment boundaries and streamline networks, using the technique of Jenson and Domingue (1988).
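The neighbour comparison used to detect sinks is straightforward to sketch: a cell lower than all eight of its neighbours is flagged. This simple interior-cell test is illustrative; practical DEM tools also handle grid edges, nodata values and flat areas.

```python
def find_sinks(grid):
    """Flag interior grid cells that are lower than all eight
    neighbours -- the simple neighbour comparison used to detect
    spurious sinks in a DEM. `grid` is a list of rows of elevations."""
    rows, cols = len(grid), len(grid[0])
    sinks = []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            z = grid[i][j]
            nbrs = [grid[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)]
            if all(z < n for n in nbrs):
                sinks.append((i, j))
    return sinks
```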
Computing shaded relief allows a rapid visual inspection of the DEM for local anomalies that show up as bright or dark spots. It can indicate both random and systematic errors. It can also identify problems with insufficient vertical resolution, since low relief areas will show as highly visible steps between flat areas. It can also detect edge matching problems (Hunter and Goodchild 1995). Shaded relief is a graphical way of checking the representation of slopes and aspects in the DEM. These can also be checked by standard statistical analysis if there is an accurate reference DEM or accurately surveyed ground data (e.g. Sasowsky et al. 1992, Bolstad and Stowe 1994, Giles and Franklin 1996).
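Shaded relief can be computed with the standard slope/aspect illumination formula; the sketch below is a generic implementation (central differences, default sun at azimuth 315 and altitude 45 degrees), not a description of any particular package.

```python
import math

def hillshade(grid, cell, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded relief for interior cells of a DEM using the standard
    illumination formula: slope and aspect from central differences,
    then the cosine of the angle to the sun direction."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # to math convention
    alt = math.radians(altitude_deg)
    rows, cols = len(grid), len(grid[0])
    shade = [[0.0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            dzdx = (grid[i][j + 1] - grid[i][j - 1]) / (2 * cell)
            dzdy = (grid[i + 1][j] - grid[i - 1][j]) / (2 * cell)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            s = (math.sin(alt) * math.cos(slope)
                 + math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            shade[i][j] = max(0.0, s)   # clip shadowed cells to zero
    return shade
```

Anomalies such as isolated bright or dark cells in the resulting image point directly at the local elevation errors described above.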
Contours derived from a DEM provide a sensitive check on terrain structure since their position, aspect and curvature depend directly on the elevation, aspect and plan curvature respectively of the DEM. Derived contours are a particularly useful diagnostic tool because of their sensitivity to elevation errors in source data. Subtle errors in labelling source data contours digitised from topographic maps are common, particularly for small contour isolations that may have no label in the printed map.
Other deficiencies in the quality of a DEM can be detected by examining frequency histograms of elevation and aspect. DEMs derived from contour data usually show an increased frequency of contour elevations in the elevation histogram. The severity of this bias depends on the interpolation algorithm. Work is in progress to reduce this bias in DEMs created by ANUDEM. The frequency histogram of aspect can be biased towards multiples of 45 and 90 degrees by interpolation algorithms that restrict searching to a few specific directions between pairs of data points.
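One quick check for the directional aspect bias just described can be sketched as follows; the function name and the 5-degree bin width are arbitrary illustrative choices.

```python
from collections import Counter

def aspect_histogram(aspects_deg, bin_width=5):
    """Bin aspect values (in degrees) and compare counts in bins at
    multiples of 45 degrees against all other bins -- a quick screen
    for interpolation-induced directional bias."""
    bins = Counter(int(a // bin_width) * bin_width for a in aspects_deg)
    cardinal = sum(c for b, c in bins.items() if b % 45 == 0)
    other = sum(c for b, c in bins.items() if b % 45 != 0)
    return bins, cardinal, other
```

A strongly disproportionate `cardinal` count relative to `other` (after allowing for the number of bins in each group) would suggest the 45/90-degree bias noted above.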
- ANU Fenner School of Environment and Society and Geoscience Australia, 2008. GEODATA 9 Second DEM and D8 Digital Elevation Model and Flow Direction Grid, User Guide (PDF, 1 MB). Geoscience Australia, 43 pp.
- Bolstad, P.V. and Stowe, T. 1994. An evaluation of DEM accuracy: elevation, slope and aspect. Photogrammetric Engineering and Remote Sensing 60: 1327-1332.
- Garbrecht, J. and Starks, P. 1995. Note on the use of USGS level 1 7.5-minute DEM coverages for landscape drainage analyses. Photogrammetric Engineering and Remote Sensing 61: 519-522.
- Giles, P.T. and Franklin, S.E. 1996. Comparison of derivative topographic surfaces of a DEM generated from stereographic SPOT images with field measurements. Photogrammetric Engineering and Remote Sensing 62: 1165-1171.
- Hunter, G.J. and Goodchild, M.F. 1995. Dealing with error in spatial databases: a simple case study. Photogrammetric Engineering and Remote Sensing 61: 529-537.
- Hutchinson, M.F. 1988. Calculation of hydrologically sound digital elevation models. Proceedings of the Third International Symposium on Spatial Data Handling, August 17-19, Sydney. International Geographical Union, Columbus, Ohio, pp 117-133.
- Hutchinson, M.F. 1989. A new method for gridding elevation and streamline data with automatic removal of pits. Journal of Hydrology 106: 211-232.
- Hutchinson, M. F. 1996. A locally adaptive approach to the interpolation of digital elevation models. Third International Conference/Workshop on Integrating GIS and Environmental Modeling. NCGIA, University of California, Santa Barbara.
- Hutchinson, M.F. 2000. Optimising the degree of data smoothing for locally adaptive finite element bivariate smoothing splines. Australian & New Zealand Industrial and Applied Mathematics Journal 42(E): C774-C796.
- Hutchinson, M.F. 2008. Adding the Z-dimension. In: Wilson, J.P. and Fotheringham, A.S. (eds), Handbook of Geographic Information Science. Blackwell, pp 144-168.
- Hutchinson, M. F. and Dowling, T. I. 1991. A continental hydrological assessment of a new grid-based digital elevation model of Australia. Hydrological Processes 5: 45-58.
- Hutchinson, M. F. and Gallant, J. C. 1999. Representation of terrain. In: Geographical Information Systems: Principles, Technical Issues, Management Issues and Applications. Second Edition. Edited by Longley, P.A., Goodchild, M.F., Maguire, D.J. and Rhind, D.W. Wiley, New York, Chapter 9, pp 105-124.
- Hutchinson, M.F. and Gallant, J.C. 2000. Digital elevation models and representation of terrain shape. In: Wilson,J.P. and Gallant,J.C. (eds), Terrain Analysis: Principles and Applications, Wiley, New York, Chapter 2, pp 29-50.
- Hutchinson, M.F., Stein, J.A., Stein, J.L. and Xu, T. 2009. Locally adaptive gridding of noisy high resolution topographic data. In Anderssen, R.S., R.D. Braddock and L.T.H. Newham (eds) 18th World IMACS Congress and MODSIM09 International Congress on Modelling and Simulation. Modelling and Simulation Society of Australia and New Zealand and International Association for Mathematics and Computers in Simulation, July 2009, pp. 2493-2499. ISBN: 978-0-9758400-7-8.
- Hutchinson, M.F., Xu, T. and Stein, J.A. 2011. Recent Progress in the ANUDEM Elevation Gridding Procedure. In: Geomorphometry 2011, edited by T. Hengl, I.S. Evans, J.P. Wilson and M. Gould, pp. 19-22. Redlands, California, USA.
- Jenson, S. K. and Domingue, J. O. 1988. Extracting topographic structure from digital elevation data for geographic information system analysis. Photogrammetric Engineering and Remote Sensing 54: 1593-1600.
- Sasowsky, K. C., Petersen, G. W. and Evans, B. M. 1992. Accuracy of SPOT digital elevation model and derivatives: utility for Alaska's north slope. Photogrammetric Engineering and Remote Sensing 58: 815-824.
- Stein, J. L. 2006. A continental landscape framework for systematic conservation planning for Australian rivers and streams. PhD Thesis, Centre for Resource and Environmental Studies. Australian National University.
- Wise, S.M. 1998. The effect of GIS interpolation errors on the use of digital elevation models in geomorphology. In: Lowe, S.N., Richards, K.S. and Chandler, J.H. (eds), Landform Monitoring, Modelling and Analysis. Wiley, New York, pp 139-164.