We present a monitoring technique tailored to analysing change from near-continuously collected, high-resolution 3-D data. Our aim is to fully characterise geomorphological change typified by an event magnitude–frequency relationship that adheres to an inverse power law or similar. While recent advances in monitoring have enabled changes in volume across more than 7 orders of magnitude to be captured, event frequency is commonly assumed to be interchangeable with the time-averaged event numbers between successive surveys. Where events coincide, or coalesce, or where the mechanisms driving change are not spatially independent, apparent event frequency must be partially determined by survey interval.
The data reported have been obtained from a permanently installed terrestrial
laser scanner, which permits an increased frequency of surveys. Surveying
from a single position raises challenges, given the single viewpoint onto a
complex surface and the need for computational efficiency associated with
handling a large time series of 3-D data. A workflow is presented that
optimises the detection of change by filtering and aligning scans to improve
repeatability. An adaptation of the M3C2 algorithm is used to detect 3-D
change to overcome data inconsistencies between scans. Individual rockfall
geometries are then extracted and the associated volumetric errors modelled.
The utility of this approach is demonstrated using a dataset of
The processes that erode landscapes involve a broad range of event sizes, the distribution of which is commonly characterised using magnitude–frequency curves. Wolman and Miller (1960) proposed that the frequency of events that denude the Earth's surface is log-normally distributed, and that their geomorphic effectiveness (the product of magnitude and frequency) is greatest for the frequent, moderately sized events. This concept has been widely applied to study both the geomorphic efficacy of rivers (Wolman and Gerson, 1978; Hooke, 1980; Nash, 1994; Gintz et al., 1996) and the characteristics of landslides (Hovius et al., 1997, 2000; Dussauge-Peisser et al., 2002; Turcotte et al., 2002; Dussauge et al., 2003; Malamud et al., 2004; Guthrie and Evans, 2007; Li et al., 2016) using inverse power-law distributions or similar.
The exponent of the inverse power law describes the proportional contribution of increasingly small events, with larger exponents representing an increase in the proportion of small events in the inventory. However, many landslide volume distributions have been characterised by a decrease in the frequency density of the smallest events in log magnitude–log frequency space, known as a “rollover” (Malamud et al., 2004). At this point, the inverse power law breaks down, and so alternative distributions such as the double Pareto (Stark and Hovius, 2001; Guzzetti et al., 2002) or inverse gamma (Malamud et al., 2004; Guzzetti et al., 2005) have been drawn upon to model observations. Explanations for this rollover have been widely considered and include mechanical differences and physically based minimum possible event sizes (Pelletier et al., 1997; Guzzetti et al., 2002; Guthrie and Evans, 2004) or censoring of the smallest events by the resolution or frequency of monitoring (Lim et al., 2010). For rockfalls, Malamud et al. (2004) hypothesised that a rollover may not occur due to rock mass fragmentation.
The duration of monitoring relative to the return period of all possible
event sizes determines the likelihood of detecting changes that are
representative of how a landform evolves over longer timescales. The
creation of a representative inventory is also a function of the smallest
event size that can be detected and the temporal frequency of monitoring
compared to the rate at which such small events occur. Abellán et al. (2014) suggested that the spatial resolution of rockfall monitoring should
be sufficient to discretise the smallest events in a magnitude–frequency
distribution and that the interval between surveys should be shorter than the
timescale on which superimposition and coalescence may occur. In practice,
defining this timescale a priori is challenging and requires the ability to monitor
the rock face over a sustained period in (near) real time. For rockfalls,
high-resolution monitoring also shows evolution of failures through time,
with event sequences and patterns related to the incremental growth of scars
(Rosser et al., 2007, 2013; Stock et al., 2012; Kromer et al., 2015a; Rohmer
and Dewez, 2015; Royán et al., 2015). Barlow et al. (2012) showed that a
monitoring interval of 19 months underestimated the frequency distribution
of small rockfall events, which coalesced into or were superimposed by
larger rockfalls. Treating rockfalls as spatially and temporally independent
is therefore problematic, as is experienced in other types of landform
change. For example, Milan et al. (2007) found an increase in erosion and
deposition volumes within a proglacial river channel when monitored using
daily terrestrial laser scan (TLS) surveys as opposed to surveys separated
by 8 days. This was attributed to fluctuations in
discharge and sediment supply with return periods of less than 8 days.
The influence of monitoring or sampling interval on measured process rates
is more widely considered in the Sadler effect, in which sediment
accumulation rates observed in stratigraphic sections exhibit a negative
power-law dependence on measurement interval (Sadler, 1981; Wilkinson,
2015). Importantly, therefore, in settings that change little but often,
estimates of true magnitude and frequency made without higher-frequency
monitoring are subject to an unknown degree of event superimposition and
coalescence, as well as temporal coincidence. Our first aim, therefore, is
to capture the influence of near-continuous monitoring on the
magnitude–frequency distribution of rockfalls from an actively failing rock
face, across which spatially contiguous rockfalls have previously been
observed (Rosser et al., 2005). This draws upon a database of
The improvement in temporal resolution gained by monitoring from
(semi-)permanent installations is weighed against a series of compromises in
the quality of data generated. These ultimately arise from scanning a
complex surface from only a single position, which has not previously
required consideration due to the scarcity of near-continuous lidar
monitoring in the geosciences. Scanning from a single position has a direct
impact on the similarity of point distributions between successive surveys,
which is critical in determining the accuracy of subsequent change
detections. Scanning from a single position results in occlusion, leaving
“holes” in the final point cloud in areas invisible to the scanner. At
topographic edges, range measurements may be averaged from multiple returns
recorded from separate surfaces, which are intersected within a single
footprint. Since laser scanners never measure exactly the same point twice
(Hodge et al., 2009), the perimeter of these holes and the position of
topographic edges will move between successive point clouds, even when the
instrument itself has not moved. The likelihood of generating similar point
distributions between surveys is also influenced by surface reflectance
characteristics, such as moisture and colour, and by surface relief (Clark
and Robson, 2004; Bae et al., 2005; Lichti et al., 2007; Kaasalainen et al.,
2008, 2010; Pesci et al., 2008, 2011; Soudarissanane et al., 2011).
Scan lines in most laser scanners result in non-uniformly distributed data,
with heterogeneity often on a scale and orientation comparable to surface
structure, leading to aliasing that is also inconsistent (Lichti and
Jamtsho, 2006). The influence of these combined effects is exaggerated if
the scanner view is oblique to the surface, which may be common due to
logistical constraints when siting a semi-permanent instrument. Our second
aim, therefore, is to minimise the errors that arise from near-continuous
monitoring in order to reduce the minimum detectable movement. Kromer et al. (2015b) presented a 4-D smoothing technique to reduce the offset between
successive point clouds, such as those from near-continuous monitoring.
Similarly, the method presented here is optimised for handling large
(10
The minimum detectable movement, or level of detection (LoD), is a fundamental parameter in the delineation and calculation of erosion volumes, here rockfalls. This involves masking regions of change that exceed a hard threshold at the LoD, which is estimated either locally (e.g. Wheaton et al., 2010; Lague et al., 2013) or across the entire point cloud (e.g. Rosser et al., 2005; Abellán et al., 2009). Methods that estimate spatially variable LoDs have enhanced the ability to identify volumetric loss as compared to the application of a single LoD, with the latter set to exceed a significant portion of the modelled uncertainty across the area of interest (AOI). The spatially variable uncertainties described in Sect. 1.3 raise the potential for real change to be masked when using a single LoD but, equally, the benefits of using a single LoD are primarily in the consistency in measurement across the AOI. For example, if the purpose of monitoring is to generate a rockfall inventory in which the relative magnitude of events is important, a single LoD ensures consistency in the minimum detectable rockfall across the AOI and minimises the potential for recording erroneous events. The application of a single LoD also becomes increasingly computationally efficient when dealing with a large number of surveys. While the LoD remains constant, however, the accuracy of volume estimates is also contingent upon the ability to accurately identify depth change within boundary cells that occupy the rockfall perimeter. Previous approaches have ignored cells with a depth change below the instrument precision, thereby assuming that erosion events with an areal extent below the cell size cannot be detected (e.g. Dussauge et al., 2003; Rosser et al., 2005; Abellán et al., 2006). However, these often fail to model volumetric errors that arise from extrapolating measured depth changes across each cell at the rockfall perimeter. In accounting for this, the volumetric uncertainty increases as a proportion of estimated volume for smaller events and specific geometries. Our third and final aim, therefore, is to describe the impact that a changing magnitude–frequency distribution has on the overall uncertainty of eroded volume estimates through time.
Data are presented from a monitored coastal cliff in North Yorkshire, UK.
Located at East Cliff, Whitby, the rock cliff is near vertical, reaching
Map of Whitby with the area scanned delineated with red lines. A Riegl VZ-1000 scanner is installed within East Pier lighthouse. The targets installed for the SiteMonitor4D range correction factor estimation are illustrated (T1–T6) in addition to the weather stations. Whitby Abbey lies 180 m from the cliff top. Map produced using shape files from Ordnance Survey© Crown Copyright and Database Right 2016. Ordnance Survey (Digimap Licence).
The viewpoint of the scanner results in some loss of spatial continuity in
surface measurement due to occlusion, as a result of surface relief and the
high incidence angle of parts of the rock face relative to the scanner (Fig. 2b). The closest point on the cliff is 342 m from the scanner with an
incidence angle onto the strike of the cliff of
Repeatability in change detection is dependent on point clouds that consistently describe the monitored surface. While the specification of successive scans is identical here, and despite no positional change in either the instrument or the surface, individual point clouds differ due to the inherent uncertainties described in Sect. 1.3. Minimising the impact of these differences on the resulting rockfall inventories is the primary objective of the preprocessing phases of our method, involving point cloud filtering and alignment, as well as the change detection phase. Once 3-D change has been calculated, the point clouds of change are interpolated and classified into 2.5-D datasets from which 3-D change geometry can be analysed (Fig. 3). While this paper does not seek to create a real-time system, the approach described is optimised for computational efficiency to allow data to be processed at least as fast as they are collected.
Optimising a point cloud for change detection involves filtering points on features that cannot be consistently measured, such as edges and vegetation. For large numbers of scans obtained from near-continuous monitoring, an automated approach is required that finds and removes such points consistently for each cloud. Below we describe the application of filtering prior to change detection. Given that the change detection process draws on a neighbourhood of points to describe the surface morphology, filtering points in advance removes the need for change detection of erroneous measurements and ensures that such measurements do not impact the change detection of surrounding neighbours.
Flow diagram representing the stages of the optimised near-continuous
monitoring change detection method. All stages following ASCII to MAT
conversion were written in MATLAB®, with ICP alignment and
rockfall vectorisation using the built-in functions
Scan data collected outside of the AOI can provide useful
information for scan-to-scan registration, particularly if they cover a
wider geographical area, have a distribution that is less planar than the
AOI itself, or if a large portion of the AOI is undergoing deformation.
Here, given that the control target network provides range estimate control
and the scanner remains unmoved, the spatial extent of the AOI is clipped at
the beginning of the workflow in order to increase the speed of subsequent
processing steps. For a small number of scans (< 20), AOI extraction
can be undertaken manually. However, for the
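For archives of this size, the clip must be scripted. A minimal sketch, assuming each scan is held as an N-by-3 array xyz in the scanner's local coordinate system and the AOI is approximated by a polygon in the local x–y plane (the vertex vectors aoiX and aoiY are illustrative):

    % Clip a scan to the area of interest (AOI) using a 2-D polygon test.
    % xyz: N-by-3 point cloud; aoiX, aoiY: polygon vertices (illustrative).
    inAOI  = inpolygon(xyz(:,1), xyz(:,2), aoiX, aoiY);
    xyzAOI = xyz(inAOI, :);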
Once the AOI has been extracted, morphological (this section) and
radiometric (Sect. 3.1.3) filters are applied to remove points with high
positional uncertainties. These specific filters have not previously been
used to our knowledge; however, the reasoning behind their implementation is
well documented as described in Sect. 1.3. We refer to edges as
morphological breaks in gradient on the slope surface and holes as features
that surround regions of occlusion in the point cloud, which frequently
occur at edges due to high surface inclination. To detect edges,
neighbouring points within a fixed radius of each (query) point,
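The remainder of this construction is not reproduced in full here; as an illustration, one plausible form of the edge–hole (EH) metric — assuming it is taken as the offset between each query point and the centroid of its fixed-radius neighbourhood, normalised by the search radius — is sketched below. The radius and percentile cut are illustrative values.

    % Edge-hole (EH) filter sketch: points whose neighbourhood centroid
    % is strongly offset from the point itself lie on edges or around
    % hole perimeters. Assumes the Statistics and Machine Learning
    % Toolbox for rangesearch and prctile.
    r   = 1.0;                               % search radius (m)
    idx = rangesearch(xyz, xyz, r);          % neighbours of every point
    EH  = zeros(size(xyz,1), 1);
    for i = 1:size(xyz,1)
        nbrs  = xyz(idx{i}, :);              % includes the point itself
        EH(i) = norm(mean(nbrs,1) - xyz(i,:)) / r;
    end
    keep = EH < prctile(EH, 95);             % retain low-EH points
    xyzFiltered = xyz(keep, :);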
A limitation of many laser scanners is the inability to quantify the accuracy of each range measurement. With the common absence of more accurate data, assessing reliability is therefore challenging. In most TLS systems used for rock slope monitoring, range is estimated using the time of flight of a laser, where time is stamped based upon a characteristic of the measured reflection (e.g. intensity gate, maximum intensity amplitude) that varies between scanners. Some systems have the ability to capture the full energy–time distribution of the reflection. The energy of the received laser pulse structure depends on the spatial and temporal energy distribution of the emitted pulse, which are modified by the geometric and reflectance properties of the target surface (Stilla and Jutzi, 2008; Soudarissanane et al., 2011; Hartzell et al., 2015; Telling et al., 2017). This provides a means of estimating the relative quality of recorded measurements as a function of the number of separate reflections from a single pulse, the incidence angle of the laser beam with respect to the surface (the elongation through time of the reflected pulse relative to the emitted pulse), or the reflectance intensity (the integral of the reflection energy–time distribution). Here, the characteristics of the returned signal are used to remove vegetation (multiple returns per pulse) and edges (elongated reflections) in order to increase the consistency between successive point clouds. While the sensitivity of the waveform to target geometry has previously been highlighted (Williams et al., 2013), it has not previously been documented as a method to filter points acquired from terrestrial lidar.
The Riegl VZ-1000 TLS, with “full waveform” capture, records the intensity
of each returned signal at 2.01
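Where the exported point records include a per-point pulse-shape (deviation) attribute, as Riegl instruments provide, the filter reduces to a simple threshold. A minimal sketch; the file name, column layout, and threshold value are illustrative assumptions only:

    % Waveform-based filter sketch: discard points whose reflected pulse
    % is strongly elongated relative to the emitted pulse (edges) or that
    % produced multiple returns per pulse (vegetation), where these
    % attributes are exported per point.
    data      = dlmread('scan_ascii.txt');   % illustrative file name
    xyz       = data(:, 1:3);
    deviation = data(:, 5);                  % illustrative column
    nReturns  = data(:, 6);                  % illustrative column
    keep      = deviation <= 10 & nReturns == 1;   % illustrative cut
    xyzKept   = xyz(keep, :);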
In addition to the filtering of individual points considered unreliable, entire surveys required removal from the overall inventory due to inclement weather conditions. In conventional monitoring with no scan schedule automation, scans that are partially or fully obscured due to rain or fog are manually removed and/or repeated. Due to the frequency and duration of near-continuous scanning, some scans will almost certainly be partially or fully obscured. Scans that are entirely obscured can be removed automatically with relative ease. Unobscured areas in partial scans still allow some accurate change detection and rockfall identification, and so may be valuable to retain. Given that change is detected between scan pairs, however, it is critical that these partial scans are removed prior to change detection and remain unused. Figure 4 describes a scenario in which a rockfall occurs between 12:00 and 12:30 (all times are GMT) during adverse weather, which partially obscures the subsequent scan at 12:30. While some areas of this scan allow accurate change detection of the surface, if the rockfall occurs in an obscured area, it may be missed from the inventory entirely. However, if surfaces are compared between 12:00 and the following scan at 13:00, with both captured during fair conditions, the rockfall will be observed and included in the inventory.
Conceptual illustration of the significance of removing partial scans. While parts of these scans provide accurate estimates of surface change, if a rockfall occurs in an area of no data, the failure will be missed using pairwise change. These scans should therefore be removed prior to change detection of the scan database.
Given the variability in the persistence of poor weather during a scan, a threshold based on the number of points can be difficult to define. At present, no automated method for detecting partial scans has been developed, though the removal of any scan that coincides with measured rainfall may represent a first step. Here, given that the same scan line patterns are applied during each survey, partially obscured point clouds were identified as those > 1 MB below the average file size. While the maximum possible number of change detections was 8986, this was reduced to 8596 because of complete obscuration and finally to 8270 because of partial scan removal. The point distribution of the remaining scans was manually examined by creating a video of every point cloud prior to reanalysis of the dataset. The reduction in the number of scans has a direct impact on the time interval between scans, and hence on deformation analysis prior to failures that occur during bad weather, which may coincide with some of the most active periods of rockfall activity.
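A minimal sketch of the file-size screen, assuming one ASCII file per scan in a single directory (the path is illustrative):

    % Flag partially obscured scans: with an identical scan pattern each
    % survey, files more than 1 MB below the archive mean indicate
    % missing returns and are excluded from change detection.
    files = dir('scans/*.txt');              % illustrative location
    sizes = [files.bytes];
    keep  = sizes >= mean(sizes) - 1e6;      % within 1 MB of the mean
    scans = files(keep);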
While range correction factors automatically scale range estimates in response to atmospheric variation, point clouds collected from a single fixed position still require alignment due to small shifts in scanner inclination that have the potential to propagate to several centimetres over distances of several hundred metres. It is therefore assumed that successive point clouds are approximately but not perfectly aligned, and hence require adjustment. In this section we describe the protocol for scan alignment applied to the near-continuous dataset. As discussed by Abellán et al. (2014), aligning point clouds can be undertaken using common surveyed and modelled targets combined with measured global coordinates (Teza et al., 2007; Olsen et al., 2009), feature-based registration based on the planarity and curvature of surfaces (e.g. Besl and Jain, 1988; Belton and Lichti, 2006; Rabbani et al., 2006), and point-to-point and point-to-surface methods, which use iterative closest point (ICP) alignment to progressively reduce the distance between two clouds (Besl and McKay, 1992; Chen and Medioni, 1992; Zhang, 1994). The accuracy of alignment is one of the key sources of error when detecting change between two point clouds (Teza et al., 2007).
Here, once filtered, point clouds are automatically registered to a
reference point cloud to improve alignment. Registration of datasets into a
global system has a significant impact on data file sizes due to multi-digit
coordinates, which becomes problematic with large numbers of large point
clouds from near-continuous monitoring. By retaining a local coordinate
system in this study, resulting file sizes were halved. ICP registration was
applied using MATLAB®'s
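The name of the built-in function is elided above; in recent MATLAB releases the equivalent Computer Vision Toolbox call is pcregistericp. A sketch under that assumption, with the down-sampling grid step illustrative:

    % Align a filtered scan to the reference cloud with point-to-plane
    % ICP, estimating the transform on down-sampled copies and applying
    % it to the full-resolution scan.
    fixed    = pointCloud(refXYZ);
    moving   = pointCloud(scanXYZ);
    fixedDS  = pcdownsample(fixed,  'gridAverage', 0.1);   % 0.1 m grid
    movingDS = pcdownsample(moving, 'gridAverage', 0.1);
    tform    = pcregistericp(movingDS, fixedDS, 'Metric', 'pointToPlane');
    aligned  = pctransform(moving, tform);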
There is no established protocol for choosing the ideal reference scan for
alignment. This reference scan may be the first available scan, a later
single scan, or an average of a subset of previous scans. Schürch et al. (2011) aligned a series of three scans to the previous scan in a sequence,
rather than to the first of the monitoring campaign, in order to gain more
precise change estimates between successive surveys, at the cost of the
overall positional accuracy. This procedure is advantageous as it ensures
that the shape of a rapidly deforming or changing surface can be matched to
the previous survey, rather than one captured considerably earlier. For the
near-continuous monitoring dataset, however, the series of scans is aligned
to the first survey. This is undertaken because ICP alignment minimises the
point-to-plane distance of down-sampled point clouds, such that individual
(small relative to the AOI) rockfall events do not impact upon the overall
success of the alignment. Second, even with low alignment errors between
scan pairs, the potential for the point clouds to drift over time increases
with the number of scans sequentially aligned. While Schürch et al. (2011) assessed pairwise change between scans, high-frequency data allow
change detection to be conducted over any of the multiple intervals that the
time series enables, so aligning all scans with respect to each other is
important. Third, given that all scans were aligned to the initial reference
scan, these could be assigned to a single hierarchical structure created
only for the reference scan. The partitioning of 3-D data, here into an
octree structure, enables faster searching of point clouds with each point
assigned a 3
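As a stand-in for the octree indexing described here, a single-level voxel assignment illustrates the principle; a full octree recurses this subdivision to a fixed depth (the leaf size is an illustrative assumption):

    % Assign every point a cell index so that neighbour searches only
    % visit the query cell and the 26 cells adjacent to it.
    leaf = 1.0;                                  % cell edge length (m)
    ijk  = floor(xyz ./ leaf);                   % integer cell coordinates
    [cells, ~, cellID] = unique(ijk, 'rows');    % cellID(i): cell of point i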
The distance between successive clouds is measured along the normal vector of each point in the cloud. Accurate estimation of each normal vector is therefore critical in determining the magnitude and direction of change and should be derived from an appropriately sized neighbourhood of points that adds topological context (Riquelme et al., 2014). In this section, we describe the widely adopted method for normal estimation and gains in efficiency that have been made in this study, including the definition of a single reference map of optimal neighbourhood radii.
In order to calculate the normal direction of each neighbourhood, a tangent
plane must be fitted to every point and its neighbours, with each
neighbourhood considered as a potential plane subset. Using eigenvectors
calculated from principal component analysis, the eigenvector associated
with the smallest eigenvalue is taken as the surface normal.
The sign ambiguity of each vector is also corrected (Mitra and Nguyen, 2003; Ioannou et al., 2012). This can typically be resolved using the position of the query point, q, relative to the sensor position, s, by
In order to minimise the computation time required to apply Eqs. (5) and (6),
the axis orthogonal to the surface is introduced as a string of either “
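A minimal sketch of the estimation and sign correction for a single query point q (a 1-by-3 row vector), assuming its neighbourhood nbrs has already been gathered and s is the scanner position:

    % Fit a tangent plane by principal component analysis: the
    % eigenvector of the neighbourhood covariance with the smallest
    % eigenvalue is taken as the surface normal.
    C      = cov(nbrs);                  % 3x3 covariance of neighbours
    [V, D] = eig(C);
    [~, k] = min(diag(D));               % smallest eigenvalue
    n      = V(:, k);                    % unit surface normal (column)
    % Resolve the sign ambiguity: flip n if it points away from the
    % sensor at position s.
    if dot(n, (s - q)') < 0
        n = -n;
    end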
The neighbourhood size strongly determines the direction of surface normals
(Mitra and Nguyen, 2003; Lalonde et al., 2005; Bae et al., 2009; Lague et
al., 2013; Riquelme et al., 2014). If the size of the neighbourhood is below
the scale of surface roughness, the resulting normals will fluctuate in
direction and are less likely to be consistent between successive point
clouds. Here, by varying the size of the neighbourhood for each point
between 0.1 and 2.5 m, the radius that produced the most planar surface is
identified, which is ideally suited to normal estimation (Lague et al.,
2013). An example of this is shown in Fig. 5a, with Fig. 5b illustrating
surface planarity across East Cliff. This shows a clear similarity to the
distribution of point density, such that the search radius is increased in
regions of low point density. Importantly, identifying the optimum
neighbourhood radius for 10
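A sketch of this radius search for one query point q, assuming planarity is scored by the share of neighbourhood variance lying off the fitted plane (smaller is more planar) — one of several possible criteria:

    % Select the neighbourhood radius that yields the most planar
    % surface, subsequently used for normal estimation at this point.
    radii    = 0.1:0.1:2.5;              % candidate radii (m)
    bestFlat = inf;  bestR = radii(1);
    for r = radii
        d2   = sum((xyz - q).^2, 2);     % squared distances to q
        nbrs = xyz(d2 <= r^2, :);
        if size(nbrs,1) < 4, continue, end   % too few points to fit
        lam  = sort(eig(cov(nbrs)));     % ascending eigenvalues
        flat = lam(1) / sum(lam);        % off-plane variance share
        if flat < bestFlat
            bestFlat = flat;  bestR = r;
        end
    end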
Search radii used for normal estimation across the rock face.
Conceptual variation in the distance along the normal for a
rockfall.
The distance calculation used is based upon the structure of the M3C2 algorithm, developed by Lague et al. (2013). The algorithm is described first followed by a modification, which has been incorporated to improve the overall accuracy of change detection and to streamline the workflow when applied to large time series scan datasets.
Once the normal vector is estimated, a bounding cylinder with a user-defined radius is created along the normal running through the query point. In order to enforce the boundaries of this cylinder, the orthogonal distance between every point within the current and neighbouring 26 octree cubes and the normal vector is estimated. For a neighbouring point p, query point q, and unit normal n, the along-normal component is d = (p − q) · n,
and the orthogonal distance is r = |(p − q) − d n|.
Approach to change detection used in this study based on
empirical rockfall data.
Given that the position of each neighbouring point and its orthogonal
distance to the normal vector are known, the cylinder boundaries can be
enforced using the user-defined cylinder radius,
If the vector of change is along the direction of the normal vector (forward movement), the dot product of both vectors is > 0. If the vector of change is counter to the normal direction (backward movement), the dot product is < 0 and the vector is inverted.
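Drawing the above together, a minimal sketch of the distance calculation for one query point q with unit column normal n, comparing reference and subsequent clouds xyz1 and xyz2 once the relevant octree cells have been gathered; the cylinder dimensions are illustrative:

    % Signed change along the normal: project the points of each cloud
    % onto n, retain those inside the bounding cylinder centred on q,
    % and difference the mean along-normal positions of the two sets.
    rCyl = 0.5;  halfLen = 0.25;                    % illustrative (m)
    d1   = (xyz1 - q) * n;                          % along-normal components
    o1   = sqrt(sum(((xyz1 - q) - d1 * n').^2, 2)); % orthogonal distances
    in1  = abs(d1) <= halfLen & o1 <= rCyl;
    d2   = (xyz2 - q) * n;
    o2   = sqrt(sum(((xyz2 - q) - d2 * n').^2, 2));
    in2  = abs(d2) <= halfLen & o2 <= rCyl;
    change = mean(d2(in2)) - mean(d1(in1));         % > 0 along n, < 0 against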
M3C2 imposes a user-defined maximum cylinder length to decrease processing
times. Cylinder length is critically important for determining the accuracy
of change estimation, particularly at topographic edges within the point
cloud. As described, edges are likely to be more prevalent in point clouds
collected from single or off-nadir viewpoints. A method to reduce the effect
of edge change uncertainty in change detection is therefore required. In
Fig. 8, the influence of the choice of cylinder length is illustrated with
respect to a jointed rock mass surface. The plots illustrate variation in
measured change for a single point. When the cylinder extends 0.25 m in both
directions, only points from this surface are included in the cylinder; as
such, the centroid positions of each point cloud are both fitted onto that
surface. The measured distance for this point, the distance between the two
centroids, is
The use of an increased number of scans over a consistent timescale increases the potential for error propagation and accumulation within the dataset. Here we aim to model the uncertainty of eroded volume estimates by focussing on the conversion of rockfall depth and area into volume for cells at the perimeter of a rockfall scar. Given that the proportion of rockfall cells at the edge of a scar decreases for a larger footprint, the uncertainty in rockfall volume estimates has the potential to vary with the temporal resolution of monitoring.
An LoD was identified between scan pairs in which no rockfalls occurred as 2 standard deviations of the 3-D change, after Abellán et al. (2009). This was of comparable magnitude to the LoD recorded for every scan pair in the dataset; hence, the maximum recorded LoD was applied to all scan pairs in the dataset. Similar to Kromer et al. (2017), these change estimates are assumed to include the registration error, which is reduced here through range correction using finely scanned targets and through ICP. For sites whose geometry creates a highly variable point spacing within a single survey, a spatially variable LoD would be more appropriate even for the purpose of compiling an inventory of geomorphic events, so long as a record of the LoD across the surface is kept. Open-pit highwalls, for example, typically comprise a series of benches to minimise the travel distance of rockfalls downslope. This design generates considerable variation in instrument–object distances across the slope and a spatially variable LoD. More broadly, spatially variable LoDs can be considered better suited to measuring total erosion budgets across a single surface than the relative contribution of individual events of varying sizes.
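A minimal sketch of the single-LoD estimation described at the start of this paragraph, assuming the per-point 3-D changes across a stable scan pair are collected in a vector change:

    % Level of detection (LoD) after Abellan et al. (2009): twice the
    % standard deviation of apparent change across a scan pair between
    % which no rockfalls occurred.
    LoD = 2 * std(change(~isnan(change)));
    % Only changes exceeding the LoD are carried into rockfall
    % delineation.
    significant = abs(change) >= LoD;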
The resulting LoD was used to threshold 2.5-D rasters of the 3-D change data,
created by linear interpolation of change values across the
Once a change image is thresholded according to the LoD, the volume of each
erosion event,
In reality, Eq. (15) represents the largest possible area because the
likelihood that border cells are entirely covered by the true change is
small. Conversely, the theoretical minimum area
This value can be applied as a threshold to the rockfall inventory, such
that failure areas below
The volumetric error is hence
Equation (22) shows that the number of border cells relative to the total number of cells within the area of change is critical in determining the net volumetric error. A higher ratio of border cells to the total number of cells results in a greater proportional area (and hence volume) error. While this volumetric error assessment is applied to rasters of 3-D-derived change, its use also extends to extraction of discrete events from DEMs of difference (DoDs).
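A sketch of one plausible implementation of these calculations on a thresholded 2.5-D change raster dz (depth of loss per cell in metres, NaN where below the LoD) with cell size l; the connected-component labelling assumes the Image Processing Toolbox, and the border-cell bound shown is an illustration rather than the exact form of Eq. (22):

    % Per-event volume and a border-cell error bound: border cells are
    % those whose footprint is not fully supported by neighbouring
    % change cells, so their contribution bounds the volumetric error.
    l    = 0.25;                                 % cell size (m)
    mask = ~isnan(dz);
    CC   = bwconncomp(mask, 8);                  % label discrete events
    for k = 1:CC.NumObjects
        event = false(size(mask));
        event(CC.PixelIdxList{k}) = true;
        inner = imerode(event, ones(3));         % fully supported cells
        V     = sum(dz(event)) * l^2;            % volume estimate
        dV    = sum(dz(event & ~inner)) * l^2;   % border-cell bound
        fprintf('event %d: V = %.4f +/- %.4f m^3\n', k, V, dV);
    end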
Inputs used for distance estimation with varying cylinder lengths.
No appreciable change occurred between these two scans. As the cylinder
length increases (from 0.25 to 0.50 to 10 m), the number of surfaces
that the cylinder intersects increases (direction equal to the normal
vector). All points within a 0.25 m radius would be included as cylinder
points (circles) and the distance between their mean positions (squares)
calculated. From top to bottom, this distance is 0.0011 to
Sensitivity analysis for the edge–hole filter.
To test the influence of the filtering phases on the subsequent change
detection, comparisons were undertaken between two aligned point clouds
following point removal based on various thresholds. The theoretical
distance between the two point clouds is assumed to be zero given that no
rockfalls were observed between their collection. Any offset therefore
represents uncertainty, which is quantified for this purpose as the standard
deviation of a 3-D change detection. This uncertainty is plotted against the
applied edge and hole threshold in Fig. 9a. As the threshold is lowered,
points with higher EH values are retained and the offset between the two point
clouds (pre-alignment) increases. The distribution of EH values across the
cloud is presented in Figs. 9b and S1 in the Supplement. Using a 1 m search radius, an
inflection at the 95th percentile of points occurs at 5
Sensitivity analysis for waveform deviation filter.
Points removed using the waveform deviation filter (Fig. S2) occupied
similar locations across the point cloud and, once removed, also resulted in
decreased uncertainty in change estimates. In Fig. 10a, the mean absolute
distance between points is used to represent the positional uncertainty
between point clouds. This uncertainty is
Following filtering and alignment, the DAN VCL method significantly lowered
the LoD (0.03 m) compared to that achieved using DoDs
(1.04 m). These were created using the same pairs of point clouds rasterised
at 0.25 m with each pixel containing > 1 point. In this instance,
the LoD also shows a 5-fold improvement relative to the M3C2 algorithm
applied using the same normal and fixed cylinder radius (LoD
In order to assess the influence of more frequent monitoring on the
resultant volume frequency distribution, two inventories were compared.
These were analysed over the same monitoring period, using scans separated
by different intervals
Distribution of rockfalls across East Cliff monitored at
sub-hourly intervals between 5 March and 30 December
2015. Rockfalls are distributed across the entire cliff face, in particular
in areas of exposed bedrock. Although the high water mark is below the
portion of cliff shown in this figure, the largest and most frequent
rockfalls occur at the base of the cliff. Accumulation and loss of material
in the areas of unexposed bedrock on the cliff buttresses, which runs
across the cliff face at
For all rockfalls observed at
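Although the fitted parameter values are not reproduced here, the form of the calculation can be illustrated. A sketch of a log-binned frequency-density estimate and least-squares power-law fit to an inventory of rockfall volumes V (m³); the bin count is an illustrative choice:

    % Magnitude-frequency sketch: frequency density f(V) from
    % logarithmic bins, then f(V) = a * V^(-b) fitted by least squares
    % in log-log space.
    edges  = logspace(log10(min(V)), log10(max(V)), 20);
    counts = histcounts(V, edges);
    mids   = sqrt(edges(1:end-1) .* edges(2:end));  % geometric centres
    fdens  = counts ./ diff(edges);                 % per unit volume
    ok     = fdens > 0;
    p      = polyfit(log10(mids(ok)), log10(fdens(ok)), 1);
    b      = -p(1);                                 % power-law exponent
    a      = 10^p(2);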
Improvements in near-continuous point cloud acquisition currently outstrip
the development of techniques for data treatment and analysis (Eitel et al.,
2016; Kromer et al., 2017). Near-continuous TLS has
the potential to generate a considerable number of point clouds
(> 10
Cumulative rockfall volumes measured through the monitoring period, using data from all 11 monitoring intervals. The results show that far higher volumes of material, up to twice those recorded by 30 days of monitoring, are measured at sub-daily intervals. The times of pairwise change detections are recorded as the date of the first scan, rather than the second. As a result, although all scan intervals record a significantly increased rate of rockfall activity during November, this appears earlier on the plot for longer scan intervals. The total estimated volumes are not included for comparison as change detections cannot be recorded up to the final day of monitoring for longer time intervals (30 December).
The relative gains of each processing step applied are evident in a gradual improvement to the applied LoD. A 30 % improvement with the application of radiometric and morphological filters occurs due to the removal of points considered to be both less accurate and less repeatable. This approach to the removal of unreliable points can also be used in alternative processing techniques, for example, as members of the fuzzy inference approach developed by Wheaton et al. (2010) to quantify spatially variable DoD uncertainty. The approach to change detection adopted in this study also yielded an improvement to the LoD, highlighting the importance of precise calculation of difference, here as a function of the cylinder length, for scenarios in which multiple surfaces may be intersected by the same normal vector in the resulting point cloud. Such surfaces can be characterised as increasingly three-dimensional on the scale of the applied cylinder length (e.g. Brodu and Lague, 2012) or as rough surfaces that are surveyed at oblique viewing angles (Hodge et al., 2009). This problem is exacerbated for near-continuous monitoring, given that scanning from a fixed position increases the width of occluded zones on surfaces inclined away from the scanner, yielding higher offsets between measured surfaces inclined towards the scanner. Critically, adaptive change detection techniques are necessary to account for the variability in point cloud quality across surfaces surveyed from a fixed position, where the installation location may yield unfavourable target geometries. These may take several forms, including spatially variable LoDs, varying cylinder lengths, or varying cylinder widths.
The cylinder radius determines the degree of spatial averaging over change
measurements and, as such, should be informed by the style and scale of
movements under investigation. In theory, the smaller the radius, the finer
the spatial detail that can be established. However, this comes at a cost:
the gain in accuracy from averaging over neighbouring points diminishes, the
likelihood of intersecting points in the second point cloud falls, and
calculations that draw on only a small number of points lose statistical
significance. Future development of a
variable cylinder radius may draw upon the same point density estimated
during normal estimation, varying in size until
The applied method removes scans undertaken during poor weather conditions
(e.g. rain or fog) in order to preserve the accuracy of the resulting rockfall
inventory. Given the potential for rockfalls to occur during poor weather
conditions, this constitutes an important drawback of near-continuous TLS
monitoring for rockfall inventory compilation. Techniques that can operate
during inclement weather conditions, such as ground-based interferometric synthetic-aperture radar (InSAR), are
therefore better suited to maintaining temporal consistency in rockfall
datasets. However, while the precision and measurement frequency of InSAR
monitoring surpass those of lidar, highly precise change measurements are
also spatially averaged across large pixel sizes, resulting in a minimum
detectable rockfall volume several orders of magnitude higher
(
Given that small events present the highest areal, and hence volumetric, uncertainty, the recent development of 3-D volume estimation from point clouds (Carrea et al., 2012; Benjamin et al., 2016) may play an important role in near-continuous scanning. The uncertainty of these techniques is determined by the precision of the point cloud, thereby eliminating uncertainty in object areal extents due to linear interpolation into a fixed grid. However, these techniques also contain uncertainties that arise in part from the meshing approach adopted (Soudarissanane et al., 2011; Hartzell et al., 2015; Telling et al., 2017). Due to the dependence of these techniques on a minimum of four points to create a closed hull, fully 3-D techniques are also limited in their ability to resolve small, single point detachments. The development of scanners with increasingly small angular step widths and increased rates of point acquisition, however, will decrease the minimum resolvable detachment. At present, the 3-D clustering required to isolate points belonging to geomorphic change, combined with subsequent meshing of these points, comes at a considerable computational cost. These techniques therefore remain to be applied for > 10 scans (e.g. Carrea et al., 2012; Benjamin et al., 2016; van Veen et al., 2017).
This study demonstrates the need to adjust the frequency of data collection and processing in accordance with the study objectives. Here, monitoring has been undertaken to detect near instantaneous discrete changes to the slope (rockfalls), where both the spatial and temporal resolution of monitoring are important. Longer-term total change is more prone to error when change accrues from many small events, and large changes can occur over both the short and long term. This trade-off between spatial and temporal resolution has received little research attention, and approaches that address it explicitly would be valuable in the future. The collection of a high-frequency time series of scan data presents the opportunity to reduce uncertainty by averaging point positions through both time and space, as points are independent in neither space nor time. This averaging can take the form of averaging the 3-D position of each point, as utilised here and in M3C2 (Lague et al., 2013), or the averaging of differences between points (Abellán et al., 2009; Kromer et al., 2015b).
While the precise magnitude–frequency exponent reported is specific to East
Cliff, the scale-invariant behaviour of rockfalls is similar to that observed
in other rockfall inventories, albeit across a narrower range of magnitudes.
Along the same stretch of coastline, previous monthly monitoring has yielded
exponents of
Monitoring at lower frequencies may provide more accurate estimates of rates
of total change over longer periods. This is related both to the longer, and
hence time-averaged, conditions captured and to the fact that the same
level of change measured infrequently has less volumetric error than when
measured frequently, particularly when change is accrued by many small,
discrete events. A decrease in
The magnitude–frequency distribution of geomorphic change is an important descriptor of the relative efficacy of event sizes and the nature of the hazard that they pose. Improvements in the ability to resolve the magnitude of events have surpassed the ability to constrain event frequency over short time intervals (< days). However, increasing the temporal resolution of monitoring of a changing surface increases the cumulative error over the same monitoring periods, particularly where change is dominated by numerous small events. In this study, we have discussed the practicalities and techniques to reduce this error for near-continuously acquired 3-D monitoring data, using one of the highest-temporal-resolution 3-D datasets collected to date.
The findings of our workflow are distilled here. Both morphological and
radiometric filters can be effective in removing unreliable points, such as
edges, surfaces of high incidence angle, and vegetation, the effects of
which become increasingly prominent when scanning from a fixed position.
Applying these filters lowered the standard deviation of change detected
between two stable point clouds from 0.078 to 0.055 m. Scans with any
degree of occlusion arising from atmospheric conditions (e.g. rain) should
be removed entirely to ensure that no rockfalls, which are more probable
during these conditions, are missed in pairwise change
detection. The alignment of large numbers of scans (10
By comparing rockfall inventories collected at
Data are available from the authors on request.
The authors declare that they have no conflict of interest.
This article is part of the special issue “4-D reconstruction of earth surface processes: multi-temporal and multi-spatial high resolution topography”. It is not associated with a conference.
This research formed part of a PhD studentship provided by the Department of Geography, Durham University, and ran alongside the Knowledge Transfer Partnership (KTP8878) awarded to Nick J. Rosser, Richard J. Hardy, Ashraf A. Afana, and 3-D Laser Mapping Ltd. We thank ICL Fertilizers (UK) Ltd for ongoing support of this project. Assistance in the development of this system was provided by Navstar Geomatics, and the maintenance and running of the system was supported by Samantha Waugh and Dave Hodgson. We thank Marc-Henri Derron, Álvaro Gómez-Gutiérrez, and Carlos Castillo for their constructive comments, which helped to improve the paper. Edited by: Carlos Castillo Reviewed by: Marc-Henri Derron and Álvaro Gómez-Gutiérrez