Bayesian image reconstruction and adaptive scene sampling in single-photon LiDAR imaging
Abstract
Three-dimensional multispectral Light Detection And Ranging (LiDAR) based on
time-correlated Single-Photon (SP) detection has emerged as a key imaging
modality for high-resolution depth imaging due to its high sensitivity and excellent surface-to-surface resolution. These properties enable depth imaging in adverse
conditions and give the technology a prime role in numerous applications. However, several practical
challenges currently limit the use of LiDAR in real-world conditions. Large data
volume constitutes a major challenge for multispectral SP-LiDAR imaging due to
the acquisition of millions of events per second that are usually gathered in large
histogram cubes. This challenge becomes more pronounced when the useful signal photons are
attenuated and the background noise is amplified as a result of imaging through a
scattering environment such as underwater or fog. Another limitation is the
detection of multiple surfaces per pixel, which typically occurs when imaging through
semi-transparent materials (e.g., windows, camouflage) or in long-range profiling.
This thesis proposes robust and fast computational solutions that improve the acquisition and processing of LiDAR data while quantifying uncertainty in high-dimensional data. A smart task-based sampling framework
is proposed to improve the acquisition process and reduce the data volume. In addition,
the processing is improved using a Bayesian approach to different types of inverse
problems (e.g., spectral classification and scene reconstruction). The contributions
of this thesis enable fast and robust 3D reconstruction of complex scenes, paving
the way for the widespread use of single-photon imaging in real-world applications.