
Development of a near-infrared single-photon 3D imaging LiDAR based on 64×64 InGaAs/InP array detector and Risley-prism scanner

Open Access

Abstract

A near-infrared single-photon lidar system, equipped with a 64×64 detector array and a Risley-prism scanner, has been engineered for daytime long-range, high-resolution 3D imaging. The system’s detector, leveraging Geiger-mode InGaAs/InP avalanche photodiode technology, attains a single-photon detection efficiency of over 15% at the lidar’s 1064 nm operating wavelength. This efficiency, in tandem with a narrow-pulse laser delivering a single-pulse energy of 0.5 mJ, enables 3D imaging at distances of approximately 6 km. The Risley scanner, comprising two counter-rotating wedge prisms, performs scanning measurements across a 6-degree circular field-of-view. Precision calibration of the scanning angle and the beam’s absolute direction was achieved using a precision dual-axis turntable and a collimator, culminating in 3D imaging with a scanning pointing accuracy of 28 arcseconds. Additionally, this work develops a spatial-domain local statistical filtering framework designed to separate daytime background noise photons from signal photons, enhancing the system’s imaging efficacy under varied lighting conditions. This paper showcases the advantages of array-based single-photon lidar image-side scanning technology in simultaneously achieving high resolution, a wide field-of-view, and extended detection range.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

As a quintessential sensor in the realm of 3D remote sensing instruments, light detection and ranging (lidar) finds widespread application in object recognition, attitude measurement, and terrain mapping, attributed to its commendable angular resolution and robust anti-interference capabilities [1,2]. Since its inception, lidar technology has undergone substantial evolution, encompassing various branches such as photodetectors, scanning mechanisms, and measurement modes [3,4]. Regardless of the detection mechanism, be it direct or coherent, linear or photon-counting, the critical technical indicators assessing lidar system performance consistently encompass operating range, imaging resolution, field-of-view, and integration time [5,6]. For instance, Li et al. concentrated on enhancing both the system efficiency and the photon-limited reconstruction capability of single-photon lidar, successfully demonstrating long-range, over-the-horizon 3D imaging at distances of up to 200 km [7,8]. Within this research context, pulsed lidar based on individual detectors notably extends the operating range due to its heightened optical gathering efficiency. However, this enhancement comes at the expense of constraining the scanning frame rate and echo density. Consequently, the current trajectory of pulsed lidar research involves flood illumination and capturing backscattered signals through an array detector, thereby enabling immediate 3D inspection with near-image-level resolution [9,10].

Recent research has underscored the potential of InGaAs/InP Gm-APD arrays, operating in the near-infrared (NIR) spectrum, for enhancing transceiver efficiency in non-scanning imaging contexts due to their high single-photon sensitivity [11–14]. These array detectors demonstrate superior response speeds and quantum efficiency in photon-counting applications. However, their integration into time-resolved array sensors is limited by the constraints of readout circuit capabilities, particularly in terms of pixel size and data throughput. Maureen et al. explored this issue by comparing the detection probabilities of Gm-APD arrays in frame and asynchronous modes, revealing a significant interplay between circuit reset times, frame transfer rates, and incident photoelectron flux [15]. The necessity for greater illumination power and the resultant limitations on the imaging field are direct consequences of increased array pixel sizes. This dichotomy between pixel quantity and energy efficiency highlights the current inadequacies in transceiver efficiency and photon reconstruction strategies in non-scanning, array-based imaging systems. In their work, Jiang et al. demonstrated a 64$\times$64 InGaAs/InP array-based single-photon lidar, capable of penetrating atmospheric occlusions over distances up to 10 km at 20 frames per second, enabled by an optimized optical design and an advanced reconstruction method [16]. Furthermore, Ni et al. proposed a spatial correlation-based signal extraction method for Gm-APD arrays, enabling depth resolution of sparse power lines under non-scanning conditions [17]. Overall, the development of efficient signal processing techniques has significantly enhanced the capabilities of array-based single-photon lidar systems, particularly in terms of operating range and signal-to-noise ratio, even under challenging external conditions. However, the broadening field of view in non-scanning imaging continues to pose challenges for accurately resolving detailed target contours, primarily due to the limited spatial resolution of these systems. To tackle this challenge, Kang et al. developed a novel approach for registering low spatial resolution depth maps obtained by small-scale SPAD arrays with high-resolution intensity images captured by CCDs [18]. This approach utilizes intensity information to guide the reconstruction of distances, effectively quadrupling the resolution of depth images. Meanwhile, Callenberg et al. proposed the fusion of images from a compact SPAD array with those from computational CMOS sensors, aiming to enhance the pixel spatial resolution in non-scanning array-based 3D imaging [19].

A recently proposed hybrid semi-solid-state 3D imaging technique integrates an optical-mechanical scanning mechanism with single-photon array technology [20]. Addressing the challenge of a limited field-of-view and sparse point cloud density in conventional array-based detection, the concept involves physically shifting the field-of-view through instantaneous flash imaging. This enables the identification of distinct Regions of Interest (ROI) during each laser pulse frame, overlaying them onto a larger detection region. Pioneering applications of this hybrid technology are evident in the work of Marino et al., who engineered a hybrid 3D imaging lidar tailored for airborne platforms [21]. This system, featuring a 32$\times$32 focal plane array and a prism scanner, successfully generated dense echo points, proving instrumental in detecting concealed targets within densely forested landscapes. Similarly, Henriksson et al. accomplished daytime panoramic imaging over a range of 300 meters, utilizing a horizontal scanner combined with a 128$\times$64 single-photon GmAPD array [22]. In another groundbreaking effort, Degnan et al. conceptualized and developed an instrument capable of capturing high-resolution 3D elevation data on Martian terrain, employing an optical scanner and a multi-channel array detector [23]. Despite these advancements, to the best of our knowledge, there remains a gap in the literature concerning hybrid solid-state lidar systems that utilize InGaAs/InP GmAPD arrays and Risley-prism scanning for long-range, daytime imaging in the near-infrared band. Furthermore, the technical details and specific processing methodologies employed in comparable research endeavors have yet to be fully disclosed, indicating potential areas for further exploration and development in this field.

The selection of the Risley prism scanning methodology for this study is predicated on its superior scanning rate, robust rotational mechanics, and a field-of-view profile that mimics the human eye’s visual focusing mechanism, characterized by prolonged dwell times at the center and reduced durations at the periphery. This scanning technique distinguishes itself among various optical scanning methods due to these attributes. A notable application of this technology is the high-resolution, triangular frequency-modulated 3D imaging lidar employing multibeam scanning with Risley prisms, introduced by Li et al. [24]. Additionally, Liu et al. have elaborated on a retina-inspired, prism-based robotic scanning lidar developed by Livox [25]. Incorporating this scanning method with flash imaging, particularly when employing GmAPD arrays potentially comprising thousands of units, necessitates exacting pointing accuracy to avert photon blending across adjacent pixels, a scenario that could adversely affect resolution. The integration of an extensive pixel array markedly elevates the raw photon data transmission load, increasing it by three orders of magnitude in comparison to conventional unit-based prism scanning systems. This substantial enhancement, however, introduces complexities in noise reduction algorithms. This paper provides an in-depth discussion of the design considerations and data processing methodologies devised to navigate these challenges.

This research introduces a single-photon lidar system capable of high-density photon data acquisition from urban structures and forested regions up to 6 km away. The contributions can be summarized as follows. First, it demonstrates dual-wedge-prism array-based scanning 3D imaging: by modeling the coordinate transfer from array depth measurements to spatial photons under the non-near-axis approximation, our analysis covers footprint trajectory, pixel density, and coverage. Second, it introduces a robust geometric calibration method that keeps the prism scanning pointing accuracy below 28 arcseconds, verified through extensive experimental validation. Third, it develops an integrated noise-removal framework that extracts sparse signals from massive scan photon data, combining coarse outlier removal with a fine-filtering approach for proximal mixed noise and demonstrating superior denoising capabilities compared to conventional methods. Comprehensive daytime outdoor experiments on urban targets within a 6 km range rigorously evaluate the system’s capabilities, revealing its potential for wide-field, high-resolution 3D imaging and offering a new perspective for the design of daytime long-range remote sensing instruments.

2. System description

2.1 Instrument specifications

Figure 1 illustrates the structural framework of the system. The system utilizes a passively Q-switched solid-state laser operating at 1064 nm. The near-infrared laser, with a single-pulse energy of 0.5 mJ, a repetition rate of 5 kHz, and a pulse width of less than 1 ns, is expanded through a pair of lenses so that its divergence angle covers the field of view of the receiving system. It is then emitted towards the target from the center of the receiving system via two right-angle prisms. The receiving system is simply composed of a 2 nm bandwidth filter, a lens assembly, and a 64$\times$64 InGaAs/InP Geiger-mode avalanche photodiode array detector. Because the microlens array on the detector surface limits the acceptable incident angle to within 3$^{\circ }$, the receiving lens assembly employs a small numerical aperture and a telecentric optical design to achieve efficient echo reception. The aperture diameter of the receiving optics is 50 mm.

Fig. 1. Schematic of the experimental setup. (a) Comprising a narrow-pulse 1064 nm laser, a transceiver system, a 64$\times$64 InGaAs/InP GmAPD detector, and a dual-wedge prism scanner. The setup features a detachable optical path, shown in yellow, designed for the geometric calibration. (b) Photograph of the LiDAR device.

This coaxial transceiver system employs a Risley scanner, consisting of two wedge prisms, to achieve directional scanning. The prisms, crafted from K9 glass, are coated with an anti-reflective film for 1064 nm. Both scanning prisms have a wedge angle of 3 degrees. The scanner is driven by a direct-current motor, with its encoder achieving an angular resolution of up to 4.2 arcseconds and a rotational speed ranging from 1 Hz to 20 Hz. The custom split frameless permanent-magnet servomotor utilizes an incremental encoder, with its ABZ phase signals transmitted to the data acquisition system. Detailed specifications of the proposed lidar system are presented in Table 1. To mitigate the impact of internal reflections from the coaxial laser path, the emission and reception light paths were mechanically separated. The primary wave signals, once received by the PIN photodetector located behind mirror M1, are subjected to a predetermined time delay before being fed into the array detector system, thus shielding against near-range backscattering interference from the Risley prisms.


Table 1. Specifications of the proposed LiDAR system

The lidar employs an InGaAs/InP single-photon array detector with a quantum efficiency of approximately 15% and a resolution of 64$\times$64. The detector assembly integrates a Time-to-Digital Converter (TDC) chip, capable of measuring the time difference between the trigger signal from the reference detector and photon events with a time resolution of 1 ns. Owing to the 12-bit time sampling width of the TDC chip, the detector assembly can achieve a maximum time measurement of about 4 microseconds, corresponding to a ranging gate of up to 600 meters. Unlike the regular reciprocating scanning of devices such as galvanometer motors, the scanning trajectory of a Risley scanner has a more complex relationship with the motor’s rotation angle. Factors such as the wedge angle and installation deviations can significantly impact imaging parameters. Therefore, we designed a calibration optical path inside the lidar. It consists of a 1064 nm collimated light source with a divergence angle of 100 $\mu$rad and two mirrors, marked in yellow in the diagram. The laser is only introduced into the system for high-precision angular calibration; the detailed method is described in the following section.
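As a quick sanity check of the gate arithmetic, a 12-bit TDC at 1 ns per bin spans $2^{12}$ ns $\approx$ 4.1 $\mu$s, which the two-way flight time converts to roughly 600 m of usable gate; a minimal sketch in Python:

```python
# Back-of-envelope check of the TDC gate arithmetic (12-bit counter,
# 1 ns bins, values from Table 1); C is the exact speed of light.
C = 299_792_458                       # m/s

bits, bin_width = 12, 1e-9            # 1 ns time resolution
t_max = (2 ** bits) * bin_width       # ~4.096 us maximum measurable delay
r_max = C * t_max / 2                 # two-way flight -> one-way range

print(f"max gate: {t_max * 1e6:.3f} us, max range: {r_max:.0f} m")  # ~614 m
```

The computed 614 m slightly exceeds the quoted 600 m, consistent with the latter being the usable strobe range.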

2.2 Imaging model

Figure 2 displays the principle of Risley scanning, a non-repetitive scanning method in which the detection density is closely related to the imaging time; longer detection periods facilitate seamless coverage. This section elaborates on the geometric imaging model of this scanning method. Employing non-paraxial vector tracking theory, the Snell refraction law is adapted to model the prism-scanning directional transmission for the single-photon array. As depicted in Fig. 2, the z-axis represents the Risley prism’s centerline, forming a right-handed reference coordinate system. Each pixel of the received data is treated as an equivalent beam, each with a distinct direction vector. Denote the instantaneous field-of-view of the array as $F_s$ and the pixel scale as $p \times p$; the equivalent direction vector $\mathbf {s}_{0}^{m,n}$ of pixel (m,n) is defined as:

$$\mathbf{s}_0^{\mathrm{m}, \mathrm{n}}=\left[\sin \theta_{\mathrm{m}}^{\mathrm{X}}, \cos \theta_{\mathrm{m}}^{\mathrm{X}} \sin \theta_{\mathrm{n}}^{\mathrm{Y}},-\cos \theta_{\mathrm{m}}^{\mathrm{X}} \cos \theta_{\mathrm{n}}^{\mathrm{Y}}\right]^T$$

Fig. 2. Schematic of dual-wedge prism scanning. (a) Illustration of the petal-shaped scanning trajectory achieved through the synchronous rotation of two oppositely oriented wedge prisms, with the trajectory’s form governed by the prisms’ rotational speed ratio [26]. (b) Ray tracing diagram illustrating the principles of the Snell non-near-axis approximation method applied to the scanning system.

The horizontal angle $\theta _m^X$ is calculated as $m \times {F_s}/{p}$, while the vertical angle $\theta _n^Y$ is derived from $n \times {F_s}/{p}$. For pixel (m,n) with depth $d_P^{m,n}$, the position of its received photon is expressed as:

$$\mathrm{P}_{\mathrm{X}, \mathrm{Y}, \mathrm{Z}}^{\mathrm{m}, \mathrm{n}}=\mathrm{d}_{\mathrm{P}}^{\mathrm{m}, \mathrm{n}} \times \boldsymbol{T}_4\left(\boldsymbol{T}_3\left\{\boldsymbol{T}_2\left[\boldsymbol{T}_1\left(\mathbf{s}_0^{\mathrm{m}, \mathrm{n}}, \mathbf{n}_{11}\right), \mathbf{n}_{12}\right], \mathbf{n}_{21}\right\}, \mathbf{n}_{22}\right)$$

Let $\mathbf {T}(s,n)$ denote the Snell refractive transfer function. The beam vector $\mathbf {s}_w^{m,n}$ through the prism is given by $\mathbf {T}_w[\mathbf {s}_{w-1}^{m,n},\mathbf {n}_w(\theta )]$, with $\mathbf {n}_w(\theta )$ being the surface normal vector at prism rotation angle $\theta$. In the derivation, the beam’s incident path is traced in reverse along the z-axis towards prism I. Given prism I’s wedge angle $\alpha _1$, initial rotation angle $\theta _{10}$, and angular velocity $w_1$, the normal $\overrightarrow {\mathbf {n}}_{11}$ at its left interface is represented as:

$$\overrightarrow{\mathbf{n}}_{11}=\left[\sin \alpha_1 \cos \left(\theta_{10}+w_1 t\right), \sin \alpha_1 \sin \left(\theta_{10}+w_1 t\right), \cos \alpha_1\right]^T$$

Following Snell’s law of refraction [27], the internal beam in prism I is characterized as:

$$\overrightarrow{\boldsymbol{s}_1}=\left[\overrightarrow{\boldsymbol{s}_0}-\left(\overline{\boldsymbol{n}_{11}} \cdot \overrightarrow{\boldsymbol{s}_0}\right) \overline{\boldsymbol{n}_{11}}\right] / n_1-\overline{\boldsymbol{n}_{11}} \sqrt{1-\left[1-\left(\overline{\boldsymbol{n}_{11}} \cdot \overrightarrow{\boldsymbol{s}_0}\right)^2\right] / n_1^2}$$

The outgoing beam vector at prism I’s right interface is expressed as:

$$\overrightarrow{\boldsymbol{s}_2}=n_1\left[\overrightarrow{\boldsymbol{s}_1}-\left(\overline{\boldsymbol{n}_{12}} \cdot \overrightarrow{\boldsymbol{s}_1}\right) \overline{\boldsymbol{n}_{12}}\right]-\overline{\boldsymbol{n}_{12}} \sqrt{1-n_1^2\left[1-\left(\overline{\boldsymbol{n}_{12}} \cdot \overrightarrow{\boldsymbol{s}_1}\right)^2\right]}$$
where $n_1$ is the refractive index of the prism and the normal vector $\hat {\mathbf {n}}_{12}$ at the right interface of prism I is $[0, 0, 1]^T$. In a similar manner, the outgoing beam vector at prism II’s right interface, denoted as $\hat {\mathbf {s}}_4$, is derived. For the focal plane array’s center pixel, the deflection angle $\phi$ and azimuth angle $\Theta$ of the outgoing beam $\hat {\mathbf {s}}_4$ are expressed as:
$$[\phi, \Theta]=\left[\operatorname{arccos}\left(-\boldsymbol{s}_4^z\right), \arctan \left(\boldsymbol{s}_4^y / \boldsymbol{s}_4^x\right)\right]$$
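For concreteness, the following Python sketch traces the center-pixel ray $[0,0,-1]^T$ through both prisms using Eqs. (3)-(6) with ideal, error-free normals; the K9 refractive index at 1064 nm and the flat/tilted face orientations are assumptions consistent with the geometry described above:

```python
import numpy as np

ALPHA = np.deg2rad(3.0)   # wedge angle of both prisms (3 deg, from the text)
N_GLASS = 1.5066          # refractive index of K9 glass at 1064 nm (assumed)

def refract_in(s, n, ng):
    """Air-to-glass refraction in vector form, Eq. (4)."""
    c = np.dot(n, s)
    return (s - c * n) / ng - n * np.sqrt(1 - (1 - c**2) / ng**2)

def refract_out(s, n, ng):
    """Glass-to-air refraction in vector form, Eq. (5)."""
    c = np.dot(n, s)
    return ng * (s - c * n) - n * np.sqrt(1 - ng**2 * (1 - c**2))

def trace(s0, th1, th2):
    """Eq. (2) with ideal normals: propagate s0 through prisms I and II
    at rotation angles th1, th2."""
    n11 = np.array([np.sin(ALPHA) * np.cos(th1),
                    np.sin(ALPHA) * np.sin(th1), np.cos(ALPHA)])   # Eq. (3)
    n12 = np.array([0.0, 0.0, 1.0])       # flat right face of prism I
    n21 = np.array([0.0, 0.0, 1.0])       # flat left face of prism II (assumed)
    n22 = np.array([-np.sin(ALPHA) * np.cos(th2),
                    -np.sin(ALPHA) * np.sin(th2), np.cos(ALPHA)])  # tilted face
    s = refract_out(refract_in(s0, n11, N_GLASS), n12, N_GLASS)
    return refract_out(refract_in(s, n21, N_GLASS), n22, N_GLASS)

s4 = trace(np.array([0.0, 0.0, -1.0]), 0.0, 0.0)
phi = np.arccos(-s4[2])                   # deflection angle, Eq. (6)
print(f"max deflection: {np.rad2deg(phi):.2f} deg")  # ~3.0 deg -> 6 deg FoV
```

With both prisms at their zero angles the deflections of the two counter-oriented wedges add, giving the maximum deflection of about 3$^{\circ}$ and hence the 6-degree circular field-of-view.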

The rotational frequencies $H_1$ and $H_2$ of the wedge prisms critically influence the density of scanning footprints. Notably, employing specific small fractional speed ratios between them leads to a progressive filling of trajectory gaps over extended scanning durations [26]. To actualize the desired non-repetitive scanning pattern, prism I was configured to rotate in reverse at 8.1 Hz, while prism II rotated forward at 12.931 Hz. This setup generated a scanning path with 13 petal-shaped lobes, each persisting for 54 ms. The lobes’ intervening gaps were steadily filled at a constant offset rate, allowing for complete coverage of the field of view in approximately 7 s without leaving any gaps. Figure 3 displays the trajectory footprints and array pixel durations, indicative of coverage point density, over 2 s, 5 s, 7 s, and 10 s intervals on a 1 km screen. The instantaneous and scanned fields are 3.2 m and 105 m, respectively. With a pulse repetition rate of 5 kHz, the minimal and maximal distances between adjacent pulse coverage areas are 0.16 m and 0.7 m, respectively, imposing a scanning pointing accuracy limit of 33 arcseconds. Significantly, raising the prisms’ rotation speeds can further decrease the duration required for scanning the entire field of view; this requires a thorough assessment of the motor’s dynamic balance to prevent extraneous vibrations during high-speed rotation.
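The petal trajectory can be previewed with a first-order (thin-prism) model in which each wedge deflects the beam by $(n-1)\alpha$ along its instantaneous orientation, so the footprint is the sum of two counter-rotating phasors; a minimal sketch with the stated speeds (the refractive index is an assumption):

```python
import numpy as np
import matplotlib.pyplot as plt

# Thin-prism footprint model: each wedge deflects by (n-1)*alpha along its
# orientation; the footprint on a screen is the sum of two rotating phasors.
n_glass, alpha = 1.5066, np.deg2rad(3.0)      # assumed index, 3 deg wedge
delta = (n_glass - 1) * alpha                 # ~1.52 deg deflection per prism
f1, f2 = -8.1, 12.931                         # prism speeds, Hz (counter-rotating)
prf, T, R = 5e3, 2.0, 1000.0                  # pulse rate, duration, screen range
t = np.arange(0, T, 1 / prf)                  # one footprint per laser pulse
p1, p2 = 2 * np.pi * f1 * t, 2 * np.pi * f2 * t
x = R * delta * (np.cos(p1) + np.cos(p2))     # footprint coordinates, m
y = R * delta * (np.sin(p1) + np.sin(p2))
plt.plot(x, y, lw=0.2); plt.gca().set_aspect('equal'); plt.show()
```

The phasor sum reproduces the lobe structure, the roughly 105 m scanned field at 1 km, and the dense-center, sparse-edge density profile; the exact pattern of Fig. 3 follows from the full non-paraxial trace above.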

Fig. 3. Scanning trajectories and pixel dwell density distribution. Panels (a) to (d) illustrate the scanning trajectories for varying durations: 2 s, 5 s, 7 s, and 10 s, respectively. The top row depicts the scanning trajectory of an individual pixel. The bottom row quantifies the pixel dwell times within each 0.2 m $\times$ 0.2 m grid on the screen during a 64$\times$64 array scan, incorporating a pulse repetition rate of 5 kHz, across the various durations.

To quantitatively evaluate the point density distribution across the field-of-view under different scan durations, a 1 km target screen was segmented into a square grid with a resolution of 0.2 m. The dwell time of pixels, i.e., the number of sampling points, was quantified within each grid cell, accounting for the number of emitted pulses and the 64$\times$64 sampling points generated by each pulse. Subsequently, the count of sampling points (pixels) within each grid cell on the light screen was recorded. Within this framework, pixel densities in each circular sector were counted progressively along the radius $R$ in fixed radial increments. The average frequency with which detector pixels scan each grid section within these circular grid areas is defined as:

$$N_t=N_p / N_r$$
where $N_r$ denotes the number of grid cells within each circular ring, segmented at 2-meter intervals. $N_p$ represents the total pixel dwell times within the ring, calculated as the product of the number of sustained pulses and the corresponding dwell pixels. Figure 4(a) illustrates the average pixel density scanned into each grid cell at varying integration times for a pulse repetition frequency of 5 kHz and a $64 \times 64$ SPAD array. The horizontal axis corresponds to the radius extending from the center. Notably, pixel density is higher at the center and decreases towards the periphery, attributed to field-of-view dispersion. Given that low signal-to-noise ratio targets require multiple pulse detections for recording signal photons, let $N'_t$ be the minimum threshold for pixel dwellings in a grid cell. Figure 4(b) depicts the coverage of the effective detection area, specifically the ratio of grid cells surpassing $N'_t$ to the total grid count as the integration time $T_s$ increases. At a duration of 10 s, the average pixel dwell times in each $0.2 \, \mathrm {m} \times 0.2 \, \mathrm {m}$ grid cell within the field-of-view exceed 400 points, implying an effective coverage of the entire 6-degree circular field-of-view approaching unity.

Fig. 4. Analysis of scanning trajectories on a 1 km screen. (a) Illustrates the average point density within concentric rings as the field-of-view radius expands, essentially depicting a cross-sectional view of the pixel density from Fig. 3. (b) Displays the ratio of grid areas within the circles (where the pixel scan count surpasses a predefined threshold) to the total grid area in the field-of-view. This graphically represents the scan duration necessary to achieve a specific proportion of area coverage.

The return photons, synchronized with each pulse’s prism angles, were fed into the coordinate transfer model, producing a spatial photon cloud with a dense center and sparse edges. The raw scan data suffered considerable noise-photon interference, chiefly due to premature triggering from strong solar scatter and pixel dark counts. Figure 5(a) illustrates the classification of unexpected photons into three types: outlier noise, proximal mixed noise, and cluster noise. Cluster noise, resulting from scanning-density variations, was mitigated through targeted non-uniform sampling preprocessing. Photon coordinates, expressed as $N(z,r)$, incorporate distances from the central axis, where $z$ and $r$ correspond to axial depth and radial distance, respectively. Radial cross-sectional photon density decreases with increasing radius. Let $|N(z_s,r)|$ represent the photon count in depth slice $z_s$, while $N_i(z_s,r_i)$ indicates the photon count in the radial slice at $r_i$ for depth $z_s$. The sampling rate in the $i$-th radial slice, $\eta _s^t(z_s,r_i)$, is defined as follows:

$$\eta_{\mathrm{s}}^{\mathrm{t}}\left(\mathrm{z}_{\mathrm{s}}, \mathrm{r}_{\mathrm{i}}\right)=\frac{\mathrm{N}_{\mathrm{i}}^{\mathrm{Y}}\left(\mathrm{z}_{\mathrm{s}}, \mathrm{r}_{\mathrm{i}}\right)-\min \left[\mathrm{N}_{\mathrm{i}}^{\mathrm{Y}}\left(\mathrm{z}_{\mathrm{s}}, \mathrm{r}_{\mathrm{i}}\right)\right]+\sigma \overline{\mathrm{N}}_{\mathrm{i}}^{\mathrm{Y}}\left(\mathrm{z}_{\mathrm{s}}, \mathrm{r}_{\mathrm{i}}\right)}{\max \left[\mathrm{N}_{\mathrm{i}}^{\mathrm{Y}}\left(\mathrm{z}_{\mathrm{s}}, \mathrm{r}_{\mathrm{i}}\right)-\min \left[\mathrm{N}_{\mathrm{i}}^{\mathrm{Y}}\left(\mathrm{z}_{\mathrm{s}}, \mathrm{r}_{\mathrm{i}}\right)\right]+\sigma \overline{\mathrm{N}}_{\mathrm{i}}^{\mathrm{Y}}\left(\mathrm{z}_{\mathrm{s}}, \mathrm{r}_{\mathrm{i}}\right)\right]}$$
where $N_i^Y(z_s,r_i)$ can be expressed as $|N(z_s,r)| - N_i(z_s,r_i)$, with $\sigma$ denoting the initial sampling coefficient along the axis. Photon counts $N_i(z_s,r_i)$ in radial slice $r_i$ are derived from the pixel coverage density mapping table at a consistent rotational speed ratio. Figure 5(b) illustrates the sampling rate on each slice $r_i$ along the radius, which is inversely correlated with the scan density. Notably, sampling rates decrease nearer the center, thus partly mitigating abnormal cluster noise in densely scanned central areas. Despite photon sparsity in the field-of-view’s periphery, the sampling rate effectively retains feature photons.
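A minimal sketch of this non-uniform sampling rule, computing the per-slice retention rate of Eq. (8) from a pixel-coverage density profile (the toy counts and the floor coefficient $\sigma$ are illustrative):

```python
import numpy as np

def sampling_rate(counts, sigma=0.1):
    """Eq. (8): per-slice sampling rate from radial coverage densities.
    counts[i] is N_i(z_s, r_i), the pixel-coverage count of radial slice r_i
    taken from the dwell-density map; denser slices are sampled more sparsely."""
    counts = np.asarray(counts, dtype=float)
    inv = counts.sum() - counts           # N_i^Y = |N(z_s, r)| - N_i(z_s, r_i)
    num = inv - inv.min() + sigma * inv.mean()
    return num / num.max()                # normalized retention rate in (0, 1]

# toy density profile: dense center, sparse rim
print(sampling_rate([900, 600, 300, 120, 40]))   # rates increase outward
```

Photons in slice $r_i$ are then kept with probability $\eta_s^t(z_s, r_i)$, thinning the over-sampled center while preserving the sparse periphery.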

Fig. 5. (a) Conceptual illustrations of spatial noise in photon data, including outlier, clustered, and aliasing noise. (b) Variation in non-uniform sampling across different distances, showing its effect on reducing clustered noise in dense scanning trajectories.

2.3 Geometric calibration

For a Risley-scanner-based lidar, accurately calibrating the laser direction and motor angles is crucial for imaging precision. The calibration, illustrated in Fig. 6, utilizes a 0.4 m diameter parabolic collimator with a focal length of 4 m and a high-precision dual-axis rotary stage. A SP928 Laser Beam Profiler (LBP) and a single-mode fiber emitting the 1064 nm laser are symmetrically mounted on the focal plane of the collimator using a Beam Splitter (BS). The position of the fiber’s emission point and the central pixel of the LBP have been precisely calibrated to ensure complete alignment of their respective fields of view. The preparation before calibration proceeds in two stages. (I) An external 1064 nm weak laser is injected into the fiber at the focal plane of the collimator. The dual-axis rotary stage is then precisely adjusted until the highly collimated beam emerging from the collimator converges onto the central pixel of the GmAPD, aligning the central pixel’s FoV with the laser from the collimator fiber. (II) The calibration fiber laser in the system is activated, and the angular orientation of M3/M4 in Fig. 1 is adjusted to direct the 100 $\mu$rad divergent reference beam. This beam, after passing through the parabolic mirror and the 45-degree mirror, is redirected to the focal plane of the LBP, converging the calibration spot onto its central pixel. At this point, the orientation of the reference beam, the FoV of the central pixel of the array detector, and the emission path are in complete alignment. The dual-axis rotary stage’s attitude, including pitch and rotation, is set as the initial zero, and the LBP central spot location is marked.

Fig. 6. (a) Geometric calibration setup photograph, showing the transceiver system on a dual-axis rotating stage aimed at a parallel collimator tube. (b) Diagram for Prism I assembly, highlighting angle adjustments to correct prism deflection. (c) Diagram for Prism II geometric calibration, detailing the process of aligning the directed beam spot with a reference point to calculate the beam direction vector, with Prism I fixed.

When the angle of either prism is altered, both the calibration laser and the receiving field of view deviate from the non-scanning coaxial marked pixel of the LBP. To compensate, the turntable is adjusted to converge the spot formed by the reference beam onto the central pixel of the LBP. The sensor provides feedback on the pitch angle $\theta _F$ and rotation angle $\theta _R$ of the dual-axis platform, with an accuracy of 0.5 arcseconds. Consequently, the deviating beam vector $\mathbf {N}_e$ can be represented as:

$$N_e=\left[n_e^X, n_e^Y, n_e^Z\right]^T=\left[\sin \theta_R, \cos \theta_R \sin \theta_F,-\cos \theta_R \cos \theta_F\right]^T$$

Prism I is manually rotated in approximately 10-degree increments. At each alignment of the spot with the mark, prism I’s decoded rotation angle $\theta _1$, the platform’s pitch angle $\theta _F^A$, and rotation angle $\theta _R^A$ are recorded sequentially, until prism I returns to its starting position. Substituting $\theta _1$ into the Snell non-near-axis tracking model, Fig. 7(a) displays the discrepancies between theoretical and actual measurements, attributed to assembly factors such as axis eccentricity error and the prism’s non-parallel tilting error [28,29]. The relationship between prism I’s rotation angle and the beam deflection angle is modeled as a periodic function and fitted here with a truncated Fourier series, expressed as:

$$\phi\left(\theta_1\right)=a_0+\sum_{i=1}^3 a_i \sin \left(i \theta_1+b_i\right)$$

Fig. 7. Calibration results for Prism I. (a) Shows the residuals between the theoretical and measured pointing vectors, illustrating the initial discrepancies. (b) Presents the fitted calibration pointing vector, indicating the adjustments made for accuracy. (c) Depicts the residuals post-fitting, highlighting the precision achieved in the calibration.

The Fourier coefficients were ascertained by fitting the platform data, establishing a mapping function that correlates the angle $\theta _1$ with the beam pointing. This approach effectively mitigates eccentricity and assembly errors. Figures 7(b)-(c) illustrate the vector distribution and the residuals along the X and Y axes, respectively. After applying the Fourier fit, the mean-square error in the pointing accuracy of the beam passing through prism I is reduced to within 26 $\mu$rad, thereby substantially improving the pointing precision.
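The fit itself is a routine least-squares problem; a minimal sketch with scipy (the synthetic angles and deflection amplitude are stand-ins for the recorded platform data):

```python
import numpy as np
from scipy.optimize import curve_fit

def pointing_model(theta1, a0, a1, b1, a2, b2, a3, b3):
    """Truncated Fourier series of Eq. (10), mapping the decoded prism-I
    angle to one measured deflection component."""
    return (a0 + a1 * np.sin(theta1 + b1)
               + a2 * np.sin(2 * theta1 + b2)
               + a3 * np.sin(3 * theta1 + b3))

# decoded prism angles and platform-derived deflections (synthetic stand-ins;
# the real data come from Eq. (9) at each 10-degree alignment step)
theta = np.deg2rad(np.arange(0, 360, 10))
defl = 0.0265 * np.sin(theta + 0.4) + 30e-6 * np.random.randn(theta.size)

coef, _ = curve_fit(pointing_model, theta, defl, p0=[0, 0.026, 0, 0, 0, 0, 0])
res = defl - pointing_model(theta, *coef)
print(f"RMS residual: {np.sqrt(np.mean(res**2)) * 1e6:.1f} urad")
```

In practice the same fit is applied independently to the X and Y pointing components, yielding the residuals of Fig. 7(c).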

Similarly, prism I is rotated to zero degrees, followed by manual incremental rotation of prism II. The angle $\theta _2$ of the prism II motor, the platform pitch angle $\theta _F^B$, and the platform rotation angle $\theta _R^B$ are recorded when the reference beam’s center aligns with the non-scanning transceiving coaxial mark on the focal plane of the parallel collimator tube. This continues until prism II returns to its initial state. Figure 8(a) illustrates the residual between the measured vector and the theoretical vector calculated according to the Snell transmission model, with a maximum pointing deviation reaching 1.5 milliradians. Prism II’s rotational error mainly stems from its assembly tilt relative to the bearing. The tilt error around the Z-axis can be represented by the rotation matrix $M_{P1}(\delta _z)$, originating from the drift error of the grating zero point relative to the initial position of prism II. The assembly error around the X/Y axis is represented by the rotation matrix $M_{P2}(\theta _0,\delta _0)$ derived from Rodrigues’ formula [30,31]. Consequently, the adjusted initial vector $\mathbf {n}_{21}''$ on prism II’s left side is $M_{P2}M_{P1}\mathbf {n}_{21}$. When prism II rotates by $\theta _R^2$, its normal vector $\mathbf {n}_{21}^T$ is $\text {rot}(z,\theta _R^2)\mathbf {n}_{21}''$, where the rotation matrix $\text {rot}(z,\theta _R^2)$ is defined as:

$$\operatorname{rot}\left(\mathrm{z}, \theta_{\mathrm{R}}^2\right)=\left(\begin{array}{ccc} \cos \theta_{\mathrm{R}}^2 & -\sin \theta_{\mathrm{R}}^2 & 0 \\ \sin \theta_{\mathrm{R}}^2 & \cos \theta_{\mathrm{R}}^2 & 0 \\ 0 & 0 & 1 \end{array}\right)$$

Fig. 8. Calibration results for Prism II. (a) Displays the residuals between the theoretical and measured pointing vectors, highlighting initial calibration discrepancies. (b) Shows the residuals between the measured values and the corrected vectors, demonstrating the effectiveness of adjustments made to calibrate assembly errors in Prism II.

Furthermore, Prism II’s initial right-side normal vector $\mathbf {n}_{22}$ is $(-\sin {\alpha }, 0, \cos {\alpha })$, and after rotation by $\theta _R^2$, it becomes $\text {rot}(z, \theta _R^2)M_{P2}M_{P1}\mathbf {n}_{22}$. Employing Snell’s law and the least squares principle for minimizing the mean square error between measured and theoretical platform values, rotation matrices $M_{P1}$ and $M_{P2}$ are derived. This corrects prism II’s non-near-axis tracing model. Figure 8(b) shows the post-calibration residual distribution, with the average residual reduced to 0.126 mrad. Remaining errors are attributed to the bearing’s tilt and rotational inaccuracies.
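The error model reduces to composing three rotations with the nominal normals; a minimal sketch (the tilt axis and magnitudes are toy values, and the nominal left-face normal is the flat-face assumption used earlier):

```python
import numpy as np

def rot_z(theta):
    """Eq. (11): rotation about the prism axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rodrigues(axis, delta):
    """Rodrigues' rotation matrix: tilt by delta about a unit axis (M_P2)."""
    a = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(delta) * K + (1 - np.cos(delta)) * (K @ K)

n21 = np.array([0.0, 0.0, 1.0])                # nominal flat left-face normal
M_p1 = rot_z(np.deg2rad(0.05))                 # encoder zero drift (toy value)
M_p2 = rodrigues([1, 0, 0], np.deg2rad(0.02))  # X-axis assembly tilt (toy)
theta_r2 = np.deg2rad(30.0)
n21_corr = rot_z(theta_r2) @ (M_p2 @ (M_p1 @ n21))  # corrected rotating normal
print(n21_corr)
```

In the actual calibration, the free parameters of $M_{P1}$ and $M_{P2}$ are found by least-squares minimization of the mismatch between the platform-measured and ray-traced pointing vectors.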

In the experiment, both prisms were rotated in $\pi /2$ increments, with prism I and prism II positioned at 0, $\pi /2$, $\pi$, and $3\pi /2$. The precise angular values were obtained from the grating encoder. When the reference spot aligned with the coaxial mark, the platform’s rotational and pitch angles were noted, yielding 16 data sets. Figures 9(a)-(b) show the X/Y axis residuals for the measured and corrected vectors. The average pointing accuracy at maximum deflection improved to 28 arcseconds, a tenfold improvement over the uncorrected results. Approximately three pixels of pointing deviation may stem from rotational discrepancies between the focal plane array and the geometric coordinate system of laser pointing, as well as misalignments in the calibrated spot. This misalignment complicates per-pixel calibration, especially with a minimal IFoV, underscoring the necessity for precise mechanical integration of the receiving lens and array detector.

Fig. 9. Residual analysis of 16 remeasured data sets with simultaneous rotation of Prism I and Prism II. (a) and (b) correspond to the residuals along the X-axis and Y-axis, respectively. The solid black lines indicate the discrepancies between the corrected and measured values, whereas the red dashed lines represent the deviations between the theoretical and measured values.

3. Filtering framework

In single-photon lidar systems, robust detection hinges on advanced signal separation algorithms, designed to extract and reconstruct genuine signals from echo data that is often beset with substantial noise interference. Present denoising methods primarily concentrate on two areas: first, the reconstruction of distance profiles from single-beam data, exemplified by the extraction of signal photons from sectional views of photon datasets composed of along-track distances and elevations; second, depth estimation from sparse echo photons, employing maximum-likelihood techniques to ascertain depth in pixels that have accumulated a sufficient number of photons. Nonetheless, despite their focus on reconstructing target profiles, such denoising effectively operates within the confines of a pixel array, indexed by photon flight times and pulse numbers.

Currently, lidar filtering techniques aimed at denoising three-dimensional spatial photon data are not extensively documented in the literature. However, representative works do exist. Wang et al. introduced an adaptive ellipsoidal searcher, utilizing a spherical noise density estimation model and morphing ellipsoids for noise removal in the Sigma Space airborne quantum lidar system [32]. Similarly, Tang et al. proposed a voxel-based alternative space filtering method to eliminate noise in canopy point clouds measured by airborne single-photon lidar, thereby achieving accurate canopy height measurements [33]. This study addresses the challenges in 3D photon data filtering, primarily the omnipresent background noise and signal-adjacent irregular noise. This work proposes a method that utilizes local statistical factors within a k-nearest-neighbors framework for effective spatial photon processing, diverging from traditional density clustering and voxel filtering techniques. The method integrates statistical distance-based outlier removal for large-scale noise reduction and a pruning-based local deviation filtering method for nearby noise, comprising the following steps:

3.1 Outlier removal in low-SNR

For received photons with a high signal-to-noise ratio (SNR), the statistical outlier removal (SOR) method utilizing the k-neighbourhood mean has proven effective [34,35]. Challenges emerge with low-SNR data, where SOR struggles due to high spatial statistical similarity and sparse aggregation of internal signal photons. Addressing this, this study introduces the Statistical Local Distance-based Outlier Removal (SLDOR) method to differentiate between large-scale background photons and small-scale signal point clouds in raw scanning data. SLDOR evaluates the closest neighbors to the primary photon, employing a local distance outlier factor to identify deviations of a point $p_i$ from its nearest neighbors $N_k(p_i)$ within the same k-neighbourhood, irrespective of spatial density variations. As illustrated in Fig. 10(a)-(b), consider the average distance from all points $p_j$ within $N_k(p_i)$ to the center point $p_i$:

$$\overline{\mathrm{d}_{\mathrm{k}}}\left(\mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right), \mathrm{p}_{\mathrm{i}}\right)=\frac{\sum_{\mathrm{p}_{\mathrm{j}} \in \mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)} \mathrm{d}\left(\mathrm{p}_{\mathrm{j}}, \mathrm{p}_{\mathrm{i}}\right)}{\left|\mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)\right|}$$

Fig. 10. (a) Illustrates the cumulative distance of the k-nearest neighbors relative to $p_i$ itself. (b) Shows the total distance for all point pairs within the k-nearest neighbors of $p_i$. (c) Depicts the deviation of the centroid relative to $p_i$ within its k-nearest neighbors.

The mean distance between all point pairs in the vicinity of $p_i$ is expressed as:

$$\overline{D_k}\left(N_k\left[\left(p_i, p_j\right)\right], p_i\right)=\frac{\sum_{\left(p_i, p_j\right) \in N_k} d\left(p_i, p_j\right)}{\left|N_k\left[\left(p_i, p_j\right) \mid p_i\right]\right|}$$

The local distance outlier factor for point $p_i$ is defined as:

$$\operatorname{LDOF}\left(\mathrm{p}_{\mathrm{i}}\right)=\frac{\overline{\mathrm{d}_{\mathrm{k}}}\left(\mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right), \mathrm{p}_{\mathrm{i}}\right)}{\overline{\mathrm{D}_{\mathrm{k}}}\left(\mathrm{N}_{\mathrm{k}}\left[\left(\mathrm{p}_{\mathrm{i}}, \mathrm{p}_{\mathrm{j}}\right)\right], \mathrm{p}_{\mathrm{i}}\right)}$$

As LDOF$\left (p_i\right )$ increases, signaling a stronger deviation from the neighboring points $N_k\left (p_i\right )$, the point is more likely to be positioned at the edge of the signal region. Define the extraction rate for the edge signal as $\varepsilon$. Initially, select $\varepsilon |N|$ points from the set $N\left (p_t\right )$ sorted by the local distance outlier factor, creating a point set $N'\left (p_t\right )$. Treat the distances between point pairs in $N'\left (p_t\right )$ as normally distributed, and calculate the mean Euclidean distance $\mu ^\prime (p_i)$ between each point and every other point in sequence:

$$\mu^{\prime}\left(p_i\right)=\frac{1}{\left|N^{\prime}\right|} \sum_{s=1}^{\left|\mathrm{N}^{\prime}\right|}|| p_i-p_s||_2$$

The standard deviation $\sigma ^\prime \left (p_i\right )$ is calculated as follows:

$$\sigma^{\prime}\left(\mathrm{p}_{\mathrm{i}}\right)=\left[\frac{1}{\left|\mathrm{N}^{\prime}\right|} \sum_{\mathrm{s}=1}^{\left|\mathrm{N}^{\prime}\right|}\left(\left\|\mathrm{p}_{\mathrm{i}}-\mathrm{p}_{\mathrm{s}}\right\|_2-\mu^{\prime}\left(\mathrm{p}_{\mathrm{i}}\right)\right)^2\right]^{1 / 2}$$

Subsequently, for each point $p_i$ in the initial set $N\left (p_t\right )$, if the average Euclidean distance $\mu (p_i)$ of its k-nearest-neighbor point pairs lies within the interval $\mu ^\prime \left (p_i\right ) \pm \sigma ^\prime \left (p_i\right )$, and its LDOF$\left (p_i\right )$ falls within the outlier threshold interval $\text {LDOF}_t$, then $p_i$ is considered a signal point. Photon data not meeting these criteria are deemed outliers and excluded. This process selectively marks signal photons at the interface of signal and noise, considering those within a distance less than $\mu ^\prime \left (p_i\right )$ from $p_i$ among the k-nearest neighbors. Employing a region-growing approach ensures the preservation of detailed features within the point cloud. This involves iteratively expanding the k-nearest-neighbor photons of the preserved points, classifying those with local distances below a specified threshold as retained points.
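A simplified sketch of the coarse stage (LDOF scoring followed by thresholding; the statistical interval test and the region-growing refinement described above are omitted, and the threshold is scene-dependent):

```python
import numpy as np
from scipy.spatial import cKDTree

def ldof(points, k=30):
    """Local distance outlier factor, Eq. (14): mean neighbor distance,
    Eq. (12), divided by the mean pairwise distance among the k nearest
    neighbors, Eq. (13)."""
    tree = cKDTree(points)
    d, idx = tree.query(points, k + 1)       # first hit is the point itself
    d, idx = d[:, 1:], idx[:, 1:]
    d_bar = d.mean(axis=1)                   # Eq. (12)
    D_bar = np.empty(len(points))
    for i, nb in enumerate(idx):             # Eq. (13): mean pairwise distance
        P = points[nb]
        pd = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
        D_bar[i] = pd.sum() / (k * (k - 1))
    return d_bar / D_bar

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (500, 3)),        # mock signal cluster
                 rng.uniform(-20, 20, (2000, 3))])    # mock background noise
scores = ldof(pts)
signal = pts[scores < 1.0]    # threshold LDOF_t is scene-dependent
```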

3.2 Photon fine-filtering method

Following the initial coarse filtering, the resultant data comprises sparse background photons, the echo signal, and intermixed noise photons from nearby regions. This study introduces a pruning-based local deviation filtering method to accurately filter the point cloud and eliminate proximity aliasing burr noise. As depicted in Fig. 10(c), the Local Deviation Ratio (LDR) is defined by the Euclidean distance between a point $p_i$ and the center of mass $\bar {P}_i^k$ of its k-nearest-neighbor point set $N_k(p_i)$, normalized by the number of neighbors $|N_k(p_i)|$, as follows:

$$\operatorname{LDR}_k\left(p_i\right)=\frac{d\left(p_i, \bar{p}_i^k\right)}{\left|N_k\left(p_i\right)\right|}$$

Therefore, the local deviation rate quantifies the influence of the k-nearest neighbors on $p_i$. An elevated $\text {LDR}_k\left (p_i\right )$ suggests a more dispersed distribution of the k-nearest photons relative to $p_i$, implying a higher likelihood of $p_i$ being an isolated noise photon. To mitigate pulse spreading errors affecting the signal profile, the original photon set is pruned using an anti-aliasing factor $\text {KF}_k\left (p_i\right )$ during the calculation of the LDR coefficients. Define the reachable distance of $p_i$ in its k-nearest neighborhood as $\text {rd}_k(p_i)$, with the corresponding anti-aliasing factor $\text {KF}_k\left (p_i\right )$ given by:

$$\mathrm{KF}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)=\frac{\mathrm{rd}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)}{\sum_{\mathrm{p}_{\mathrm{j}} \in \mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)} \mathrm{rd}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{j}}\right)}$$

For photons proximal to the actual signal profile, the resultant denser central intervals yield a reduced $rd_k(p_i)$. Consequently, the pruning factor, denoted as $\text {pf}$, is established based on the average density of the k-nearest neighbors across the entire point cloud, represented by:

$$\mathrm{pf}=\frac{\sum_{\mathrm{p}_{\mathrm{i}} \in \mathrm{M}}\left|\mathrm{rd}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)\right|}{\sum_{\mathrm{p}_{\mathrm{i}} \in \mathrm{M}} \sum_{\mathrm{p}_{\mathrm{j}} \in \mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{j}}\right)} \mathrm{rd}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{j}}, \mathrm{p}_{\mathrm{i}}\right)}$$

The numerator in this context aggregates the reachable distances over the k-nearest neighborhoods of all points in the photon set $M$, while the denominator accumulates the pairwise reachable distances between each point $p_i$ and its k-nearest neighbors $p_j$. Photons are retained if their $\text {KF}_k\left (p_i\right )$ is below the threshold defined by the coefficient pf; otherwise, they are filtered out, precluding further operations. To assess the proximity of the nearest photons in the k-nearest neighborhood of a point $p_i$ after this pruning, the Local Deviation Coefficient $\text {LDC}_k(p_i)$ is defined based on $\text {LDR}_k(p_i)$. This coefficient represents the mean local deviation rate within $N_k(p_i)$, which includes the k-nearest neighbors of $p_i$, and is formulated as:

$$\operatorname{LDC}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)=\frac{\sum_{\mathrm{p}_{\mathrm{j}} \in \mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)} \operatorname{LDR}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{j}}\right)}{\left|\mathrm{N}_{\mathrm{k}}\left(\mathrm{p}_{\mathrm{i}}\right)\right|}$$

An elevated $\text {LDC}_k\left (p_i\right )$ signifies weaker aggregation of the nearest photon events around $p_i$, increasing its likelihood of being an isolated noise photon. Conversely, after pruning, a lower $\text {LDC}_k\left (p_i\right )$ suggests stronger aggregation within the neighborhood, favoring its classification as a signal photon. By sorting the local deviation coefficients and eliminating photons whose coefficients exceed a defined threshold, the method effectively diminishes interference from neighboring aliasing noise. The mechanics of the proposed integrated scanning photon processing framework are depicted in Fig. 11.
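A condensed sketch of the fine stage (LDR and LDC only; the KF-based pruning of Eqs. (18)-(19) is omitted, and the retention quantile is an illustrative choice):

```python
import numpy as np
from scipy.spatial import cKDTree

def ldc(points, k=20):
    """Local deviation coefficient, Eq. (20): LDR, Eq. (17), is the distance
    from each point to its k-neighbor centroid normalized by k; LDC averages
    LDR over each point's neighborhood."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k + 1)
    idx = idx[:, 1:]                          # drop the point itself
    centroids = points[idx].mean(axis=1)
    ldr = np.linalg.norm(points - centroids, axis=1) / k   # Eq. (17)
    return ldr[idx].mean(axis=1)                            # Eq. (20)

def fine_filter(points, k=20, quantile=0.9):
    """Keep the photons whose neighborhoods remain tightly aggregated."""
    scores = ldc(points, k)
    return points[scores < np.quantile(scores, quantile)]
```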

Fig. 11. Integrated processing framework. Coarse noise photons with a large-scale distribution are filtered via statistical local distance-based outlier removal. Subsequently, close-range aliasing noise is eliminated by pruning-based local deviation filtering.

4. Experimental results

4.1 Daytime demonstration

Single-photon array detectors encounter challenges such as dark current, pixel crosstalk, and solar noise. The data predominantly consists of non-signal photons and inactive pixels, especially under daylight, where solar background photons can outnumber signal echoes by two orders of magnitude. To address this, a 2 nm narrowband filter and 600-meter strobe technology are employed, reducing background noise and enhancing faint signal detection. As shown in Fig. 12(a), daytime external imaging experiments on the steel frame structure of a football field 1 km away utilized a 5 $\mu$s gating delay and a 10 s integration time. Trajectory density variations were countered by non-uniform sampling, achieving a signal-to-noise ratio near 1.2. Photon processing techniques included density-based clustering, iterative statistical outlier removal, and the integrated filtering strategy. Figures 12(b)-(d) and (f)-(h) show the filtered 3D profiles and XY cross-sections. The density-based clustering approach, utilizing density-based spatial clustering of applications with noise [36], effectively manages non-uniform noise but misclassifies approximately 16% of photons, attributed to fluctuations in signal density. While density-based methods struggle with sparse signal photon clustering, both iterative SOR and the proposed method effectively eliminate large-scale discrete noise. The proposed method, integrating a pruning-based local deviation filter, outperforms iterative SOR in removing close-range aliasing noise, as detailed in Fig. 12(g)-(h).

Fig. 12. Comparative denoising results for a 1 km experimental target. (a) Displays the visible image. (b) Shows the denoising results using a density-based clustering approach. (c) Presents the results of iterative SOR. (d) Depicts the results achieved with the proposed denoising method. (e) Illustrates the depth profiles. Panels (f) to (h) correspond to the X/Y point contours derived from (b) to (d), respectively.

Figure 13 shows imaging results from a building 1 kilometer away. The array detector’s average photon response rate per pixel is 29.3%, with noise photons making up about 18%. Filtering high SNR photon clouds poses challenges in separating close-range noise. In Fig. 13(b), the photon cloud, post density-based clustering, shows clear clustering on the building’s sunlit side. The statistical outlier removal method, as Fig. 13(c) depicts, leaves substantial outlier and spike noise due to its dependency on the k-nearest neighbor mean. The nearest neighbor count is fixed at 100, with a standard deviation coefficient of 0.2. The input point cloud with high SNR undergoes downsampling at a rate of 0.1. Figure 13(d) demonstrates the efficiency of the proposed method. It combines K-nearest neighbor region growing for coarse filtering, retaining significant signal photons, and fine filtering to remove excess aliasing noise, thus preserving crucial point cloud details. Table 2 contains comprehensive filtering outcomes.

Fig. 13. Denoising results for a 1 km building. (a) Shows the visible image. (b) and (c) display the denoising results for the rear building using DB-Cluster and ISOR methods, respectively, while (d) presents the results using the proposed method. Similarly, (e) to (g) respectively demonstrate the extracted 3D point cloud results of the building approximately 900 m ahead using the three denoising strategies mentioned above.


Table 2. Comparison of filtering results by different methods

Figure 14(a) shows the top of a building at a distance of 5.7 km. In this case, the detector’s activation delay was set to 35 $\mu$s, capturing a triangular rooftop sign within a 600-meter strobe range. The scenario’s average photon response rate per pixel is 6%. Overwhelming noise drowned out signal photons in the scan data, prompting the use of global gating preprocessing to isolate potential receiving pixels and thus improve the SNR of the echo data. The Time-of-Flight (ToF) count histogram, denoted as $H(k)$, accumulates photon events across array pixels for each pulse. For background suppression in $H(k)$, a rectangular window $w(k)$ of length $T_p$ targets the dispersed distance distribution:

$$\mathrm{H}^{\mathrm{T}}(\mathrm{k})=\sum_{\mathrm{j}={-}\infty}^{+\infty} \mathrm{w}(\mathrm{j}) \mathrm{H}(\mathrm{k}-\mathrm{j}+1)$$

Fig. 14. Denoising results for a distant building’s top (5.7 km). (a) Presents the visible image of the building’s top. (b) and (c) show the denoising results using ISOR and the proposed method, respectively, for a scanning duration of 10 s. Notably, the ISOR method struggles with uneven local point densities, leading to multiple misextractions. In contrast, the proposed method not only precisely extracts the point cloud contours amid significant noise but also effectively removes close-range noise. (d) and (e) depict the results for both methods with an extended scanning duration of 30 s.

Assuming $H^T(k)$’s peak time is $t_p$, pixels with $|t - t_p|$ less than $T_r$ are assigned to the strong scattering channel. Here, $T_p$ is 9 ns and $T_r$ is 50 ns. Post-preprocessing, the strong scattering channel’s photon cloud SNR is 0.02. Iterative statistical outlier removal and the proposed method are then applied for signal extraction. Notably, under extremely low SNR, the density-based clustering method is impractical. Figure 14(b)-(e) show the signal photon clouds extracted by both methods for 10 s and 30 s integration times. Despite non-uniform sampling to balance scan trajectory density, the statistical outlier removal method struggles with edge and center noise, persisting even after three filter stages. Conversely, the integrated filtering framework effectively extracts signals under low SNR, with the thinnest signal point cloud below 1 m, indicating minimal scanning blur over long distances.
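The gating step is a windowed histogram peak search; a minimal sketch with synthetic timestamps (the mock background level and target bin are illustrative):

```python
import numpy as np

def gate_histogram(tof_ns, bin_ns=1, tp_ns=9, tr_ns=50):
    """Global gating preprocessing: accumulate ToF counts into H(k), smooth
    with a Tp-long rectangular window (Eq. (21)), and keep photons within
    Tr of the peak time t_p."""
    edges = np.arange(tof_ns.min(), tof_ns.max() + bin_ns, bin_ns)
    H, _ = np.histogram(tof_ns, bins=edges)
    HT = np.convolve(H, np.ones(tp_ns // bin_ns), mode='same')   # Eq. (21)
    t_p = edges[np.argmax(HT)]
    return tof_ns[np.abs(tof_ns - t_p) < tr_ns], t_p

rng = np.random.default_rng(1)
tof = np.concatenate([rng.uniform(0, 4000, 50_000),     # background counts
                      rng.normal(2350, 3, 800)])        # mock target return
kept, t_p = gate_histogram(tof)
print(f"peak at {t_p:.0f} ns, kept {kept.size} photons")
```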

4.2 Performance discussion

Geometric calibration accuracy greatly impacts imaging quality. Figure 15 displays a forest scene’s depth profile and photon cloud details at 500 meters. Without calibration of the scanning prism assembly errors, noticeable overlap and displacement occur in the photon cloud from adjacent pulses. The geometric calibration, as detailed in Section 2.3, reduced beam pointing errors to approximately 0.14 mrad, within a 3-pixel range. Figures 15(b)-(c) show marked improvement in the point cloud quality of the antenna tower post-correction. Regarding the photon distribution interval, in the absence of noise interference, the photons recorded by a pixel represent echoes from the target’s backscattering. Assuming that the various random errors follow normal distributions, the system’s theoretical jitter standard deviation can be expressed as

$$\sigma_{\text{sys }}=\sqrt{\sigma_{\mathrm{p}}^2+\sigma_{\mathrm{r}}^2+\sigma_{\mathrm{d}}^2+\sigma_{\mathrm{m}}^2}$$
where $\sigma _p$ denotes the pulse width, $\sigma _r$ represents the time quantization resolution of the array detector, specifically 1 ns, $\sigma _m$ corresponds to the main-wave jitter with a standard deviation of approximately 150 ps, and $\sigma _d$ denotes the standard deviation of the detector’s internal jitter and synchronization time, estimated at 1.2 ns. Following error synthesis principles, the estimated jitter deviation stands at approximately 1.8 ns, equivalent to a distance of 27 cm.
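The root-sum-square synthesis is straightforward to verify (the 1 ns pulse width is taken at its stated upper bound):

```python
import numpy as np

# RSS error synthesis of Eq. (22) with the stated component values.
sigma_ns = np.sqrt(1.0**2 + 1.0**2 + 1.2**2 + 0.15**2)   # ~1.86 ns
range_cm = 0.3 / 2 * sigma_ns * 100                      # c ~ 0.3 m/ns, two-way
print(f"sigma_sys ~= {sigma_ns:.2f} ns -> ~{range_cm:.0f} cm range jitter")
```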

Fig. 15. Geometric calibration in forest scene at 500 m. (a) Displays the visible image of the forest scene. (b) and (c) illustrate the point cloud of the antenna tower before and after geometric calibration, respectively, showcasing the calibration’s impact on data accuracy. (d) and (e) present the depth contours of the scene before and after calibration, highlighting the improvements in depth perception and detail resolution.

Spatial resolution, a pivotal metric in assessing system performance, is influenced by variables such as field of view, pixel angular resolution, and scanning accuracy. The instantaneous angular resolution is ultimately limited by the optical diffraction limit and the size of individual pixels. According to the diffraction limit definition [37], the diameter of the Airy disk at the lens focus is

$$D_a=2.44 \lambda f / D$$
where $\lambda$ is the wavelength, $f$ is the focal length, and $D$ is the aperture diameter. As the Airy disk’s diameter nears a single pixel’s size, specifically 50 $\mu$m, the ultimate resolution for a single pixel reaches 0.05 mrad. Hence, the system’s theoretical angular resolution is approximately 0.2 mrad, corresponding to an imaging resolution of around 500 pixels at a 3-degree wedge-prism angle. In experiments, imaging of two small targets within 500 meters demonstrated this resolution. The photons were mapped onto an X-Y directional grid with a designated resolution, and depth contours were derived from the first photon in each cell. The theoretical resolutions for the antenna tower at 500 meters and the water tower at 300 meters were approximately 10 cm and 6 cm, respectively. Despite the bracket’s diameter being less than 20 cm, the system accurately distinguished and extracted it, as shown in Fig. 16(d) and (g). Note that limitations in scanning precision and return photon variability are not the sole determinants of the system’s spatial resolution capacity. Factors affecting scanning accuracy originate from three aspects: firstly, interference due to vibrations caused by rapid motor rotation; secondly, limitations in the pointing accuracy of the emitted beam, i.e., the ultimate calibration error, noting that the calibrated pointing vector is specific to the center pixel of the GmAPD array; thirdly, the installation position of the focal plane array relative to the receiving path, which determines the accuracy of Eq. (9), with the precision of locating the intensity image’s spot center limiting direct error calibration and necessitating minimization of detector assembly errors during co-axial adjustment of transmission and reception.
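A quick numeric check of the diffraction argument (the focal length is an assumption chosen so that the Airy disk matches the 50 $\mu$m pixel pitch; it is not stated explicitly in the text):

```python
# Diffraction-limited spot and per-pixel IFoV from the Airy relation above.
lam, f, D = 1.064e-6, 1.0, 50e-3     # wavelength, focal length (assumed), aperture
D_a = 2.44 * lam * f / D             # Airy disk diameter at the lens focus
print(f"Airy disk: {D_a * 1e6:.0f} um")                # ~52 um vs. 50 um pixel
print(f"per-pixel IFoV: {50e-6 / f * 1e3:.2f} mrad")   # ~0.05 mrad
```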

 figure: Fig. 16.

Fig. 16. Detailed imaging at various targets. (a) Shows the overall visible image. (b) Provides a zoomed-in view of the antenna tower at 500 m. (c) to (d) Depict depth contours derived from meshing the photon cloud, at resolutions of 5 cm and 10 cm, respectively, highlighting the detail captured at different resolutions. (e) Shows the point cloud of the entire scene. (f) to (h) Focus on details of the tower located at 300 m.


Figure 17 presents the imaging results over a distance of 5.2 km, capturing the upper sections of two buildings that occupy around one-third of the field of view. For this scenario, the detector's gating delay was configured to 35 $\mu$s, under a continuous 30 s scanning regime. The photon response rate per pulse for each pixel averaged 5.4%. The signal-to-noise ratio of the recorded photons was approximately 0.8%, yet the system clearly delineated the buildings. The markedly lower reflectivity of glass produced noticeable gaps in the photon cloud. To quantitatively evaluate the system's single-pulse detection efficiency, a lidar equation was therefore formulated for each receiving pixel, giving the mean count of signal photons received per pixel at an operating range $R$:

$$\mathrm{N}_{\mathrm{s}}=\eta_{\mathrm{t}} \eta_{\mathrm{r}} \eta_{\mathrm{d}} \tau^2 \rho \times \frac{\lambda \mathrm{P}_{\mathrm{t}}}{\mathrm{hc}} \times \frac{0.2 \theta_{\mathrm{r}}^2 \mathrm{D}^2}{\pi \mathrm{R}^2 \theta_{\mathrm{t}}^2}$$


Fig. 17. Long-range imaging of an experimental target at 5.2 km. (a) Presents the visible image of the target. (b) Shows the denoised 3D photon cloud captured with a scanning duration of 30s. (c) Focuses on the point cloud of the left building, where details such as windows remain discernible.


In the lidar equation, $\eta _t$ and $\eta _r$ denote the efficiencies of the transmitting and receiving units, respectively. The photon detection probability of the SPAD array, $\eta _d$, is approximately 15%. The transmitted pulse energy is represented by $P_t$, while $\theta _t$ and $\theta _r$ are the beam divergence angle and the receiving field of view of a single pixel, respectively. $D$ signifies the receiving aperture diameter, $\rho$ the target reflectance, and $\tau$ the one-way atmospheric transmittance, which enters squared for the round trip. Figure 18(a) displays the signal photon count at varying distances. According to time-correlated single-photon counting principles [38], photon arrivals at the detector that produce primary photoelectrons follow Poisson statistics. If $T_s$ is the signal photon's arrival time, its detection probability equals the product of the probability of remaining undetected until $T_s$ and the probability of detection at $T_s$:

$$P(k)=\exp \left[-\int_0^{(k-1) \Delta} N_e(t)\,\mathrm{d} t\right] \times\left\{1-\exp \left[-\int_{(k-1) \Delta}^{k \Delta}\left[N_e(t)+N_s(t)\right] \mathrm{d} t\right]\right\}$$


Fig. 18. Return photon statistics and triggering probabilities at various distances. (a) Illustrates the average number of signal photons per pixel with different distances. (b) Depicts the triggering probability in the time bin corresponding to the target (at time Ts). The solid blue and orange lines represent this probability for signal photons at reflectivities of 0.2 and 0.05, respectively. The dashed line shows the photon triggering probability per pixel for a reflectivity of 0.2 and a background count rate of 0.2 MHz.


Here, $N_e(t)$ and $N_s(t)$ represent the background and signal photon count rates, respectively. As Fig. 18(b) illustrates, the solid blue and orange lines represent the photon triggering probability from signal echoes within the 1 km to 5 km range, for reflectivities of 0.2 and 0.05, respectively. The blue dashed line shows the triggering probability during the signal interval $T_s$ under a background count rate of 0.2 MHz and a reflectivity of 0.2. At 5 km, the signal-triggering probability is 0.67%. With 0.2 MHz noise interference, this probability drops to about 0.46% within $T_s$. For low-reflectance targets such as glass, the probability diminishes significantly: a fourfold decrease in reflectance leads to a signal-triggering probability of 0.17%. Detection in low-reflectance areas can be enhanced by exploiting the overlap of array pixels across adjacent pulses or lobe trajectories for multiple sampling. For an in-depth analysis of the interplay between scanning integration time and point density, refer to Section 2.2.
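The behavior described above can be reproduced numerically from the two equations. The sketch below implements the lidar equation and the triggering probability; every constant passed to the lidar-equation function is an illustrative placeholder rather than a published system value, and the gate-opening-to-$T_s$ interval is an assumption chosen to match the quoted 0.67% to roughly 0.46% drop.

```python
import math

def mean_signal_photons(R, eta_t, eta_r, eta_d, tau, rho,
                        E_t, lam, theta_r, theta_t, D):
    """Lidar equation above: mean signal photons per pixel per pulse at
    range R. E_t is the single-pulse energy in joules; all arguments are
    placeholders, not the authors' exact system constants."""
    h, c = 6.626e-34, 2.998e8
    photons_tx = E_t * lam / (h * c)    # photons per emitted pulse
    geom = 0.2 * theta_r**2 * D**2 / (math.pi * R**2 * theta_t**2)
    return eta_t * eta_r * eta_d * tau**2 * rho * photons_tx * geom

def trigger_probability(N_s, bg_rate=0.0, t_open=0.0, delta=1e-9):
    """Detection probability at the target bin: the pixel must remain
    untriggered from gate opening to T_s, then fire within one 1 ns bin."""
    p_survive = math.exp(-bg_rate * t_open)              # no earlier count
    p_fire = 1.0 - math.exp(-(bg_rate * delta + N_s))    # count at T_s
    return p_survive * p_fire

# Invert the quoted noise-free probability at 5 km (0.67%) to get N_s,
# then add a 0.2 MHz background with an assumed ~1.9 us gate-to-target gap.
N_s = -math.log(1.0 - 0.0067)                            # ~0.0067 photons/pulse
print(trigger_probability(N_s))                          # 0.67%, by construction
print(trigger_probability(N_s, bg_rate=0.2e6, t_open=1.9e-6))  # ~0.46%
print(trigger_probability(N_s / 4.0))                    # ~0.17% at rho = 0.05
```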

5. Conclusions

This study introduces a long-range daytime 3D imaging lidar system comprising a near-infrared 1064 nm narrow-pulse laser, a 64$\times$64 InGaAs/InP GmAPD array detector, and a dual-wedge prism scanner. As outlined in the introduction, the proposed system is conceptually akin to Marino's MIT-LL system but diverges by utilizing a 64$\times$64 GmAPD array built on an InGaAs/InP substrate, enabling near-infrared operation, in contrast to Marino's silicon-based 32$\times$32 detector array responsive to 532 nm [21]. Even with a smaller receiving aperture, the higher pulse energy of the proposed system extends the operating range well beyond the MIT-LL system's capabilities. An optimized integrated filtering framework enables the system to reliably separate point clouds from the massive, noisy raw 3D scanning photon data.

Field experiments revealed that achieving a dense photon cloud for distant targets typically requires multiple scanning cycles, constrained by laser energy and the emission field of view. Although enlarging the receiver aperture can boost signal strength, the prism diameter limits this expansion. Alternatives such as integrating a large-field-of-view Cassegrain telescope into the scanning structure were considered, but they face challenges of bulk and cost. Another challenge identified is the discrepancy between the emission and reception fields of view caused by the laser propagation delay, particularly in over-the-horizon imaging applications; this discrepancy degrades transceiver efficiency and complicates photon coordinate computation. Given the continuous rotation of the prisms, angle compensation through motor adjustment is difficult, so reducing the rotation speed is a practical remedy at longer distances. Furthermore, in non-scanning array-based imaging lidar studies, illumination efficiency is often enhanced by transforming non-flat-top laser beams into beamlet arrays using fiber arrays or diffractive optical elements. In this study, we instead employ a direct illumination method designed for small instantaneous fields of view. This approach effectively addresses the transceiver field offset induced by scanning, preventing misalignment between the illumination grid and the receiving pixels caused by prism rotation.

In conclusion, this research contributes a promising approach for long-range, high-density point acquisition in single-photon 3D imaging lidar systems. Future research directions include integrating this lidar with a high-precision pose measurement system and airborne electronic systems for applications such as aerial remote sensing and forest canopy density assessment. The focus will be on enhancing the optoelectronic transceiver system and developing advanced denoising algorithms to optimize the lidar in airborne 3D imaging environments.

Funding

Innovation Program for Quantum Science and Technology (2021ZD0300304); Shanghai Municipal Science and Technology Major Project (2019SHZDZX01); National Natural Science Foundation of China (42241169); Innovation Foundation of Shanghai Institute of Technical Physics (CX-368, CX-482).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. McCarthy, R. J. Collins, N. J. Krichel, et al., “Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting,” Appl. Opt. 48(32), 6241–6251 (2009). [CrossRef]  

2. Z. Li, E. Wu, C. Pang, et al., “Multi-beam single-photon-counting three-dimensional imaging lidar,” Opt. Express 25(9), 10189–10195 (2017). [CrossRef]  

3. F. Piron, D. Morrison, M. R. Yuce, et al., “A review of single-photon avalanche diode time-of-flight imaging sensor arrays,” IEEE Sens. J. 21(11), 12654–12666 (2021). [CrossRef]  

4. N. S. Prasad and A. R. Mylapore, “High-speed wide-angle interleaved scanning technique for a 3D imaging lidar,” J. Opt. Soc. Am. B 38(10), D22–D27 (2021). [CrossRef]  

5. A. McCarthy, N. J. Krichel, N. R. Gemmell, et al., “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express 21(7), 8904–8915 (2013). [CrossRef]  

6. J. J. Degnan, “Scanning, multibeam, single photon lidars for rapid, large scale, high resolution, topographic and bathymetric mapping,” Remote Sens. 8(11), 958 (2016). [CrossRef]  

7. Z. P. Li, X. Huang, Y. Cao, et al., “Single-photon computational 3D imaging at 45 km,” Photonics Res. 8(9), 1532–1540 (2020). [CrossRef]  

8. Z. P. Li, J.-T. Ye, X. Huang, et al., “Single-photon imaging over 200 km,” Optica 8(3), 344–349 (2021). [CrossRef]  

9. S. Chan, A. Halimi, F. Zhu, et al., “Long-range depth imaging using a single-photon detector array and non-local data fusion,” Sci. Rep. 9(1), 8075 (2019). [CrossRef]  

10. M. A. Albota, B. F. Aull, D. G. Fouche, et al., “Three-dimensional imaging laser radars with Geiger-mode avalanche photodiode arrays,” Linc. Lab. J. 13, 351–370 (2002).

11. J. Liu, Y. Xu, Y. Li, et al., “Exploiting the single-photon detection performance of InGaAs negative-feedback avalanche diode with fast active quenching,” Opt. Express 29(7), 10150–10161 (2021). [CrossRef]  

12. Q. Hao, Y. Tao, J. Cao, et al., “Development of pulsed-laser three-dimensional imaging flash lidar using APD arrays,” Microw. Opt. Technol. Lett. 63(10), 2492–2509 (2021). [CrossRef]  

13. P. A. Hiskett, K. J. Gordon, J. W. Copley, et al., “Long range 3D imaging with a 32×32 Geiger-mode InGaAs/InP camera,” in Advanced Photon Counting Techniques VIII, vol. 9114 (SPIE, 2014), pp. 67–79.

14. X. Jiang, S. Wilton, I. Kudryashov, et al., “InGaAsP/InP Geiger-mode APD-based lidar,” in Optical Sensing, Imaging, and Photon Counting: From X-Rays to THz, vol. 10729 (SPIE, 2018), pp. 33–44.

15. M. E. Szymanski, E. A. Watson, and D. J. Rabb, “Coherent sensing performance comparison of framed and asynchronous GmAPD arrays,” Appl. Opt. 60(25), G55–G63 (2021). [CrossRef]  

16. P. Y. Jiang, Z. P. Li, W. L. Ye, et al., “Long range 3D imaging through atmospheric obscurants using array-based single-photon lidar,” Opt. Express 31(10), 16054–16066 (2023). [CrossRef]  

17. H. C. Ni, J. F. Sun, L. Ma, et al., “Research on 3D image reconstruction of sparse power lines by array Gm-APD lidar,” Opt. Laser Technol. 168, 109987 (2024). [CrossRef]  

18. Y. Kang, R. Xue, X. Wang, et al., “High-resolution depth imaging with a small-scale SPAD array based on the temporal-spatial filter and intensity image guidance,” Opt. Express 30(19), 33994–34011 (2022). [CrossRef]  

19. C. Callenberg, A. Lyons, D. den Brok, et al., “Super-resolution time-resolved imaging using computational sensor fusion,” Sci. Rep. 11(1), 1689 (2021). [CrossRef]  

20. J. Degnan, D. Wells, R. Machan, et al., “Second generation airborne 3D imaging lidars based on photon counting,” in Advanced Photon Counting Techniques II, vol. 6771 (SPIE, 2007), pp. 117–123.

21. R. M. Marino and W. R. Davis, “Jigsaw: a foliage-penetrating 3D imaging laser radar system,” Linc. Lab. J. 15, 23–36 (2005).

22. M. Henriksson and P. Jonsson, “Photon-counting panoramic three-dimensional imaging using a Geiger-mode avalanche photodiode array,” Opt. Eng. 57(09), 1 (2018). [CrossRef]  

23. J. J. Degnan and D. E. Smith, “A conceptual design for a spaceborne 3D imaging lidar,” Elektrotech. Inftech. 119(4), 99–106 (2002). [CrossRef]  

24. A. H. Li, X. S. Liu, J. F. Sun, et al., “Risley-prism-based multi-beam scanning lidar for high-resolution three-dimensional imaging,” Opt. Lasers Eng. 150, 106836 (2022). [CrossRef]  

25. Z. Liu, F. Zhang, and X. P. Hong, “Low-cost retina-like robotic lidars based on incommensurable scanning,” IEEE/ASME Trans. Mechatron. 27(1), 58–68 (2022). [CrossRef]  

26. V. F. Duma and A. L. Dimb, “Exact scan patterns of rotational Risley prisms obtained with a graphical method: Multi-parameter analysis and design,” Appl. Sci. 11(18), 8451 (2021). [CrossRef]  

27. Y. Li, “Third-order theory of the Risley-prism-based beam steering system,” Appl. Opt. 50(5), 679–686 (2011). [CrossRef]  

28. H. Zhang, Y. Yuan, L. J. Su, et al., “Beam steering uncertainty analysis for Risley prisms based on Monte Carlo simulation,” Opt. Eng. 56(1), 10 (2017). [CrossRef]  

29. J. S. Homg and Y. J. Li, “Error sources and their impact on the performance of dual-wedge beam steering systems,” Appl. Opt. 51(18), 4168–4175 (2012). [CrossRef]  

30. L. Fraiture, “A history of the description of the three-dimensional finite rotation,” J. Astronaut. Sci. 57(1-2), 207–232 (2009). [CrossRef]  

31. J. S. Dai, “Euler-Rodrigues formula variations, quaternion conjugation and intrinsic connections,” Mech. Mach. Theory 92, 144–152 (2015). [CrossRef]  

32. X. Wang, C. Glennie, and Z. G. Pan, “An adaptive ellipsoid searching filter for airborne single-photon lidar,” IEEE Geosci. Remote Sensing Lett. 14(8), 1258–1262 (2017). [CrossRef]  

33. H. Tang, A. Swatantran, T. Barrett, et al., “Voxel-based spatial filtering method for canopy height retrieval from airborne single-photon lidar,” Remote Sens. 8(9), 771 (2016). [CrossRef]  

34. R. B. Rusu, Z. C. Marton, N. Blodow, et al., “Towards 3D point cloud based object maps for household environments,” Robotics Auton. Syst. 56(11), 927–941 (2008). [CrossRef]  

35. H. Balta, J. Velagic, W. Bosschaerts, et al., “Fast statistical outlier removal based method for large 3D point clouds of outdoor environments,” IFAC Symp. on Robot Control. 51(22), 348–353 (2018). [CrossRef]  

36. M. Ester, H.-P. Kriegel, J. Sander, et al., “A density-based algorithm for discovering clusters in large spatial databases with noise,” in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, vol. 96 (1996), pp. 226–231.

37. B. Zhang, J. Zerubia, and J.-C. Olivo-Marin, “Gaussian approximations of fluorescence microscope point-spread function models,” Appl. Opt. 46(10), 1819–1829 (2007). [CrossRef]  

38. M. S. Oh, H. J. Kong, T. H. Kim, et al., “Reduction of range walk error in direct detection laser radar using a Geiger-mode avalanche photodiode,” Opt. Commun. 283(2), 304–308 (2010). [CrossRef]  



