
High precision 3D imaging with timing corrected single photon LiDAR

Open Access

Abstract

Single-photon light detection and ranging (LiDAR) is an important technique for high-precision, long-distance three-dimensional (3D) imaging. However, due to the effects and native limitations of system components, ranging errors arise when using a LiDAR system. For LiDAR systems that require a trigger detector to provide synchronization signals, fluctuations in laser pulse energy shift the initial time identified by the constant-threshold-triggered timing module and subsequently lead to ranging errors. In this paper, we build a dual-SPAD LiDAR system to avoid the ranging error caused by the fluctuation of laser pulse energy. By adding a reference optical path, the flight time of signal photons is corrected with reference photons, thereby correcting the range measurement. A series of experiments demonstrates that the proposed LiDAR system is capable of high-precision ranging and 3D imaging: it achieves a range of error of 0.15 mm and a range resolution of 0.3 mm at a distance of 29 m.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Time-of-flight (TOF) light detection and ranging (LiDAR) has been widely used in a variety of remote sensing applications, including environmental monitoring, autonomous driving, spacecraft navigation, and three-dimensional (3D) mapping [1–4]. With the introduction of the single-photon avalanche photodiode (SPAD) and time-correlated single-photon counting (TCSPC) technology, single-photon LiDAR has emerged [5–9]. Compared with conventional LiDAR, single-photon LiDAR offers higher sensitivity and precision in detecting the echo pulse and calculating the TOF [10–12]. This greatly benefits high-precision 3D imaging, which is extremely important for many emerging fields [13–15].

Ranging precision is an important index for evaluating the performance of ranging systems [16–18]. In single-photon LiDAR systems, the TOF of laser pulses is measured directly to calculate the distance between the system and the target; the ranging precision of a single-photon LiDAR system is therefore fundamentally determined by the measurement precision of the TOF. For a TCSPC module triggered at a constant threshold, fluctuations of the initial signals and stop signals cause identification errors in the initial time and stop time [19,20]. Usually, the initial signals of single-photon LiDAR systems are provided by a trigger detector that detects the laser pulses, and the stop signals come from the output of a SPAD [11,21]. When the energy of the laser pulses fluctuates, the trigger signals output by the trigger detector fluctuate accordingly, causing an identification error in the initial time and ultimately degrading the ranging precision. Fortunately, the steady output of the SPAD ensures the stability of the stop signals, so the discrimination error at the stop time is small enough to be ignored [22]. In addition, single-photon LiDAR needs to record multiple signal photons to complete a range measurement [23,24], so the discrimination error of the stop time accumulated over multiple recordings can be treated as internal time jitter of the system, whose effect on ranging precision is greatly reduced during signal processing. Therefore, decreasing the timing error caused by pulse energy fluctuation is the key to high-precision 3D imaging with single-photon LiDAR [25,26].

Generally, researchers improve the TCSPC module with a constant fraction discriminator (CFD), which takes a constant fraction of the start signal peak as the initial time, thereby reducing the impact of pulse energy fluctuation on ranging precision [11,27–29]. However, a CFD also has timing error due to technical limitations and cannot completely eliminate the ranging error caused by changes in the initial time. Moreover, it is difficult to find a suitable CFD for some TCSPC modules. Other researchers add a reference surface to the single-photon LiDAR system to calculate the relative TOF of signal photons between the reference and the target and thus obtain an accurate distance [5,29,30]. In this scheme, a single SPAD must detect both signal photons and reference photons. However, a SPAD has a dead time after detecting a photon, during which no photons can be detected [31]. To keep the reference photons and signal photons from interfering with each other, the total photon count rate must be reduced, which slows down the imaging. Therefore, it is necessary to find a more suitable method that improves ranging precision without introducing secondary problems.

In this paper, we build a LiDAR system with dual SPADs, and without a CFD, to correct the initial time error caused by pulse energy fluctuation. Dual-SPAD LiDAR systems exist in which both SPADs are placed in the same signal optical path to extract signal photons from high background noise or to avoid the walk error [32–34]. Different from that configuration, in our work a reference optical path is added to the conventional single-photon LiDAR system. Based on this framework, two modified algorithms are proposed to correct the TOF of the signal photons. Experimental results and analysis demonstrate that our method largely eliminates the influence of laser pulse energy fluctuations on the ranging precision and achieves high-precision 3D reconstruction.

2. Methods

A schematic of the timing corrected LiDAR system with dual SPADs is shown in Fig. 1, and the main system parameters are summarized in Table 1. Specifically, a solid-state laser serves as the light source for active illumination. The output laser pulse is split by a beam splitter (BS) with a splitting ratio of 9:1. The 10% part from the BS is directed to an avalanche photodiode (APD), whose output provides the start trigger signal for the TCSPC module. The 90% part passes through a half-wave plate (HWP1) and is split again by a polarization beam splitter (PBS1). HWP1 is used to orient the polarization to maximize the transmission efficiency. The vertically polarized light, carrying a small part of the energy, is reflected into the reference path and received by SPAD1. The horizontally polarized light, carrying most of the energy, is collimated by the beam expander composed of L1 and L2 and then transmitted to the optical transceiver.


Fig. 1. Schematic of the timing corrected LiDAR system with dual SPADs. Optical components include: beam splitter (BS); polarization beam splitter (PBS1, PBS2); avalanche photodiode (APD); half-wave plate (HWP1, HWP2); bandpass filter (BPF1, BPF2); multimode fiber (MMF1, MMF2); single photon avalanche photodiode (SPAD1, SPAD2); mirror (M); lenses (L1, L2, L3, L4); scanning mirrors (SM); objective lens (OL); time-correlated single photon counting (TCSPC); data acquisition (DAQ).



Table 1. Summary of the Main System Parameters

The transmit and receive paths of the optical transceiver are configured to be coaxial, with PBS2 acting as the optical transfer switch. Scanning mirrors (SM) raster scan the light beam over the target object. A lens (L4) and an objective lens (OL) both collimate the illumination beam and collect the backscattered photons from the target. The collected photons from the receive path are focused by L3 in the detection path and delivered to SPAD2. The SPADs in the reference path and the signal detection path are identical, to avoid differences in photon detection. Both SPADs receive photons through a bandpass filter (BPF) for background suppression and a multimode fiber (MMF) to increase coupling efficiency. The two SPADs respectively provide stop signals for the two time-to-digital converters (TDCs) of the TCSPC module.

The TCSPC module records time stamps of the detection events and transfers them to the computer. Software then generates a time histogram for each scanned pixel, from which the depth information of that pixel is obtained. Generally, the time-correlated cross-correlation (CC) algorithm [11,23] is adopted to analyze the histogram and determine the depth. Specifically, the method calculates the time $t$ that maximizes the cross-correlation $C$ between the signal photon time histogram $H$ acquired from SPAD2 and the system instrumental response function (IRF) $R$. That is

$$t(x,y)=\underset{t} {\arg \max} \, C_{t}(x,y)=\underset{t} {\arg \max} \, \sum_{i=1}^{T}{H_{t+i}(x,y)R_{i}},$$
where $H_{t}$ is the value of the time histogram at the $t^{th}$ bin and $T$ is the total number of timing bins. The pixel coordinate $(x,y)$ indicates that the above data acquisition and processing are performed pixel by pixel via raster scanning with the SM. The IRF is determined by the pulse waveform and the time jitter of each component of the system, and it can be measured by directly detecting the signal photons: when the laser pulse is detected directly by the SPAD over a sufficiently long time, the resulting time distribution histogram of signal photons stabilizes and can be regarded as the IRF of the system. Using this method, the time position of the maximum cross-correlation, corresponding to the depth $z$, can be found; then we have
$$z(x,y)=\frac{1}{2}c(t(x,y)-t_{0}),$$
where $c$ is the speed of light in the transmitting medium and $t_{0}$ is the time delay of the system. By combining the spatial information $(x,y)$ of the scanning points with the corresponding measured depth $z$, a depth profile of the target can be reconstructed. The schematic of the CC algorithm is shown in Fig. 2(a).
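The per-pixel CC estimate above can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' code: the function names, the circular-correlation convention, and the handling of the system delay $t_0$ are our own choices.

```python
import numpy as np

C_MM_PER_PS = 0.299792458  # speed of light in vacuum, mm per picosecond


def circ_corr(hist, irf):
    """Circular cross-correlation: corr[t] = sum_i hist[(t + i) % T] * irf[i]."""
    T = len(hist)
    return np.correlate(np.concatenate([hist, hist]), irf, mode="valid")[:T]


def cc_depth_mm(hist, irf, bin_ps=1.0, t0_ps=0.0):
    """CC algorithm: depth of one pixel from its signal histogram and the fixed IRF.

    hist : time histogram H of signal photons for one pixel (length T)
    irf  : measured instrumental response function R (length T)
    """
    t = int(np.argmax(circ_corr(np.asarray(hist, float),
                                np.asarray(irf, float))))
    # depth formula above: z = c/2 * (t - t0)
    return 0.5 * C_MM_PER_PS * (t * bin_ps - t0_ps)
```

A delayed copy of the IRF should be located at exactly its delay, which is how the measured IRF is used as the fixed reference waveform in Fig. 2(a).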


Fig. 2. Schematics of different algorithms to obtain the TOF. The system instrumental response function (the fixed reference waveform), black; the time distribution histogram of detected reference photons, blue; the time distribution histogram of detected signal photons, red; the cross-correlation curve, green. $\otimes$ represents the cross-correlation operation. (a) The cross-correlation between fixed reference histogram and signal photons distribution histogram to obtain $t$ (CC algorithm). (b) The cross-correlation between fixed reference histogram and signal photons distribution histogram to obtain $t$, and the cross-correlation between fixed reference histogram and dynamic reference photons distribution histogram to obtain $\delta t$ (CCC algorithm). (c) The cross-correlation between dynamic reference photons distribution histogram and signal photons distribution histogram to obtain $t'$ (DCC algorithm).


However, due to the fluctuation of pulse energy, the trigger signals fluctuate synchronously, so the initial time of the TCSPC module varies from pulse to pulse, as shown in Fig. 3(a). It should be emphasized that the variation of the initial time may be larger than the width of the laser pulse. The reason is that the waveform of the trigger signal is determined by both the waveform of the laser pulse and the bandwidth of the APD. Due to the limited bandwidth of the APD, the trigger signal is greatly stretched compared with the laser pulse, and for a wide trigger signal even a small fluctuation in energy leads to a significant change in the initial time. In contrast, the TOF of the signal photons reflected from a point at a given depth is constant. The time distribution histogram of signal photons therefore shifts as a whole when the initial time changes, which introduces a time error when calculating the TOF of signal photons with the CC algorithm and leads to a ranging error in the subsequent depth calculation. To demonstrate the influence of laser energy fluctuations on the ranging results, we repeatedly measured the depth of a single point 1000 times, with an acquisition time of 500 ms per measurement. Figure 3(b) shows the ranging results obtained by the CC algorithm when the laser pulse energy fluctuates strongly. The ranging error is determined by the time position error of the signal photons in the time distribution histogram, which originates from the initial time error of the TCSPC module. Figure 3(c) shows the accumulated photon count detected in the reference path for each measurement; this count is proportional to the average laser pulse energy over the measurement time. The variation of the ranging error is highly consistent with the variation of the photon count, providing solid evidence that the ranging error discussed here is closely related to the energy fluctuation of the laser pulse. Considering the principle of threshold triggering shown in Fig. 3(a), we infer that the energy fluctuation leads to an initial time error, which in turn leads to a ranging error.


Fig. 3. The influence of the energy fluctuation of the laser pulse. (a) Schematic of the fluctuation of trigger signals caused by different laser pulse energies. (b) The ranging results of 1000 measurements obtained by the CC algorithm. (c) The count of photons detected from the reference optical path.


Although the initial time changes, the relative time between reference photons arriving at SPAD1 and signal photons arriving at SPAD2 does not. We can therefore obtain the correction time $\delta t$ for each pixel by finding the maximum of the cross-correlation between the dynamic histogram $H'$ of the reference photons from SPAD1 and the IRF $R$, such that

$$\delta t(x,y)=\underset{\delta t} {\arg \max} \, C_{\delta t}(x,y)=\underset{\delta t} {\arg \max} \, \sum_{i=1}^{T}{H'_{\delta t+i}(x,y)R_{i}},$$
and then obtain more accurate depth information, such that
$$z'(x,y)=\frac{1}{2}c(t(x,y)-\delta t(x,y)-t'_{0}),$$
where $t'_{0}$ is the system time delay considering the time delay of the reference path. We refer to the improved method as corrected cross-correlation (CCC) and the schematic is shown in Fig. 2(b).
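The CCC correction can be sketched as two argmax searches sharing one IRF: $t$ from the signal histogram, $\delta t$ from the reference histogram, and the depth from their difference. Again this is a minimal illustrative sketch (names and the treatment of $t'_0$ are our own), not the authors' implementation.

```python
import numpy as np

C_MM_PER_PS = 0.299792458  # speed of light in vacuum, mm per picosecond


def circ_corr(hist, kernel):
    """Circular cross-correlation: corr[t] = sum_i hist[(t + i) % T] * kernel[i]."""
    T = len(hist)
    return np.correlate(np.concatenate([hist, hist]), kernel, mode="valid")[:T]


def ccc_depth_mm(sig_hist, ref_hist, irf, bin_ps=1.0, t0_ps=0.0):
    """CCC algorithm: signal TOF corrected by the reference-path TOF.

    t       : argmax of (signal histogram x IRF)
    delta_t : argmax of (dynamic reference histogram x IRF)
    depth   : c/2 * ((t - delta_t) - t0'), with t0' the system delay
              including the reference path.
    """
    irf = np.asarray(irf, float)
    t = int(np.argmax(circ_corr(np.asarray(sig_hist, float), irf)))
    dt = int(np.argmax(circ_corr(np.asarray(ref_hist, float), irf)))
    return 0.5 * C_MM_PER_PS * ((t - dt) * bin_ps - t0_ps)
```

Because any start-time jitter shifts both histograms by the same amount, it cancels in $t - \delta t$, which is the point of the correction.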

Additionally, we can directly find the time that maximizes the cross-correlation between the signal photon time histogram and the dynamic reference photon time histogram, and then calculate the relative depth. Although the two histograms are recorded by different SPADs and TDCs, the two TDCs share the same start trigger and start at the same time, so the two histograms can be combined into one. For each pixel, we calculate the cross-correlation between the dynamic reference photon time histogram and the combined time histogram. When the dynamic reference histogram coincides with itself, the cross-correlation reaches one maximum; when it best matches the signal histogram, it reaches another maximum. The time difference between the two maxima reflects the depth information of the pixel. Moreover, the time position at which the reference histogram coincides with itself is fixed in the cross-correlation operation, so it can be obtained directly and used as the origin of the time axis for the cross-correlation result. In this way, the dynamic reference histogram $H'$ can be directly cross-correlated with the signal histogram $H$ to solve for the time $t'$, such that

$$t'(x,y)=\underset{t'} {\arg \max} \, C_{t'}(x,y)=\underset{t'} {\arg \max} \, \sum_{i=1}^{T}{H'_{t'+i}(x,y)H_{i}(x,y)},$$
and then the depth information can be obtained, such that
$$z^{\prime\prime}(x,y)=\frac{1}{2}c(t'(x,y)-t'_{0}).$$

We refer to the simplified method as dynamic cross-correlation (DCC) and the schematic is shown in Fig. 2(c).
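The combined-histogram reading of DCC described above can be sketched as follows. The `guard` window that masks the self-coincidence peak near $t = 0$ is our own illustrative choice, as are the function names; the offset $t'_0$ is omitted for brevity.

```python
import numpy as np


def circ_corr(a, kernel):
    """Circular cross-correlation: corr[t] = sum_i a[(t + i) % T] * kernel[i]."""
    T = len(a)
    return np.correlate(np.concatenate([a, a]), kernel, mode="valid")[:T]


def dcc_delay_bins(sig_hist, ref_hist, guard=8):
    """DCC algorithm: relative TOF (in timing bins) between reference and signal.

    The two histograms share the same start trigger, so they are combined on
    one time axis and cross-correlated with the dynamic reference histogram.
    The self-coincidence peak sits at t = 0; the next maximum marks the signal
    delay. `guard` bins around t = 0 are masked (an illustrative choice that
    must exceed the pulse width).
    """
    ref = np.asarray(ref_hist, float)
    comb = ref + np.asarray(sig_hist, float)  # combined histogram
    corr = circ_corr(comb, ref)
    corr[:guard] = -np.inf                    # mask the self-coincidence peak
    corr[-guard:] = -np.inf                   # ...and its circular wrap-around
    return int(np.argmax(corr))               # relative delay in bins
```

Multiplying the returned bin delay by the bin width and $c/2$ then yields the relative depth, up to the fixed system delay.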

3. Experiments and results

To demonstrate the feasibility and capability of the proposed methods, we carry out three experiments: a single-point ranging precision experiment, 3D imaging of a designed range resolution test target, and 3D imaging of a test object. All targets are placed 29 m away from the system. The transverse spatial resolution, determined by the collimated optical beam diameter, is about 2 mm. The count rate of the two SPADs is kept below 5% of the laser repetition rate to avoid the distorting pileup effect [35–37] on the imaging results. The time resolution of the TCSPC is set to 1 ps, corresponding to a depth resolution of 0.15 mm.
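The quoted 0.15 mm follows directly from the 1 ps bin width: light travels roughly 0.3 mm per picosecond, and the round trip halves the per-bin depth. A one-line check, assuming the vacuum speed of light:

```python
C_MM_PER_PS = 0.299792458  # speed of light in vacuum, mm per picosecond


def bin_to_depth_mm(bin_width_ps):
    """Depth quantization step for a given TCSPC bin width (half the round trip)."""
    return 0.5 * C_MM_PER_PS * bin_width_ps
```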

To characterize the ranging precision of the system, depth measurements of a single point are repeated 100 times, and the results are evaluated statistically using the standard deviation (STD) and the range of error (ROE), defined as the difference between the maximum and minimum measured depths. To better show the fluctuations across the measurements, the minimum ranging value is subtracted from each result. The ranging results obtained for acquisition times of 20, 50, 100, 200, and 500 ms are shown in Fig. 4, and the statistical error values are shown in Fig. 5. The different performances of the CC, CCC, and DCC algorithms can be observed from the corresponding figures. Due to the fluctuation of the laser pulse energy, the ranging precision of the traditional CC algorithm is low; moreover, since the fluctuation is irregular, the ranging quality of the CC algorithm cannot be improved by increasing the acquisition time. However, the ranging results in the first two rows drift synchronously because the two SPADs respond identically to the energy fluctuation of the laser pulse. This is also why the proposed CCC and DCC algorithms, which combine data from the two SPADs, significantly improve the ranging precision. The CCC algorithm achieves higher ranging precision than the DCC algorithm at relatively short acquisition times, because the counts of signal and reference photons are then so low that the generated histograms are unstable, leading to larger errors in their direct cross-correlation by the DCC algorithm. As the acquisition time increases to 100 ms or more, the two histograms become stable and similar, and the DCC algorithm begins to outperform the CCC algorithm. Notably, 97 of the 100 ranging results obtained with the DCC algorithm are identical when the acquisition time is 500 ms: the corresponding ROE is only 0.15 mm, and the standard deviation is as low as 0.026 mm.
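The two statistics used throughout this section can be computed as follows; this is a minimal sketch, with `np.ptp` (peak-to-peak) being exactly the max-minus-min ROE definition above.

```python
import numpy as np


def ranging_stats(depths_mm):
    """STD and ROE (max - min) over repeated single-point depth measurements."""
    d = np.asarray(depths_mm, float)
    return float(np.std(d)), float(np.ptp(d))
```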


Fig. 4. Single-point ranging results with different algorithms and different acquisition times. (a) to (e) are the calculated depth values obtained for acquisition times of 20, 50, 100, 200, and 500 ms, respectively. The first and second rows are the ranging results obtained from the reference photons and the signal photons, respectively, both processed by the CC algorithm. The third and fourth rows are the ranging results obtained by the CCC and DCC algorithms, respectively.



Fig. 5. (a) STD and (b) ROE of the single point over different acquisition times.


It should be noted that the ROE of the CC algorithm reaches 1.35 mm, which is larger than the distance corresponding to the laser pulse width. The reason is that the ranging error is determined by the fluctuation of the trigger signal, as discussed in the Methods, and this fluctuation is caused by the energy fluctuation of the laser pulse. The ranging error therefore grows with the energy fluctuation of the laser pulse. For the same magnitude of trigger signal fluctuation, the wider the trigger signal waveform, the larger the resulting ranging error. A higher APD bandwidth would reduce the stretching of the trigger signal and thereby reduce the ranging error.

To validate the range resolution of the system, we then carry out 3D imaging with 55 $\times$ 55 pixels on a designed depth resolution test target. The target is a 3D-printed plate of 110 mm $\times$ 110 mm with 9 square stages of 20 mm $\times$ 20 mm at different depths (see Fig. 6(a)). The depth of the stages ranges from 0.3 to 2.7 mm in steps of 0.3 mm. Balancing imaging speed and quality, the dwell time for each pixel is set to 50 ms, giving a total data acquisition time of 151 s. As shown in Fig. 6(b), the 3D imaging result obtained with the CC algorithm contains large errors; some pixels have errors of more than 10 mm, making it impossible to distinguish the different heights. The 3D imaging results obtained with the proposed CCC and DCC algorithms are shown in Fig. 6(c) and (d), respectively. Both clearly distinguish heights of 0.6 mm and larger, and even the 0.3 mm step from the surrounding background, achieving a depth resolution of 0.3 mm.


Fig. 6. Comparison of reconstructed depth distribution of designed target using different algorithms. (a) The ground truth of the measured depth resolution plate. (b) The 3D imaging result reconstructed by the CC algorithm. (c) The 3D imaging result reconstructed by the CCC algorithm. (d) The 3D imaging result reconstructed by the DCC algorithm.


Finally, to demonstrate the capability of high-precision 3D imaging with the proposed system and algorithms, we perform 3D scanning and reconstruction of the face model shown in Fig. 7(a). The pixel resolution is 100 $\times$ 100, and the dwell time per pixel is set to 10 ms to accelerate the scanning. The 3D images reconstructed by the CC, CCC, and DCC algorithms are shown in Fig. 7(b) to 7(d). In the result of the CC algorithm, only the outline and nose of the face are visible, while finer features such as the eyes and mouth cannot be recognized. Moreover, the fluctuating depth error produces bumps and depressions along the horizontal direction of the image, so relatively flat regions such as the forehead, cheeks, and background become uneven. In contrast, the CCC and DCC algorithms restore the shape of the face model with high quality, and all the facial features can be clearly recognized.


Fig. 7. Comparison of reconstructed 3D images of face model using different algorithms. (a) The close-up photographs of the model from two different viewpoints. (b) The 3D imaging result reconstructed by CC algorithm. (c) The 3D imaging result reconstructed by CCC algorithm. (d) The 3D imaging result reconstructed by DCC algorithm.


Although the working distance in the above experiments is 29 m, the laser power is in fact high enough for the system to operate at much longer distances. The main adverse effect of increasing the working distance is a decrease in lateral resolution caused by the divergence of the laser beam, which can be mitigated by super-resolution algorithms [38].

4. Conclusion

In conclusion, we demonstrate that the timing corrected LiDAR system with dual SPADs can achieve high-precision 3D imaging. By adding a reference optical path to the single-photon LiDAR system, two SPADs are used to detect reference photons and signal photons, respectively. Two improved depth reconstruction algorithms, based on the traditional time-correlated cross-correlation algorithm, are then proposed to correct the TOF of the signal photons and finally obtain high-precision 3D imaging. The superiority of the proposed methods has been verified by a series of experiments. Through single-point ranging experiments, we show that the standard deviation over 100 measurements is only 0.026 mm and the range of error is 0.15 mm for an acquisition time of 500 ms. We also demonstrate a range resolution of 0.3 mm by depth reconstruction of the test target. Finally, the capability of high-precision 3D reconstruction with the proposed system is proved by 3D imaging of the face model. More importantly, the improved methods compensate for the limitations of the system components and have universal applicability. Nevertheless, our system needs to record the TOF of a large number of signal photons to ensure high precision, so the imaging speed is not yet satisfactory. In future work, we will focus on reducing the required number of signal photons while maintaining the ranging precision.

Funding

National Natural Science Foundation of China (12274262); National Key Research and Development Program of China (2022YFC2807702); Sino-German Center Mobility Programs (M-0044); Shandong Key Research and Development Programs (2020CXGC010104); State Key Laboratory of Precision Measuring Technology and Instruments (pilab2205); Shandong University Joint Fund.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. M. Amann, T. Bosch, M. Lescure, R. Myllyla, and M. Rioux, “Laser ranging: a critical review of usual techniques for distance measurement,” Opt. Eng. 40(1), 10–19 (2001). [CrossRef]  

2. J. Degnan, “Photon-counting multikilohertz microlaser altimeters for airborne and spaceborne topographic measurements,” J. Geodyn. 34(3-4), 503–549 (2002). [CrossRef]  

3. E. Repasi, P. Lutzmann, O. Steinvall, and M. Elmqvist, “Mono-and bi-static swir range-gated imaging experiments for ground applications,” Proc. SPIE 7114, 71140D (2008). [CrossRef]  

4. D. Mao, J. F. McGarry, E. Mazarico, G. A. Neumann, X. Sun, M. H. Torrence, T. W. Zagwodzki, D. D. Rowlands, E. D. Hoffman, J. E. Horvath, J. E. Golder, M. K. Barker, D. E. Smith, and M. T. Zuber, “The laser ranging experiment of the Lunar Reconnaissance Orbiter: Five years of operations and data analysis,” Icarus 283, 55–69 (2017). [CrossRef]  

5. M. Umasuthan, A. Wallace, J. Massa, G. Buller, and A. Walker, “Processing time-correlated single photon counting data to acquire range images,” IEE Proc., Vis. Image Process. 145(4), 237–243 (1998). [CrossRef]  

6. A. Wallace, G. Buller, and A. Walker, “3D imaging and ranging by time-correlated single photon counting,” Comput. Control Eng. J. 12(4), 157–168 (2001). [CrossRef]  

7. W. Becker, A. Bergmann, M. Kacprzak, and A. Liebert, “Advanced time-correlated single photon counting technique for spectroscopy and imaging of biological systems,” in Fourth International Conference on Photonics and Imaging in Biology and Medicine, vol. 6047 (SPIE, 2006), pp. 261–265.

8. G. S. Buller and A. M. Wallace, “Ranging and three-dimensional imaging using time-correlated single-photon counting and point-by-point acquisition,” IEEE J. Sel. Top. Quantum Electron. 13(4), 1006–1015 (2007). [CrossRef]  

9. A. McCarthy, R. J. Collins, N. J. Krichel, V. Fernandez, A. M. Wallace, and G. S. Buller, “Long-range time-of-flight scanning sensor based on high-speed time-correlated single-photon counting,” Appl. Opt. 48(32), 6241–6251 (2009). [CrossRef]  

10. B. Aull, A. Loomis, D. Young, R. Heinrichs, B. Felton, P. Daniels, and D. Landers, “Geiger-mode avalanche photodiodes for three-dimensional imaging,” Lincoln Laboratory J. 13(2), 335–350 (2002).

11. A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, “Kilometer-range, high resolution depth imaging via 1560 nm wavelength single-photon detection,” Opt. Express 21(7), 8904–8915 (2013). [CrossRef]  

12. Z.-P. Li, J.-T. Ye, X. Huang, P.-Y. Jiang, Y. Cao, Y. Hong, C. Yu, J. Zhang, Q. Zhang, C.-Z. Peng, F. Xu, and J.-W. Pan, “Single-photon imaging over 200 km,” Optica 8(3), 344–349 (2021). [CrossRef]  

13. S. Feng, Q. Chen, C. Zuo, and A. Asundi, “Fast three-dimensional measurements for dynamic scenes with shiny surfaces,” Opt. Commun. 382, 18–27 (2017). [CrossRef]  

14. Z. Song, S. Tang, F. Gu, C. Shi, and J. Feng, “DOE-based structured-light method for accurate 3D sensing,” Opt. Lasers Eng. 120, 21–30 (2019). [CrossRef]  

15. Y. He and S. Chen, “Recent advances in 3D data acquisition and processing by time-of-flight camera,” IEEE Access 7, 12495–12510 (2019). [CrossRef]  

16. D. R. Wehner, “High resolution radar,” Norwood (1987).

17. M. S. Oh, H. J. Kong, T. H. Kim, K. H. Hong, and B. W. Kim, “Reduction of range walk error in direct detection laser radar using a geiger mode avalanche photodiode,” Opt. Commun. 283(2), 304–308 (2010). [CrossRef]  

18. S. Jacobs and J. O’Sullivan, “Automatic target recognition using sequences of high resolution radar range-profiles,” IEEE Trans. Aerosp. Electron. Syst. 36(2), 364–381 (2000). [CrossRef]  

19. X. Li, H. Wang, B. Yang, J. Huyan, and L. Xu, “Influence of time-pickoff circuit parameters on lidar range precision,” Sensors 17(10), 2369 (2017). [CrossRef]  

20. L. Jiancheng, W. Chunyong, Y. Wei, and L. Zhenhua, “Research on the ranging statistical distribution of laser radar with a constant fraction discriminator,” IET Optoelectron. 12, 114–117 (2018). [CrossRef]  

21. K. Hua, B. Liu, L. Fang, H. Wang, Z. Chen, and Y. Yu, “Detection efficiency for underwater coaxial photon-counting lidar,” Appl. Opt. 59(9), 2797–2809 (2020). [CrossRef]  

22. M. Ghioni, A. Gulinatti, I. Rech, F. Zappa, and S. Cova, “Progress in silicon single-photon avalanche diodes,” IEEE J. Sel. Top. Quantum Electron. 13(4), 852–862 (2007). [CrossRef]  

23. D. Shin, A. Kirmani, V. Goyal, and J. Shapiro, “Photon-efficient computational 3-D and reflectivity imaging with single-photon detectors,” IEEE Trans. Comput. Imaging 1(2), 112–125 (2015). [CrossRef]  

24. J. Rapp and V. K. Goyal, “A few photons among many: Unmixing signal and noise for photon-efficient active imaging,” IEEE Trans. Comput. Imaging 3(3), 445–459 (2017). [CrossRef]  


Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Figures (7)

Fig. 1. Schematic of the timing corrected LiDAR system with dual SPADs. Optical components include: beam splitter (BS); polarization beam splitter (PBS1, PBS2); avalanche photodiode (APD); half-wave plate (HWP1, HWP2); bandpass filter (BPF1, BPF2); multimode fiber (MMF1, MMF2); single photon avalanche photodiode (SPAD1, SPAD2); mirror (M); lenses (L1, L2, L3, L4); scanning mirrors (SM); objective lens (OL); time-correlated single photon counting (TCSPC); data acquisition (DAQ).
Fig. 2. Schematics of different algorithms to obtain the TOF. The system instrumental response function (the fixed reference waveform), black; the time distribution histogram of detected reference photons, blue; the time distribution histogram of detected signal photons, red; the cross-correlation curve, green. $\otimes$ represents the cross-correlation operation. (a) The cross-correlation between fixed reference histogram and signal photons distribution histogram to obtain $t$ (CC algorithm). (b) The cross-correlation between fixed reference histogram and signal photons distribution histogram to obtain $t$, and the cross-correlation between fixed reference histogram and dynamic reference photons distribution histogram to obtain $\delta t$ (CCC algorithm). (c) The cross-correlation between dynamic reference photons distribution histogram and signal photons distribution histogram to obtain $t'$ (DCC algorithm).
Fig. 3. The influence of laser pulse energy fluctuation. (a) Schematic of the fluctuation of trigger signals caused by different laser pulse energies. (b) Ranging results of 1000 repeated measurements obtained by the CC algorithm. (c) The count of photons detected from the reference optical path.
Fig. 4. Single-point ranging results with different algorithms and acquisition times. (a) to (e) are the calculated depth values obtained for acquisition times of 20, 50, 100, 200, and 500 ms, respectively. The first and second rows are the ranging results obtained from the reference photons and the signal photons, respectively, both processed by the CC algorithm. The third and fourth rows are the ranging results obtained by the CCC and DCC algorithms, respectively.
Fig. 5. (a) STD and (b) ROE of the single point over different acquisition times.
Fig. 6. Comparison of reconstructed depth distribution of designed target using different algorithms. (a) The ground truth of the measured depth resolution plate. (b) The 3D imaging result reconstructed by the CC algorithm. (c) The 3D imaging result reconstructed by the CCC algorithm. (d) The 3D imaging result reconstructed by the DCC algorithm.
Fig. 7. Comparison of reconstructed 3D images of face model using different algorithms. (a) The close-up photographs of the model from two different viewpoints. (b) The 3D imaging result reconstructed by CC algorithm. (c) The 3D imaging result reconstructed by CCC algorithm. (d) The 3D imaging result reconstructed by DCC algorithm.
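The per-acquisition-time metrics plotted in Fig. 5 can be computed from repeated single-point range estimates. A minimal NumPy sketch follows, under the assumption that STD is the sample standard deviation of the repeated measurements (precision) and ROE, the range of error, is the absolute offset of their mean from the true range (accuracy); the function name and this ROE definition are illustrative, not taken from the paper.

```python
import numpy as np

def range_stats(depths_mm, truth_mm):
    """Compute (STD, ROE) in mm from repeated single-point range measurements.

    STD is the sample standard deviation (precision). ROE is assumed here to
    be the absolute offset of the mean from the true range (accuracy).
    """
    d = np.asarray(depths_mm, dtype=float)
    std = d.std(ddof=1)              # spread of the repeated measurements
    roe = abs(d.mean() - truth_mm)   # mean offset from ground truth
    return std, roe
```

Longer acquisition times accumulate more photons per histogram, which tightens both metrics, consistent with the trend shown in Fig. 5.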

Tables (1)

Table 1. Summary of the Main System Parameters

Equations (6)

$$t(x,y) = \arg\max_{t} C_{t}(x,y) = \arg\max_{t} \sum_{i=1}^{T} H_{t+i}(x,y)\, R_{i},$$
$$z(x,y) = \frac{1}{2} c \left( t(x,y) - t_{0} \right),$$
$$\delta t(x,y) = \arg\max_{\delta t} C_{\delta t}(x,y) = \arg\max_{\delta t} \sum_{i=1}^{T} H_{\delta t+i}(x,y)\, R_{i},$$
$$z(x,y) = \frac{1}{2} c \left( t(x,y) - \delta t(x,y) - t_{0} \right),$$
$$t'(x,y) = \arg\max_{t'} C_{t'}(x,y) = \arg\max_{t'} \sum_{i=1}^{T} H_{t'+i}(x,y)\, H_{i}(x,y),$$
$$z(x,y) = \frac{1}{2} c \left( t'(x,y) - t_{0} \right).$$
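For concreteness, the correlation-peak search in these equations can be sketched with NumPy. A single helper covers both the CC case, where the signal histogram is correlated against the fixed reference waveform R, and the DCC case, where R is replaced by the dynamic reference-photon histogram; the function names, histogram contents, and bin width below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

C = 2.99792458e8  # speed of light (m/s)

def cc_tof(H, R, bin_width_s):
    """Peak of the circular cross-correlation C_t = sum_i H[t+i] * R[i].

    H: photon-count histogram of the signal channel. R: reference waveform
    (the fixed instrument response for CC, or the dynamic reference-photon
    histogram for DCC). Returns the TOF estimate in seconds.
    """
    corr = np.array([np.sum(np.roll(H, -t) * R) for t in range(len(H))])
    return np.argmax(corr) * bin_width_s

def depth(tof_s, t0_s=0.0):
    """Depth conversion z = c * (t - t0) / 2."""
    return 0.5 * C * (tof_s - t0_s)
```

For the CCC case, the correction term δt would be obtained by running `cc_tof` on the reference-photon histogram against the fixed reference waveform, then subtracted from the signal TOF before the depth conversion.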