
Compact long-range single-photon imager with dynamic imaging capability

Open Access

Abstract

Single-photon light detection and ranging (LiDAR) has emerged as a strong candidate technology for active imaging applications. Benefiting from single-photon sensitivity in detection, long-range active imaging can be realized with a low-power laser and a small-aperture transceiver. However, existing kilometer-range active imagers are bulky and require long data acquisition times. Here we present a compact co-axial single-photon LiDAR system for kilometer-range 3D imaging. A fiber-based transceiver with a 2.5 cm effective aperture was employed to realize a robust and compact architecture, while a tailored temporal filtering approach guaranteed a high signal-to-noise ratio. Moreover, a micro–electro–mechanical system scanning mirror was adopted to achieve fast beam scanning. In experiments, high-resolution 3D images of different targets at up to 12.8 km were acquired to demonstrate the long-range imaging capability. Furthermore, the system achieves dynamic imaging at five frames per second over a distance of ${\sim}{1}\;{\rm km}$. These results indicate potential for a variety of applications such as remote sensing and long-range target detection.

© 2021 Optical Society of America

Light detection and ranging (LiDAR) is a well-established technique for distance measurement. Combined with time-correlated single-photon counting (TCSPC) [1] and photon-efficient algorithms [2–7], single-photon LiDAR achieves single-photon sensitivity and picosecond time resolution, and it has been widely adopted for applications including remote sensing [8,9], depth imaging in challenging environments [10–13], and target recognition and identification [14]. It is particularly suitable for long-range time-of-flight (ToF) ranging and imaging, where centimeter depth resolution is required and a very low average photon return is expected.
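The ToF principle underlying such systems is simple: the one-way range is half the round-trip delay multiplied by the speed of light. A minimal sketch (generic values, not parameters of any particular system):

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def tof_to_range(t_round_trip_s):
    """Convert a round-trip time of flight (seconds) to a one-way range (m)."""
    return C * t_round_trip_s / 2.0

# Example: a 13 ps timing bin corresponds to ~1.95 mm of range,
# and a target at 12.8 km returns photons after ~85.4 microseconds.
```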

Tremendous efforts have been devoted to the development of single-photon LiDAR for long-range active imaging [8,9,15–17]. With a 280 mm aperture Schmidt–Cassegrain telescope and a single-pixel single-photon avalanche diode (SPAD) detector, a LiDAR was demonstrated to acquire 3D images over ultralong distances [9]. However, the bulk of its optical transceiver and electronics greatly limited its portability and utility, and its relatively long data acquisition time precluded 3D imaging of moving targets. Meanwhile, a system for single-photon imaging at up to 10 km range described in [8] employed an 8 in. (20.32 cm) aperture telescope and required a data acquisition time of more than 20 min. Geiger-mode (Gm) arrays provide an alternative for single-photon LiDAR systems to accelerate data collection. By avoiding the need for beam scanning, imaging systems with arrays have the potential for real-time 3D imaging with millisecond-level data acquisition time per frame [18–22]. A ${32} \times {32}$ InGaAs/InP Gm array was used to demonstrate 3D imaging over ranges of up to 9 km [23]. However, that system also employed a large-aperture telescope and required a relatively high average laser power of 0.4 W. Moreover, Gm arrays usually suffer from a low fill factor, a limited number of pixels, and a high noise level [24]. Because current long-range single-photon LiDAR systems are bulky and lack practicality, a small, fast, low-power LiDAR system that can provide high-resolution 3D imaging over long ranges with all-time capability is still missing.

In this Letter, we present a small-sized single-photon 3D imager possessing long-range and high-speed imaging capabilities. A fiber-based co-axial scanning transceiver was specially designed for a miniaturized and robust structure, at the cost of unavoidable laser-dependent noise in the fiber. To deal with this noise, we developed an efficient temporal filtering approach combining two acousto-optic modulators (AOMs). To realize dynamic imaging of moving targets, we employed a state-of-the-art 3D deconvolutional algorithm to reduce the number of required signal photons and adopted a fast micro–electro–mechanical system (MEMS) scanning mirror to accelerate data acquisition. We demonstrated single-photon 3D imaging at ranges up to 12.8 km with few photons (${\sim}{9.3}$) per pixel (PPP) in an urban atmospheric environment. Using the system to monitor a crossroad at a range of 850 m, we captured moving cars and bicycles at five frames per second.

The schematic diagram of our imaging system is illustrated in Fig. 1. The system employs a compact fiber laser operating at a wavelength of 1550 nm, which generates 0.5 ns duration pulses at a repetition rate of 1 MHz. Compared with visible and short-wavelength near-infrared (SW-NIR) light, the 1550 nm operating wavelength has several advantages: it is safer to human eyes [25] at the same power, and it experiences a lower level of solar background radiation as well as less atmospheric attenuation. We adopted a ${2} \times {2}$, 50:50 single-mode fiber beam splitter (BS) for a compact and robust architecture. The received beam is coupled into a 1550 nm single-mode fiber by a collimator (${f} = {11.32}\;{\rm mm}$), and the transmitted beam shares the same optical path after the fiber BS. The beam divergence out of the collimator is about 1 mrad. After the ${10} \times$ beam expander, the transmitted beam has a divergence of 100 µrad, which equals the field of view (FoV). A compact homemade free-running InGaAs/InP SPAD detector serves as the detector in our experiment [26]. Operating at a temperature of 223 K, it has an efficiency of ${\sim}{30}\%$ with 1200 dark counts per second.


Fig. 1. Schematic diagram of our imaging system. AOM, acousto-optic modulator; SMF, single-mode fiber; AFG, arbitrary function generator; TDC, time-to-digital converter; BS, beam splitter. A standard camera (${f} = {120}\;{\rm mm}$) is mounted in parallel on the optical breadboard to aid pointing and alignment at long distances.


As for the electronics, a homemade FPGA-based arbitrary function generator (AFG) provides precise control signals for the laser, the two AOMs, the SPAD detector, and the time-to-digital converter (TDC). A high-precision TDC (Cronologic xTDC4) with 13 ps time resolution records the times of pulse emission and photon detection. It also records the frame-start signal from the MEMS controller to precisely locate the beginning of each frame in the data stream. The timing jitter of the entire system was measured to be 600 ps. A summary of the system parameters is listed in Table 1.
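To illustrate how such timestamp records become a photon histogram, the sketch below pairs each detection with the preceding emission and bins the delays. The 13 ps bin width matches the TDC resolution quoted above; the helper itself and its record format are hypothetical, not the system's actual firmware.

```python
import numpy as np

def build_histogram(emit_times, detect_times, bin_width=13e-12, n_bins=4096):
    """Pair each detection with the most recent emission and histogram
    the delays. All times in seconds; emit_times must be sorted ascending."""
    emit_times = np.asarray(emit_times, dtype=float)
    detect_times = np.asarray(detect_times, dtype=float)
    # Index of the last emission preceding each detection.
    idx = np.searchsorted(emit_times, detect_times, side="right") - 1
    delays = detect_times - emit_times[idx]
    bins = np.floor(delays / bin_width).astype(int)
    valid = (idx >= 0) & (bins >= 0) & (bins < n_bins)
    return np.bincount(bins[valid], minlength=n_bins)
```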


Table 1. Summary of the Main System Parameters

MEMS scanning mirrors have several advantages over common mechanical scanners such as galvanometer or piezoelectric mirrors [27]. They can be integrated on a silicon-based chip in a fully compact form. In particular, the small mirror surface reduces the moment of inertia, which enables higher rotation speeds. Here we employed a two-axis MEMS scanning mirror with four-quadrant (4Q) tip–tilt capability. The aperture of the mirror is 3 mm, and the maximum rotation angle is ${\pm}{4.25}$ deg. In the imaging process, one axis operates in resonant mode, and the other is used in quasi-static mode. This arrangement yields a raster pattern at a rate of hundreds of lines per second. In our experiments on moving targets, covering one line takes only 3 ms, so the total scan time is about 0.2 s for a frame with 64 lines.
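The mapping from a photon's arrival time within a frame to a scan pixel can be sketched as below, assuming an idealized unidirectional raster with a constant 3 ms line period (the real resonant axis sweeps sinusoidally and requires a nonlinearity correction, which this sketch omits):

```python
def time_to_pixel(t_in_frame, line_period=3e-3, pixels_per_line=64):
    """Map a timestamp within one frame (seconds after the frame-start
    signal) to (row, column), assuming a linear unidirectional raster."""
    row = int(t_in_frame // line_period)            # which scan line
    frac = (t_in_frame % line_period) / line_period # progress along the line
    col = int(frac * pixels_per_line)
    return row, col
```

With 64 lines of 3 ms each, one frame takes about 0.2 s, consistent with the five frames per second quoted later.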

A photograph of the whole system is shown in Fig. 2. The optical components for free-space light propagation are assembled on a ${20} \times {15}\;{{\rm cm}^2}$ aluminum optical platform, covered by 11 cm tall black aluminum shields. The platform is fixed on a high-precision two-axis rotation stage for accurate target pointing. The silver box houses the laser, AOMs, SPAD detector, and all the electronics, and also acts as the control computer. Its size is ${290} \times {238} \times {278}\;{{\rm mm}^3}$. Finally, the two main parts of the system are fixed on a movable tripod for portability.


Fig. 2. Photograph of our compact and portable single-photon imaging system.


The fiber-based transceiver is specially designed for compactness, and fiber-based architectures are well recognized as easy to align and mechanically reliable in harsh environments. Another advantage is that the laser spot and the FoV are fully overlapped with no need for adjustment. However, due to the limited isolation of the fiber BS (${\sim}{62}\;{\rm dB}$), the high-peak-power laser pulses and the continuous amplified spontaneous emission (ASE) cause crosstalk in the detection port and saturate the SPAD detector. We measured the power in the detection port while the laser was triggered periodically: it was ${\sim}{60}\;{\rm nW}$ at a laser power of 100 mW.
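As a quick consistency check, the quoted ${\sim}{62}\;{\rm dB}$ isolation can be reproduced from the two measured powers:

```python
import math

def isolation_db(p_in_watts, p_leak_watts):
    """Optical isolation in dB between injected and leaked power."""
    return 10.0 * math.log10(p_in_watts / p_leak_watts)

# 100 mW injected vs ~60 nW leaking into the detection port:
# 10*log10(100e-3 / 60e-9) is ~62.2 dB, matching the quoted BS isolation.
```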

To eliminate this noise, we developed a specific temporal filtering technique, shown in Fig. 3. The traces on top represent the light intensity versus time within one period during propagation, and those on the bottom represent the electronic control signals for the different devices. We temporally separate an emission phase (E) and a detection phase (D). In phase E, the laser pulses are triggered at a high repetition rate (1 MHz) while the SPAD detector is turned off. In phase D, the laser pulses are not triggered while the SPAD detector is turned on for photon detection. AOM1 is on in phase E and off in phase D to block ASE noise; the control signals for AOM2 hold the opposite levels, so it is off in phase E to prevent the laser pulses from reaching the detector. We also insert a transition time T between the emission and detection phases to further suppress near-field atmospheric backscatter. With this technique, the total noise photon count rate is about 3 kHz (including dark counts) at 100 mW transmitted power. The temporal filtering method is thus the key technique permitting imaging over long distances with a fiber-based transceiver.
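The gating logic can be sketched as a simple state function over one emit/transition/detect cycle. The phase durations below are hypothetical placeholders, since the Letter does not specify them:

```python
def gate_states(t, t_emit, t_trans, t_detect):
    """Return (laser_and_aom1_on, spad_and_aom2_on) for a time t within
    one emit/transition/detect cycle. Durations are hypothetical."""
    if t < t_emit:                        # phase E: emit pulses, block return path
        return True, False
    if t < t_emit + t_trans:              # transition T: let near-field scatter decay
        return False, False
    if t < t_emit + t_trans + t_detect:   # phase D: detector gated on, laser off
        return False, True
    raise ValueError("t lies outside one cycle")
```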


Fig. 3. Diagram of the time sequence tailored for the fiber-based system. We set a temporal separation of laser emission and detection and employed two AOMs for noise suppression. Histograms on top represent the light intensity in fiber propagation, while those on the bottom represent the electronic signals for devices.


For data processing, we adopted the 3D deconvolutional algorithm, which employs a convolutional forward model [9]. It can be written in the general form:

$$\textbf{Y} \sim {\rm Poisson}(\textbf{g} * \textbf{RD} + \textbf{B}),$$
where $\textbf{Y}$ is the photon histogram matrix obtained in experiment; $\textbf{g}$ is a 3D spatiotemporal kernel that depends on the spatial intensity distribution of the laser and the jitter of our system; $\textbf{RD}$ is a 3D matrix whose $(i,j)$th pixel is a vector with only one nonzero entry, describing the (reflectivity, depth) pair of the target scene; $\textbf{B}$ denotes the background noise; $*$ denotes the convolution operator; and ${\rm Poisson}$ denotes the inhomogeneous Poisson photon-detection process.
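A toy one-pixel (1D) version of this forward model can be simulated as follows; the kernel width, depth bin, and rates are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D (single-pixel) forward model: the measured histogram Y is
# Poisson-distributed with rate g * RD + B. All values are illustrative.
n_bins = 200
g = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)  # temporal jitter kernel
g /= g.sum()                                        # normalize to unit area
rd = np.zeros(n_bins)
rd[120] = 50.0         # one surface: "reflectivity" 50 at depth bin 120
b = 0.1                # uniform background rate per bin
rate = np.convolve(rd, g, mode="same") + b
y = rng.poisson(rate)  # simulated photon-count histogram
```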

To recover $\textbf{RD}$, we derive the negative log-likelihood function from Eq. (1) and then employ a modified SPIRAL-TAP solver with the 3D $\textbf{g}$ kernel to minimize it; the minimizer gives the solution for $\textbf{RD}$. The code of this algorithm is available online [28].
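For intuition, a minimal stand-in is sketched below: the Poisson negative log-likelihood and one multiplicative Richardson–Lucy/EM step. This is far simpler than, and not equivalent to, the regularized SPIRAL-TAP solver actually used:

```python
import numpy as np

def poisson_nll(rd, g, y, b):
    """Negative log-likelihood of the Poisson model, up to the log(y!) constant."""
    rate = np.convolve(rd, g, mode="same") + b
    return float(np.sum(rate - y * np.log(rate)))

def em_update(rd, g, y, b):
    """One multiplicative (Richardson-Lucy / EM) step for the Poisson model;
    a simple, unregularized stand-in for the SPIRAL-TAP solver."""
    rate = np.convolve(rd, g, mode="same") + b
    # Multiply by the kernel-correlated ratio of observed to predicted counts.
    return rd * np.convolve(y / rate, g[::-1], mode="same")
```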

We imaged a variety of scenes, including long-range static targets and moving objects. The experiments were performed both in daytime and at night in an urban environment in Shanghai. Targets with various spatial distributions, reflectivities, and structural complexities were selected to show the all-around imaging capability of the system. Two typical results follow.

We first selected the Jinmao Tower at a distance of 12.8 km as our target to show the long-range imaging capability. Figure 4(a) shows the visible-band photograph taken by a standard astronomical camera (ASI294MC) equipped with an ${f} = {120}\;{\rm mm}$ lens. We scanned ${100} \times {128}$ pixels with an acquisition time of 15 ms per pixel. The experiment was performed in daylight to show the all-time imaging capability of the system. The depth map and 3D profile of the tower are shown in Figs. 4(b) and 4(c), respectively. In Fig. 4(c), the gray value indicates the photon counts of different pixels. The average PPP is 9.3 for the whole image, with a signal-to-background ratio (SBR) of 1.1, owing to our effective noise suppression method. In the results, four stair-stepping structures at the top of the building are easily resolved, whereas the photograph is blurred even in good weather conditions.
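The quoted statistics follow directly from the total counts; the trivial helper below (a hypothetical name, not from the Letter) makes the relation explicit:

```python
def image_stats(total_signal_counts, total_background_counts, n_pixels):
    """Average signal photons per pixel (PPP) and signal-to-background ratio."""
    ppp = total_signal_counts / n_pixels
    sbr = total_signal_counts / total_background_counts
    return ppp, sbr
```

Note also that at 15 ms per pixel, the ${100} \times {128}$ scan implies a total acquisition time of about 192 s.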


Fig. 4. Reconstruction results for the Jinmao Tower at 12.8 km. (a) Real visible-band photo and the FoR. (b) Reconstructed depth map. (c) 3D profile of the building with gray value to show the photon counts.


The dynamic imaging capability of our system makes it possible to capture moving objects, and its high spatial resolution reveals the details of a scene. In the experiment, we continuously imaged a crossroad at a range of about 850 m with ${64} \times {64}$ pixels per frame; reconstructed results for several frames are shown here. The imaging system was located on the 17th floor of a building in Shanghai to obtain a bird's-eye view. Cars, bicycles, and pedestrians cross the intersection in a constant stream. 3D profiles and depth maps of five frames are shown in Figs. 5(b) and 5(c). Each frame was obtained with 200 ms for the whole scan, i.e., only about 0.05 ms per pixel. The laser power used here is less than 50 mW. Each frame is acquired with ${\sim}{4}\;{\rm PPP}$ on average and an SBR of more than 20.


Fig. 5. Real-time capture of a crossroad with five frames/s. (a) Visible-band pictures extracted from the video. (b) 3D profiles of the five frames. (c) Reconstructed depth maps. The red (green) square indicates a moving car (bicycle).


As seen in Fig. 5, a bicycle marked with a green box and a car in a red box pass through the crossroad abreast. From the results in Figs. 5(b) and 5(c), we can clearly distinguish the bicycle in front and the car at a farther distance. The speed of the car can be estimated from Fig. 5(b) as about 18 km/h (a position shift of 4 m in 0.8 s). Well beyond this speed, at a distance of 1 km, our system is capable of imaging a car moving at up to 60 km/h without motion tearing.
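The speed estimate is simple frame-differencing arithmetic: with the 0.2 s frame period, a 4 m shift over four frames gives 18 km/h. A sketch:

```python
def speed_kmh(shift_m, n_frames, frame_period_s=0.2):
    """Estimate target speed (km/h) from its position shift across frames."""
    return shift_m / (n_frames * frame_period_s) * 3.6

# A 4 m shift over four 0.2 s frames (0.8 s) corresponds to 18 km/h.
```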

In conclusion, by combining a fiber-based transceiver, a temporal filtering technique, a MEMS beam scanner, and a photon-efficient deconvolutional algorithm, we developed a compact long-range single-photon LiDAR with high imaging speed. The results demonstrate real-time, practical, long-range 3D imaging. In future work, we will focus on the present limitations of our system, such as the limited field of regard (FoR) and the relatively low frame rate compared with traditional imagers.

Funding

National Key Research and Development (R&D) Plan of China (2018YFB0504300); National Natural Science Foundation of China (62031024, 61771443); Shanghai Municipal Science and Technology Major Project (2019SHZDZX01); Key-Area Research and Development Program of Guangdong Province (2020B0303020001).

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. G. Buller and A. Wallace, IEEE J. Sel. Top. Quantum Electron. 13, 1006 (2007). [CrossRef]  

2. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, Science 343, 58 (2014). [CrossRef]  

3. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, Nat. Commun. 7, 12046 (2016). [CrossRef]  

4. Y. Altmann, X. Ren, A. McCarthy, G. S. Buller, and S. McLaughlin, IEEE Trans. Image Process. 25, 1935 (2016). [CrossRef]  

5. J. Rapp and V. K. Goyal, IEEE Trans. Comput. Imaging 3, 445 (2017). [CrossRef]  

6. D. B. Lindell, M. O’Toole, and G. Wetzstein, ACM Trans. Graph. 37, 1 (2018). [CrossRef]  

7. J. Peng, Z. Xiong, X. Huang, Z.-P. Li, D. Liu, and F. Xu, in European Conference on Computer Vision (Springer, 2020), pp. 225–241.

8. A. M. Pawlikowska, A. Halimi, R. A. Lamb, and G. S. Buller, Opt. Express 25, 11919 (2017). [CrossRef]  

9. Z.-P. Li, X. Huang, Y. Cao, B. Wang, Y.-H. Li, W. Jin, C. Yu, J. Zhang, Q. Zhang, C.-Z. Peng, F. Xu, and J.-W. Pan, Photon. Res. 8, 1532 (2020). [CrossRef]  

10. B. Sun, M. P. Edgar, R. Bowman, L. E. Vittert, S. Welsh, A. Bowman, and M. J. Padgett, Science 340, 844 (2013). [CrossRef]  

11. A. Maccarone, A. McCarthy, X. Ren, R. E. Warburton, A. M. Wallace, J. Moffat, Y. Petillot, and G. S. Buller, Opt. Express 23, 33911 (2015). [CrossRef]  

12. M. Laurenzis, J. Klein, E. Bacher, and N. Metzger, Opt. Lett. 40, 4815 (2015). [CrossRef]  

13. R. Tobin, A. Halimi, A. McCarthy, M. Laurenzis, F. Christnacher, and G. S. Buller, Opt. Express 27, 4590 (2019). [CrossRef]  

14. Z.-P. Li, X. Huang, P.-Y. Jiang, Y. Hong, C. Yu, Y. Cao, J. Zhang, F. Xu, and J.-W. Pan, Opt. Express 28, 4076 (2020). [CrossRef]  

15. M. Laurenzis, F. Christnacher, and D. Monnin, Opt. Lett. 32, 3146 (2007). [CrossRef]  

16. A. McCarthy, N. J. Krichel, N. R. Gemmell, X. Ren, M. G. Tanner, S. N. Dorenbos, V. Zwiller, R. H. Hadfield, and G. S. Buller, Opt. Express 21, 8904 (2013). [CrossRef]  

17. Z. Li, E. Wu, C. Pang, B. Du, Y. Tao, H. Peng, H. Zeng, and G. Wu, Opt. Express 25, 10189 (2017). [CrossRef]  

18. F. Zappa, S. Tisa, A. Tosi, and S. Cova, Sens. Actuators A, Phys. 140, 103 (2007). [CrossRef]  

19. F. Villa, R. Lussana, D. Bronzi, S. Tisa, A. Tosi, F. Zappa, A. D. Mora, D. Contini, D. Durini, S. Weyers, and W. Brockherde, IEEE J. Sel. Top. Quantum Electron. 20, 364 (2014). [CrossRef]  

20. X. Ren, P. W. Connolly, A. Halimi, Y. Altmann, S. McLaughlin, I. Gyongy, R. K. Henderson, and G. S. Buller, Opt. Express 26, 5541 (2018). [CrossRef]  

21. J. Tachella, Y. Altmann, N. Mellado, A. McCarthy, R. Tobin, G. S. Buller, J.-Y. Tourneret, and S. McLaughlin, Nat. Commun. 10, 4984 (2019). [CrossRef]  

22. I. Gyongy, S. W. Hutchings, A. Halimi, M. Tyler, S. Chan, F. Zhu, S. McLaughlin, R. K. Henderson, and J. Leach, Optica 7, 1253 (2020). [CrossRef]  

23. K. Gordon, P. Hiskett, and R. Lamb, Proc. SPIE 9114, 91140G (2014). [CrossRef]  

24. C. Bruschini, H. Homulle, I. M. Antolovic, S. Burri, and E. Charbon, Light Sci. Appl. 8, 87 (2019). [CrossRef]  

25. R. Henderson and K. Schulmeister, Laser Safety (CRC Press, 2003).

26. C. Yu, M. Shangguan, H. Xia, J. Zhang, X. Dou, and J.-W. Pan, Opt. Express 25, 14611 (2017). [CrossRef]  

27. S. T. Holmström, U. Baran, and H. Urey, J. Microelectromech. Syst. 23, 259 (2014). [CrossRef]  

28. https://github.com/quantum-inspired-lidar/long-range-photon-efficient-imaging.git.
