
Weak non-line-of-sight target echoes extraction without accumulation

Open Access

Abstract

Non-line-of-sight (NLOS) technology has developed rapidly in recent years, allowing us to visualize or localize hidden objects by analyzing the returned photons, and it is expected to be applied to autonomous driving, field rescue, etc. Owing to laser attenuation and multiple reflections, future applications will inevitably have to separate the extremely weak returned signal from noise. However, current methods find signals by direct accumulation, which accumulates the noise as well and makes weak targets impossible to extract. Herein, we explore two denoising methods without accumulation to detect the weak target echoes, relying on the temporal correlation feature. On the one hand, we propose a dual-detector method based on software operations to improve the detection ability for weak signals. On the other hand, we introduce the pipeline method for NLOS target tracking in sequential histograms. Ultimately, we experimentally demonstrate these two methods and extract the motion trajectory of the hidden object. The results may be useful for practical applications in the future.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

In recent years, non-line-of-sight (NLOS) technology has attracted significant attention and developed rapidly, with a wide range of applications including autonomous driving and danger rescue [1,2]. Researchers have worked on visualizing hidden objects occluded around corners by building time-of-flight (ToF)-based models [3-10], wave-based models [11], phasor-field models [12,13], deep learning networks [14-16], and passive models [17,18]. In some applications, such as danger rescue, it is more important to obtain the position of the hidden object [19-24]. To apply this technology to real-world situations, separating weak target echoes from noise remains one of the critical challenges [20,21,25], because the photon intensity attenuates inversely with the square of the distance and only a few photons return to the detector. On the issue of detecting weak echoes, in 2017 Chan et al. indicated that slightly expanding the radius of the acquisition spot could increase the signal-to-noise ratio [21]. Wu et al. adopted a 1550 nm wavelength to reduce ambient noise and coated the optical system to increase the amount of incoming light [20]. Liu et al. proposed a reconstruction method that can recover the hidden scene using only one detected photon [26], and Li et al. proposed a time-sequential first-photon (TSFP) method that employs the first photon in several time bins to reconstruct the hidden scene [27]; both are beneficial for weak-signal imaging. For back-end processing, Ren et al. analyzed the denoising effect of Gaussian filtering and mean filtering within one frame [28]. In addition, some works improve the signal-to-noise ratio of the target echo through multi-frame accumulation [22,29].

For a fixed detection system, it is important to improve the weak target echo separation ability. However, when the target echo becomes weak, filtering within a single frame may still retain noise, and direct summation accumulates the noise as well, making it impossible to extract the echoes.

Herein, we introduce two methods to extract weak target echoes without accumulation, relying on temporal correlation features. First, we use two detectors combined with software operations, instead of a hardware AND gate, to improve the detection ability for weak signals; this approach suits NLOS detection, which requires high time resolution. Second, we introduce the pipeline method and adapt it to distinguishing the trajectory of the third echoes in sequential histograms, extracting the echo trajectories of hidden objects in motion that are otherwise difficult to separate from noise. Both methods improve the detection capability for weak third echoes and are suitable for stationary as well as moving targets. Finally, we experimentally verify each method and combine the two temporal-correlation-based filtering methods to extract the weak echoes and perform localization and tracking experiments.

2. Experimental scenarios

In the experiments, we use a picosecond pulsed laser (VisIR, 1531 nm peak wavelength, 40 MHz repetition frequency, 70 ps pulse width, ∼375 mW average power) to send pulses to spot L on the relay surface, 2.5 m away from the galvanometer. First, the emitted pulse is scattered at the surface and part of the photons return directly to the lens through the galvanometer, forming the first echoes in the histogram. Subsequently, part of the remaining photons reach the hidden object and are scattered back to the surface. Some of these scattered photons return to the specified spots (A, B, C) and are scattered a third time before entering the lenses (50 mm focal length) via the galvanometer, forming extremely weak echoes in the temporal histogram. The returned photons are coupled into fibers and transferred to the superconducting nanowire single-photon detector (SNSPD) (SINGLE-QUANTUM; detection efficiency between 70% and 80%, time jitter 77-79 ps, dead time less than 30 ns, dark count rate less than 300 Hz), which is paired with a time-correlated single-photon counting (TCSPC) module (qutools quTAG; time jitter less than 10 ps, time resolution of 1 ps). The SNSPD has a small time jitter and low dark count rate, making it suitable for the 1531 nm detection system. At the receiving end, we use an unpolarized beam splitter to split the returned beam into two equal beams that are fed into two separate detector channels.
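For concreteness, the sketch below shows how TCSPC timestamps are binned into the 25 ns, 10 ps-per-bin temporal histograms analyzed in the following sections; the simulated photon-arrival model and all numbers in it are illustrative assumptions, not measured data.

```python
import numpy as np

# Acquisition parameters taken from the text: a 40 MHz repetition rate gives a
# 25 ns period, and the TCSPC histogram uses 10 ps bins (2500 bins per period).
PERIOD_NS = 25.0
BIN_PS = 10.0
N_BINS = int(PERIOD_NS * 1e3 / BIN_PS)  # 2500

def build_histogram(arrival_times_ps: np.ndarray) -> np.ndarray:
    """Bin photon arrival times (ps, relative to the sync pulse) into a
    temporal histogram with 10 ps bins over one 25 ns period."""
    edges = np.arange(N_BINS + 1) * BIN_PS
    counts, _ = np.histogram(arrival_times_ps % (PERIOD_NS * 1e3), bins=edges)
    return counts

# Illustrative photon stream: a strong first echo, a very weak third echo and
# uniformly distributed background counts (all numbers are made up).
rng = np.random.default_rng(0)
first_echo = rng.normal(loc=3_000, scale=80, size=5_000)    # around 3 ns
third_echo = rng.normal(loc=18_000, scale=120, size=60)     # around 18 ns, weak
background = rng.uniform(0, PERIOD_NS * 1e3, size=4_000)
hist = build_histogram(np.concatenate([first_echo, third_echo, background]))
print(hist.argmax() * BIN_PS, "ps  <- bin of the strongest (first) echo")
```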

3. Echo extraction with dual detectors

3.1 Principle

The original dual-detector method divides the returned pulse between two Geiger-mode avalanche photodiodes and compares the arrival times of the electrical signals using a hardware AND gate [30]. Herein, we use two single-photon detectors combined with software operations instead of the hardware AND gate to improve the detection ability for weak signals, avoiding both the loss of time resolution and the need for additional digital signal processors to align the two detectors.

As shown in Fig. 1(a), the returned photons are delivered to the detection system through two channels, and the counter records the corresponding flight times in separate temporal histograms. The collected histograms of the two channels are then passed to the software operations, shown in Fig. 1(c), to denoise and extract the third target echo. In the software operations, we first take a minimum hold, which keeps only the common part of the inputs In1 and In2 and suppresses the random noise of a single detector. We then use a multiplication unit to compute the product of the minimum hold and the inputs In1 and In2, and detect the third target echoes from the output histogram. In addition, pre-processing such as background subtraction can improve the effectiveness of the software operations.


Fig. 1. Experimental scenarios. (a) shows that we send laser pulses to the relay surface and detect the returned photons after multiple reflections with two lenses. The hidden object at the corner is out of our sight. The detection system consists of SNSPD and TCSPC module. (b) shows a picture of the experimental setup. (c) shows the schematic diagram of our software operations with two detectors. (d) shows the principle of the pipeline method in detecting the third echo of sequential histograms.


After the returned photons are divided between the two lenses by a beam splitter, the target signal and noise entering each channel are halved as well. Based on the analysis of the detection system, the detection probability and false-alarm probability of each single detector can be written as Eq. (1) and Eq. (2) [31].

$${P_d}(i) = [\exp ( - ({N_n}/2){\tau _d})]\{{1 - \exp [ - (({N_s} + {N_n})/2){T_b}]} \}$$
$${P_f} = \sum\limits_1^{{T_\textrm{p}}/{T_b}} {{P_f}(i)} = \sum\limits_1^{{T_p}/{T_b}} {[\exp ( - ({N_n}/2){\tau _d})]\{{1 - \exp [ - ({N_n}/2){T_b}]} \}}$$
where ${N_n}$ represents the mean number of photoelectrons generated by noise, ${\tau _d}$ is the dead time of the single detector, ${N_s}$ is the mean number of photoelectrons generated by the third echoes from the hidden object, ${T_b}$ is the time-bin width, and ${T_p}$ is the detection period. Given Eq. (1) and Eq. (2), we analyze the detection probability and false-alarm probability of a single detector before and after splitting the light beam in the supplement. In the operations, the two detectors collect the divided photons independently. The operations take a minimum hold of the inputs In1, In2 to compute the lowest counts of the two-channel histogram, which is similar to maintaining a confidence coefficient of the echoes in the two channels. The role of the multiplier can be understood as a weighted calculation of the product of the inputs In1, In2, where the product amplifies the similar part of the inputs and the minimum hold provides the confidence coefficient. The operations can be written as Eq. (3) and Eq. (4), where W is the confidence coefficient, ${X_1}$, ${X_2}$ are the inputs of the operation, and $\odot$ denotes the Hadamard product. Finally, the histogram can be smoothed appropriately to remove local burr noise. In the supplement, we analyze the SNR improvement of the dual-detector method and quantify the detection probability and false-alarm probability. Additionally, in the supplement, we test the effectiveness of this method for NLOS imaging through simulation.
$$Y = W \odot {X_1} \odot {X_2}$$
$$W = \mathop {\min }\limits_i \{ {X_1}(i),{X_2}(i)\}$$
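The following sketch gives a minimal numerical form of Eqs. (1)-(4); the helper names, the rate interpretation of $N_s$ and $N_n$, and the synthetic two-channel histograms are our own assumptions for illustration.

```python
import numpy as np

def detection_probabilities(N_s, N_n, tau_d, T_b, T_p):
    """Per-bin detection probability and summed false-alarm probability of a
    single detector after the beam splitter, following Eqs. (1) and (2).
    N_s and N_n are treated here as mean photoelectron rates so that their
    products with tau_d and T_b are dimensionless (an assumption)."""
    p_d = np.exp(-(N_n / 2) * tau_d) * (1 - np.exp(-((N_s + N_n) / 2) * T_b))
    p_f_bin = np.exp(-(N_n / 2) * tau_d) * (1 - np.exp(-(N_n / 2) * T_b))
    return p_d, int(round(T_p / T_b)) * p_f_bin

def dual_detector_output(x1, x2):
    """Software operations of Eqs. (3) and (4): the minimum hold W acts as a
    per-bin confidence weight and the product amplifies the common part."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    w = np.minimum(x1, x2)               # Eq. (4): minimum hold
    y = w * x1 * x2                      # Eq. (3): Hadamard product W, X1, X2
    return y / y.max() if y.max() > 0 else y   # normalize, as for Fig. 2(d)

# Illustrative (made-up) rates and timings for the single-detector formulas.
print(detection_probabilities(N_s=2e8, N_n=5e7, tau_d=30e-9,
                              T_b=10e-12, T_p=25e-9))

# Synthetic two-channel histograms: a weak correlated echo near bin 1800 plus
# independent Poisson noise in each channel (illustrative values only).
rng = np.random.default_rng(1)
echo = np.zeros(2500)
echo[1795:1805] = 4.0
ch1 = rng.poisson(echo + 0.3)
ch2 = rng.poisson(echo + 0.3)
print(dual_detector_output(ch1, ch2).argmax())   # expected near bins 1795-1804
```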

3.2 Experimental tests

We collected a set of experimental data with a 1 s acquisition time to evaluate the effectiveness of this method in detecting the third echoes, as shown in Fig. 2. In the pre-processing, the histograms are background-subtracted and slightly smoothed by a mean filter, and the first echoes are set to zero because of their high intensity. In Fig. 2(a), there are multiple echoes in the collected histogram, and it is easy to misidentify noise as a target echo. To reduce the false-alarm probability and find the target echo, we used the dual-detector method described above to separate the returned beam into two channels, detected separately as shown in Fig. 2(b, c), and obtained the processed histogram in Fig. 2(d) after the operations. Negative values are removed as insignificant, and the output histogram is normalized because its amplitude represents a relative magnitude rather than a number of photons. By using the dual-detector method, we improve the detection ability for weak signals, allowing the target signal in Fig. 2(d) to be detected. The results show that this method is effective for detecting such multiply reflected echoes.
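The pre-processing chain described here (background subtraction, zeroing the first echo, slight mean filtering, and clipping negative values) can be sketched as follows; the 400-bin (4 ns) first-echo gate and the 5-bin smoothing window are assumed values for illustration.

```python
import numpy as np

def preprocess(hist, background, first_echo_gate=(0, 400), smooth_bins=5):
    """Background-subtract, zero the saturating first echo, lightly smooth
    with a mean filter and clip negative values, as described in the text."""
    h = hist.astype(float) - background.astype(float)
    h[first_echo_gate[0]:first_echo_gate[1]] = 0.0      # suppress first echo
    kernel = np.ones(smooth_bins) / smooth_bins
    h = np.convolve(h, kernel, mode="same")             # slight mean filter
    return np.clip(h, 0.0, None)                        # drop negative values
```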


Fig. 2. (a) shows the temporal histogram collected by a single detector of channel 1 without the beam splitter (BS) in a period of 25 ns and an interval of 10 ps for each bin. (b) shows the temporal histogram collected by the dual detector of channel 1. (c) shows the temporal histogram collected by the dual detector of channel 2. (d) shows the histogram after the software operations. The third echo is marked by square boxes.


4. Echo extraction with the pipeline method

In practice, the histograms are collected at short intervals, and the histograms at each detection point form a 3D time-counts-sequence data matrix. Generally, accumulating the previous frames is an intuitive way to enhance the collected histogram; however, accumulation is better suited to relatively mild noise. In the case of higher noise, direct summation accumulates the noise into the histogram.
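As a toy numerical illustration of this point, the following sketch (with made-up noise levels and echo strength) sums ten simulated frames containing a weak, drifting echo; because the echo moves, direct summation spreads its counts over many bins while the noise floor grows, so the accumulated histogram reveals the echo no better than a single frame does.

```python
import numpy as np

rng = np.random.default_rng(2)
n_frames, n_bins = 10, 2500
frames = rng.poisson(0.8, size=(n_frames, n_bins)).astype(float)  # noise floor
for t in range(n_frames):                  # weak echo drifting 3 bins per frame
    frames[t, 1800 + 3 * t] += 2.0

single = frames[0]
accumulated = frames.sum(axis=0)           # direct summation of 10 frames
# The echo adds ~2 counts to one bin per frame, but because it moves the
# accumulation spreads those counts over ~30 bins while the noise floor grows
# to ~8 counts/bin, so the echo stays hidden in the accumulated histogram.
print(single.max(), accumulated.max(), accumulated.mean())
```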

4.1 Principle

Pipeline filtering is an effective method for weak target detection in infrared (IR) image sequences [32-34]. Herein, we introduce the pipeline filtering method to NLOS target tracking to denoise the temporal histograms and extract the trajectories of the third echo, utilizing the temporal correlation between histograms. The steps are as follows. First, we pre-acquire and subtract the background to reduce ambient noise, and then use a mean filter to slightly smooth the histograms. Owing to the width of the hidden object and the jitter of the system, the target echoes have a width feature along the time axis. We require the echoes to be wider than several bins to obtain the screened candidate targets. Alternatively, the candidate targets can be screened in other ways, such as by amplitude or a combination of width and amplitude.
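A minimal width-screening sketch is shown below; the amplitude threshold is an illustrative assumption, while the default 20-bin minimum width matches the limit used in the experiments of Section 4.2.

```python
import numpy as np

def screen_candidates(hist, amp_threshold, min_width_bins=20):
    """Return (start, end) bin ranges whose counts stay above amp_threshold
    for at least min_width_bins consecutive bins (the width feature of real
    echoes)."""
    above = np.concatenate(([False], hist > amp_threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2], edges[1::2]       # paired rising/falling edges
    return [(int(s), int(e)) for s, e in zip(starts, ends)
            if e - s >= min_width_bins]

# Example: a 30-bin-wide bump around bin 1800 passes, an isolated spike does not.
h = np.zeros(2500)
h[1790:1820] = 5.0
h[500] = 9.0
print(screen_candidates(h, amp_threshold=1.0))   # [(1790, 1820)]
```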

Second, we input the measurements and record the pipeline head for the first histogram, where the candidate pipeline head is obtained by thresholding. In the searching step, we search for candidates in the next frame within a certain range. If N candidates are detected within this range, the pipeline is split into N pipes, and each pipe is assigned a confidence. If the target is not found, we estimate the echo position by a least-squares fit over the latest frames. Consequently, in the checking step, we determine whether different pipelines have detected the same candidate. The pseudo-code is shown in Algorithm 1. In practice, we use an empirical formula, Eq. (5), to set the search range, where k is an empirical factor, ${v_{object}}$ is the estimated speed of the object, $\Delta t$ is the time interval, and c is the speed of light.

$$\textrm{range} = k({v_{object}} \cdot \Delta t)/c$$

[Algorithm 1. Pseudo-code of the pipeline filtering method (provided as image oe-31-22-36209-i001 in the published version).]
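Because Algorithm 1 appears only as an image in the published article, the sketch below illustrates the searching and prediction steps in a simplified single-pipe form that omits the pipe splitting and confidence bookkeeping. Keeping only the closest candidate, the least-squares line fit over recent detections, and all default parameter values are our own assumptions.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def search_range_bins(k, v_object, dt, bin_s):
    """Empirical search range of Eq. (5), converted from seconds to bins."""
    return int(np.ceil(k * (v_object * dt) / C / bin_s))

def pipeline_track(candidate_lists, k=3.0, v_object=0.01, dt=0.85, bin_s=10e-12,
                   history=5):
    """Track the third-echo bin index through sequential frames.
    candidate_lists[t] holds the screened candidate bin indices of frame t.
    Defaults (empirical factor k, assumed object speed, assumed frame interval)
    are illustrative, not the values used in the paper."""
    rng_bins = search_range_bins(k, v_object, dt, bin_s)
    track, detected = [], []                 # estimate per frame, hit flags
    for cands in candidate_lists:
        if not any(detected):                # still waiting for the pipeline head
            head = cands[0] if cands else None
            track.append(head)
            detected.append(head is not None)
            continue
        # Predict the current position with a least-squares line fit over the
        # most recent detected frames (used when the target is not found).
        pts = [(i, b) for i, (b, d) in enumerate(zip(track, detected)) if d][-history:]
        if len(pts) >= 2:
            xs, ys = zip(*pts)
            slope, intercept = np.polyfit(xs, ys, 1)
            pred = slope * len(track) + intercept
        else:
            pred = pts[-1][1]
        near = [c for c in cands if abs(c - pred) <= rng_bins]   # search window
        if near:                             # keep the closest candidate
            track.append(min(near, key=lambda c: abs(c - pred)))
            detected.append(True)
        else:                                # miss: fall back to the prediction
            track.append(int(round(pred)))
            detected.append(False)
    return track

# Toy sequence: the echo drifts from bin 1800 to 1808 with occasional clutter
# candidates and one empty frame.
frames = [[1800, 900], [1802], [], [1805, 400], [1808]]
print(pipeline_track(frames))   # -> [1800, 1802, 1804, 1805, 1808]
```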

4.2 Experimental tests

To verify the effectiveness of the method, we experimentally collected a temporal histogram sequence while the hidden object was in a round-trip motion, as shown in Fig. 3(a, d). The target signal is almost drowned in the noise. The period is 25 ns, each bin is 10 ps, and the acquisition time is 0.3 s. When the hidden target is far away from the relay surface, the target echoes become difficult to detect from a single frame alone. Figure 3(b, e) shows the candidates with a minimum width limit of 20 bins, and there is still a lot of noise. Finally, the pipeline with the highest confidence is identified and extracted, as shown in Fig. 3(c, f), exploiting the temporal correlation feature of the third target echo. By using the pipeline filtering method, random noise in the time domain is removed, and the echoes are detected correctly based on temporal correlation. The limitations of this method are the inability to remove fixed-pattern noise and to handle crossing trajectories, which are also limitations of pipeline filtering in infrared image processing.


Fig. 3. Experimental results of validating the pipeline method for a moving hidden object. (a, d) show the 3D views of the time-counts-sequence matrix, in which the histograms are background-subtracted and the echo trajectory is weak. (b, e) show the 3D views of the matrix after limiting the width. (c, f) show the 3D views of the time-counts-sequence matrix in which the echoes have been extracted from strong noise.


As shown in Fig. 4, using the temporal histogram sequence acquired above, we compare the pipeline filtering method with extraction from one frame, accumulation of 3 frames, and accumulation of 10 frames. Extraction after accumulation carries the noise of previous frames into the current one, causing misidentification and a high false-alarm rate. By contrast, the pipeline filtering method extracts almost the complete trajectory of the third echoes, as illustrated in Fig. 4(d), because it considers only the continuous third echoes in the sequential histograms. In addition, when the data are too noisy, some preprocessing steps can be considered, such as filtering and moderate accumulation of previous frames. Accurate extraction of the flight time is a prerequisite for positioning the hidden object, and inaccurate extraction leads to positioning errors.


Fig. 4. Comparison of the effects using different methods to extract the third echo. (a) shows the results that extract the maximum in a single frame. (b) shows the results that accumulate the previous 3 frames and extract the maximum. (c) shows the results that accumulate the previous 10 frames and extract the maximum. (d) shows the results that extract the third echo by the pipeline method.


5. Results

5.1 Positioning experiments

In the first experiment, as shown in Fig. 5, we localize a stationary hidden object at 13 positions with the setup shown in Fig. 1(a). The hidden object is a foam board with a width of 40 cm and a height of 40 cm. Of these positions, 9 are placed at the middle level and the others are placed at different levels of the space. We acquired the histogram for 0.5 s at each position. In pre-processing, we shift the first-echo peak to the origin of the temporal histogram and use it as the start time of all events. We then zero the first echoes because their high intensity would make it difficult to find the third echo. We pre-acquire a background without the object to suppress ambient noise, and then use a mean filter to slightly smooth the temporal histogram. The collected histograms are discretized into 2500 bins of 10 ps each. We then combine the two filtering methods above to extract the weak echoes based on temporal correlation. A Gaussian is fitted to calibrate the extracted third echoes. The hidden target is positioned with a neural-network method [22] that exploits the relationship between the time differences and the spatial location of the hidden object.
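The Gaussian calibration of an extracted third echo can be sketched as follows; this is a minimal illustration using scipy.optimize.curve_fit, and the local fit window and initial guesses are our own assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, amp, t0, sigma, offset):
    return amp * np.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) + offset

def fit_third_echo(hist, bin_ps=10.0):
    """Fit a Gaussian around the strongest remaining peak of the filtered
    histogram and return the calibrated time of flight in picoseconds."""
    t = np.arange(hist.size) * bin_ps
    i0 = int(np.argmax(hist))
    lo, hi = max(0, i0 - 30), min(hist.size, i0 + 30)   # local fit window
    p0 = [hist[i0], t[i0], 5 * bin_ps, float(np.median(hist))]
    popt, _ = curve_fit(gaussian, t[lo:hi], hist[lo:hi], p0=p0, maxfev=5000)
    return popt[1]   # fitted peak position t0 (ps)
```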


Fig. 5. Experimental results of NLOS positioning. (a) shows 9 positioning results in the middle level. (b) shows the other positioning results in different levels of the space. The red cross “L” is the laser spot. The black crosses “A”, “B”, and “C” are the specified spots on the surface. The blue cross represents the position of the hidden object. The rectangle represents the hidden object.


After measurement, the root-mean-square error of the object's center of mass is between 2 and 10 cm on each axis. The selection of detection points on the wall takes into account the relationship between the resolution of the detection system [35], the locations of the detection points, and the intensity of the target echo. There is a tradeoff in choosing the detection points: within a certain range, increasing the distance between detection points improves the resolution of the system, but beyond this range the detectors can no longer be guaranteed to acquire the returned photons because of the attenuation.

5.2 Tracking experiments

In the second experiment, we conducted tracking experiments for a moving hidden object and show the tracking results of the dual channels combined with the pipeline method alongside the single channel without filtering, as shown in Fig. 6. We scanned the three detection points through the galvanometer in turn with an acquisition time of 0.3 s per frame and a settling time of 0.55 s for the galvanometer, for a total acquisition time of 2.55 s (3 × (0.3 s + 0.55 s)). The object moves at approximately 0.533 cm/s. As there is a delay between the acquisitions at the individual points in our system, we slow the speed appropriately to reduce positioning errors. When the scanning time decreases, target tracking can be sped up in our experiments. The acquisition time can be reduced by using multiple detectors staring at the surface, or a faster galvanometer controller can be used to shorten the control time and improve performance. The reconstructed positions are in good agreement with the object's motion.


Fig. 6. Experimental results of tracking a moving object. (a-c) show the results without filtering in three different paths. (d-f) show the results with the temporal filtering methods. The red cross “L” is the laser spot. The black crosses “A”, “B”, and “C” are the specified spots on the surface. The blue crosses represent the positions of the hidden object in each frame. The rectangle represents the hidden object.


6. Conclusion

In practical applications, the ability to separate weak target echoes depends on the detection range, detection performance, and other factors. The returned echoes are extremely weak after transmission and multiple reflections. Therefore, improving the ability to separate weak target echoes is important for NLOS detection.

In this paper, we analyzed and explored how to separate the weak returned echoes from the overwhelming noise for NLOS tracking. Specifically, the original dual-detector method uses a hardware AND gate; however, because of the time jitter of the detection system, the photons arriving at the detectors may not generate electrical signals at the same time, so an ns-level unit is needed to store the electrical signals generated by the signal echoes, which makes the approach unsuitable for scenarios requiring high time resolution. Therefore, we propose a dual-detector method with software operations to replace the hardware AND gate, avoiding the loss of time resolution. We use this method to detect the third echo returned from the hidden object, exploiting temporal correlation for filtering, which improves the detection ability for weak signals even though the energy of the returned photons is halved. In the future, on the one hand, a combination of hardware and software techniques is expected to improve the adaptability to weak signals in NLOS scenarios; on the other hand, we will consider improving the detection capability of the system by improving the coupling efficiency. Additionally, we introduced the pipeline filtering method to extract the target echoes, which utilizes the temporal correlation between frames and extracts the trajectory of the echoes in the sequence. In summary, we develop two methods based on temporal correlation features to extract the trajectory of the target echoes from the noise, reducing the false-alarm probability, which benefits position retrieval. Further, we think these methods may also be useful for imaging.

Funding

Youth Innovation Promotion Association of the Chinese Academy of Sciences (2020372); National Natural Science Foundation of China (62005289).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. D. Faccio, A. Velten, and G. Wetzstein, “Non-line-of-sight imaging,” Nat. Rev. Phys. 2(6), 318–327 (2020). [CrossRef]  

2. R. Geng, Y. Hu, and Y. Chen, “Recent Advances on Non-Line-of-Sight Imaging: Conventional Physical Models, Deep Learning, and New Scenes,” APSIPA Transactions on Signal and Information Processing 11(1), 1 (2022). [CrossRef]  

3. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

4. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015). [CrossRef]  

5. V. Arellano, D. Gutierrez, and A. Jarabo, “Fast back-projection for non-line of sight reconstruction,” Opt. Express 25(10), 11574–11583 (2017). [CrossRef]  

6. C. Jin, J. Xie, S. Zhang, Z. Zhang, and Y. Zhao, “Reconstruction of multiple non-line-of-sight objects using back projection based on ellipsoid mode decomposition,” Opt. Express 26(16), 20089–20101 (2018). [CrossRef]  

7. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

8. M. La Manna, F. Kine, E. Breitbach, J. Jackson, T. Sultan, and A. Velten, “Error Backprojection Algorithms for Non-Line-of-Sight Imaging,” IEEE Trans. Pattern Anal. Mach. Intell. 41(7), 1615–1626 (2019). [CrossRef]  

9. S. Xin, S. Nousias, K. N. Kutulakos, A. C. Sankaranarayanan, S. G. Narasimhan, I. Gkioulekas, and I. C. Soc, “A Theory of Fermat Paths for Non-Line-of-Sight Shape Reconstruction,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Conference on Computer Vision and Pattern Recognition (2019), 6793–6802.

10. M. Isogawa, D. Chan, Y. Yuan, K. Kitani, and M. O’Toole, “Efficient Non-Line-of-Sight Imaging from Transient Sinograms,” in Computer Vision – ECCV 2020 (Springer International Publishing, 2020), 193–208.

11. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-Based Non-Line-of-Sight Imaging using Fast f-k Migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

12. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. Huu Le, A. Jarabo, D. Gutierrez, and A. Velten, “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

13. S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Phasor field waves: A Huygens-like light transport model for non-line-of-sight imaging applications,” Opt. Express 27(20), 29380–29399 (2019). [CrossRef]  

14. W. Chen, F. Wei, K. N. Kutulakos, S. Rusinkiewicz, and F. Heide, “Learned Feature Embeddings for Non-Line-of-Sight Imaging and Recognition,” ACM Trans. Graph. 39(6), 1–18 (2020). [CrossRef]  

15. T. Yu, M. Qiao, H. Liu, and S. Han, “Non-Line-of-Sight Imaging Through Deep Learning,” Acta Opt. Sin. 39(6), 1–18 (2019).

16. C. A. Metzler, F. Heide, P. Rangarajan, M. M. Balaji, A. Viswanath, A. Veeraraghavan, and R. G. Baraniuk, “Deep-inverse correlography: towards real-time high-resolution non-line-of-sight imaging,” Optica 7(1), 63–71 (2020). [CrossRef]  

17. C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565(7740), 472–475 (2019). [CrossRef]  

18. M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani, and A. Dogariu, “Passive sensing around the corner using spatial coherence,” Nat. Commun. 9(1), 3629 (2018). [CrossRef]  

19. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016). [CrossRef]  

20. C. Wu, J. J. Liu, X. Huang, Z. P. Li, C. Yu, J. T. Ye, J. Zhang, Q. Zhang, X. K. Dou, V. K. Goyal, F. H. Xu, and J. W. Pan, “Non-line-of-sight imaging over 1.43 km,” Proc. Natl. Acad. Sci. U. S. A. 118(10), e2024468118 (2021). [CrossRef]  

21. S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10117 (2017). [CrossRef]  

22. T. L. Li, Y. H. Luo, Z. L. Xie, Y. Q. Liu, S. Y. Xia, S. X. Xu, G. Ren, H. T. Ma, B. Qi, and L. Cao, “Non-line-of-sight fast tracking in a corridor,” Opt. Express 29(25), 41568–41581 (2021). [CrossRef]  

23. S. Chan, R. E. Warburton, G. Gariepy, Y. Altmann, S. McLaughlin, J. Leach, and D. Faccio, “Fast tracking of hidden objects with single-pixel detectors,” Electron. Lett. 53(15), 1005–1008 (2017). [CrossRef]  

24. N. Scheiner, F. Kraus, F. Wei, P. Buu, F. Mannan, N. Appenrodt, W. Ritter, J. Dickmann, K. Dietmayer, B. Sick, F. Heide, and Ieee, “Seeing Around Street Corners: Non-Line-of-Sight Detection and Tracking In-the-Wild Using Doppler Radar,” in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Conference on Computer Vision and Pattern Recognition (2020), 2065–2074.

25. D. B. Lindell, M. O’Toole, S. G. Narasimhan, and R. Raskar, “Computational time-resolved imaging, single-photon sensing, and non-line-of-sight imaging,” ACM SIGGRAPH 2020 Courses (2020). [CrossRef]  

26. J. Liu, Y. Zhou, X. Huang, Z.-P. Li, and F. Xu, “Photon-Efficient Non-Line-of-Sight Imaging,” IEEE Trans. Comput. Imaging 8, 639–650 (2022). [CrossRef]  

27. Z. Li, X. Liu, J. Wang, Z. Shi, L. Qiu, and X. Fu, “Fast non-line-of-sight imaging based on first photon event stamping,” Opt. Lett. 47(8), 1928–1931 (2022). [CrossRef]  

28. Y. Ren, Y. Luo, S. Xu, H. Ma, and Y. Tan, “A comparative study of time of flight extraction methods in non-line-of-sight location,” Opto-Electronic Engineering 48(1), 200124 (2021). [CrossRef]  

29. J. H. Nam, E. Brandt, S. Bauer, X. Liu, M. Renna, A. Tosi, E. Sifakis, and A. Velten, “Low-latency time-of-flight non-line-of-sight imaging at 5 frames per second,” Nat. Commun. 12(1), 6526 (2021). [CrossRef]  

30. H. J. Kong, T. H. Kim, S. Jo, and M. S. Oh, “Smart three-dimensional imaging LADAR using two Geiger-mode avalanche photodiodes,” Opt. Express 19(20), 19323–19329 (2011). [CrossRef]  

31. Z.-J. Zhang, Y. Zhao, Y. Zhang, L. Wu, and J.-Z. Su, “A high detection probability method for Gm-APD photon counting laser radar,” ISPDI 2013 - Fifth International Symposium on Photoelectronic Detection and Imaging (SPIE, 2013), Vol. 8912.

32. W. Gouyou, C. Zhenxue, and L. I. Qiaoliang, “A Review of Infrared Weak and Small Targets Detection under Complicated Background,” Infrared Technology 28, 287–292 (2006).

33. M. Diani, G. Corsini, and A. Baldacci, “Space-time processing for the detection of airborne targets in IR image sequences,” IEE Proc., Vis. Image Process. 148(3), 151–157 (2001). [CrossRef]  

34. G. Wang, “A pipeline algorithm for detection and tracking of pixel-sized target trajectories,” 1990 Technical Symposium on Optics, Electro-Optics, and Sensors (SPIE, 1990), Vol. 1305.

35. Z. He, “A simulation model of back-projection based non-line-of-sight imaging (NLOS) for resolution analysis,” in Other Conferences, (2021), [CrossRef]  

Supplementary Material (1)

Supplement 1: performs a simulation to test the effectiveness of the methods, analyzes the detection probability and false-alarm probability, and tests the effectiveness of the dual-detector method for NLOS imaging.
