Abstract

Non-line-of-sight imaging has attracted increasing attention for its wide range of applications. Even though ultrasensitive cameras/detectors with high time resolution are available, current back-projection methods still fail to achieve satisfying reconstructions of multiple hidden objects due to severe aliasing artifacts. Here, a novel back-projection method is developed to reconstruct multiple hidden objects. Our method decomposes all the ellipsoids in a confidence map into several “clusters” belonging to different objects (termed “ellipsoid mode decomposition”), and then reconstructs the objects individually from their ellipsoid modes by filtering and thresholding. Importantly, the simulated and experimental results demonstrate that this method can effectively eliminate the impact of aliasing artifacts and exhibits clear advantages in separating, locating and recovering multiple hidden objects, which may provide a good basis for reconstructing complex non-line-of-sight scenes.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Laser imaging techniques have developed rapidly for many civilian and military applications, such as navigation, terrain visualization, obstacle avoidance, weapon guidance, and target recognition. In recent years, “seeing around a corner” has become a new capability for laser imaging, which uses reflections from mirrors [1,2] or glossy surfaces [3] to extend the view into non-line-of-sight (NLoS) conditions [4]. Hitherto, most NLoS imaging methods require measuring and analyzing the flight time of scattered photons that have traveled beyond the line of sight, except for several methods limited to special conditions, such as wavefront shaping with spatial light modulators [5], autocorrelation of the speckle pattern scattered from a diffuse wall [6], retrieval of the scattered point spread function with the aid of a reference object [7], and anti-pinhole imaging using an occluder as a lens [8]. Hence NLoS imaging systems depend strictly on the response time of optical sensors, and ultrasensitive cameras/detectors with high time resolution have been adopted, from the streak camera [9] and intensified charge-coupled device (ICCD) [10,11] to photonic mixer devices (PMDs) [12,13] and the single-photon avalanche diode (SPAD)/array [14–17].

Despite the availability of all the above hardware, improving the imaging quality of NLoS objects remains a bottleneck because reconstruction of the objects is mathematically an ill-posed inverse problem. Two classes of approaches, convex optimization and back projection, have mainly been developed in recent years. The former achieves good reconstruction quality, yet it heavily depends on priors and requires a projection matrix so large that the inverse problem can become intractable [13,18–20]. In addition, it is inherently a batch process, since all projections must be available before the inverse problem is solved. The latter is widely applied in NLoS imaging because of its distinct advantages, including freedom from assumptions about the hidden scene geometry [9–11], real-time capability [17,21,22], and relative robustness to noise and erroneous data [15]. Therefore, back projection has been a promising method for reconstructing hidden objects.

The back-projection methods employed in NLoS imaging are analogous to those used in computed tomography (CT) [23], and they share similar problems with CT imaging, i.e. aliasing artifacts [24]. The so-called artifacts generally refer to recovered voxels around the objects that have non-zero intensity but do not exist in the real world. The artifacts can be seen as a by-product of back projection caused by data under-sampling and the limited range of projection views [25]. They can greatly degrade the reconstruction quality, producing blurry boundaries; for multiple NLoS objects in particular, the overlap among the artifacts of different objects makes it difficult to separate the objects, locate their positions and recover their shapes. In this case, current back-projection methods fail to acquire satisfying results [9,15,26,27].

In this paper, we present a novel method to reconstruct multiple NLoS objects using back projection based on ellipsoid mode decomposition (EMD). The basic idea of our method is to decompose all the ellipsoids in a confidence map into several clusters of ellipsoids, each belonging to a different object. Each object and its artifacts can then be extracted from the confidence map, so that each object can be individually reconstructed by a procedure of filtering and thresholding. The rest of the paper is organized as follows: first, we introduce the principle of the general back-projection method and theoretically develop our method. Second, we compare simulated reconstructions of multiple objects using general back-projection methods and our method, and we experimentally demonstrate the feasibility of our method for reconstructing multiple objects. Third, potential reasons for the comparative simulated and experimental results are further analyzed in terms of artifacts.

2. General back-projection method

Taking the light path shown in Fig. 1 as an example, the NLoS imaging process is described as follows. A pulsed laser is directed towards an image screen to form a light source point S. The light pulse falling on a point P on the objects is diffused and returns to the image screen. For a given image point Ii, a detector can receive some of the reflected photons by directing its narrow field of view (FOV) at the image point, and the distance geometry can be described by Eq. (1).

r1 + r2 + r3 + r4 = ct,  (1)
where r2 and r3 are the distances from point P to the source point S and to the image point Ii, respectively; r1 is the distance between the source point and the laser, and r4 is the distance between the image point and the detector. Both r1 and r4 are independent of the objects and can be measured directly in advance. The time intervals t between the laser’s synchronous trigger and the received photons are measured and counted repeatedly to form a histogram N(S,Ii,t) of photon counts versus time. For a reconstruction of the objects, a large number of time histograms must be obtained for different pairs of source and image points by scanning the FOV of the detector across the image screen.
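
As a minimal sketch of the distance geometry of Eq. (1), the snippet below (the function name and the speed-of-light constant are illustrative, not from the paper) computes a photon’s arrival time from the four path lengths:

```python
# Eq. (1): the total path laser -> S -> P -> Ii -> detector sets the photon
# arrival time t through r1 + r2 + r3 + r4 = c*t.
C = 2.998e8  # speed of light in m/s (illustrative constant)

def arrival_time(r1, r2, r3, r4):
    """Photon arrival time t (in seconds) for path lengths r1..r4 (in meters)."""
    return (r1 + r2 + r3 + r4) / C

# e.g. r1 = r4 = 1.0 m (measured in advance), r2 = r3 = 0.5 m
t = arrival_time(1.0, 0.5, 0.5, 1.0)
```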

 

Fig. 1 The principle of NLoS imaging.


The reconstruction of the objects is based on the acquired time histograms. A back-projection process projects the photon counts in each time bin of a histogram into a voxelized Cartesian space according to Vij(x,y,z)|(ctj − r1 − r2 − r3 − r4 = 0) = N(S, Ii, tj). Each projection Vij(x,y,z) corresponds to an ellipsoid with focal points at S and Ii, and the time bins of all time histograms are projected into the voxel space. That is to say, a mass of overlapping ellipsoids composes a confidence map V(x,y,z), in which the intensity of each voxel represents the probability of an object occupying that voxel. In the confidence map, every “object” consists of a certain number of intersecting ellipsoids, which we define as the ellipsoid mode of the object.
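
The projection rule above can be sketched in Python. This is a toy implementation under simplifying assumptions (a dict-based confidence map, a hypothetical 10 ps bin width, and a half-bin tolerance for deciding whether a voxel lies on the ellipsoid), not the paper’s production code:

```python
import math

C = 2.998e8    # speed of light, m/s (illustrative constant)
BIN = 10e-12   # hypothetical 10 ps time-bin width

def back_project(V, voxels, S, Ii, hist, r1, r4):
    """Accumulate one time histogram N(S, Ii, t) into a confidence map V,
    stored as a dict mapping voxel coordinates to summed intensity."""
    for v in voxels:
        r2 = math.dist(v, S)    # voxel -> source point S
        r3 = math.dist(v, Ii)   # voxel -> image point Ii
        path = r1 + r2 + r3 + r4
        for j, count in enumerate(hist):
            # the voxel lies on bin j's ellipsoid (focal points S, Ii) if
            # c*t_j - r1 - r2 - r3 - r4 = 0 to within half a bin
            if count and abs(C * j * BIN - path) < C * BIN / 2:
                V[v] = V.get(v, 0.0) + count
    return V
```

Summing such contributions over all source/image-point pairs yields the confidence map V(x,y,z).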

An ellipsoid mode can be divided into an object and its artifacts. The former refers to voxels mapped to an actual object, while the latter represents the remaining voxels, which do not exist in the real world. The artifacts can be seen as a by-product of the back projection. The surface edges of the object become blurred because of the artifacts surrounding it. Thus, a procedure of filtering and thresholding is performed to eliminate the artifacts around the object. Generally, a Laplacian filter is first used to enhance surface edges by computing the second derivative of the confidence map, as given in Eq. (2). Then, a thresholding algorithm, written as Eq. (3), is applied to remove the artifacts and produce a 3D reconstruction of the object.

Vf(x,y,z) = ∇²V(x,y,z),  (2)
Vf(x,y,z) > β·max(Vf) = constant.  (3)
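
Eqs. (2) and (3) can be illustrated with a small NumPy sketch, assuming a 6-neighbour discrete Laplacian stencil and an illustrative β (sign and boundary conventions for the Laplacian vary between implementations):

```python
import numpy as np

def laplacian3d(V):
    """Eq. (2): discrete Laplacian of the confidence map (6-neighbour stencil,
    periodic boundaries via np.roll; the boundary handling is an illustrative choice)."""
    Vf = -6.0 * V
    for axis in range(3):
        Vf = Vf + np.roll(V, 1, axis) + np.roll(V, -1, axis)
    return Vf

def threshold(Vf, beta=0.3):
    """Eq. (3): keep only voxels whose filtered intensity exceeds beta * max(Vf)."""
    return Vf > beta * Vf.max()
```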

The above reconstruction method is feasible for a single NLoS object but becomes ineffective for multiple NLoS objects because of serious interference among multiple ellipsoid modes. To our knowledge, methods suited to multiple-object reconstruction are still lacking and will be quite crucial to NLoS imaging.

3. Our method

3.1 Back projection based on EMD

As mentioned above, the characteristic of an ellipsoid mode is the coexistence of an object and its artifacts. In the case of multiple objects, the artifacts become complex and overlapping because of the objects’ different properties, such as reflectivity, location and shape, so the general back-projection method is ineffective at eliminating the artifacts and reconstructing the multiple objects. Here, we design a set of decomposition criteria, based on which we can decompose all the ellipsoids in a confidence map into several ellipsoid modes belonging to different objects. Then, each object can be individually reconstructed from its corresponding ellipsoid mode.

The principle of back projection based on EMD is depicted in Fig. 2. First, the time histograms acquired by an NLoS system are projected into a voxelized Cartesian space to form an initial confidence map. A decomposing operation then extracts the ellipsoid mode of a selected object from the initial confidence map; the rest constitutes a residual confidence map, which serves as the new confidence map for the next decomposing operation. The ellipsoid modes of the objects are thus extracted successively by the same decomposing operation until all objects have been extracted, leaving only the projections from background and noise. Through these repeated decompositions, the initial confidence map is divided into multiple ellipsoid modes. Next, each object can be independently reconstructed from its ellipsoid mode after filtering and thresholding. Finally, all reconstructed objects are composed into a whole reconstruction of the multiple objects. It should be emphasized that the decomposing operation is crucial to ensuring the reconstruction quality of the multiple objects.
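
The loop described above can be sketched as follows, with `decompose` standing in as a placeholder for the criteria of Sec. 3.2; the helper in the second function is a deliberately trivial stand-in for illustration, not the paper’s algorithm:

```python
def reconstruct_emd(V, decompose, min_intensity):
    """Sketch of the Fig. 2 pipeline. decompose(residual) is assumed to return
    the ellipsoid mode of the strongest remaining object; the confidence map is
    a flat list of voxel intensities for simplicity."""
    objects = []
    residual = list(V)
    while True:
        mode = decompose(residual)
        if sum(mode) < min_intensity:        # stop criterion (Sec. 3.4)
            break
        objects.append(mode)                 # filtering/thresholding would follow
        residual = [r - m for r, m in zip(residual, mode)]
    return objects

def toy_decompose(residual):
    """Trivial stand-in: extract only the single brightest voxel as a 'mode'."""
    i = max(range(len(residual)), key=lambda k: residual[k])
    mode = [0.0] * len(residual)
    mode[i] = residual[i]
    return mode
```

Calling `reconstruct_emd([5.0, 0.0, 2.0], toy_decompose, 1.0)` extracts the strong voxel first and the weak one second, mirroring the strong-to-weak extraction order described in the text.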

 

Fig. 2 Schematic diagram of back projection based on EMD.


3.2 Decomposing criteria

Because the ellipsoid modes of the different objects overlap with no obvious boundaries between them in the confidence map, it is very difficult to separate them using conventional image segmentation methods. Our decomposing operation is based on three considerations. First, how are the voxels clustered into the most probable objects? Second, which object should be selected preferentially in a given decomposing operation? Last, how is the ellipsoid mode of the selected object extracted from the confidence map? Details of the decomposition criteria are listed as follows.

  • (1) Clustering of voxels in confidence map

    Since the objects are most likely to appear in the voxels with maximal intensity in the confidence map, our criteria take a local maximal voxel as the central position of an object. Considering the shape and volume of the object, the voxels around the local maximal voxel are more likely to be part of this object; this likelihood is estimated from two conditions. One is that the distance between an evaluated voxel and the local maximal voxel should be within a certain range (the spatial window); the other is that the intensity difference between these two voxels should be within a certain range (the intensity window). Only voxels satisfying both conditions are clustered into the object centered on the local maximal voxel.

  • (2) Selection of preferentially extracted object

    After the clustering of voxels in the confidence map is finished, it is important to select an appropriate object to be extracted from the confidence map in the current decomposing operation. Our criteria prefer to extract the “strong object”, which has both high intensity and large volume. This choice has two advantages. On the one hand, the “strong object” is more apt to keep its real location and shape under the influence of other “weak objects”, so it is easier to extract the “strong object” accurately from the confidence map. On the other hand, only after the “strong object” is extracted do the “weak objects” obstructed by it have a greater chance to reveal themselves in the residual confidence map, so that one of the revealed “weak objects” can be found and extracted in the next decomposing operation.

  • (3) Extraction of ellipsoid mode of selected object

    Once an object is selected for extraction, the issue of how to effectively extract the ellipsoid mode of the selected object from the confidence map becomes important. Our criterion is that only the ellipsoids passing through the voxels of the selected object are projected again to form the ellipsoid mode of the selected object; this ellipsoid mode is then subtracted from the confidence map to leave a residual confidence map. The benefits of extracting an ellipsoid mode are twofold: first, the shape of the selected object can be individually reconstructed from its own ellipsoid mode, unaffected by the ellipsoid modes of the remaining objects; second, the ellipsoid modes of the remaining objects can be separated in the next decomposition without interference from the extracted ellipsoid mode.

3.3 Procedure of decomposing

The decomposing procedure is implemented in Matlab and is briefly listed in Table 1. Before decomposing, the spatial window (hs) and the intensity window (hc) are initialized.


Table 1. List of Decomposing Procedure

|x − xo| ≤ hs,  |y − yo| ≤ hs,  |z − zo| ≤ hs,
|V(xo,yo,zo) − V(x,y,z)| / V(xo,yo,zo) ≤ hc.

The decomposing procedure actually has little dependence on the values of hs and hc, so the selection of hs and hc is not very strict. In our method, hs is chosen slightly larger than the size of the largest object in the scene, and hc is set to a value of about 0.4.
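
The two window conditions above can be written as a small predicate; the default `hs` and `hc` values below are chosen for illustration only:

```python
def in_cluster(v, v0, V):
    """Decide whether voxel v joins the cluster centred on local maximum v0:
    spatial window |x - xo|, |y - yo|, |z - zo| <= hs and intensity window
    |V(v0) - V(v)| / V(v0) <= hc. V maps voxel coordinates to intensities."""
    hs, hc = 0.5, 0.4   # illustrative values; hs should exceed the largest object
    spatial = all(abs(a - b) <= hs for a, b in zip(v, v0))
    intensity = abs(V[v0] - V[v]) / V[v0] <= hc
    return spatial and intensity
```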

3.4 Stop criteria

In our method, the unknown objects are extracted and reconstructed one by one on the basis of our decomposition criteria, with the characteristic that “strong objects” with higher intensity and larger volume take priority, while “weak objects” with low intensity and small volume are processed later. The reconstruction stops when the intensity sum of the object extracted in a given decomposition falls below a preset value. If this value is appropriately selected according to practical experience, most objects can be recovered well, except for very “weak objects”, background and noise. Fortunately, very “weak objects” are often of no concern and can even be neglected in practice. Of course, if the number of objects is known in advance, the reconstruction finishes when the number of recovered objects reaches the total number of objects.

4. Results and discussion

4.1 Simulated results

The simulated experiment depicted in Fig. 1 has been performed. We assume that the image screen, parallel to the x-y plane, is located at z = 0 and that the positions of the laser and the detector are known in advance. Objects with complex shapes are generated in 3ds Max, and the model files are imported into MATLAB to define the positions of the objects’ points. In our simulation, the round, triangle and square plates, with reflectivities of 0.1, 0.3 and 1.0, are located at different distances from the image screen, respectively. The simulation process consists of forward transmission and inverse reconstruction. In the forward transmission, the number of photons received by the detector is obtained from the reflections of all object points, with the specific value given by a radar equation similar to those in the references [28–30]. The flight time of each photon is given by a ray-tracing method similar to that in the reference [31]. For a fixed source point and a set of image points, a set of time histograms (i.e. TCSPC data) can be acquired by the forward transmission process. There are 4024 time bins with a width of 10 ps. To simulate the time response of the NLoS imaging system due to the pulse width of the laser and the time jitter of the detector, we broaden the time histograms with a Gaussian kernel with a full width at half maximum (FWHM) of 50 ps. In the inverse reconstruction, both the general back-projection methods and our method are applied to the time histograms acquired during the forward transmission. The reconstruction region is limited to a volume of 2 m × 2 m × 1 m divided into 100 × 100 × 50 voxels.
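
The histogram-broadening step can be sketched as a discrete convolution with a normalized Gaussian kernel (FWHM 50 ps and 10 ps bins as in the simulation; the kernel half-width is an illustrative choice):

```python
import math

def gaussian_kernel(fwhm_ps=50.0, bin_ps=10.0, half_width=5):
    """Normalized Gaussian kernel sampled on the time-bin grid; FWHM is
    converted to sigma via FWHM = 2*sqrt(2*ln 2)*sigma."""
    sigma = fwhm_ps / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    k = [math.exp(-0.5 * ((i * bin_ps) / sigma) ** 2)
         for i in range(-half_width, half_width + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalise so total photon counts are preserved

def broaden(hist, kernel):
    """Convolve a time histogram with the kernel (edges truncated)."""
    half = len(kernel) // 2
    out = [0.0] * len(hist)
    for i, c in enumerate(hist):
        if c:
            for j, w in enumerate(kernel):
                t = i + j - half
                if 0 <= t < len(hist):
                    out[t] += c * w
    return out
```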

Typical reconstructions using general back projection with global thresholding are shown in Fig. 3. In Fig. 3(a), at a low threshold value of 0.2, the round plate and the triangle plate are reconstructed but with serious shape distortions, while the square plate fails to emerge clearly from its surrounding artifacts. Although the square plate can be recovered by increasing the threshold to 0.25, the round plate and the triangle plate in Fig. 3(b) are then greatly distorted. As the threshold is increased to 0.3, only a small part of the triangle plate remains and the round plate disappears completely, while the square plate acquires an acceptable shape, as shown in Fig. 3(c). With the threshold up to 0.5, the square plate takes on a good shape but both the round plate and the triangle plate disappear completely, as shown in Fig. 3(d). These changes indicate that general back projection with global thresholding is powerless to reconstruct multiple NLoS objects with different properties such as reflectivity, location and shape.

 

Fig. 3 Simulated reconstruction of multiple NLoS objects (a)-(d) using general back-projection with global thresholding, with β = 0.2, 0.25, 0.3 and 0.5, respectively; (e)-(h) using general back-projection with local-global thresholding, with [λloc, λglob] = [0.25, 0.13], [0.18, 0.19], [0.25, 0.25] and [0.34, 0.25], respectively.


Besides general back projection with global thresholding, general back projection with local-global thresholding was presented in the reference [9]. Similar to the global thresholding expressed by Eq. (3), local-global thresholding can be described by

Vf(x,y,z) > λloc·Mloc + λglob·Mglob = constant,  (4)
where Mglob is the global maximum of the filtered confidence map Vf, and Mloc is a sliding-window maximum of Vf; an 11 × 11 × 11-voxel sliding window is typically used in our simulation. λloc and λglob are constants that can be adjusted manually. Compared with global thresholding, local-global thresholding has an advantage in separating multiple hidden objects, as shown in Fig. 3(e). Yet it still cannot eliminate the impact of aliasing artifacts among the objects, so the recovered objects have obvious shape distortions, as shown in Fig. 3(f). Like global thresholding, local-global thresholding also depends heavily on manual adjustment. If λloc and λglob are not selected correctly, some objects may be lost in the reconstruction map, as shown in Figs. 3(g) and 3(h). In addition, this method produces several new problems, such as sawtooth edges, rough surfaces and messy fragments, as shown in Figs. 3(e)-3(h).
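
A 1-D illustration of the local-global thresholding rule above (the 3-D method uses an 11 × 11 × 11 sliding window; a 1-D window stands in here for simplicity, and all parameter values are illustrative):

```python
def local_global_threshold(Vf, lam_loc, lam_glob, half=5):
    """Keep voxel i if Vf[i] > lam_loc * M_loc + lam_glob * M_glob, where M_loc
    is the maximum over a sliding window of width 2*half + 1 and M_glob is the
    global maximum of the filtered confidence map."""
    m_glob = max(Vf)
    keep = []
    for i, v in enumerate(Vf):
        lo, hi = max(0, i - half), min(len(Vf), i + half + 1)
        m_loc = max(Vf[lo:hi])
        keep.append(v > lam_loc * m_loc + lam_glob * m_glob)
    return keep
```

In this sketch, a weak peak of intensity 3 next to a strong peak of 10 would fail a pure global threshold of 0.5·Mglob = 5 but survives the local-global rule, illustrating why the local term helps separate weak objects.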

The simulated results of our method are depicted in Fig. 4. The objects are well separated and reconstructed at their correct locations. In particular, all three objects are recovered without obvious shape distortions, in contrast to the results of the general back-projection methods. Compared with the original objects, slight distortions are still observed; possible reasons for this can mainly be attributed to the missing-cone problem [9], which is also known in the traditional CT field [32].

 

Fig. 4 Simulated results using our method to reconstruct multiple NLoS objects with (a) a view from the front and (b) a view from the side.


To compare our method quantitatively with the general back-projection methods, we use the mean square error (MSE) to evaluate the error between the original and recovered objects, since MSE is one of the most frequently used image quality metrics [33]. Figures 3(b), 3(f) and 4(a) present the optimal reconstructions for the different methods, and their MSE results are listed in Table 2. Our method yields a smaller error for each recovered object, while both general back-projection methods have larger errors, especially for the round plate and the triangle plate. The advantages of our method over the general back-projection methods are well demonstrated by the MSE results for the three objects.
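
The MSE metric used for Table 2 can be computed over voxel intensities as follows (a standard definition; the flat-list representation is a simplification):

```python
def mse(original, recovered):
    """Mean square error between original and recovered voxel intensities."""
    n = len(original)
    return sum((a - b) ** 2 for a, b in zip(original, recovered)) / n
```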


Table 2. Comparison of reconstruction errors for our method and general back-projection methods

4.2 Experimental results

For an experimental proof of the theory, we constructed the experimental setup shown in Fig. 5(a). The light source is a fiber laser emitting 90 fs light pulses at a wavelength of 1550 nm. It operates at a repetition rate of 100 MHz with a pulse energy of 1 nJ. Through an emitting collimating lens, the fiber laser output is collimated into a narrow Gaussian beam with a 2 mm diameter and a 2 mrad divergence angle. The detector is a free-running InGaAs/InP SPAD with a fiber pigtail for optical input. It provides a detection efficiency of up to 25% with a time jitter of about 300 ps. A receiving collimating lens is coupled to the SPAD by the fiber pigtail, and the FOV of the detector is limited to 2 mrad. The lens is mounted on a rotating platform with adjustable elevation and azimuth angles.

 

Fig. 5 Details of our experiment including (a) photograph of the experimental setups, (b) the source point and image points on the image screen, and (c) photograph of the objects.


In Fig. 5(b), the image screen is covered by white paper with 5 cm × 5 cm grids, on which the location marked by a red point is selected as the source point and the locations marked by 256 blue points as image points. A shielding screen and the floor are covered by a black light-absorbing cloth to remove scattering from uninteresting objects. A time-correlated single-photon counting (TCSPC) unit is used to produce the time histograms. Each time histogram has 1024 time bins with a width of 165 ps. The time histograms were collected with the room lights off using a 90 s exposure time (high signal-to-noise ratio) to avoid noise effects on the reconstruction quality.
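
TCSPC histogramming with these parameters can be sketched as follows (the binning convention is an assumption for illustration; real TCSPC hardware details differ):

```python
BIN_PS = 165.0   # bin width in ps, as in the experiment
N_BINS = 1024    # number of bins per histogram

def tcspc_histogram(arrival_times_ps):
    """Bin photon arrival times (ps, relative to the sync trigger) into a
    1024-bin histogram; events outside the measurement window are discarded."""
    hist = [0] * N_BINS
    for t in arrival_times_ps:
        b = int(t // BIN_PS)
        if 0 <= b < N_BINS:
            hist[b] += 1
    return hist
```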

The objects used in the experiment include two white rectangular plates with a size of 60 cm × 30 cm and one 30 cm × 30 cm white square plate, as shown in Fig. 5(c). All of the objects were located outside the direct field of view of the laser and the detector. The objects were placed at distances of 0.2 m, 0.35 m and 0.65 m from the image screen, respectively. The angles between the objects’ planes and the (x,y,z) axes were (30°, 0°, 60°) for object 1, (0°, 30°, 60°) for object 2 and (15°, 0°, 75°) for object 3, respectively.

The reconstructions of the objects using both general back projection with global thresholding and our method are compared in Fig. 6. The reconstruction volume is limited to a box of 2 m × 2 m × 1 m. Typical results using the general back-projection method with global thresholding are shown in Figs. 6(a) and 6(c); local-global thresholding gives roughly the same results. Object 2 takes on an acceptable shape, while object 1 is buried deep in its surrounding artifacts and object 3 almost disappears. The sizes, locations and orientations of the reconstructed objects deviate obviously from those of the real objects. In contrast, with our method the objects can be clearly separated and well reconstructed. As shown in Figs. 6(b) and 6(d), each object takes on a good shape, with size, location and orientation agreeing well with those of the real object.

 

Fig. 6 Reconstruction of the objects from a side view (a) using general back-projection with global thresholding and (b) using our method. Reconstruction of the objects from a top view (c) using general back-projection with global thresholding and (d) using our method.


4.3 Analysis of simulated results

We compare the simulated results of the general back-projection method with those of our method by analyzing the initial confidence map and the different objects’ ellipsoid modes.

Normalized color bars are used to clearly display the distribution of aliasing artifacts in the initial confidence map and in the different ellipsoid modes. For the general back-projection method, typical 2D slices coinciding with the different objects’ planes in the initial confidence map are shown in Figs. 7(a)-7(c). Among the objects, the square plate takes on a good shape with much higher intensity than its ambient artifacts in Fig. 7(c). In contrast, the round and triangle plates (“weak objects”) in Figs. 7(a) and 7(b) exhibit serious shape distortions and low contrast against ambient artifacts, which mainly come from the square plate. In this case, it is hard to separate the objects from the aliasing artifacts even with filtering and thresholding, as shown in Fig. 3.

 

Fig. 7 (a)-(c) Typical slices of the initial confidence map at the locations of the three objects, respectively. (d)-(f) Typical slices of the square plate’s ellipsoid mode at the locations of the objects, respectively. (g)-(i) Typical slices of the triangle plate’s ellipsoid mode at the locations of the objects, respectively. (j)-(l) Typical slices of the round plate’s ellipsoid mode at the locations of the objects, respectively.


In our method, the initial confidence map is decomposed into three ellipsoid modes belonging to the different objects. The ellipsoid mode of the square plate is extracted first, and typical 2D slices of this ellipsoid mode coinciding with the three objects’ planes are shown in Figs. 7(d)-7(f). Figure 7(f) indicates that the square plate can be distinguished clearly from the surrounding artifacts. It is also observed that some artifacts of the square plate indeed extend to the locations of the round plate and the triangle plate, marked by dotted lines in Figs. 7(d) and 7(e), which means that these artifacts are successfully incorporated into the square plate’s ellipsoid mode as it is extracted in full from the initial confidence map.

The ellipsoid mode of the triangle plate is extracted second. Typical 2D slices of this ellipsoid mode at the three objects’ locations are depicted in Figs. 7(g)-7(i). As the slice in Fig. 7(h) shows, the triangle plate exhibits a better shape and higher contrast against ambient artifacts than in Fig. 7(b). A blank appears at the location of the square plate, marked by dotted lines in Fig. 7(i), which proves that the ellipsoid mode of the square plate was removed so completely during the previous decomposition that it has little effect on the current one. A small proportion of artifacts also extends to the location of the round plate, marked by dotted lines in Fig. 7(g); similarly, this means these artifacts are successfully incorporated into the triangle plate’s ellipsoid mode as it is extracted from the initial confidence map.

The ellipsoid mode of the round plate is extracted last. Typical 2D slices of this ellipsoid mode at the objects’ locations are shown in Figs. 7(j)-7(l). Figure 7(j) demonstrates that the round plate takes on a much better shape and much higher contrast against ambient artifacts than in Fig. 7(a). There are also blanks at the locations of the triangle plate and the square plate, marked by dotted lines in Figs. 7(k) and 7(l), further proof that the ellipsoid modes of the triangle plate and the square plate were completely eliminated from the initial confidence map before the ellipsoid mode of the round plate was decomposed. Ultimately, each object can be independently reconstructed from its own ellipsoid mode by filtering and thresholding, and all of the recovered objects together make up the reconstruction of the multiple NLoS objects shown in Fig. 4.

4.4 Analysis of experimental results

We also compare the experimental results of the general back-projection method with those of our method by analyzing the initial confidence map and the different objects’ ellipsoid modes. With the general back-projection method, the ellipsoid modes of the three objects observed in Fig. 8(a) are mixed together. It is quite obvious that object 1 is the “strongest” and object 3 the “weakest”, with no clear boundary between the ellipsoid modes of these two objects. It is therefore challenging to reconstruct all the objects even with filtering and thresholding, as shown in Figs. 6(a) and 6(c). Our method overcomes this limitation well. As demonstrated in Figs. 8(b)-8(d), the initial confidence map is decomposed into three ellipsoid modes belonging to the different objects. In particular, the ellipsoid mode of the “weakest” object 3 is separated cleanly from that of the “strongest” object 1. Each ellipsoid mode contains enough information for its object to be reconstructed individually by a procedure of filtering and thresholding. Finally, the experimental results shown in Figs. 6(b) and 6(d) are obtained by combining the recovered objects, with correct sizes, locations and orientations, together.

 

Fig. 8 (a) The initial confidence map of the three objects; (b) the ellipsoid mode of object 1; (c) the ellipsoid mode of object 2; (d) the ellipsoid mode of object 3.


5. Conclusions

We have presented a novel method for reconstructing multiple hidden objects. In our method, the ellipsoid modes of different objects are successively extracted from the initial confidence map on the basis of decomposition criteria comprising the clustering of voxels in the confidence map, the selection of the preferentially extracted object and the extraction of its ellipsoid mode. Each object is then independently reconstructed from its ellipsoid mode by a procedure of filtering and thresholding. Our method can effectively eliminate the impact of aliasing artifacts among multiple hidden objects and obtain good reconstruction quality almost without shape distortions. Compared with general back-projection methods with global or local-global thresholding, our method exhibits clear advantages in separating, locating and recovering multiple hidden objects, though it may take longer to reconstruct because it requires additional projection passes. So far, our method is suited to the reconstruction of multiple well-separated hidden objects, for which it can effectively eliminate the impact of aliasing artifacts among the objects. It is not yet satisfactory for recovering complex objects that are not well separated; in this case, the reconstruction results depend on the characteristics of the objects’ shapes and locations and on the selection of algorithm parameters. In future work, we plan to introduce self-adaptation and even artificial intelligence to make our method suitable for more complex objects, and to optimize the algorithm to reduce computation time. In addition, we believe the idea of our method might be transferred to the CT field to address the issue of aliasing artifacts.

Funding

National Natural Science Foundation of China (NSFC) (61102147, 11504071).

References and links

1. E. Repasi, P. Lutzmann, O. Steinvall, M. Elmqvist, B. Göhler, and G. Anstett, “Advanced short-wavelength infrared range-gated imaging for ground applications in monostatic and bistatic configurations,” Appl. Opt. 48(31), 5956–5969 (2009).

2. O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” Proc. SPIE 8186, 818605 (2011).

3. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012).

4. V. Molebny and O. Steinvall, “Multi-dimensional laser radars,” Proc. SPIE 9080, 908002 (2014).

5. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012).

6. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).

7. X. Xu, X. Xie, H. He, H. Zhuang, J. Zhou, A. Thendiyammal, and A. P. Mosk, “Imaging objects through scattering layers and around corners by retrieval of the scattered point spread function,” Opt. Express 25(26), 32829–32840 (2017).

8. F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26(8), 9945–9962 (2018).

9. O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20(17), 19096–19108 (2012).

10. M. Laurenzis and A. Velten, “Non-line-of-sight active imaging of scattered photons,” Proc. SPIE 8897, 889706 (2013).

11. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014).

12. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graph. 32(4), 45 (2013).

13. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.

14. C. Jin, Z. Song, S. Zhang, J. Zhai, and Y. Zhao, “Recovering three-dimensional shape through a small hole using three laser scatterings,” Opt. Lett. 40(1), 52–55 (2015).

15. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015).

16. M. Laurenzis, J. Klein, E. Bacher, and N. Metzger, “Multiple-return single-photon counting of light in flight and sensing of non-line-of-sight objects at shortwave infrared wavelengths,” Opt. Lett. 40(20), 4815–4818 (2015).

17. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).

18. M. B. Hullin, “Computational imaging of light in flight,” Proc. SPIE 9273, 927314 (2014).

19. A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time of flight sensors,” ACM Trans. Graph. 35(2), 1–12 (2016).

20. J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6(1), 32491 (2016).

21. S. Chan, R. E. Warburton, G. Gariepy, J. Leach, and D. Faccio, “Non-line-of-sight tracking of people at long range,” Opt. Express 25(9), 10109–10117 (2017).

22. V. Arellano, D. Gutierrez, and A. Jarabo, “Fast back-projection for non-line of sight reconstruction,” Opt. Express 25(10), 11574–11583 (2017).

23. X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Probl. 25(12), 123009 (2009).

24. A. Kak and M. Slaney, Principles of Computerized Tomographic Imaging (IEEE, 1999), Chap. 5.

25. R. A. Brooks, G. H. Weiss, and A. J. Talbert, “A new approach to interpolation in computed tomography,” J. Comput. Assist. Tomogr. 2(5), 577–585 (1978).

26. M. Laurenzis and A. Velten, “Feature selection and back-projection algorithms for nonline-of-sight laser-gated viewing,” J. Electron. Imaging 23(6), 063003 (2014).

27. M. Laurenzis and A. Velten, “Investigation of frame-to-frame back projection and feature selection algorithms for non line of sight laser gated viewing,” Proc. SPIE 9250, 92500J (2014).

28. M. Laurenzis, A. Velten, and J. Klein, “Dual-mode optical sensing: three-dimensional imaging and seeing around a corner,” Opt. Eng. 56(3), 031202 (2016).

29. M. Laurenzis, F. Christnacher, and A. Velten, “Study of a dual mode SWIR active imaging system for direct imaging and non-line of sight vision,” Proc. SPIE 9465, 946509 (2015).

30. M. Laurenzis, F. Christnacher, J. Klein, M. B. Hullin, and A. Velten, “Study of single photon counting for non-line-of-sight vision,” Proc. SPIE 9492, 94920K (2015).

31. A. Sroka, S. Chan, R. Warburton, G. Gariepy, R. Henderson, J. Leach, D. Faccio, and S. T. Lee, “Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR,” Proc. SPIE 9822, 98220L (2016).

32. K. C. Tam and V. Perez-Mendez, “Tomographical imaging with limited-angle input,” J. Opt. Soc. Am. 71(5), 582–592 (1981).

33. I. Avcibas, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11(2), 206–223 (2002).

Zhai, J.

Zhang, S.

Zhao, H.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded Imaging with Time of Flight Sensors,” ACM Trans. Graph. 35(2), 1–12 (2016).
[Crossref]

Zhao, Y.

Zhou, J.

Zhuang, H.

ACM Trans. Graph. (2)

F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget Transient Imaging using Photonic Mixer Devices,” ACM Trans. Graph. 32(4), 45 (2013).
[Crossref]

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded Imaging with Time of Flight Sensors,” ACM Trans. Graph. 35(2), 1–12 (2016).
[Crossref]

Appl. Opt. (1)

Inverse Probl. (1)

X. Pan, E. Y. Sidky, and M. Vannier, “Why do commercial CT scanners still employ traditional, filtered back-projection for image reconstruction?” Inverse Probl. 25(12), 123009 (2009).
[Crossref] [PubMed]

J. Comput. Assist. Tomogr. (1)

R. A. Brooks, G. H. Weiss, and A. J. Talbert, “A new approach to interpolation in computed tomography,” J. Comput. Assist. Tomogr. 2(5), 577–585 (1978).
[Crossref] [PubMed]

J. Electron. Imaging (2)

M. Laurenzis and A. Velten, “Feature selection and back-projection algorithms for nonline-of-sight laser-gated viewing,” J. Electron. Imaging 23(6), 063003 (2014).
[Crossref]

I. Avcibas, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11(2), 206–223 (2002).
[Crossref]

J. Opt. Soc. Am. (1)

Nat. Commun. (1)

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering Three-dimensional Shape Around a Corner using Ultrafast Time-of-Flight Imaging,” Nat. Commun. 3(1), 745 (2012).
[Crossref] [PubMed]

Nat. Photonics (3)

O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012).
[Crossref]

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photonics 8(10), 784–790 (2014).
[Crossref]

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10(1), 23–26 (2016).
[Crossref]

Opt. Eng. (2)

M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014).
[Crossref]

M. Laurenzis, A. Velten, and J. Klein, “Dual-mode optical sensing: three-dimensional imaging and seeing around a corner,” Opt. Eng. 56(3), 031202 (2016).
[Crossref]

Opt. Express (6)

Opt. Lett. (2)

Proc. SPIE (8)

M. B. Hullin, “Computational Imaging of Light in Flight,” Proc. SPIE 9273, 927314 (2014).

M. Laurenzis and A. Velten, “Non-line-of-sight active imaging of scattered photons,” Proc. SPIE 8897, 889706 (2013).
[Crossref]

V. Molebny and O. Steinvall, “Multi-dimensional laser radars,” Proc. SPIE 9080, 908002 (2014).
[Crossref]

O. Steinvall, M. Elmqvist, and H. Larsson, “See around the corner using active imaging,” Proc. SPIE 8186, 818605 (2011).
[Crossref]

M. Laurenzis and A. Velten, “Investigation of frame-to-frame back projection and feature selection algorithms for non line of sight laser gated viewing,” Proc. SPIE 9250, 92500J (2014).
[Crossref]

M. Laurenzis, F. Christnacher, and A. Velten, “Study of a dual mode SWIR active imaging system for direct imaging and non-line of sight vision,” Proc. SPIE 9465, 946509 (2015).
[Crossref]

M. Laurenzis, F. Christnacher, J. Klein, M. B. Hullin, and A. Velten, “Study of single photon counting for non-line-of-sight vision,” Proc. SPIE 9492, 94920K (2015).
[Crossref]

A. Sroka, S. Chan, R. Warburton, G. Gariepy, R. Henderson, J. Leach, D. Faccio, and S. T. Lee, “Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR,” Proc. SPIE 9822, 98220L (2016).
[Crossref]

Sci. Rep. (1)

J. Klein, C. Peters, J. Martín, M. Laurenzis, and M. B. Hullin, “Tracking objects outside the line of sight using 2D intensity images,” Sci. Rep. 6(1), 32491 (2016).
[Crossref] [PubMed]

Other (2)

A. Kak and M. Slaney, “Principles of Computerized Tomographic Imaging” (IEEE, 1999), Chap. 5.

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse Mirrors: 3D Reconstruction from Diffuse Indirect Illumination Using Inexpensive Time-of-Flight Sensors, ” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2014), pp. 3222–3229.
[Crossref]



Figures (8)

Fig. 1
Fig. 1 The principle of NLoS imaging.
Fig. 2
Fig. 2 Schematic diagram of back projection based on EMD.
Fig. 3
Fig. 3 Simulated reconstruction of multiple NLoS objects: (a)-(d) using general back-projection with global thresholding at β = 0.2, 0.25, 0.3 and 0.5, respectively; (e)-(h) using general back-projection with local-global thresholding at [λloc, λglob] = [0.25, 0.13], [0.18, 0.19], [0.25, 0.25] and [0.34, 0.25], respectively.
Fig. 4
Fig. 4 Simulated results using our method to reconstruct multiple NLoS objects with (a) a view from the front and (b) a view from the side.
Fig. 5
Fig. 5 Details of our experiment including (a) photograph of the experimental setups, (b) the source point and image points on the image screen, and (c) photograph of the objects.
Fig. 6
Fig. 6 Reconstruction of the objects from a side view (a) using general back-projection with global thresholding and (b) using our method. Reconstruction of the objects from a top view (c) using general back-projection with global thresholding and (d) using our method.
Fig. 7
Fig. 7 (a)-(c) Typical slices of the initial confidence map at the locations of the three objects, respectively. (d)-(f) Typical slices of the square plate's ellipsoid mode at the locations of the objects, respectively. (g)-(i) Typical slices of the triangle plate's ellipsoid mode at the locations of the objects, respectively. (j)-(l) Typical slices of the round plate's ellipsoid mode at the locations of the objects, respectively.
Fig. 8
Fig. 8 (a) The initial confidence map of the three objects, (b) the ellipsoid mode of object 1, (c) the ellipsoid mode of object 2, (d) the ellipsoid mode of object 3.

Tables (2)


Table 1 List of Decomposing Procedure


Table 2 Comparison of reconstruction errors for our method and general back-projection methods

Equations (6)


$$r_1 + r_2 + r_3 + r_4 = c\,t.$$

$$V_f(x,y,z) = \nabla^2 V(x,y,z)$$

$$V_f(x,y,z) > \beta \max(V_f) = \text{constant}$$

$$|x - x_o| \le h_s,\quad |y - y_o| \le h_s,\quad |z - z_o| \le h_s.$$

$$\left| V(x_o, y_o, z_o) - V(x,y,z) \right| / V(x_o, y_o, z_o) \le h_c.$$

$$V_f(x,y,z) > \lambda_{loc} M_{loc} + \lambda_{glob} M_{glob} = \text{constant}$$
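The filtering, thresholding and cluster-membership criteria above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the 6-neighbor discrete Laplacian, the use of the filter magnitude, and all function and parameter names are assumptions made for the sketch, which operates on a NumPy 3-D confidence volume.

```python
import numpy as np

def laplacian3d(V):
    """6-neighbor discrete Laplacian of a 3-D volume (V_f = laplacian of V)."""
    L = -6.0 * V
    for axis in range(3):
        L += np.roll(V, 1, axis=axis) + np.roll(V, -1, axis=axis)
    return L

def global_threshold(V, beta=0.3):
    """Keep voxels whose filtered confidence exceeds beta * max(V_f)."""
    V_f = np.abs(laplacian3d(V))  # surfaces produce strong curvature
    return V_f > beta * V_f.max()

def same_cluster(p, o, V, h_s=2, h_c=0.5):
    """Cluster-membership test: voxel p joins the cluster seeded at voxel o
    if it lies within a cube of half-width h_s around o and its relative
    confidence difference is at most h_c (assumes V[o] > 0)."""
    spatial = all(abs(p[i] - o[i]) <= h_s for i in range(3))
    relative = abs(V[o] - V[p]) / V[o] <= h_c
    return spatial and relative

# Toy confidence map: a uniform 3x3x3 blob inside a 16^3 volume.
V = np.zeros((16, 16, 16))
V[6:9, 6:9, 6:9] = 1.0
mask = global_threshold(V, beta=0.3)
# The blob's boundary survives thresholding; its flat interior does not.
```

In a full pipeline, `global_threshold` would run on the back-projected confidence map, and `same_cluster` would drive a region-growing loop that groups the surviving ellipsoid contributions into per-object modes.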
