Abstract
Non-line-of-sight (NLOS) imaging has recently attracted considerable interest from the scientific community. The goal of this paper is to provide the basis for a comprehensive mathematical framework for NLOS imaging that is directly derived from physical concepts. We introduce the irradiance phasor field ($\mathcal {P}$-field) as an abstract quantity for irradiance fluctuations, akin to the complex envelope of the electric field (E-field) that is used to describe the propagation of electromagnetic energy. We demonstrate that the $\mathcal {P}$-field propagator is analogous to the Huygens-Fresnel propagator that describes the propagation of other waves, and show that NLOS light transport can be described with the processing methods that are available for LOS imaging. We perform simulations to demonstrate the accuracy and validity of the $\mathcal {P}$-field formulation and provide experimental results demonstrating a Huygens-like $\mathcal {P}$-field summation behavior.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
1. Introduction
1.1 Background
In an optical line-of-sight (LOS) imaging scenario, such as the one depicted in Fig. 1a, the goal is to reconstruct an image of a target that is in the direct path of a laser source and a camera. Conversely, in an optical non-line-of-sight (NLOS) imaging scenario, the goal is to reconstruct an image of a target that is not in the direct path of the transmitter and receiver. Considering the scene shown in Fig. 1b, it has been shown [1–6] that 3D NLOS image recovery can be achieved by illuminating a relay surface in the visible scene (i.e., a relay wall) and collecting light reflected from the object via the relay surface. In Fig. 1b, a light pulse generated by the laser L, incident on the relay wall at point $p$, subsequently scatters from the wall in all directions, with a fraction of the photons reflected from the relay wall reaching the target. A fraction of the photons reflected from the target travels back to the wall. The ultra-fast camera, focused at location $q$ on the relay wall, measures the photon flux from the target reflected at $q$ as a function of time.
Reconstructing a 3D image of the hidden object is an inverse light transport problem. Different light transport models describe the propagation of light through a scene (for example, in [7]) and enable us to draw inferences about the scene by analyzing the data captured by a camera [8]. Previous approaches have used ray optics and attempted to model light propagation through a scene as a linear operator that can be inverted with a variety of inverse methods [1,2,9–17]. Non-linear inverse methods for more complex scenes have also been proposed [5,18,19], but the added level of complexity makes their application challenging. Questions regarding null-spaces, attainable resolution and contrast, and the handling of multiple reflections in the hidden scene have also been discussed in prior work. For example, the role of the BRDF and the effects of null-spaces have been discussed in [19] and [20] respectively. Model complexity and inaccurate modeling of real light transport pose a great challenge for conducting more fundamental analyses of NLOS imaging.
In some of the recent works [21,22], a wave propagation-based $\mathcal {P}$-field approach for NLOS imaging has produced excellent image reconstruction results of hidden scenes. Experimental results in [21] demonstrate the benefits in treating an NLOS imaging system as a virtual camera-based $\mathcal {P}$-field imaging system. The goal of this paper is to describe ToF NLOS light transport using a wave propagation model akin to those governing other imaging methods. Here, we describe the fundamental mathematical foundations of $\mathcal {P}$-field summation for imaging applications, the limitations of the $\mathcal {P}$-field NLOS imaging approach and the requisite assumptions and approximations for the validity of the $\mathcal {P}$-field model.
The newly introduced $\mathcal {P}$-field denotes the complex envelope of the average optical irradiance. In this paper we show that propagation of $\mathcal {P}$-fields from the virtual aperture at the relay wall to a virtual sensor behind the wall can be modeled, analogous to the Huygens’ integral, as the propagation of wave-like $\mathcal {P}$-field wavelet contributions from the aperture plane to the detector plane. With the proposed phasor field ($\mathcal {P}$-field) formalism, we show that NLOS imaging can be treated similarly to LOS imaging. (A phasor representation of radiance was discussed in [23], where the authors propose a framework to analyze light transport in correlation-based ToF ranging.)
With the aid of simulation results, we also demonstrate the effect of aperture roughness (here, roughness refers to the material roughness of the relay wall, which we treat as the $\mathcal {P}$-field virtual aperture) on the accuracy of the amplitude and phase estimates in the detector plane, obtained through the $\mathcal {P}$-field integral, which involves a summation of $\mathcal {P}$-field contributions from the aperture. We also present preliminary experimental results for which we implemented a $\mathcal {P}$-field interferometer and measured the change in $\mathcal {P}$-field signal amplitude while changing the path length of one interferometer arm and keeping the path length of the other, reference arm fixed.
2. The $\mathcal {P}$-field Imaging Approach
2.1 The $\mathcal {P}$-field Integral
To explain what we mean by a Huygens’-like integral describing the propagation of $\mathcal {P}$-fields, let us consider Fig. 2 which describes the propagation of various E-field spherical wavelet contributions from an aperture plane $\mathcal {A}$ to a detection plane $\Sigma$ separated by an arbitrary distance $z$. The Green’s function-based solution of the wave equation describes the propagation of scalar E-field wavelet contributions from each location $(x',y',0) \in \mathcal {A}$ to any particular location $(x, y, z) \in \Sigma$ with the resulting scalar E-field $E(x,y,z)$ at $(x, y, z)$ described as a linear sum of these E-field wavelet contributions. The Huygens’ integral is given by [24,25]
In our case, the optical irradiance is modulated by a time harmonic function $P(t)$ (which we refer to as the $\mathcal {P}$-field signal) with frequency $\Omega$ and a corresponding wavelength $\lambda _P$. Let us assume that the following condition on the coherence length $l_{\mathrm {c}}$ of light holds true
Moreover, let the detector integration time be much longer than the time period of the short optical carrier (E-field) of frequency $\omega$ but much shorter than the longer time period of the $\mathcal {P}$-field signal of frequency $\Omega$. We show in the Appendix that we can describe $\mathcal {P}$-field propagation and summation in $\Sigma$ as the sum of Huygens’-like $\mathcal {P}$-field wavelet contributions from $\mathcal {A}$. Namely, similarly to (1) for E-fields, the temporal evolution of the time-averaged irradiance can be described by the $\mathcal {P}$-field integral as a sum of $\mathcal {P}$-field wavelet contributions $\mathcal {P}(r)$ from $\mathcal {A}$ as

Analogous to the complex E-field, $P_\Omega (x',y',z',t)$ is symmetric about the origin $\Omega =0$, so that $P(x',y',z',t)$ is always real. In practice we usually omit the negative frequency component and compute only the complex phasor, keeping in mind that the actual phasor field intensity is given by the sum of the positive and negative frequency components (i.e., the phasor and its complex conjugate). Note also that, unlike the electric field, the intensity cannot be negative, which only affects the phasor at $P_{\Omega =0}$. This static component of the field can usually safely be ignored. We can also subtract it and consider only the fast variations: $P'_{\Omega }(x',y',z',t) = P_{\Omega }(x',y',z',t) - P_{\Omega =0}(x',y',z',t)$. In the following, we consider a single monochromatic component $P_\Omega (x',y',z',t)$, for which this distinction is of no consequence.
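As a concrete illustration of this summation, the sketch below discretizes the aperture and sums Huygens’-like $\mathcal {P}$-field wavelet contributions at a single detector point. The sampling, aperture dimensions and unit amplitudes are illustrative assumptions, not the paper's simulation code.

```python
import numpy as np

# Illustrative sketch (not the paper's code): discretize the aperture plane A and
# sum Huygens'-like P-field wavelet contributions at one detector point (x, y, z),
#   P(x, y, z) ~ sum_{(x', y')} P(x', y', 0) * exp(1j * beta * |r|) / |r|,
# where beta = 2*pi / lambda_P and |r| is the aperture-to-detector distance.

def p_field_sum(aperture_xy, p_aperture, xyz, lam_p):
    """Sum P-field wavelet contributions from aperture samples to one point."""
    beta = 2 * np.pi / lam_p
    x, y, z = xyz
    r = np.sqrt((aperture_xy[:, 0] - x) ** 2 + (aperture_xy[:, 1] - y) ** 2 + z ** 2)
    return np.sum(p_aperture * np.exp(1j * beta * r) / r)

# Uniformly illuminated 8 m x 4 m aperture on a coarse grid, lambda_P = 30 cm.
xs = np.linspace(-4.0, 4.0, 81)
ys = np.linspace(-2.0, 2.0, 41)
gx, gy = np.meshgrid(xs, ys)
aperture = np.column_stack([gx.ravel(), gy.ravel()])
p0 = np.ones(aperture.shape[0])            # unit P-field amplitude, zero phase

on_axis = p_field_sum(aperture, p0, (0.0, 0.0, 200.0), lam_p=0.3)
off_axis = p_field_sum(aperture, p0, (150.0, 0.0, 200.0), lam_p=0.3)
print(abs(on_axis) > abs(off_axis))        # on-axis contributions add nearly in phase
```

On axis the path differences across the aperture are far smaller than $\lambda _P$, so the wavelets add coherently; far off axis they span many $\mathcal {P}$-field cycles and largely cancel.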
To improve readability and maintain clarity throughout the manuscript, a complete derivation of the $\mathcal {P}$-field integral is presented in the Appendix, where we provide details of all requisite approximations and assumptions about aperture roughness, coherence length and detector integration time that allow us to arrive at the result in (5). Notations used for describing quantities in the E-field Huygens’ integral and the Huygens’-like $\mathcal {P}$-field integral are summarized in Table 1. Moreover, to avoid confusion, all symbols used throughout the manuscript are summarized in Table 2.
2.2 Correcting Amplitude Error of $\mathcal {P}$-field Integral
Note that the spherical $\mathcal {P}$-field wavelet contributions in (5) are not the only terms in the $\mathcal {P}$-field integral argument: an additional $1/|r|$ factor multiplies each corresponding $\mathcal {P}$-field wavelet contribution. For cases where either the far-field (Fraunhofer) or the near-field (Fresnel) approximation in (A.35) and (A.36) holds (for a chosen $\mathcal {P}$-field wavelength $\lambda _{\mathrm {P}}$), we can assume that $1/|r|\approx 1/z$. For these cases, (5) can be expressed as
It also has to be noted that we defined the $\mathcal {P}$-field as a single-frequency (monochromatic) function, but this was done only to simplify our mathematical treatment. The $\mathcal {P}$-field imaging integral remains correct and applicable even if $\mathcal {P}$-fields represent more complex irradiance fluctuations. Through a spectral decomposition such as a Fourier series representation, any $\mathcal {P}$-field signal with a multi-frequency spectral composition can be expressed as a linear sum of single-frequency contributions.
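This decomposition argument can be illustrated numerically: below, a hypothetical multi-tone irradiance envelope is split into monochromatic components with an FFT, each component is delayed by its own propagation phase $e^{-j\Omega \tau }$, and the resynthesized signal matches a direct time shift of the waveform. The waveform and parameters are illustrative, not taken from the paper.

```python
import numpy as np

# A hypothetical multi-tone P-field envelope (DC + 1 GHz + 2 GHz) is decomposed
# into monochromatic components, each component is propagated over an extra path d
# by its own phase delay exp(-1j * Omega * tau) with tau = d / c, and the
# components are resummed. The result matches a direct time shift of the waveform.

c = 3.0e8                                           # speed of light (m/s)
t = np.linspace(0.0, 1e-8, 1000, endpoint=False)    # 10 ns window
dt = t[1] - t[0]
p = 1.0 + 0.5 * np.cos(2 * np.pi * 1e9 * t) + 0.3 * np.cos(2 * np.pi * 2e9 * t)

d = 0.6                                             # extra path length (m)
tau = d / c                                         # 2 ns propagation delay
omegas = 2 * np.pi * np.fft.fftfreq(t.size, dt)
p_prop = np.fft.ifft(np.fft.fft(p) * np.exp(-1j * omegas * tau)).real

p_direct = 1.0 + 0.5 * np.cos(2 * np.pi * 1e9 * (t - tau)) \
               + 0.3 * np.cos(2 * np.pi * 2e9 * (t - tau))
print(np.max(np.abs(p_prop - p_direct)))            # agrees to numerical precision
```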
We can also calculate the approximate mean percentage error $\langle \eta \rangle$ (over $m\times n$ total $(x,y)$ locations in $\Sigma$) between the true, amplitude-corrected estimate of $|I_{\mathrm {Tot}}(x,y,z)|$ from (5) and the corresponding uncorrected $\mathcal {P}$-field estimate from (8) (assuming no aperture roughness) for any separation distance $z$ between $\mathcal {A}$ and $\Sigma$. $\langle \eta \rangle$ is given by
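Since the exact expression of Eq. (12) is not reproduced here, the sketch below uses the plain mean absolute percentage error over an $m\times n$ grid as a stand-in; the function name and toy values are our own.

```python
import numpy as np

# Mean absolute percentage error between an amplitude-corrected P-field magnitude
# and its uncorrected estimate over an m x n grid of detector locations (a
# stand-in for the paper's <eta>; names and values are illustrative).

def mean_percentage_error(i_corrected, i_uncorrected):
    """Mean absolute percentage error over an m x n grid of detector points."""
    rel = np.abs((np.abs(i_corrected) - np.abs(i_uncorrected)) / np.abs(i_corrected))
    return 100.0 * rel.mean()

# Toy example: uncorrected magnitudes are uniformly 10% low.
corr = np.ones((4, 5))
uncorr = 0.9 * corr
print(mean_percentage_error(corr, uncorr))   # ~10.0
```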
3. Phasor Field Simulations
In this section, we present simulation results of the expected $\mathcal {P}$-field sums at all locations in $\Sigma$. The surface roughness is assumed to be uniformly distributed between 0 and $\gamma$. The computed $\mathcal {P}$-field sum for a monochromatic optical carrier modulated by a single-frequency RF signal is normalized to the maximum value of $|I_{\mathrm {Tot-F}}(x,y,z)|$ (denoted by the subscript ’Norm’), which assumes a maximum surface roughness $\gamma \approx 0$. As the $\mathcal {P}$-field integral in (A.39) assumes negligible surface roughness, the accuracy of the $\mathcal {P}$-field estimate suffers for apertures with larger roughness, especially if the roughness is comparable to the $\mathcal {P}$-field wavelength.
The first two simulations (Simulation 1 and Simulation 2), performed for the far-field and near-field imaging scenarios, solely investigate this effect of increasing aperture roughness on the $\mathcal {P}$-field distribution estimate. In the third and final simulation (Simulation 3), we only investigate the effect of the amplitude estimation error for the ultra near-field scenario. In this simulation, we assume negligible aperture roughness (i.e., $\gamma \approx 0$ relative to $\lambda _P$): the roughness is sufficient to scatter light but not enough to cause a significant phase shift to the $\mathcal {P}$-field (which is the underlying assumption of the $\mathcal {P}$-field integral).
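The roughness regime assumed here, $\lambda _{\mathrm {E}} \ll \gamma \ll \lambda _{\mathrm {P}}$ (made precise in the Appendix), can be checked with a quick numeric example; the 50 µm roughness value is an illustrative assumption.

```python
import numpy as np

# Numeric check of the regime lambda_E << gamma << lambda_P: an (assumed) 50 um
# surface height excursion randomizes the E-field phase over many multiples of
# 2*pi while shifting the P-field phase by only ~1e-3 rad.

lam_e = 1e-6                             # 1 um optical (E-field) wavelength
lam_p = 0.3                              # 30 cm P-field wavelength (1 GHz)
gamma = 50e-6                            # assumed maximum surface roughness

dphi_K = 2 * np.pi * gamma / lam_e       # E-field phase excursion: 100*pi rad
dphi_beta = 2 * np.pi * gamma / lam_p    # P-field phase excursion: ~1.05e-3 rad
print(dphi_K / (2 * np.pi), dphi_beta)   # 50 full E-field cycles vs a negligible shift
```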
For the first two simulation scenarios, we show that estimates of $\mathcal {P}$-field distributions calculated through the $\mathcal {P}$-field integral in (A.39) remain accurate even for high aperture roughness values and gradually degrade as $\gamma \rightarrow \lambda _P$. For Simulation 3, we compare the $\mathcal {P}$-field distribution estimate obtained from the $\mathcal {P}$-field integral in (A.40) without amplitude correction to the $\mathcal {P}$-field integral in (A.34) with the amplitude correction factor applied. We also show that the $\mathcal {P}$-field estimation error in the absence of significant aperture roughness for the ultra near-field case is only an amplitude error and not a phase estimation error. For each of the three simulations, we set the E-field and $\mathcal {P}$-field wavelengths to $\lambda _{\mathrm {E}} = {1}\, \mu \textrm {m}$ and $\lambda _{\mathrm {P}} \approx {30}\, \textrm {cm}$ (corresponding to $\Omega = {1}\, \textrm {GHz}$). Also, for each simulation, we assume a uniformly illuminated aperture with arbitrarily chosen dimensions of ${8}\, \textrm {m}\times {4}\,\textrm {m}$.
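A minimal Monte Carlo sketch of the roughness effect studied in Simulations 1 and 2: each aperture sample receives a random height drawn uniformly from $[0,\gamma ]$, adding a $\mathcal {P}$-field phase $\beta h$ that the roughness-free integral ignores. The single-point, unit-amplitude model is our own toy construction, not the paper's simulation code.

```python
import numpy as np

# Toy model of the roughness study: each aperture sample gets a random surface
# height h ~ Uniform(0, gamma), which adds a P-field phase beta*h that the
# roughness-free integral (A.39) ignores. We compare the smooth- and
# rough-aperture sums at one on-axis point; the error grows as gamma -> lambda_P.

rng = np.random.default_rng(0)
lam_p = 0.3                              # 30 cm P-field wavelength (1 GHz modulation)
beta = 2 * np.pi / lam_p
n = 5000                                 # aperture samples, uniform illumination

smooth = 1.0                             # normalized sum for gamma = 0: all in phase
errors = []
for gamma in [0.03, 0.09, 0.21, 0.30]:   # maximum roughness, 3 cm ... 30 cm
    h = rng.uniform(0.0, gamma, n)       # random surface heights
    rough = np.abs(np.sum(np.exp(1j * beta * h))) / n
    errors.append(abs(smooth - rough))
print(errors)                            # grows as gamma approaches lambda_P
```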
3.1 Simulation 1: Far-Field Imaging Scenario
For this simulation we chose $z = 200\,{\textrm{m}}$ as the mutual separation between $\Sigma$ and $\mathcal {A}$. The $\mathcal {P}$-field distribution $|I_{\mathrm {Tot-F}}(x,y,z)|_{\mathrm {Norm}}$ estimated from the $\mathcal {P}$-field integral in (A.39) is plotted in Fig. 3a. This $\mathcal {P}$-field estimate assumes negligible roughness and hence does not depend on any changes to the aperture roughness. We compare this $\mathcal {P}$-field estimate to the actual $\mathcal {P}$-field distribution $|I_{\mathrm {Tot}}(x,y,z)|_{\mathrm {Norm}}$ calculated from (A.34) – which takes aperture roughness into account – for one instance of a random aperture roughness profile. This $\mathcal {P}$-field distribution, calculated from (A.34) for $\gamma = 21\,{\textrm{cm}}$, is plotted in Fig. 3b.
Comparing the $\mathcal {P}$-field integral estimate to the estimate in which aperture roughness is accounted for, we observe that the $\mathcal {P}$-field integral of (A.39) provides an accurate estimate of the $\mathcal {P}$-field distribution in $\Sigma$ even for high aperture roughness values (such as our case, where $\gamma$ is of comparable dimension to $\lambda _{\mathrm {P}}$). For the chosen maximum roughness values of $\gamma$, the $\mathcal {P}$-field integral provides a reasonably accurate estimate of the $\mathcal {P}$-field distribution despite an aperture roughness value as high as 21 cm (70% of $\lambda _{\mathrm {P}}$). Of course, the $\mathcal {P}$-field integral provides a more accurate estimate for low roughness values, and this estimate gradually degrades as $\gamma \rightarrow \lambda _{\mathrm {P}}$.
To clearly show how the accuracy of the $\mathcal {P}$-field distribution estimate provided by the $\mathcal {P}$-field integral is affected by increasing roughness, we plot, in Fig. 4, $|I_{\mathrm {Tot}}(x,y,z)|_{\mathrm {Norm}}$ and $|I_{\mathrm {Tot-F}}(x,y,z)|_{\mathrm {Norm}}$ along the $x$-axis in $\Sigma$ for $y=0$. For four maximum roughness values of $\gamma = 3\,{\textrm{cm}}$, $\gamma = 6\,{\textrm{cm}}$, $\gamma = 9\,{\textrm{cm}}$ and $\gamma = 15\,{\textrm{cm}}$, Fig. 4 shows that the $\mathcal {P}$-field estimate in Fig. 3a deviates from the actual $\mathcal {P}$-field distribution when aperture roughness features become large enough to be comparable to $\lambda _{\mathrm {P}}$. In other words, additional $\mathcal {P}$-field specular noise, which increases with increasing aperture roughness, is not taken into account in (A.39), which affects the Huygens-like $\mathcal {P}$-field distribution estimate in $\Sigma$, especially at large $\gamma$ values. This results in an increasing $\mathcal {P}$-field estimation error for increasing aperture roughness.
3.2 Simulation 2: Near-Field Imaging Scenario
In Simulation 2, we repeat Simulation 1 for a near-field scenario with the location of the observation plane $\Sigma$ set to 5 m. The dimensions of the aperture as well as the values of $\lambda _{\mathrm {E}}$ and $\lambda _{\mathrm {P}}$ are the same as in Simulation 1.
The $x$-$y$ plane $\mathcal {P}$-field distribution for this scenario, computed through the $\mathcal {P}$-field integral in (A.39), is compared to the $\mathcal {P}$-field distributions calculated from (A.34) for aperture roughness values of $\gamma = 2\,{\textrm{cm}}$ and $\gamma = 9\,{\textrm{cm}}$ respectively. These $\mathcal {P}$-field distributions are plotted in Fig. 5. As with the far-field estimates in Simulation 1, the estimate from the $\mathcal {P}$-field integral in (A.39) deteriorates with increasing roughness: because the $\mathcal {P}$-field integral assumes negligible roughness, it does not account for the $\mathcal {P}$-field specular noise that grows with increasing aperture roughness.
3.3 Simulation 3: $\mathcal {P}$-field Amplitude Estimation Error for the Ultra Near-Field Imaging Scenario
While the previous two simulations demonstrate a degradation of the $\mathcal {P}$-field distribution estimate from the $\mathcal {P}$-field integral with increasing aperture roughness, the purpose of Simulation 3 is to demonstrate the amplitude error incurred by the $\mathcal {P}$-field integral in the ultra near-field scenario even for negligible aperture roughness, as discussed in Section 2.2. It is in this scenario that the introduction of a $\mathcal {P}$-field amplitude correction factor becomes necessary. Through these simulations, we also show that this particular error, i.e., a dynamic scaling error with respect to the location in $\Sigma$, is independent of the phase estimation error investigated in Simulation 1 and Simulation 2.
For this simulation, we first assume that the separation between $\mathcal {A}$ and $\Sigma$ is small, such that neither the Fraunhofer nor the Fresnel condition in (A.35) and (A.36) applies. Next, for comparison, we also compute the amplitude error for the near-field and far-field cases of Simulations 1 and 2. To observe only the magnitude of the amplitude estimation error, Simulation 3 considers a uniformly illuminated aperture with negligible roughness, i.e., $\gamma \approx 0$. For the ultra near-field case we set the plane separation to $z = 2.5\,{\textrm{m}}$, while the near-field and far-field distances were again set to 5 m and 200 m respectively. For all cases, the uncorrected $\mathcal {P}$-field distribution was estimated from (8) and compared to the amplitude-corrected estimate from (5). These amplitude-corrected and uncorrected $\mathcal {P}$-field distributions in $\Sigma$ along the $x$-axis at $y = 0$ are plotted in Fig. 6.
From the plots in Fig. 6, we clearly observe that, even for an ultra near-field scenario, a $\mathcal {P}$-field distribution estimate from (A.39) without the correction factor yields only an amplitude estimation error and no phase estimation error, since no phase approximations were made throughout the derivation of the $\mathcal {P}$-field integral. This is evident from the locations of the minima and maxima of the $\mathcal {P}$-field distribution in Fig. 6.
We also observe that the $\mathcal {P}$-field amplitude estimation error increases as the separation between $\mathcal {A}$ and $\Sigma$ is decreased, and that it is negligible for large separation distances, i.e., the Fresnel and Fraunhofer cases. This is also depicted in Fig. 7, where we plot a color map of the logarithmic $\mathcal {P}$-field estimation errors $\langle \eta \rangle$ and $\langle \eta _{\mathrm {AV}}\rangle$ in the $x$-$z$ plane from (12) and (14) respectively, for $1\,{\textrm{m}} \leq z \leq 50\,{\textrm{m}}$.
4. Experimental Demonstration
We performed an experiment to verify the claimed $\mathcal {P}$-field summation behavior in the presence of rough apertures, as occur in NLOS imaging scenarios. This, as mentioned earlier, is analogous to the Huygens’ summation of E-field wavelets that describes conventional LOS imaging. To experimentally demonstrate $\mathcal {P}$-field summation, we implemented the setup shown in Fig. 9, which enacts the simplest situation: the summation of two optically incoherent $\mathcal {P}$-field contributions from a rough aperture. The experiment is analogous to an interferometry experiment in the realm of E-fields. This analogy is depicted in Fig. 8: akin to translating a mirror to change the path length in one arm of a Michelson interferometer, we emulate shifting the light source with the help of a translatable mirror pair, where each new location of the mirror pair introduces a unique additional phase to one beam component, resulting in a different $\mathcal {P}$-field sum at a fixed detector location.
A Gaussian beam from a fiber-coupled laser source exits the optical fiber cable via a Fiber Collimator (FC). A 50:50 Beam Splitter (BS) splits the propagating beam into two identical beams, each with exactly half of the original beam power. We refer to these beams as Beam 1 and Beam 2. Beam 1 and Beam 2 propagate through path lengths $L_1$ and $L_2$ before incidence at identical diffusers D1 and D2, separated by a distance $D_{\mathrm {S}}$. The path length $L_1$ serves as the reference beam path and remains unaltered throughout the course of measurements. Path length $L_2$ for Beam 2 is altered by translating a two-mirror assembly comprising mirrors M1 and M2 placed on a translation stage. The translation distance $D_{\mathrm {T}}$ of the mirror pair is measured from a reference position $D_{\mathrm {T}} = 0$ at which path lengths $L_1$ and $L_2$ are equal. An AC-coupled Photo-Detector (PD) is positioned at a distance $Z$ away from the plane $\mathcal {A}$ of the two diffusers and equidistant from D1 and D2. Translating the [M1, M2] mirror pair alters the path length $L_2$ for Beam 2. The consequent phase difference $\Delta \phi _{\mathrm {P}} = |\phi _1 - \phi _2|$ between the two $\mathcal {P}$-field contributions $\mathcal {P}_1 = |\mathcal {P}_1|e^{j\phi _1}$ and $\mathcal {P}_2 = |\mathcal {P}_2|e^{j\phi _2}$, expressed as a function of $D_{\mathrm {T}}$, is given by
The amplitude contributions are equal, i.e., $|\mathcal {P}_2| = |\mathcal {P}_1|$, owing to the use of identical diffusers and a 50:50 BS. Therefore, the expected output signal $\mathcal {P}_{\mathrm {Sum}}$ of the PD, expressed as a function of $D_{\mathrm {T}}$, is given by

For the actual experiments, we used a fiber-coupled laser source with a wavelength of $\lambda _{\mathrm {E}} = 520\,{\textrm{nm}}$, modulated by a sinusoidal $\mathcal {P}$-field signal of 1 GHz, which corresponds to a free-space $\mathcal {P}$-field wavelength of roughly 30 cm. The diffusers were separated by a distance $D_{\mathrm {S}}$ of 34.5 cm, and the distance $Z$ between the aperture plane $\mathcal {A}$ (the plane of the diffusers) and the detection plane $\Sigma$ was set to 36 cm. Two identical diffusers from the Newport 20DKIT-C3 light-shaping diffusers kit were used, which provided a Gaussian irradiance distribution at the detector plane. A $\mathcal {P}$-field detector to measure $|\mathcal {P}_{\mathrm {Norm}}|$ was realized by connecting the output of an AC-coupled Menlo Systems APD210 photo-detector to an Agilent CXA N9000A RF spectrum analyzer. The spectrum analyzer output $|\mathcal {P}_{\mathrm {Sum}}|_{\mathrm {Peak}}$ was normalized in post-processing to obtain $|\mathcal {P}_{\mathrm {Norm}}|$.
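The expected two-beam sum can be sketched as follows. Since Eq. (17) is not reproduced here, we assume, as in a typical folded mirror-pair geometry, that translating [M1, M2] by $D_{\mathrm {T}}$ lengthens $L_2$ by $2D_{\mathrm {T}}$; with equal amplitudes, $|\mathcal {P}_{\mathrm {Sum}}| \propto |\cos (\Delta \phi _{\mathrm {P}}/2)|$.

```python
import numpy as np

# Two-contribution P-field sum vs mirror translation D_T. Assumption (Eq. (17)
# is not reproduced here): the folded mirror pair adds 2*D_T of path, so
# dphi = 2*pi * (2*D_T) / lambda_P, and |P_Sum| is proportional to |cos(dphi/2)|.

lam_p = 0.3                                   # ~30 cm free-space P-field wavelength

def p_sum_norm(d_t):
    """Normalized two-beam P-field sum as a function of mirror translation D_T."""
    dphi = 2 * np.pi * (2 * d_t) / lam_p
    return np.abs(np.cos(dphi / 2))

print(p_sum_norm(0.0))                        # 1.0: maximum P-field summation
print(round(p_sum_norm(lam_p / 4), 12))       # 0.0: cancellation (path change lambda_P/2)
print(round(p_sum_norm(lam_p / 8), 3))        # 0.707: partial interference
```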
The mirror pair [M1, M2] was translated and $|\mathcal {P}_{\mathrm {Norm}}|$ was measured at a set of unique values of $D_{\mathrm {T}}$. In Fig. 10, we plot the measured data-point values of $|\mathcal {P}_{\mathrm {Norm}}|$ together with the theoretically expected behavior of $|\mathcal {P}_{\mathrm {Norm}}|$ as a function of $D_{\mathrm {T}}$ calculated from (18). Comparing the theoretically predicted and experimentally measured values of $|\mathcal {P}_{\mathrm {Norm}}|$ at different $D_{\mathrm {T}}$ settings, we clearly observe maximum $\mathcal {P}$-field summation and cancellation at the expected $D_{\mathrm {T}}$ settings, as well as very strong agreement at the other $D_{\mathrm {T}}$ settings, which yield varying levels of partial $\mathcal {P}$-field interference.
Using a slow Thorlabs SM05PD1A power meter/PD assembly with an integration time far greater than the time period of the $\mathcal {P}$-field signal, we also recorded simultaneous measurements of the total optical irradiance at the location of the fast PD. Through these measurements, we show that the change in the $\mathcal {P}$-field values at different $D_{\mathrm {T}}$ settings is independent of the total number of photons present at the detection location, which we expected to remain almost constant for any $D_{\mathrm {T}}$. We provide a plot of this average optical irradiance in Fig. 11. As expected, the normalized average irradiance $\langle I_{\mathrm {Tot}}(x,y,z) \rangle$ remains almost constant for all $D_{\mathrm {T}}$ settings.
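This control measurement can be mimicked with a simple two-beam model: each beam contributes $I_0(1 + m\cos (\Omega t + \phi _i))$, so the DC (power-meter) reading is independent of the phase difference while the AC ($\mathcal {P}$-field) swing varies. The model and numbers are illustrative, not the measured data.

```python
import numpy as np

# Optically incoherent addition of two modulated beams: the time-averaged (DC)
# irradiance is independent of the P-field phase difference, while the modulated
# (AC) component varies with it.

t = np.linspace(0.0, 2e-9, 2000, endpoint=False)    # two full 1 GHz periods
omega = 2 * np.pi * 1e9
i0, m = 1.0, 0.5                                    # mean irradiance, modulation depth

dc, ac = [], []
for dphi in [0.0, np.pi / 2, np.pi]:                # emulating different D_T settings
    i_tot = i0 * (1 + m * np.cos(omega * t)) + i0 * (1 + m * np.cos(omega * t + dphi))
    dc.append(i_tot.mean())                         # slow power-meter reading
    ac.append(i_tot.max() - i_tot.min())            # fast AC-coupled PD swing
print([round(v, 3) for v in dc], [round(v, 3) for v in ac])
```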
5. $\mathcal {P}$-field Imaging Approach to NLOS Imaging
5.1 Comparison of NLOS Imaging with a $\mathcal {P}$-field Virtual Camera to LOS Imaging with a Conventional Camera
Fundamental E-field imaging principles defined by the Huygens’ integral in (1) completely describe conventional LOS imaging, as is depicted in Fig. 12a, where we consider an imaging system comprising a monochromatic EM source, a camera and a point-like target (which we consider our object under investigation). The monochromatic wave emitted by the source interacts with the object, and is focused at the detector/camera plane by an E-field imaging lens.
In a virtual $\mathcal {P}$-field camera approach for NLOS imaging, the relay wall acts as the aperture of a holographic $\mathcal {P}$-field projection and detection system. After detection of the $\mathcal {P}$-field at all points on the aperture, any conceivable imaging system can be realized through digital post-processing. A simple imaging lens, for example, applies a position-dependent phase delay to the signal, followed by a summation of the fields on the camera pixels and a time integral over the absolute value to reconstruct a 2D image of a scene (Fig. 1b). With the application of correct time shifts to the received signal, the relay wall can be treated as a virtual $\mathcal {P}$-field lens, which forms a $\mathcal {P}$-field image of the NLOS scene at the virtual sensor located behind the relay wall, looking directly at the hidden scene. This is depicted in Fig. 12b.
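The virtual-lens idea can be sketched as a refocusing sum: $\mathcal {P}$-field phasors recorded across the relay wall are multiplied by the conjugate propagation phase toward a candidate hidden-scene point and summed. The geometry, grid and source placement below are illustrative assumptions, not the paper's experiment.

```python
import numpy as np

# Refocusing sketch of the virtual P-field lens (illustrative geometry): phasors
# recorded on a relay-wall grid are multiplied by the conjugate propagation
# phase toward a candidate voxel and summed; the sum peaks at the true source.

lam_p = 0.3
beta = 2 * np.pi / lam_p

wx = np.linspace(-1.0, 1.0, 41)                  # relay-wall samples, z = 0 plane
wall = np.array([(x, y, 0.0) for x in wx for y in wx])

src = np.array([0.2, -0.1, 1.5])                 # assumed hidden point source
d_src = np.linalg.norm(wall - src, axis=1)
recorded = np.exp(1j * beta * d_src) / d_src     # phasor recorded on the wall

def refocus(voxel):
    """Virtual-lens sum: undo the propagation phase toward one candidate voxel."""
    d = np.linalg.norm(wall - voxel, axis=1)
    return np.abs(np.sum(recorded * np.exp(-1j * beta * d)))

print(refocus(src) > refocus(np.array([-0.4, 0.3, 1.5])))   # peak at the source
```

This is the phase-delay-and-sum operation described above, applied in post-processing rather than by a physical lens.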
5.2 Limitations of the Phasor-Field model
In the derivation of the Phasor-Field Rayleigh-Sommerfeld propagator we made two important assumptions:
- • Light is added incoherently such that the intensities of two overlapping light beams add.
- • Light propagates from any point in the source plane on a spherical wavefront in all directions.
It is interesting to note that, while specular reflectors and occlusions in the scene affect the validity of the Phasor-Field model, they do not generally affect reconstructions done using the Phasor-Field formalism, which look the same regardless of surface specularities [21]. However, in the presence of occlusions and specularities it is in certain cases possible to perform NLOS reconstructions with higher quality than what is attainable using Phasor-Field methods.
6. Conclusion
In this paper, we introduced the concept of irradiance phasor fields, a complex scalar quantity with an amplitude and a phase term. We show that $\mathcal {P}$-fields provide an ideal representation for imaging through apertures whose roughness is on the order of the optical (E-field) wavelength but significantly less than the $\mathcal {P}$-field wavelength. Provided that the spatial coherence length of the E-field is less than the $\mathcal {P}$-field wavelength and its temporal coherence time is less than the detector integration time, the proposed $\mathcal {P}$-field approach provides a means to model any NLOS imaging system – such as imagers that image around corners – as an LOS imaging system. The significant advantage of the $\mathcal {P}$-field-based approach is the inherent ability to use well-known LOS imaging techniques: partially reflective and scattering rough surfaces are treated as $\mathcal {P}$-field apertures, and existing knowledge of light transport in LOS imaging models the NLOS system from there onward. We back our claims with in-depth near-field and far-field simulation results for uniformly illuminated rectangular and square apertures, as well as results from carefully designed experiments.
Appendix: $\mathcal {P}$-field Imaging Approach
Irradiance Propagation between Two Spatial Locations
The Poynting vector $\textbf {S}$, which describes energy propagation of any electro-magnetic (EM) wave, is stated as a cross-product of the wave electric field (E-field) vector $\textbf {E}$ and the magnetic field vector $\textbf {H}$ as
Moreover, $\textbf {H}$ can be expressed in terms of the wave vector $\textbf {K}$, $\textbf {E}$, the E-field frequency $\omega$ and the medium permeability $\mu$ as

where $\textbf {K}$ can be expressed either in terms of the E-field wavelength $\lambda _{\mathrm {E}}$ or likewise $\omega$ as

In (A.3), $\hat {K}$ signifies the unit vector $\textbf {K}/|\textbf {K}|$. Substituting (A.2) into (A.1), and knowing that $\textbf {E}\cdot \textbf {K} = 0$ for an isotropic medium, $\textbf {S}$ is given by

Next, we use this knowledge to a) rigorously derive the irradiance contribution from a location $(x',y',0)$ within a plane $\mathcal {A}$ – which we refer to as the plane of a rough aperture (see Section C of this appendix for the definition of ’roughness’ used in our considerations) – to a location $(x,y,z)$ in the $\Sigma$ plane, which we refer to as the detection plane or the image plane, as shown in Fig. 2, and b) derive the cumulative irradiance contribution of all locations within $\mathcal {A}$ to location $(x,y,z)$ in $\Sigma$. Let
be the incident E-field at $(x',y',0)$ in $\mathcal {A}$, where $\hat {e'}$ denotes the E-field polarization unit vector. Then the transmitted E-field contribution from the same location $(x',y',0)$ in $\mathcal {A}$ is given by

$\mathcal {P}$-field Propagation between Aperture and Detection Planes
Now let us consider the case when optical irradiance is directly amplitude-modulated. A direct amplitude modulation of optical irradiance is expressed as the multiplication of the time-varying Poynting vector $\textbf {S}$ by a non-negative scalar modulating function $P(t)$ of frequency $\Omega$ and an amplitude of $P_0$ where
After propagation from $(x',y',0)$ to $(x,y,z)$, the modulation envelope $P(t)$ is phase-shifted by $\beta |r|$ due to propagation and by $\Delta \phi _{\beta }(x',y',0)$ due to aperture roughness at $(x',y',0)$. This phase-shifted envelope at $(x,y,z)$ is expressed as

Suppose we define a detection scheme - which we shall refer to as a $\mathcal {P}$-field detector - that simply detects the scalar peak amplitude $|I_{\mathrm {Tot}}(x,y,z)|$ of the $\mathcal {P}$-field envelope $I_{\mathrm {Tot}}(x,y,z)$ at every location $(x,y,z)$ in $\Sigma$, where
Impact of Aperture Roughness on $\mathcal {P}$-field Phase
For the propagation of modulated optical irradiance from an aperture plane $\mathcal {A}$ to the detection plane $\Sigma$, we consider the relative effect of the aperture roughness on the E-field and $\mathcal {P}$-field phases. For the visible spectrum, a frosted glass or a lens with a ground glass side are examples of partially-transmissive rough apertures. On the other hand, a painted wall is one example of a partially-reflective rough aperture. Now let us assume that for a rough aperture (refer to Fig. 2), the E-field transmission function $\tau (x',y',0)$ is given by
where $t_0(x',y',0)$ is the location-dependent E-field amplitude transmissivity of the aperture and $\Delta \phi _R (x',y',0)$ is a random phase variable denoting a random E-field phase contribution from any location $(x',y',0)$. In this paper, we consider an aperture ’rough’ if any random aperture phase contribution from an arbitrary location $(x',y',0) \in \mathcal {A}$ results in a corresponding random phase change $\Delta \phi _{K} (x',y',0)$ to the E-field contribution at $(x',y',0)$, yet the resulting $\mathcal {P}$-field phase change $\Delta \phi _{\beta }(x',y',0)$ is negligible, i.e., $\Delta \phi _{\beta } (x',y',0) \ll 2\pi$. Mathematically speaking, without loss of generality, we assume that the surface has a minimum roughness of 0 and a maximum roughness of $\gamma$ (having the unit of length) with $\lambda _{\mathrm {E}} \ll \gamma \ll \lambda _{\mathrm {P}}$. We therefore know that and This implies that

Funding
Funding

Office of Naval Research (N00014-15-1-2652); National Aeronautics and Space Administration (NNX15AQ29G); Defense Advanced Research Projects Agency (HR0011-16-C-0025).
Acknowledgments
The authors thank Jeremy Teichman for valuable discussions and suggestions.
References
1. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]
2. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015). [CrossRef]
3. R. Raskar and J. Davis, “5D time-light transport matrix: What can we reason about scene properties?” Tech. Rep., MIT (2008).
4. O. Katz, E. Small, and Y. Silberberg, “Looking around corners and through thin turbid layers in real time with scattered incoherent light,” Nat. Photonics 6(8), 549–553 (2012). [CrossRef]
5. A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35(2), 1–12 (2016). [CrossRef]
6. M. M. Balaji, A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, “Resolving non-line-of-sight (NLOS) motion using speckle,” in Computational Optical Sensing and Imaging (Optical Society of America, 2018), pp. CM2E–2.
7. P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual photography,” ACM Trans. Graph. 24(3), 745–755 (2005). [CrossRef]
8. S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, “A theory of inverse light transport,” in Tenth IEEE International Conference on Computer Vision (ICCV’05) Volume 1, vol. 2 (2005), pp. 1440–1447.
9. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using ultrafast transient imaging,” Int. J. Comput. Vis. 95(1), 13–28 (2011). [CrossRef]
10. O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, “Reconstruction of hidden 3D shapes using diffuse reflections,” Opt. Express 20(17), 19096–19108 (2012). [CrossRef]
11. D. Wu, Frequency Analysis of Transient Light Transport with Applications in Bare Sensor Imaging (Springer, Berlin, Heidelberg, 2012), pp. 542–555.
12. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, “Low-budget transient imaging using photonic mixer devices,” ACM Trans. Graph. 32(4), 45:1–45:10 (2013). [CrossRef]
13. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 3222–3229.
14. F. Heide, W. Heidrich, M. Hullin, and G. Wetzstein, “Doppler time-of-flight imaging,” ACM Trans. Graph. 34(4), 36 (2015). [CrossRef]
15. M. Laurenzis and A. Velten, “Nonline-of-sight laser gated viewing of scattered photons,” Opt. Eng. 53(2), 023102 (2014). [CrossRef]
16. C.-Y. Tsai, K. N. Kutulakos, S. G. Narasimhan, and A. C. Sankaranarayanan, “The geometry of first-returning photons for non-line-of-sight imaging,” in IEEE Intl. Conf. Computer Vision and Pattern Recognition (CVPR), (2017).
17. M. La Manna, F. Kine, E. Breitbach, J. Jackson, and A. Velten, “Error backprojection algorithms for non-line-of-sight imaging,” MINDS@UW (2017).
18. J. Iseringhausen and M. B. Hullin, “Non-line-of-sight reconstruction using efficient transient rendering,” arXiv preprint arXiv:1809.08044 (2018).
19. F. Heide, M. O’Toole, K. Zang, D. Lindell, S. Diamond, and G. Wetzstein, “Non-line-of-sight imaging with partial occluders and surface normals,” arXiv preprint arXiv:1711.07134 (2017).
20. X. Liu, S. Bauer, and A. Velten, “Analysis of feature visibility in non-line-of-sight measurements,” IEEE/CVF Conf. on Comput. Vis. Pattern Recognit. (CVPR) (2019).
21. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Virtual wave optics for non-line-of-sight imaging,” arXiv preprint arXiv:1810.07535 (2018).
22. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f–k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]
23. M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: A generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34(5), 1–18 (2015). [CrossRef]
24. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, NY, 1996).
25. W. Stutzman and G. Thiele, Antenna Theory and Design (Wiley, Hoboken, NJ, 1998), 2nd ed.
26. J. Dove and J. H. Shapiro, “Paraxial theory of phasor-field imaging,” Opt. Express 27(13), 18016–18037 (2019). [CrossRef]
27. S. Silver, Microwave Antenna Theory and Design, IEE Electromagnetic Waves Series 19 (IET, 1984).