Optica Publishing Group

Towards a more accurate light transport model for non-line-of-sight imaging

Open Access

Abstract

Non-line-of-sight (NLOS) imaging systems involve the measurement of an optical signal at a diffuse surface. A forward model encodes the physics of these measurements mathematically and can be inverted to generate a reconstruction of the hidden scene. Some existing NLOS imaging techniques rely on illuminating the diffuse surface and measuring the photon time of flight (ToF) of multi-bounce light paths. Alternatively, some methods depend on measuring high-frequency variations caused by shadows cast by occluders in the hidden scene. While forward models for ToF-NLOS and Shadow-NLOS have been developed separately, there has been limited work on unifying these two imaging modalities. Dove et al. introduced a unified mathematical framework capable of modeling both imaging techniques [Opt. Express 27, 18016 (2019)]. The authors utilize this general forward model, known as the two-frequency spatial Wigner distribution (TFSWD), to discuss the implications of reconstruction resolution for combining the two modalities, but only when the occluder geometry is known a priori. In this work, we develop a graphical representation of the TFSWD forward model and apply it to novel experimental setups with potential applications in NLOS imaging. Furthermore, we use this unified framework to explore the potential of combining these two imaging modalities in situations where the occluder geometry is not known in advance.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Non-line-of-sight (NLOS) imaging seeks to image an object around a corner, hidden from the line of sight of an observer. This is typically done by measuring the optical signal at a diffuse relay surface that is in the line of sight (LOS) of both the observer and the hidden object, as shown in Fig. 1. Reconstruction algorithms are then used to invert the measured signal and infer an image of the hidden object. Such systems have applications in diverse domains, such as autonomous navigation, remote surveillance, and natural disaster response.

Fig. 1. General setup for NLOS imaging. The observer aims to create an image of a hidden object by measuring an optical signal at the relay surface, which is in the line of sight of both the observer and the hidden object.

Reconstructing an image of an object from a measurement usually requires solving an inverse problem to a forward model. This forward model describes mathematically how the (hidden) object is encoded into this measurement. How we define the forward model and the assumptions we make directly impact how we formulate the inverse problem and ultimately affect the speed of reconstruction and quality of the reconstructed image.

When the relay surface is imaged with a standard smartphone camera, the picture of the relay surface has little useful structure or variation that can be used to infer information about the hidden object, unless prior knowledge about the hidden object or the obstructor (as depicted in Fig. 1) is available. Thus, the challenge of NLOS imaging is twofold: first, to find an imaging paradigm in which we can measure some quantity that contains useful information about the hidden object; second, to describe this imaging paradigm with an accurate forward model so that its inverse yields fast and/or high-quality reconstructions.

Passive imaging techniques measure the intensity at the relay surface without the use of any light source. One class of methods utilizes the aperture created by the visible obstructor (Fig. 1) to extract useful information about the hidden scene [1–6]. For example, a vertical edge from this obstructor maps different angular regimes in the hidden scene to different contiguous areas on the relay surface, and this can be used to track [2] and reconstruct [3–5] a 2D projection of the hidden object. However, such methods are sensitive to ambient illumination and are not able to recover the 3D geometry of the hidden object.

Active imaging methods encode the measurement by illuminating the relay surface with a controlled light source, such as a pulsed laser [7–9] or modulated LEDs [10]. They extract information about the hidden object by analyzing spatial and temporal changes introduced by this controlled illumination. For example, photons from different parts of the object arrive at the relay wall at different times, and this variation can be measured using fast photon time-of-flight (ToF) detectors [11–14]. ToF-NLOS exploits this structure, and has been shown to generate 3D reconstructions of room-sized hidden scenes with direct [9,15–19] or iterative methods [20–23], as well as to generate reconstructions of hidden objects around two corners [24]. However, ToF-NLOS requires slow sequential scans of the relay surface and expensive ToF hardware. One active approach seeks to alleviate the need for expensive ToF detectors by modeling the spatial intensity variation across the relay surface, but it is limited to tracking the movement and orientation of hidden objects due to low reconstruction resolution [25]. Another approach is to use objects that cast shadows in the hidden scene and create sharp variations in the measurement by occluding part of the optical signal [26]. If the geometry of these occluders is known a priori, Shadow-NLOS can be utilized to reconstruct hidden objects under active [26,27] or passive illumination [28], with resolution limited by optical diffraction in certain scenarios [29]. However, knowledge of the occluder geometry is not always available, and reconstructions are limited to 2D projections of the hidden scene.

Fusing the two methods of ToF-NLOS and Shadow-NLOS may help overcome the limitations of each imaging modality. Recent work combines vertical edges from visible obstructors (Fig. 1) with ToF measurements to demonstrate "2.5-dimensional" reconstructions of the hidden scene [30–32] from limited scans of the relay surface. However, such obstructors need to be in the observer’s line of sight for these methods to be useful. Conversely, concurrent work [33] utilizes ToF-based reconstructions to detect and leverage hidden objects as partial occluders for other parts of the hidden scene. Their approach improves reconstruction quality through a forward model that jointly estimates the surface albedo, surface normal, and the visibility of the hidden scene. However, their forward model assumes multiple priors to leverage these partial occluders, and is only valid for the confocal configuration, where the laser and the ToF detector are focused at the same point. Consequently, there is a need for a generalized light transport framework.

Recent work proposes a unified forward model that captures light transport for both imaging modalities. This forward model, called the two-frequency spatial Wigner distribution (TFSWD) [29], builds on top of the phasor field forward model [34–37], which tracks photon timing using the frequency domain of the ToF measurement. In addition to photon timing, the TFSWD framework incorporates spatial Wigner distributions [38] to correctly model how light rays interact with occluders.

While the phasor field forward model (PFM) has been verified experimentally [34,39] and utilized successfully for NLOS imaging [16,19,40], it fails to correctly model sharp shadows from occluders. On the other hand, the TFSWD can incorporate both time of flight and shadows but has not been used in conjunction with experimental measurements or as part of practical NLOS imaging methods.

In this work, we introduce a visual representation for optical components and setups in the TFSWD. We model the forward propagation of a phasor field wave through different optical paths that include occluders, and compare the predictions of the TFSWD with experimental data. Finally, we use our representation to reason about the effect of occlusions on the data and the potential for improved resolution if occlusions are modeled in ToF-NLOS methods. We hope that this work simplifies understanding of TFSWD-based NLOS imaging and lays the foundation for practically realizable NLOS systems that employ it.

1.1 Contributions

Our main contributions can be summarized as follows. In this work:

  • We utilize light fields to provide a graphical way to understand the TFSWD.
  • We demonstrate the TFSWD model in several novel experiments, including focusing through a diffuser with curved optical mirrors.
  • We discuss the impact of occluders on the achievable reconstruction resolution in the context of the TFSWD framework, and evaluate the utility of modeling occlusions in time-of-flight NLOS. We show how occlusions may help to enhance reconstruction quality even when the occluder geometry is not known a priori.

The rest of the paper can be summarized as follows. In Section 2, we formally define the phasor field framework and discuss certain limitations. Section 3 formally introduces the TFSWD and demonstrates how to visualize the TFSWD for different propagation primitives. Section 4.1 presents experimental demonstrations showcasing how the TFSWD overcomes the limitations of the original phasor field model. In Section 4.2, we conduct experiments that demonstrate how an optical lens focuses the phasor field through intervening diffusers. In both cases, we use our visualizations to graphically illustrate how the TFSWD arrives at the correct prediction. In Section 5, we employ these TFSWD visualizations to explore reconstruction resolution for NLOS imaging in the context of ToF detectors, occluders, or a combination of both.

2. $\mathcal {P}$-field light transport

2.1 $\mathcal {P}$-field definition

Let $\omega _0 = 2\pi c/\lambda _0$ describe the central frequency of a quasi-monochromatic laser source with modulation frequency $\Delta \omega$, where $\lambda _0$ is the central wavelength and $c$ is the speed of light. Let $\Omega = 1/\tau$ describe the bandwidth of a fast photodetector with integration time $\tau$, such that:

$$\Delta \omega \ll \Omega \ll \omega_0,$$
so that the fast photodetector can easily measure the phase of the modulation, $\Delta \omega$, but is unable to measure the faster oscillating optical carrier with frequency $\omega _0$.
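A quick numerical check makes this hierarchy concrete. The sketch below plugs in the 515 nm carrier and roughly 50 ps detector timing used later in Section 4, together with an assumed 100 MHz modulation frequency (the modulation value is illustrative, not taken from the text):

```python
import math

# Numerical check of the frequency hierarchy delta_omega << Omega << omega_0
# for representative hardware: a 515 nm carrier and ~50 ps detector timing
# (Section 4), plus an assumed 100 MHz modulation frequency (illustrative).
c = 3.0e8                              # speed of light, m/s
lam0 = 515e-9                          # central wavelength, m
omega0 = 2 * math.pi * c / lam0        # optical carrier frequency, rad/s

tau = 50e-12                           # detector integration time, s
Omega = 1 / tau                        # detector bandwidth, 1/s (order of magnitude)

delta_omega = 2 * math.pi * 1e8        # assumed modulation frequency, rad/s

print(f"delta_omega = {delta_omega:.3e}, Omega = {Omega:.3e}, omega0 = {omega0:.3e}")
```

For these values the three scales are each separated by more than an order of magnitude, so the inequality above holds comfortably.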

Mirroring the development in [29], we define the $\mathcal {P}$-field, $\mathcal {P}_z$, at plane $z$ as the temporal Fourier transform of the measured intensity $I_{z}$. The measured intensity itself is defined as the autocorrelation of the electric field, $\mathcal {E}_{z}$, at plane $z$, which effectively defines the $\mathcal {P}$-field as the power spectral density of the $\mathcal {E}$-field modulation envelope. This is expressed as:

$${\langle{I_{z}(\mathbf{\rho}_{z},t)}\rangle} = {\langle{\mathcal{E}_{z}(\mathbf{\rho}_{z},{t})\mathcal{E}_{z}^{*}(\mathbf{\rho}_{z},{t})}\rangle}$$
$$= {\int{\frac{\text{d}{\omega}}{2\pi}{\int{\frac{\text{d}{\omega}^{\prime}}{2\pi}{\langle{\mathcal{E}_{z}(\mathbf{\rho}_{z},{\omega})\mathcal{E}_{z}^{*}(\mathbf{\rho}_{z},{\omega}^{\prime})}\rangle}}}e^{- i({\omega} - {\omega}^{\prime})t}}}$$
$$= {\int{\frac{\text{d}{\omega}_{-}}{2\pi}\left\lbrack {\int{\frac{\text{d}{\omega}_{+}}{2\pi}{\langle{\mathcal{E}_{z}(\mathbf{\rho}_{z},{\omega}_{+} + {\omega}_{-}/2)\mathcal{E}_{z}^{*}(\mathbf{\rho}_{z},{\omega}_{+} - {\omega}_{-}/2)}\rangle}}} \right\rbrack e^{- i{\omega}_{-}t}}}$$
$$= {\int{\frac{\text{d}{\omega}_{-}}{2\pi}\mathcal{P}_{z}(\mathbf{\rho}_{z},{\omega}_{-})e^{- i{\omega}_{-}t},}}$$
where $\omega, \omega ^{\prime }$ are variables denoting the temporal frequencies of the input $\mathcal {E}$-field, $\mathcal {E}_{z}$, and its complex conjugate, $\mathcal {E}_{z}^{*}$. Additionally, $\rho _z$ is the spatial coordinate at plane $z$, and
$$\omega_{+} = (\omega + \omega^{\prime})/2,$$
$$\omega_{-} = \omega - \omega^{\prime},$$
where the change of basis is performed on the frequency variables to define the auto-correlation in terms of an average, $\omega _{+}$, and a difference, $\omega _{-}$, frequency. Then, the measured intensity is conveniently expressed as the inverse Fourier Transform of the $\mathcal {P}$-Field with respect to $\omega _{-}$. Mathematical notation for this paper is summarized in Table 1 in Appendix 10.
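This decomposition can be sanity-checked numerically: at a fixed spatial point, each $\mathcal {P}$-field component is just a temporal Fourier coefficient of the measured intensity. A minimal sketch with a synthetic sinusoidally modulated intensity (all parameter values illustrative):

```python
import numpy as np

# Synthetic intensity at one spatial point: a DC term plus a sinusoidal
# modulation at f_m. Its temporal Fourier transform (the P-field) should carry
# power only at omega_- = 0 and omega_- = +/- 2*pi*f_m.
T, N = 1e-6, 4096                      # observation window (s), sample count
t = np.arange(N) * T / N
f_m = 20e6                             # modulation frequency, Hz (illustrative)
I = 1.0 + 0.5 * np.cos(2 * np.pi * f_m * t)

P = np.fft.fft(I) / N                  # discrete P-field over omega_-
freqs = np.fft.fftfreq(N, d=T / N)     # frequency axis, Hz

# strongest nonzero-frequency component: the modulation tone, amplitude 0.25
k = int(np.argmax(np.abs(P[1:]))) + 1
print(abs(freqs[k]), abs(P[k]))
```

The strongest component away from DC sits at the modulation frequency, with half of the cosine amplitude split between the positive- and negative-frequency bins.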

To model the $\mathcal {E}$-field propagating through a phase scrambling mask, such as a diffuser, a random space-dependent time delay can be added to each $\mathcal {E}$- field component. The statistical properties of this delay can be inferred by modeling an ensemble of statistically identical diffusers (denoted by the $\langle \rangle$ brackets above) [29]. It can be shown that this time delay has negligible impact on the $\mathcal {P}$-field phase [29,34].

Since the $\mathcal {P}$-field phase is known before and after the spatial scattering event, a Huygens-like integral can be defined to accurately propagate $\mathcal {P}$-field spherical wavelets [39] from an arbitrary diffuse surface A to surface B. It has been shown [29,35,41] that these spherical wavelets can be approximated as parabolic wavelets under the paraxial approximation. Consequently, the Fresnel Diffraction integral can be used to independently propagate individual $\mathcal {P}$-field frequencies $\omega _{-}$ from an arbitrary initial diffuse plane, $z_0$, to an arbitrary final plane $z_1$. Then we can write down the propagated $\mathcal {P}$-field wavefront, $\mathcal {P}_1$, at $z_1$ in terms of the initial $\mathcal {P}$-field wavefront, $\mathcal {P}_0$, at $z_0$:

$$\mathcal{P}_{1}(\mathbf{\rho}_{1},\omega_{-}) = {\int{\text{d}^{2}\mathbf{\rho}_{0}\mathcal{P}_{0}(\mathbf{\rho}_{0},\omega_{-})\frac{e^{i\omega_{-}L/c}e^{i\omega_{-}{|{\mathbf{\rho}_{1} - \mathbf{\rho}_{0}}|}^{2}/2cL}}{L^{2}},}}$$
where $\rho _0$ and $\rho _1$ describe the spatial coordinates at the $z_0$ and $z_1$ planes respectively, and $L = z_1 - z_0$. We will henceforth refer to this model as the $\mathcal {P}$-field forward model (PFM).
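The propagation integral above discretizes directly. As a minimal sketch (in flatland, with an illustrative 1 GHz modulation), a single $\mathcal {P}$-field point source propagated over a distance $L$ acquires exactly the axial plus parabolic phase profile that the kernel prescribes:

```python
import numpy as np

# Discretized flatland version of the P-field Fresnel kernel above. A P-field
# point source at rho_0 = 0 should produce the phase profile
# omega_m * (L/c + rho_1^2 / (2*c*L)) at the output plane.
c = 3.0e8
omega_m = 2 * np.pi * 1e9              # modulation frequency, rad/s (illustrative)
L = 1.0                                # propagation distance, m

rho0 = np.array([0.0])                 # point-source location
P0 = np.array([1.0 + 0.0j])            # unit-amplitude P-field component
rho1 = np.linspace(-0.3, 0.3, 61)      # output-plane coordinates, m

kernel = np.exp(1j * omega_m * L / c) * \
         np.exp(1j * omega_m * np.abs(rho1[:, None] - rho0[None, :])**2 / (2 * c * L)) / L**2
P1 = kernel @ P0                       # propagated P-field wavefront

expected_phase = omega_m * (L / c + rho1**2 / (2 * c * L))
assert np.allclose(np.angle(P1), np.angle(np.exp(1j * expected_phase)))
```

The comparison through `np.angle` handles the $2\pi$ phase wrap, so the discrete sum reproduces the kernel's axial and differential phase terms exactly.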

2.2 Limitations of the $\mathcal {P}$-field forward model

While the $\mathcal {P}$-field forward model has been shown to successfully generate room-sized reconstructions of the hidden scene [42] efficiently using fast Fourier transforms [40] for non-planar relay surfaces [43], it has some limitations. Suppose we propagate the $\mathcal {P}$-field from some surface A to a surface B. This forward model is contingent on having the first surface, from where the field originates, be completely diffuse so that the directionality of the $\mathcal {P}$-field spherical wavelets matches the directionality of the underlying optical carrier. If there is a mismatch, then the $\mathcal {P}$-field model predicts an inaccurate wavefront at surface B.

One immediate repercussion of this limitation is the incorrect calculation of diffraction effects at edges [44–46]. Consequently, the $\mathcal {P}$-field model fails to predict $\mathcal {P}$-field diffraction accurately, especially in scenarios involving sharp edges from deterministic occluders present within diffuse imaging scenarios. Recent work seeks to quantify the error of this Huygens-like model as surface A becomes more specular [47]. This prohibitive limit of only propagating from diffuse surfaces means that this mathematical framework cannot be employed to propagate the $\mathcal {P}$-fields from occluder planes in Shadow-NLOS. We experimentally demonstrate this limitation in Section 4.1.

3. Two frequency spatial Wigner distribution (TFSWD) formulation to describe $\mathcal {P}$-field propagation through systems

In wave optics, the position of the wave and its momentum, i.e., its spatial frequency, form Fourier duals, and both representations completely describe the wavefront at a given plane. While the Fourier transform converts from one representation to the other, the Wigner distribution is a formulation for describing the wavefront using both domains simultaneously [38,48,49]. Liu et al. [50] previously used Wigner distributions to describe and visualize the $\mathcal {P}$-field wavefront. However, their forward model makes the same assumptions as the original PFM and is therefore subject to the same limitations (as outlined in Section 2.2).

In this section, we define the Wigner distribution and show its connection to light field [51,52]. Next, we define the TFSWD and show how it incorporates the $\mathcal {P}$-field. Finally, we use the light field framework to build a graphical representation for the TFSWD.

3.1 Wigner distribution

The Wigner distribution (WD) represents the $\mathcal {E}$-field wavefront at a given plane by taking the spatial Fourier transform of its spatial autocorrelation function:

$$W(\mathbf{\rho}_{+},\mathbf{k}) \equiv {\int{\text{d}^{2}\mathbf{\rho}_{-}{\langle{\mathcal{E}_{z}(\mathbf{\rho})\mathcal{E}_{z}^{*}(\mathbf{\rho}^{\prime})}\rangle}}}e^{- i\mathbf{k} \cdot \mathbf{\rho}_{-}},$$
$$\equiv {\int{\text{d}^{2}\mathbf{\rho}_{-}{\langle{\mathcal{E}_{z}(\mathbf{\rho}_{+} + \mathbf{\rho}_{-}/2)\mathcal{E}_{z}^{*}(\mathbf{\rho}_{+} - \mathbf{\rho}_{-}/2)}\rangle}}}e^{- i\mathbf{k} \cdot \mathbf{\rho}_{-}},$$
where we change the basis to keep track of global and local autocorrelations separately and redefine the spatial coordinates in terms of an average $\rho _{+}$ and difference $\rho _{-}$ using: $\rho _{+} = (\rho + \rho ^{\prime })/2$, and $\rho _{-} = \rho - \rho ^{\prime }$. $\mathbf {k}$ encodes the directionality of the optical carrier and is commonly described as "spatial frequency" in the context of light fields:
$$\mathbf{k} = s/\lambda_0 = \sin(\theta)/\lambda_0,$$
where $\lambda _0 = 2\pi c/\omega _0$, $s$ is the transverse component of the propagation vector of the optical carrier, and $\theta$ is the angle between the ray and the optical axis [53]. The degree of local autocorrelation is denoted by $\rho _{-}$, and its Fourier dual $\mathbf {k}$ encodes information about the directionality $s$ of the light ray. Under the paraxial approximation, $\sin (\theta ) \approx \theta$ and
$$\mathbf{k} \approx \theta/\lambda_0 \propto \theta ,$$
$$W(\rho_{+}, \mathbf{k}) \approx L_z(\rho_{+}, \mathbf{k}) ,$$
so $\mathbf {k}$ becomes the wave optics analog of ray directionality in ray optics. Similarly, the light field, $L_z$, from ray optics approximates the WD from wave optics [54]. While the Wigner Distribution keeps track of the location and direction of a light ray, it does not track any time of flight information. This limitation renders it unsuitable for modeling light transport in ToF-NLOS.
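The connection between the WD and ray directionality can be illustrated with a small numerical experiment: the discrete WD of a tilted plane wave concentrates on a single spatial-frequency bin at every position. This is a sketch under the simplifications of Section 3.3 (1D field, arbitrary units), not a general-purpose WD implementation:

```python
import numpy as np

# Discrete Wigner distribution of a 1D field, following the definition above.
# For a tilted plane wave E(rho) = exp(i*k_t*rho), the WD concentrates at one
# spatial-frequency bin at every position rho_+. Note: sampling the rho_-/2
# arguments on the integer grid doubles the effective frequency axis, so the
# ridge lands at twice the tilt's cycle count.
N = 128
rho = np.arange(N) - N // 2              # spatial grid (arbitrary units)
k_t = 2 * np.pi * 8 / N                  # tilt: 8 cycles across the aperture
E = np.exp(1j * k_t * rho)

shifts = np.arange(N) - N // 2
W = np.zeros((N, N))
for i in range(N):
    # local autocorrelation E(rho_+ + s) E*(rho_+ - s), periodic indexing
    ac = E[(i + shifts) % N] * np.conj(E[(i - shifts) % N])
    W[i] = np.abs(np.fft.fftshift(np.fft.fft(ac)))

peak = int(np.argmax(W[N // 2]))         # frequency bin of the ridge
```

The ridge appears at the same bin for every $\rho_+$, mirroring the "light ray as a horizontal line in phase space" picture used in the visualizations below.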

3.2 TFSWD and its relation to $\mathcal {P}$-fields

To address this limitation, Dove et al developed the two frequency spatial Wigner distribution (TFSWD) [29]. The TFSWD representation of $\mathcal {P}$-field light transport overcomes the constraints of the original $\mathcal {P}$-field model by appropriately incorporating effects at both the optical wavelength scale and the $\mathcal {P}$-field wavelength scale, including interference and diffraction effects.

In addition to the phase of the $\mathcal {P}$-field wave at each modulation frequency $\omega _{-}$, the TFSWD light transport model incorporates directionality information ($s$ or $\mathbf {k}$) along with the frequency $\omega _{+}$ of the optical carrier at each spatial coordinate $\rho _{+}$. This integration is accomplished by computing the space-time autocorrelation of the $\mathcal {E}$-field. The TFSWD function is described as:

$$W_{\mathcal{E}_{z}}(\mathbf{\rho}_{+},\mathbf{k},\omega_{+},\omega_{-}) \equiv {\int{\text{d}^{2}\mathbf{\rho}_{-}{\langle{\mathcal{E}_{z}(\mathbf{\rho}_{+} + \mathbf{\rho}_{-}/2,\omega_{+} + \omega_{-}/2) \mathcal{E}_{z}^{*}(\mathbf{\rho}_{+} - \mathbf{\rho}_{-}/2,\omega_{+} - \omega_{-}/2)}\rangle}}}e^{- i\mathbf{k} \cdot \mathbf{\rho}_{-}}.$$

It can be shown that the TFSWD is equivalent to the 6D light field [29]:

$$I_{z}(\mathbf{\rho}_{+},\mathbf{k},\omega_{+},t) \equiv \frac{1}{\lambda_{0}^{2}}{\int{\frac{\text{d}\omega_{-}}{2\pi}W(\mathbf{\rho}_{+},\mathbf{k},\omega_{+},\omega_{-})e^{- i\omega_{-}t},}}$$
where $I_{z}$ is the time-dependent specific irradiance measured at spatial coordinate $\rho _{+}$, directionality $\mathbf {k}$, optical frequency $\omega _{+}$ at time $t$. So TFSWD is simply the temporal Fourier transform of the 6D light field and tracks the evolution of the modulated wave in the frequency domain.

The projection property of the Wigner distribution allows the recovery of the positional distributions and the frequency distributions as marginals. This enables us to recover $\mathcal {P}$-field, spatial Wigner distribution, and the directional intensity from TFSWD:

$$\mathcal{P}_{z}(\mathbf{\rho}_{+},\omega_{-}) = {\int{\frac{\text{d}\omega_{+}}{2\pi}{\int\frac{\text{d}^{2}\mathbf{k}}{{(2\pi)}^{2}}}W_{\mathcal{E}_{z}}(\mathbf{\rho}_{+},\mathbf{k},\omega_{+},\omega_{-}).}}$$
$$W(\mathbf{\rho}_{+},\mathbf{k}) = \frac{1}{\lambda_{0}^{2}}{\int\frac{\text{d}\omega_{-}}{2\pi}}{\int\frac{\text{d}\omega_{+}}{2\pi}}W_{\mathcal{E}_{z}}(\mathbf{\rho}_{+},\mathbf{k},\omega_{+},\omega_{-}).$$
$$I_{z}(\mathbf{\rho}_{+},\mathbf{k},t) = \frac{1}{\lambda_{0}^{2}}{\int\frac{\text{d}\omega_{-}}{2\pi}}{\int\frac{\text{d}\omega_{+}}{2\pi}}W_{\mathcal{E}_{z}}(\mathbf{\rho}_{+},\mathbf{k},\omega_{+},\omega_{-})e^{- i\omega_{-}t}.$$

Therefore, we can also recover the measured intensity from a ToF detector by combining Eq. (5) and Eq. (15). While both the 6D light field and the TFSWD contain the same information about light transport, employing the light field framework to describe these propagation primitives offers the advantage of using intuitive visualizations developed by experts in computer graphics [52]. We hope that integrating these visualizations with the TFSWD will make this mathematical model accessible to a broader audience.

3.3 TFSWD visualizations

In this section, we visualize the TFSWD to describe propagation primitives that form the building blocks of light transport in NLOS imaging scenarios. We describe and visualize the primitives for light rays, modulated light rays, propagation of light, and interaction of light with thin lenses, occluders, and diffusers. These primitives were derived in [29], and have been summarized briefly in Appendix 8. For ease of visualization, we make the following simplifications to the TFSWD:

  • 1. We operate in flatland so that we have a single spatial coordinate denoted by $\rho$ and angular direction $\mathbf {k}$.
  • 2. We make the paraxial approximation so that $\mathbf {k}$ and $\theta$ become interchangeable (Eq. (12)).
  • 3. We assume that our light source is a coherent laser at a single frequency $\omega _0$. The $\omega _{+}$ for the TFSWD becomes a singleton dimension, and is completely defined by $\omega _{+}=\omega _0$.
  • 4. The color in our TFSWD visualization is the phase of the complex number that keeps track of $\mathcal {P}$-field wavefront.

3.3.1 Step 1

With these assumptions, TFSWD becomes a 4D function with a singleton dimension $\omega _{+}=\omega _0$. Then the conventional 2D light field utilized in computer graphics [51] at plane $z$ is the DC component for the $\omega _{-}$ modulation:

$$L_z(\rho_+, \mathbf{k}) = W_{\mathcal{E}_{z}}(\mathbf{\rho}_{+},\mathbf{k},\omega_{+}=\omega_0,\omega_{-}=0).$$

We can plot this 2D light field in phase space, which is defined by the $(\rho, k)$ dimensions, as shown in the middle column of Fig. 2 and Fig. 3. For example, a light ray averaged over all modulations is a delta function in this phase space, and the location of this function defines the location and direction of the ray.

Fig. 2. Visualizations for propagation primitives that serve as the building blocks for light transport in NLOS imaging scenarios. These include light rays, modulated light rays, diffusers, point light sources, and propagation of diffuse light.

Fig. 3. Visualizations for additional propagation primitives that serve as the building blocks for light transport in NLOS imaging scenarios. These include occlusions for Shadow-NLOS, plane wave representation, plane wave propagation, and a thin positive lens.

3.3.2 Step 2

Similarly, we can look at another 2D function (Column 3 in Fig. 2 and Fig. 3) to keep track of the directional $\mathcal {P}$-field,

$$\mathcal{P}_z(\rho_+, \mathbf{k}, \omega_{-} = \omega_{\lambda_p}) = W_{\mathcal{E}_{z}} (\mathbf{\rho}_{+},\mathbf{k},\omega_{+}=\omega_0,\omega_{-} = \omega_{\lambda_p}),$$
where $\omega _{\lambda _p} = 2 \pi c/\lambda _p$ and $\lambda _p$ is the wavelength of modulation. A light ray modulated at $\omega _{\lambda _p}$ is a delta function in this phase space, and the color represents the phase of the $\mathcal {P}$-field as shown in Step 2 of Fig. 2. We avoid tracking the amplitude for simplicity.

3.3.3 Step 3

When the modulated light ray passes through a diffuser, the input light is spread over multiple angles, creating a point light source. For the $\mathcal {P}$-field, the incoming phase is now copied over to all the outgoing light rays as shown in Step 3 of Fig. 2. Mathematically, this is equivalent to taking the input TFSWD and convolving it with the WD of the diffuser, $W_{D}(\mathbf {\rho }_{+},\mathbf {k})$, along the angular dimension. For our visualization, we set $W_{D}(\mathbf {\rho }_{+},\mathbf {k}) = \text {rect}(\theta /k_0\lambda _0)$ where $k_0\lambda _0 \approx \theta _0$ is the diffuser cone angle and:

$$\text{rect} \left( \frac{\theta}{\theta_0}\right) = \begin{cases} 1/(2\theta_0), & |\theta| \leq \theta_0 \\ 0, & |\theta| > \theta_0 \\ \end{cases}$$
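In flatland, this diffuser primitive reduces to a convolution of the light field with the rect kernel along the angular axis. A minimal sketch (grid sizes and cone angle are illustrative) showing that a single modulated ray spreads over the diffuser cone while its $\mathcal {P}$-field phase is copied to every outgoing angle:

```python
import numpy as np

# Step 3 sketch: a diffuser convolves the flatland light field with a rect
# kernel along the angular axis, spreading one incoming ray over the diffuser
# cone while copying its P-field phase to every output angle.
n_rho, n_theta = 64, 64
theta = np.linspace(-0.5, 0.5, n_theta)          # angle axis (rad, paraxial)
theta0 = 0.1                                     # diffuser half-cone (illustrative)

# incoming modulated ray: one position, one angle, P-field phase pi/3
Lf = np.zeros((n_rho, n_theta), dtype=complex)
Lf[32, 32] = np.exp(1j * np.pi / 3)

rect = (np.abs(theta) <= theta0) / (2 * theta0)  # normalized angular kernel
out = np.array([np.convolve(row, rect, mode='same') for row in Lf])

# light leaves only from the ray's position, spread over ~2*theta0 in angle,
# and every illuminated angle carries the same P-field phase
phases = np.angle(out[32][np.abs(out[32]) > 1e-12])
```

Only the row containing the incoming ray is illuminated after the diffuser, and every nonzero entry in that row carries the incoming $\mathcal {P}$-field phase.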

3.3.4 Step 4

In Step 4, we propagate the scattered light to a second plane that is distance $L$ away from the first. While the shearing described in Eq. (57) changes the shape of the 4D function, the diffraction kernel updates the complex value or the "color" of the function in our visualization by the phase accumulated by each $\mathcal {P}$-field frequency based on the photon time of flight. The kernel in Eq. (57) has two components:

  • 1. The axial phase term, $P_1 = e^{i\omega _{-}L/c}$, where the entire input wavefront acquires a phase that is proportional to the axial distance traveled by the wavefront along the optical axis.
    $$P_1 = e^{i\omega_{-}L/c}.$$
  • 2. The differential phase term, $P_2 = e^{i\omega _{-}{|{\mathbf {\rho }_{+} - \mathbf {\rho }_{0}}|}^{2}/2cL}$, where the input wavefront acquires a phase that is proportional to the square of the transverse offset, $|{\mathbf {\rho }_{+} - \mathbf {\rho }_{0}}|$.
    $$P_2 = e^{i\omega_{-}{|{\mathbf{\rho}_{+} - \mathbf{\rho}_{0}}|}^{2}/2cL}.$$

While $P_1$ is constant for plane-to-plane propagation, $P_2$ varies with the parabolic profile given by $|{\mathbf {\rho }_{+} - \mathbf {\rho }_{0}}|^2$. Since $P_1$ is a constant offset, the color of the visualization varies only with $P_2$, and the phase is symmetric around the location, $\rho _0$, of the point light source.
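A small numerical sketch of Step 4: each ray in the flatland phase space moves from $(\rho_0, \theta)$ to $(\rho_0 + L\theta, \theta)$, and its $\mathcal {P}$-field value is multiplied by $P_1$ and $P_2$. With an illustrative 1 GHz modulation, the acquired phase is symmetric around the source position, as described above:

```python
import numpy as np

# Step 4 sketch: free-space propagation over distance L shears the flatland
# phase space (each ray moves to rho + L*theta) and multiplies the P-field
# value by the axial (P1) and differential (P2) phase terms.
c = 3.0e8
omega_m = 2 * np.pi * 1e9               # modulation frequency (illustrative)
L = 1.0                                 # propagation distance, m

# a fan of rays from a point source at rho0 = 0 with P-field phase 0
theta = np.linspace(-0.2, 0.2, 9)
rho0 = np.zeros_like(theta)
val = np.ones_like(theta, dtype=complex)

rho1 = rho0 + L * theta                                      # vertical shear
P1 = np.exp(1j * omega_m * L / c)                            # constant axial phase
P2 = np.exp(1j * omega_m * (rho1 - rho0)**2 / (2 * c * L))   # parabolic phase
val1 = val * P1 * P2
```

Because $P_2$ depends only on the squared transverse offset, the resulting phase profile is mirror-symmetric about the source position.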

3.3.5 Step 5

For this paper, we adopt the geometric optics assumption that the occluder is a spatial binary mask, $M(\rho _{+})$, whose edges have no impact on the directionality of incoming light. This can be modeled as a delta function in angle, $\theta \approx \mathbf {k} \lambda _0$:

$$W_{P}(\mathbf{\rho}_{+},\mathbf{k}) = \begin{cases} \delta (\theta), & \quad M(\rho_{+}) = 1 \\ 0, & \quad M(\rho_{+}) = 0 \\ \end{cases}$$
where we use the delta function to denote the identity operator in convolution. In our visualization, the occluder blocks any light rays that it intersects.
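This occluder primitive is straightforward to express numerically: the binary mask zeroes the rows of the flatland light field that it covers and leaves the angular axis untouched. A minimal sketch with an illustrative mask:

```python
import numpy as np

# Step 5 sketch: under the geometric-optics assumption, an occluder is a
# binary spatial mask that zeroes every row of the flatland light field it
# covers, leaving ray directions unchanged.
n_rho, n_theta = 64, 64
Lf = np.ones((n_rho, n_theta), dtype=complex)    # uniform field (illustrative)

M = np.ones(n_rho)
M[20:40] = 0                                     # opaque occluder segment

out = M[:, None] * Lf                            # broadcast mask over angles
```

The masked rows vanish for every angle, which is exactly the sharp shadow behavior the PFM cannot reproduce.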

3.3.6 Step 6

A point light source spreads light in all directions from a single position. Its dual is the plane wave, which propagates at a single angle at all spatial locations. A plane wave with a wavefront orthogonal to the axial direction becomes a vertical line in our light field visualization. Similarly, the $\mathcal {P}$-field visualization is also a vertical line with the same color since the phase is uniform everywhere on the plane wave.

3.3.7 Step 7

If the plane wave from Step 6 propagates in the axial direction, the differential phase term, $P_2$, is always zero since the transverse offset is always zero. Because the phase then depends only on the axial phase term, $P_1$, the phase (and hence the color) updates by the same amount everywhere on the plane wave and wraps around after the wave travels a distance $z = \lambda _p$, the $\mathcal {P}$-field wavelength.

3.3.8 Step 8

While free space propagation causes a vertical shearing, the thin lens is its dual and causes a shearing in the horizontal direction. Rays at a positive spatial coordinate acquire a negative angle, while rays at a negative spatial coordinate acquire a positive angle. The magnitude of the angle depends on the focal length f of the lens. The lens imparts a $\mathcal {P}$-field phase that also varies parabolically and is proportional to the focal length $f$ and modulation frequency $\omega _{-}$ (Eq. (58)).
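The lens shear can be checked against elementary ray optics: applying the horizontal shear $\theta \to \theta - \rho/f$ and then the free-space vertical shear over a distance $f$ should bring a collimated input beam to a point focus. A minimal sketch (focal length and beam size illustrative):

```python
import numpy as np

# Step 8 sketch: a thin lens of focal length f shears phase space horizontally,
# mapping each ray (rho, theta) to (rho, theta - rho/f), the dual of the
# free-space vertical shear. A collimated beam then crosses the axis at z = f.
f = 0.5                                # focal length, m (illustrative)
rho = np.linspace(-0.05, 0.05, 11)     # collimated input: theta = 0 everywhere
theta = np.zeros_like(rho)

theta_out = theta - rho / f            # lens (horizontal) shear
rho_at_f = rho + f * theta_out         # free-space (vertical) shear over f
```

Rays at positive positions acquire negative angles and vice versa, and all rays meet on the axis at the focal plane.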

3.3.9 Recovery of experimental measurement from TFSWD

To recover the intensity measurement with a fast ToF detector and compare it with experimental data, we combine Eq. (5) and Eq. (15):

$${\langle{I_{z}(\mathbf{\rho}_{z},t)}\rangle} = {\int{\frac{\text{d}{\omega}_{-}}{2\pi}\left( \mathcal{P}_{z}(\mathbf{\rho}_{z},{\omega}_{-}) \right) e^{- i{\omega}_{-}t}}},$$
$$= {\int{\frac{\text{d}{\omega}_{-}}{2\pi} \left( {\int{\frac{\text{d}\omega_{+}}{2\pi}{\int\frac{\text{d}^{2}\mathbf{k}}{{(2\pi)}^{2}}}W_{\mathcal{E}_{z}}(\mathbf{\rho}_{+},\mathbf{k},\omega_{+},\omega_{-})}} \right) e^{- i{\omega}_{-}t}}}.$$
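These two steps discretize naturally: integrate the TFSWD over the angular axis to obtain the $\mathcal {P}$-field, then inverse-Fourier-transform over $\omega _{-}$ to recover the time-resolved intensity. A minimal sketch with a synthetic single-tone TFSWD (all values illustrative):

```python
import numpy as np

# Sketch of the two equations above with omega_+ fixed at omega_0: marginalize
# the TFSWD over the angular axis to get the P-field, then inverse-transform
# over omega_- to recover the time-resolved intensity. All values synthetic.
n_k, n_w = 8, 256
dt = 1e-10                                       # time resolution, s
w_minus = 2 * np.pi * np.fft.fftfreq(n_w, d=dt)  # omega_- grid, rad/s

# TFSWD at one spatial point: the same P-field spectrum replicated over n_k
# angles; a single modulation tone at bin m plus its conjugate partner
m = 10
P_spec = np.zeros(n_w, dtype=complex)
P_spec[m] = 0.5
P_spec[-m] = 0.5
W = np.tile(P_spec, (n_k, 1)) / n_k              # shape (k, omega_-)

P = W.sum(axis=0)                                # marginal over the angular axis
I_t = np.fft.ifft(P) * n_w                       # inverse transform over omega_-

t = np.arange(n_w) * dt
```

The recovered intensity is the expected real cosine at the modulation frequency, which is what a fast ToF detector at that point would record.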

4. Experiments

In this section, we conduct two different experiments that demonstrate the utility of modeling the directionality of the optical carrier using the TFSWD. We show that this updated framework is able to correctly incorporate effects at the larger $\mathcal {P}$-field spatio-temporal scales as well as effects at the scale of the shorter optical wavelength. We use our previously developed visualizations to illustrate the TFSWD prediction in each case, and show that it closely matches the measured experimental results. Furthermore, we show that the PFM predicts the correct answer only when we propagate from a uniformly diffuse surface on which the optical carrier exists everywhere.

In both experiments, we employ a Satsuma femtosecond laser ($\lambda _0 = 515$ nm) to create a diffuse point source. The laser emits pulses at a repetition rate of 10 MHz, and the laser power is set to 0.35 W. To capture the optical signal at the detector plane, we utilize a PDM series commercial SPAD from MPD mounted on a custom translation stage that can sequentially scan a 0.65 m × 0.65 m 2D plane in roughly 1 cm increments. The MPD SPAD has a photon detection efficiency of around 45% at 515 nm, with a photon timing resolution better than 50 ps FWHM.

4.1 Example: application of TFSWD to describe double slit $\mathcal {P}$-field interference

First, we experimentally show that the $\mathcal {P}$-field forward model (PFM) fails to predict the correct answer when we propagate from a non-diffuse plane. To showcase this, we set up a phasor double-slit experiment (see Fig. 4). Our laser creates a diffuse point source that illuminates the two slits at the same time, turning each slit into a coherent $\mathcal {P}$-field point source. This is the simplest setup that creates an easily measurable $\mathcal {P}$-field interference pattern at the detector plane. We repeat this experiment with and without the occluder (Fig. 4).

 figure: Fig. 4.

Fig. 4. Set Up for the $\mathcal {P}$-fields Double Slit Experiment with a known occluder. We use this experiment to demonstrate how the PFM fails to predict sharp shadows from the occluder. $L_1 = 1.16\ m, \ L_2 = 1.18\ m,\ L_3 = 0.77\ m, \ a = 0.5\ m, \ s = 0.02\ m, \ W = 0.105\ m, \ D = 0.65\ m$.


The two slits themselves are transmissive diffuse surfaces, so the $\mathcal {P}$-field model is valid when propagating from this plane directly to the detector plane, as illustrated in (Row 1, Column 1) of Fig. 6(b). However, introducing an occluder with sharp edges leads the plane-to-plane $\mathcal {P}$-field model to predict that the modulated wave bends around the occluder, resulting in fuzzy $\mathcal {P}$-field shadows at the detector plane (Row 1, Column 2 of Fig. 6(b)).

The $\mathcal {P}$-field model erroneously predicts diffraction fringe effects behind the opaque straight edge at the scale of the $\mathcal {P}$-field wavelength as opposed to the optical wavelength. This is because PFM wrongly assumes that the optical light propagates uniformly in all directions (as a spherical wavelet) at the occluder plane. In reality, the directionality of the optical carrier is governed by the light rays that travel from the slits to the detector plane. Blocking the light rays with an occluder creates sharp transitions in the measured optical carrier (Fig. 6(a)), which is modeled incorrectly by the PFM.

Next, we use the TFSWD framework to build a step-by-step analysis, as shown in Fig. 5, and visually demonstrate how the TFSWD accurately incorporates the directionality of the optical carrier. First, the laser beam passing through the diffuser creates a diffuse point light source containing multiple angles. Next, propagation to the plane with the two slits shears the light field and updates the phasor field phase. After illuminating the slit plane, the light is blocked everywhere except at the location of the slits. Due to a second diffuser, the two slits become $\mathcal {P}$-field point sources propagating at multiple angles. Subsequent propagation to the occluder plane shears the light field and again updates the phasor field phase. The occluder blocks all the light at certain spatial locations. Finally, propagation to the detector shears the light field and updates the phasor field phase once more.
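To make this step-by-step bookkeeping concrete, here is a minimal ray-based (light-field) sketch of the same pipeline: point sources at the slits, an occluder, then the detector. The distances, slit positions, occluder extent, and the 4 cm modulation wavelength are illustrative assumptions, not the exact experimental values.

```python
import numpy as np

# Toy light-field sketch of the step-by-step TFSWD bookkeeping. Rays are
# (rho, theta, P-field phase) triples; propagation shears rho and advances the
# modulation phase (paraxial, constant term only).

C = 3e8            # speed of light [m/s]
LAMBDA_P = 0.04    # assumed P-field wavelength [m]
OMEGA_P = 2 * np.pi * C / LAMBDA_P

def point_source(rho0, n_rays=101, max_angle=0.3):
    """Diffuse point source: one position carrying many propagation angles."""
    thetas = np.linspace(-max_angle, max_angle, n_rays)
    return np.stack([np.full(n_rays, rho0), thetas, np.zeros(n_rays)], axis=1)

def propagate(rays, L):
    """Shear: rho -> rho + L*theta, and advance the P-field phase by omega_p*L/c."""
    out = rays.copy()
    out[:, 0] += L * out[:, 1]
    out[:, 2] += OMEGA_P * L / C
    return out

def occlude(rays, lo, hi):
    """Opaque occluder over lo <= rho <= hi: blocked rays are simply dropped."""
    return rays[(rays[:, 0] < lo) | (rays[:, 0] > hi)]

# Two coherent P-field point sources (the slits), an occluder, then the detector:
rays = np.vstack([propagate(point_source(-0.01), 0.77),
                  propagate(point_source(+0.01), 0.77)])
rays = occlude(rays, -0.05, 0.05)
rays = propagate(rays, 0.5)

# "Integrating along k": sum complex P-field phasors per detector position bin.
bins = np.linspace(-0.65, 0.65, 65)
signal = np.zeros(bins.size + 1, dtype=complex)
for i, ph in zip(np.digitize(rays[:, 0], bins), np.exp(1j * rays[:, 2])):
    signal[i] += ph
```

Summing the per-bin complex phasors is the discrete analog of integrating the TFSWD along the $k$ dimension; masking rays at the occluder plane is what the plane-to-plane PFM cannot express.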


Fig. 5. Visualization of light transport for the double slit experiment with step-by-step TFSWD analysis.


We utilize Eq. (15) to integrate along the $k$ dimension and generate the theoretical $\mathcal {P}$-field signal from the TFSWD to compare it to our measurement. The time-averaged spatial distribution of the optical carrier, denoted as $\mathcal {P}_z(\rho _+, \omega _{-} = 0)$, is encoded in the DC component of our measured signal. This distribution can be fully modeled using the light field framework (Row 1 of Fig. 5), and we observe a close match between the prediction and our experimental results (Fig. 6(a)). Similarly, the TFSWD prediction from Row 2 of Fig. 5 can be compared with the experimental measurement by examining $\mathcal {P}_z(\rho _+, \omega _{-} = \omega _{\lambda _p})$.


Fig. 6. (a) The DC component is the optical signal averaged over time and shows the two shadows cast by light from each of the two slits; the TFSWD prediction matches the experimental results. (b) Theoretical predictions for the phasor field and TFSWD forward models for the double slit experiment with and without the occluder. Both models match experimental results closely when there is no occluder, but only the TFSWD gives the right answer when the occluder is added.


Since each point in the TFSWD visualization (Row 2 of Fig. 5) is a complex number, adding them together when integrating along $k$ produces the correct interference pattern when we have no occluders (Column 1, Row 2 of Fig. 6(b)). When we add the occluder (Column 2, Row 2 of Fig. 6(b)), we see that the TFSWD uses the directionality of the light rays to correctly mask out rays from the two slits that intersect the occluder and do not make it to the detector plane.

We have established that the $\mathcal {P}$-field recovered from the TFSWD model matches the experimental results closely. The general mismatch on the left-hand side of the detector plane ($\rho < 0$) in Fig. 6 can be explained by the occluder not being perfectly centered on the optical axis. This means that the slits are blocked by different amounts at the detector plane, giving us the asymmetry visible in both Fig. 6(a) and Column 2, Row 2 of Fig. 6(b).

4.2 Incoherent imaging using a conventional lens or mirror

In this experiment, we implement a basic imaging system shown in Fig. 7. This system performs the imaging operation between the object plane and the image plane using a focusing element (depicted as a lens in Fig. 7) of focal length $f$. The choice of focal length dictates the relationship between the object and image plane locations. Also shown in Fig. 7 is an optional diffuser (Diffuser 2) after the lens. The purpose of this experiment is to:

  • 1. Differentiate between the various facets of E-field and $\mathcal {P}$-field imaging and clearly demonstrate the conditions under which the $\mathcal {P}$-field integral fails.
  • 2. Experimentally demonstrate how an optical lens can be used to focus and project $\mathcal {P}$-field through intervening diffusers or relay walls. This experiment validates the theoretical prediction by Dove et al [36] that NLOS imaging can be performed with a physical lens through intervening diffusers, thereby removing the necessity for computationally expensive reconstruction algorithms. This result is rederived succinctly in Appendix 9.
  • 3. Illustrate that in a 'pure' $\mathcal {P}$-field imaging scenario, the imaging resolution is set by the $\mathcal {P}$-field wavelength (as predicted by the $\mathcal {P}$-field integral in Eq. (65)). In situations where the optical carrier does not suffer phase scrambling between the source and the imaging planes, the diffraction limit and the imaging resolution are dictated by the E-field wavelength. This occurs because the $\mathcal {P}$-field cannot exist in regions where the optical E-field is absent. This showcases the difference between $\mathcal {P}$-field imaging and classical radio frequency (RF) imaging and highlights the possibility of achieving an optical diffraction limit despite using much larger $\mathcal {P}$-field wavelengths.


Fig. 7. Setup to demonstrate how an optical lens can be used to focus and project $\mathcal {P}$-field through intervening diffusers or relay walls. We repeat the experiment with and without Diffuser 2. The thin lens setup is annotated for the analysis from [36], recapped in Appendix 9.


The large $\mathcal {P}$-field wavelength mandates the use of large apertures to focus the $\mathcal {P}$-field wavefront to a diffraction-limited spot. We used a parabolic mirror to implement a reflective imaging geometry, since large-aperture glass lenses are very heavy and consequently impractical to use. Our final setup, depicted in Fig. 8, features a lightweight acrylic parabolic mirror with a diameter of 0.74 m, a focal length of 0.66 m, and a minimum optical spot size of 2.54 cm. These mirrors are manufactured by GreenPowerScience to focus sunlight for cooking and heating purposes. As a result, the focused spot has many aberrations, as seen in Fig. 12(b), but the mirror can still be used to demonstrate the focusing of the $\mathcal {P}$-field thanks to the larger modulation wavelength.


Fig. 8. Setup to demonstrate how an optical mirror can be used to focus and project $\mathcal {P}$-field through intervening diffusers or relay walls. We repeat the experiment with and without Diffuser 2.


4.2.1 Application of TFSWD for physical lens experiment

We leverage the TFSWD to predict the output of the lens experiment and demonstrate how it predicts the correct answer with (Fig. 9) and without (Fig. 10) an intervening diffuser. The TFSWD visualization (Fig. 9) demonstrates how the second diffuser scatters the optical carrier everywhere: each spatial location at the diffuser plane acquires additional angles of propagation. The center of the detector plane receives the most light rays due to the diffuser, so the signal has a maximum value at the center and drops off slowly.


Fig. 9. Visualization of light transport for the thin lens experiment with Diffuser 2 (see Fig. 7). We use the TFSWD to build and visualize a step-by-step analysis.



Fig. 10. Visualization of light transport for the thin lens experiment without Diffuser 2 (see Fig. 7). We use the TFSWD to build and visualize a step-by-step analysis.


However, the $\mathcal {P}$-field phase is preserved at each spatial location at the diffuser plane. The same phase is copied onto the additional propagation angles generated by the diffuser. After propagating to the detector plane, the shearing of the $\mathcal {P}$-field causes it to constructively interfere at $\rho _+ = 0$. The amount of constructive interference drops off away from the center ($\rho _+ = 0$). Integrating out the angular dimension yields an Airy disk pattern at the $\mathcal {P}$-field diffraction limit. This allows us to use a lens to either project or image $\mathcal {P}$-field patterns through intervening diffusers, as hypothesized in [36].
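The claim that integrating out the angular dimension produces a focus at the $\mathcal {P}$-field diffraction limit can be checked numerically. The sketch below is a simplified 1-D model: an aperture matching the mirror diameter (0.74 m) refocuses a 4 cm $\mathcal {P}$-field wavefront at an assumed distance of 2 m (not the exact experimental geometry), and the first null of the resulting pattern lands near $\lambda _p z / D$.

```python
import numpy as np

# 1-D focusing sketch: a converging P-field wavefront across an aperture of
# width D interferes at the focal plane; the first null sits near the (1-D)
# P-field diffraction limit lambda_p * z / D. All values are illustrative.

lam_p = 0.04          # P-field wavelength [m]
D, z = 0.74, 2.0      # aperture width and assumed focal distance [m]

xs = np.linspace(-D / 2, D / 2, 2001)    # aperture coordinate
rho = np.linspace(0.0, 0.25, 1001)       # detector coordinate
# Path difference between each aperture point and the on-axis focused path:
delta = np.sqrt(z**2 + (xs[None, :] - rho[:, None]) ** 2) - np.sqrt(z**2 + xs**2)
pattern = np.abs(np.exp(2j * np.pi * delta / lam_p).sum(axis=1)) ** 2

# Locate the first local minimum of the focal pattern:
interior = pattern[1:-1]
nulls = np.where((interior < pattern[:-2]) & (interior < pattern[2:]))[0] + 1
first_null = rho[nulls[0]]   # close to lambda_p * z / D ~ 0.108 m
```

The same summation with a 2-D circular aperture would reproduce the Airy pattern with first null at $1.22\lambda _p z/D$; the 1-D slit case is kept here for brevity.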

When we do not have an intervening diffuser (Diffuser 2 in Fig. 7), our optical carrier is not scattered to additional spatial locations and focuses back to a point. Our TFSWD visualization in Fig. 10 shows how the $\mathcal {P}$-field also focuses back to the same point. This is because the $\mathcal {P}$-field only exists on top of the optical carrier.

4.2.2 Experimental results

We scan our MPD SPAD along a line at the detector plane for these two scenarios, and plot the DC component of the temporal Fourier transform of our measurement in Fig. 12(a). This is the time-averaged intensity and gives the spatial distribution of the optical carrier for each scenario. The presence of Diffuser 2 (in Fig. 7 and Fig. 8) scatters the optical carrier, and Fig. 12(a) shows how removing this diffuser gives us a focused optical spot on the detector plane.

The $\mathcal {P}$-field wavefront for different wavelengths can be recovered by filtering out the correct frequency components in our signal. When the optical carrier is spread over the detector plane, we see that an optical lens is able to project and focus an image of a point source through an intervening diffuser to recover a $\mathcal {P}$-field Airy disk at the detector plane. The size of the Airy disk is governed by the $\mathcal {P}$-field wavelength (Fig. 13(a)). In this case, both the PFM and the TFSWD predict the correct answer (Column 1 in Fig. 11). When we remove the diffuser, the lens focuses all the light into an optical-sized spot, regardless of the $\mathcal {P}$-field wavelength (Fig. 13(b)).
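The filtering step can be sketched as a temporal FFT of the time-resolved measurement: the DC bin gives the time-averaged carrier, and the bin at a chosen modulation frequency gives that $\mathcal {P}$-field component. The synthetic trace and its 1 GHz modulation below are assumptions for illustration, not real SPAD data.

```python
import numpy as np

# Recovering DC and one P-field frequency component from a temporal trace.
fs = 100e9                    # sample rate: 10 ps time bins (assumed)
n = 4000
t = np.arange(n) / fs
f_p = 1e9                     # assumed P-field modulation frequency
trace = 1.0 + 0.5 * np.cos(2 * np.pi * f_p * t)   # intensity-like time trace

spectrum = np.fft.rfft(trace) / n
freqs = np.fft.rfftfreq(n, 1 / fs)

dc = spectrum[0].real                      # time-averaged intensity
p_bin = int(np.argmin(np.abs(freqs - f_p)))
p_amp = 2 * np.abs(spectrum[p_bin])        # recovered modulation amplitude
```

Repeating the last two lines per detector position yields the spatial $\mathcal {P}$-field wavefront at each modulation wavelength; here the window holds an integer number of modulation cycles, which avoids spectral leakage.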


Fig. 11. Theoretical predictions for the phasor field and TFSWD forward models for thin lens experiment with and without an intervening diffuser. Predictions for both models match experimental results when we have an intervening diffuser, but only TFSWD gives us the right answer when we remove the diffuser.


The TFSWD models the directionality of the carrier and correctly predicts that the carrier masks out the $\mathcal {P}$-field spatially, creating an optical-sized diffraction spot for the $\mathcal {P}$-field (Row 2, Column 2 in Fig. 11). On the other hand, because the PFM assumes that the optical carrier exists everywhere, it wrongly predicts a spot size governed by the larger $\mathcal {P}$-field wavelength when we do not have a second diffuser (Row 1, Column 2 in Fig. 11).

The TFSWD predicts a smaller spot size than what we measure because it assumes an ideal parabolic mirror. Our actual mirror has a minimum optical spot size of 2.54 cm and significant aberrations (Fig. 12(b)) since it was designed to focus sunlight for cooking and heating. The experimental Airy disk radius, measured from the maximum to the first minimum, is about 14.3 cm, which is 18${\%}$ larger than the theoretical radius of 12.05 cm. The acrylic mirror is 0.74 m in diameter, and its large size causes it to bend in the vertical dimension. Furthermore, the mirror creates an image that is off the optical axis. This introduces significant aberrations associated with astigmatism and coma (Fig. 12(b)), which can partly explain the discrepancy.


Fig. 12. (a) Experimental results illustrating how the optical carrier scatters in the presence of Diffuser 2 from Fig. 7, while removing the diffuser reveals the sharp focus of the carrier from the lens. (b) Focused optical spot from our parabolic mirror, showcasing significant aberrations.



Fig. 13. Experimental results showcasing how the parabolic mirror focuses $\mathcal {P}$-field for multiple $\mathcal {P}$-field wavelengths when Diffuser 2 (see Fig. 8) is (a) present and (b) absent. When Diffuser 2 is absent, we get a sharp optical focus for all $\mathcal {P}$-field wavelengths.


5. Role of occluders and timing in reconstruction resolution

In this section, we seek to quantify the differences in reconstruction resolution for NLOS imaging scenarios in the presence of an occluder, photon ToF, or a combination of both factors. For each scenario, we set up a diffuse point source that potentially interacts with an occluder and illuminates a diffuse relay wall (Fig. 14). This point source serves as a proxy for a point object in the hidden scene (Fig. 1). We leverage the TFSWD framework to model light transport and simulate the measured optical signals at the relay wall across different NLOS imaging scenarios. Our primary goal is to pinpoint the point source’s location, and we quantify the relevant information in the measured signal to achieve this.


Fig. 14. Modeling light transport for a point light source illuminating a diffuse relay wall when there are no occluders in the hidden scene.


We proceed in two steps. For each scenario, we derive or estimate the point spread function (PSF) for a diffuse point source using our forward model. Performing NLOS imaging for multiple diffuse point sources is equivalent to carrying out deconvolution of the measured signal with these PSFs. Since zero frequencies cannot be recovered using traditional deconvolution techniques, quantifying the spectral bandwidth of the measured signal serves as a useful proxy for the best possible reconstruction resolution. The Fourier transform of the PSF is also easily comparable across different scenarios. Next, we leverage TFSWD visualizations to quantify and visualize reconstruction using existing inverse methods in the literature. For each of these two subsections, we investigate three separate cases.

  • 1. No occluders in the hidden scene and without photon timing
  • 2. No occluders in the hidden scene, but with photon timing
  • 3. Occluders are in the hidden scene and occluder geometry is known

Finally, we tackle the challenging problem of estimating the best possible reconstruction resolution when the occluder geometry is unknown.

5.1 Frequency bandwidth using forward model

We place our point source at $(\rho = 0, z = 0)$, and place our detector at some plane $z$. For each scenario, we derive the measurable variation in the signal at a fixed detector plane $(z)$ by fitting a Gaussian (with mean $\mu$ and standard deviation $\sigma$) along the $\rho$ axis. A zero-mean ($\mu =0$) Gaussian can be used to approximate a PSF or a blur kernel. The width of a Gaussian can be quantified by its full width at half maximum (FWHM), which can be used to estimate and compare the highest frequency in our measured signal for different cases. Conveniently, the Fourier transform of a Gaussian is also a Gaussian [55]:

$$f(\rho, \mu, \sigma) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(\rho-\mu)^2}{2\sigma^2}} ,$$
$$\mathcal{F_\rho}(f(\rho, \mu, \sigma)) = e^{{-}2\pi i k\mu }e^{{-}2\pi^2 k^2 \sigma^2} ,$$
where ($\rho$, k) are Fourier duals and $\mathcal {F_\rho }$ is the Fourier transform with respect to $\rho$. The FWHM of a Gaussian is proportional to its standard deviation, $\sigma$:
$$FWHM = 2\sqrt{2\ln2} \ \sigma.$$

The width $\sigma$ is inversely proportional to the standard deviation in the Fourier domain, $\sigma _f$:

$$\sigma_f = \frac{1}{2\pi\sigma}.$$
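These Gaussian relations can be verified numerically. The sketch below measures the FWHM directly from samples and the Fourier-domain width from the second moment of the transform magnitude; the value of $\sigma$ is an arbitrary choice.

```python
import numpy as np

# Numeric check: FWHM = 2*sqrt(2 ln 2)*sigma, and the Fourier-domain width
# (ordinary-frequency convention, matching the equations above) is
# sigma_f = 1 / (2*pi*sigma).

sigma = 0.3
rho = np.linspace(-3, 3, 2001)
f = np.exp(-(rho**2) / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

# FWHM measured directly from the samples:
above = rho[f >= f.max() / 2]
fwhm = above[-1] - above[0]

# Discrete Fourier transform on a k grid (e^{-2*pi*i*k*rho} kernel):
k = np.linspace(-3, 3, 1001)
F = (f[None, :] * np.exp(-2j * np.pi * k[:, None] * rho[None, :])).sum(axis=1)
# Width of |F| from its second moment; the grid spacing cancels in the ratio:
sigma_f = np.sqrt((k**2 * np.abs(F)).sum() / np.abs(F).sum())
```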

5.1.1 Case 1: no occluders

In this section, we demonstrate the utility of improved timing resolution in the absence of occluders. Suppose we have a point light source in the hidden scene that illuminates a diffuse relay wall, where we measure the optical signal (Fig. 14) using a spatial detector without any timing. Then, the spatial variation in our signal depends on the inverse square law $(1/r^2)$ as well as Lambert's cosine law. In Appendix 11, we fit a Gaussian along the $\rho$ axis for any given detector plane $z$ and derive an analytical expression for the FWHM, $\Delta x_n$, as a function of $z$:

$$\Delta x_n (z) = 2(3)^{\frac{1}{6}}z .$$

Can we do better if we replace our spatial detector with a ToF detector? While the intensity falloff varies slowly along our detector plane, photon ToF varies at a faster scale, and our limiting factor is governed by the accuracy of existing ToF detectors. This accuracy can be quantified by the timing jitter $\gamma$, i.e., the width of the detector's temporal impulse response. Since the impulse response blurs out the temporal signal, it acts as a low-pass filter in the frequency domain, and so the smallest detectable change can be linked to this timing jitter. Prior work [15,56] approximated the impulse response as a Gaussian function with FWHM equal to the jitter, and used the FWHM criterion to derive the minimum resolvable distance along the transverse ($\Delta x_t$) and axial ($\Delta z_t$) axes:

$$\Delta x_t \geq 2c\gamma \frac{\sqrt{(D/2)^2 + z^2 }}{D},$$
$$\Delta z_t \geq \frac{c\gamma}{2},$$
where $c$ refers to the speed of light and $D$ is the diameter/width of our detector plane.
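For concreteness, these bounds can be evaluated with the 50 ps jitter of our SPAD and the $D = 2$ m, $z = 2$ m geometry used later in Section 5 (with $c \approx 3 \times 10^8$ m/s):

```python
import math

# Transverse and axial ToF resolution limits from the jitter-based FWHM
# criterion above, for a 50 ps SPAD and the geometry used later in the text.
c = 3e8          # speed of light [m/s], approximate
gamma = 50e-12   # timing jitter, 50 ps FWHM
D, z = 2.0, 2.0  # detector aperture and standoff [m]

dx_t = 2 * c * gamma * math.sqrt((D / 2) ** 2 + z**2) / D   # ~3.35 cm
dz_t = c * gamma / 2                                        # 7.5 mm
```

So a 50 ps detector already localizes a hidden point to a few centimeters transversely and millimeters axially, orders of magnitude tighter than the intensity-falloff case.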

5.1.2 Case 2: known occluder geometry

If the geometry of the occluder in the hidden scene is known, then we can use the edges of the occluder to project known masks/shadows into the hidden scene. In this case, we are limited by how well the edges of the shadows are localized spatially. This regime is called the penumbra of the shadow, and the size of the penumbra depends on two factors: first, the spatial extent of the light source, and second, the ripples that occur at the edge of the shadows due to diffraction.

Size of the light source For a light source with some spatial extent, the size of the penumbra can be shown to be:

$$\Delta \rho_l = \frac{z}{a} \Delta s,$$
where $\Delta s$ is the size of the light source, $a$ is the distance from the point source to the occluder, and $z$ is the distance between the occluder and the detector plane (Fig. 15(a)).


Fig. 15. Set up to measure the contribution to the width of the penumbra from the spatial extent of (a) the light source, and (b) the ripples from a knife edge.


In practice, we can focus our laser spot on the relay surface to the order of a few millimeters since the relay surface is far away from the NLOS imaging system. When $z/a$ is large, the width of the edge becomes limited by optical diffraction.

Optical diffraction We use a knife-edge diffraction model to quantify the width of these ripples in the near field, i.e., under the paraxial approximation (Fig. 15(b)). Assuming a point source a distance $a$ away from the knife edge and a detector screen a distance $z$ away from the knife, the intensity pattern $I(\rho )$ as a function of the transverse coordinate $\rho$ can be computed numerically using Fresnel integrals [57]. The transverse location of each local maximum of this function can be computed analytically and is given by [57]:

$$\rho_s(m) = \sqrt{ 2\lambda_0\frac{z(z+a)}{a}(m+1)} \qquad \qquad m = 0, 1, {\ldots}$$

We use $\Delta \rho _s = \rho _s(m=0)$ as a proxy for the width of the penumbra regime and use this to generate a fair comparison with the Rayleigh criterion, which is defined from the center to the first minimum. We can split further into 3 subcases:

$$\Delta \rho_s \approx \begin{cases} \sqrt{\frac{2\lambda_0}{a}} z , \qquad z \gg a \\ \sqrt{4\lambda_0 z}, \qquad z \approx a \\ \sqrt{2\lambda_0 z }, \qquad z \ll a \end{cases}$$

The case $z \ll a$ corresponds to a planar wavefront, where we can assume the point source is infinitely far away from the knife edge, which yields:

$$\Delta \rho_s = \lim_{a \rightarrow \infty} \rho_s(m=0) = \sqrt{ 2\lambda_0 z }.$$

Combining the contributions of the two sources, we get:

$$\Delta x_o = \sqrt{\Delta \rho_l^2 + \Delta \rho_s^2},$$
$$= \sqrt{ 2\lambda_0\frac{z(z+a)}{a} + \left(\frac{z}{a} \Delta s\right)^2}.$$
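Evaluated for the representative values used later in this section ($z = 2$ m, $a = 1$ m, $\lambda _0 = 500$ nm, $\Delta s = 1$ mm), the penumbra width works out to about 3.2 mm:

```python
import math

# Penumbra width: geometric (source-size) term plus knife-edge ripple term.
lam0, z, a, ds = 500e-9, 2.0, 1.0, 1e-3

d_rho_l = (z / a) * ds                               # geometric penumbra
d_rho_s = math.sqrt(2 * lam0 * z * (z + a) / a)      # knife-edge ripple width
dx_o = math.sqrt(d_rho_l**2 + d_rho_s**2)            # combined width, ~3.2 mm

# Sanity check: the z ~ a asymptote sqrt(4*lam0*z) is exact when z = a.
assert abs(math.sqrt(4 * lam0 * a) - math.sqrt(2 * lam0 * a * (a + a) / a)) < 1e-12
```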

We can compare the measured signal in the Fourier domain for all three cases. When we do not have an occluder, we can simply take the Fourier transform of our blur kernel to quantify the bandwidth. For occluders, $\Delta x_o$ is approximately the FWHM of a Gaussian blur kernel that blurs out an ideal sharp edge. This can be formulated as the convolution of a step function, $H(\rho )$, with this Gaussian blur kernel, where:

$$H(\rho) = \begin{cases} 1, \quad \rho \geq 0 \\ 0, \quad \rho < 0 \end{cases}$$

Let $\sigma _o = \Delta x_o/(2\sqrt {2\ln 2})$. We can write down our measurement, $I_o(\rho )$, at detector plane as:

$$I_o(\rho) = \int \text{d}\rho^{\prime} \, H( \rho^\prime)\, f(\rho-\rho^{\prime}, \ \mu=0, \ \sigma_o).$$

Fixing $z=2$ m, $a=1$ m, $D=2$ m, $\gamma =50$ ps, $\lambda _0=500$ nm, and $\Delta s = 1$ mm to simulate a reasonable NLOS imaging scenario, we generate Fig. 16. We see that without any shadows or timing, the information is concentrated in a narrow regime around the frequency $k=0$. Incorporating photon timing increases the frequency bandwidth substantially. The blur kernel for the shadow is even more broadband in the frequency domain due to the smaller value of $\Delta x_o$, but has a lower magnitude than that of the timing signal at intermediate frequencies. This is because we convolve the blur kernel with the step function, which multiplies the signal by $1/k$ in the frequency domain and so reduces the magnitude for $|k|\geq 1$.
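The bandwidth ordering visible in Fig. 16 can be reproduced by converting each FWHM into a Fourier-domain width $\sigma _f = 1/(2\pi \sigma )$, using the same parameter values:

```python
import math

# Spectral widths of the three blur kernels behind Fig. 16, using the
# parameters fixed in the text (z = 2 m, a = 1 m, D = 2 m, gamma = 50 ps,
# lambda_0 = 500 nm, ds = 1 mm). c is approximated as 3e8 m/s.
c = 3e8
z, a, D, gamma, lam0, ds = 2.0, 1.0, 2.0, 50e-12, 500e-9, 1e-3
to_sigma = 1 / (2 * math.sqrt(2 * math.log(2)))    # FWHM -> sigma

dx_n = 2 * 3 ** (1 / 6) * z                                 # intensity falloff
dx_t = 2 * c * gamma * math.sqrt((D / 2) ** 2 + z**2) / D   # photon timing
dx_o = math.sqrt(2 * lam0 * z * (z + a) / a + (z / a * ds) ** 2)  # shadow

bandwidths = {name: 1 / (2 * math.pi * dx * to_sigma)       # sigma_f [1/m]
              for name, dx in [("intensity", dx_n), ("timing", dx_t),
                               ("shadow", dx_o)]}
# Shadows carry the broadest spectrum, timing next, intensity falloff the least.
```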


Fig. 16. We use the Fourier domain to quantify the information bandwidth for each scenario i.e. signals associated with Intensity falloff (blue), photon timing (orange), and shadows from occluders (yellow).


5.2 Reconstruction

In this subsection, we revisit the three cases and use existing inverse methods to quantify the best reconstruction resolution of our diffuse point source at Plane 1 in Fig. 14. We also utilize TFSWD to visualize the inverse.

5.2.1 Case 2.1: no occluders and no timing

To understand the reconstruction process, we apply time reversal to reconstruct the point source at Plane 1. In the TFSWD visualization, we simply shear the light field backward. However, the diffusers create uncertainty in the lateral location of the point source, as shown in Fig. 17. We can utilize the blur kernel from the variation in intensity arriving at different points of the detector plane; in general, we can resolve two points if they are more than twice the FWHM apart. In the absence of noise, our lateral uncertainty or reconstruction resolution can be quantified as:

$$\Delta r_n = \frac{\Delta x_n}{2} = 3^{1/6} z .$$

For $z=2$ m, our resolution is $\Delta r_n = 2.40$ m.


Fig. 17. TFSWD visualizations illustrate the inverse and reconstruction resolution for the case when we have no photon timing.


5.2.2 Case 2.2: with timing

This timing information is encoded in the $\mathcal {P}$-field phase, and the increased variation can be seen directly from the TFSWD visualization in Fig. 14. The differential phase term (Eq. (22)) increases the amount of spatial variation in the signal captured at the detector. We can also visualize the inverse using the $\mathcal {P}$-field (Fig. 18) and demonstrate the $\mathcal {P}$-field lateral resolution, which is given by Rayleigh Criterion for the modulation wavelength $\lambda _p$ [29,36]:

$$\Delta r_t = \frac{1.22 \lambda_p z}{D}.$$


Fig. 18. TFSWD visualizations illustrate the $\mathcal {P}$-field inverse and reconstruction resolution for the case when we have photon timing.


The smaller the $\mathcal {P}$-field wavelength, the better the lateral resolution. For $\lambda _p = 4$ cm, $D=1$ m, and $z=2$ m, we find that the reconstruction resolution is $\Delta r_t = 9.76$ cm, an order of magnitude improvement over $\Delta r_n$.

5.2.3 Case 2.3: known occluder geometry

Dove et al [29] quantified the reconstruction resolution when the occluder geometry is known a priori. For example, when the occluder is a Gaussian pinhole, the optimal point spread function is obtained at Fresnel numbers equal to 1, and its width can be approximated as:

$$\Delta r_o = \sqrt{\lambda_0 z}.$$

For $\lambda _0 = 500$ nm and $z=2$ m, the reconstruction resolution $\Delta r_o = 1$ mm is two orders of magnitude finer than $\Delta r_t$. This illustrates that using photon ToF does not help improve the lateral resolution when the occluder geometry is known. However, knowledge of the occluder geometry is not always available, especially if the occluder is inside the hidden scene.
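The three resolutions quoted in this subsection follow directly from the formulas above; a quick numerical recap for $z = 2$ m:

```python
import math

# Reconstruction resolution for the three cases of Sec. 5.2 at z = 2 m.
z, D = 2.0, 1.0
lam_p, lam0 = 0.04, 500e-9   # P-field and optical wavelengths [m]

dr_n = 3 ** (1 / 6) * z      # Case 2.1: no timing, no occluder (~2.40 m)
dr_t = 1.22 * lam_p * z / D  # Case 2.2: ToF / P-field Rayleigh (~9.76 cm)
dr_o = math.sqrt(lam0 * z)   # Case 2.3: known Gaussian pinhole (~1 mm)
```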

5.3 Case 3: unknown occluder geometry

In the most general case, the occluder geometry must be inferred before the edges of the occluder can be utilized to improve our image. The precision with which the edges of the occluder can be reconstructed depends on two things. First, how good is the reconstruction resolution to begin with? Second, can we make assumptions about the occluder shape a priori?

Existing literature [28,33] utilizes models that make use of priors on the unknown occluder shape to improve the reconstruction. In this work, we examine the case where we cannot make any assumptions. In this context, we have shown that ToF reconstructions are the best we can do. With the $\mathcal {P}$-field framework, $\Delta r_{t_k}= 1.22 \lambda _p a/D$ is the uncertainty with which we can localize the knife edge in Fig. 15(b), while $\Delta r_{t}= 1.22 \lambda _p (a+z)/D$ is the uncertainty for the point source. This uncertainty increases linearly with the propagation distance $z$.

Can we find a case where we reduce the uncertainty in the location of our point source using the high-frequency information from shadows? After we reconstruct our knife edge, the final uncertainty for our point source, $\Delta x_{t,o}$, is given by convolving the uncertainty from the $\mathcal {P}$-field reconstruction of the knife edge, $\Delta r_{t_k}$, with the uncertainty from the shadows of the occluder, $\Delta x_o$:

$$\Delta x_{t,o}= \sqrt{\Delta r_{t_k}^2 + \Delta x_o^2}$$
$$= \sqrt{ \left( \frac{1.22 \lambda_p a}{D} \right)^2 + 2\lambda_0\frac{z(z+a)}{a} + \left(\frac{z}{a} \Delta s\right)^2 }$$
when $z \gg a$, we see that $\Delta x_o$ also scales linearly with $z$ and shadows don’t necessarily help us:
$$\Delta x_{o} \approx \left( \frac{z}{a} \right) \sqrt{ 2\lambda_0 a + \Delta s^2}.$$

The next case, $z \ll a$, is similarly not useful, but in a more subtle way. To better understand this, let us examine the unfolded setup shown in Fig. 19, which acts as a substitute for NLOS imaging. A diffuse point source interacts with a knife edge and illuminates a point source in the hidden scene. Assuming ideal scattering, the point source scatters light back to the relay surface, where we measure our signal. Since the light passes the same knife edge on its way to and from the relay wall, it follows that $a^\prime = z$ and $z^\prime = a$. Consequently, if $z \ll a$, then $z^\prime \gg a^\prime$, and we have the same linear dependence of $\Delta x_o$ on $z^\prime$.


Fig. 19. Unfolded setup that acts as a stand-in for NLOS imaging.


What happens when $z\approx a$? For the one-sided case in Fig. 15(b), we have:

$$\Delta x_{o} \approx \sqrt{ 4\lambda_0 z + \Delta s^2},$$
and this uncertainty varies with $\sqrt {z}$. For the unfolded case, this uncertainty doubles ($2\Delta x_{o}$). To get further insight, let us make some assumptions pertinent to NLOS imaging. Usually, $a$ and $z$ are on the order of meters, and the light source can be focused to a spot size, $\Delta s$, of a few centimeters, so that $4\lambda _0 z \ll \Delta s^2$. Our final uncertainty can then be shown to be less than $\Delta r_t$, i.e., the uncertainty from reconstructing the point source directly using only photon timing:
$$\Delta x_{t,o} = \sqrt{\Delta r_{t_k}^2 + (2\Delta x_o)^2}$$
$$\approx \sqrt{ \left( \frac{1.22 \lambda_p a}{D} \right)^2 + 4 (4\lambda_0 z + \Delta s^2) }$$
$$\approx \sqrt{ \left( \frac{1.22 \lambda_p a}{D} \right)^2 + 4( \Delta s^2) } \qquad< \left( \frac{1.22 \lambda_p (z+a)}{D} \right)$$
$$= \Delta r_t$$

This demonstrates that we can potentially use the high-frequency information contained in the shadows to improve reconstruction resolution in the case when $z \approx a$, even when we have no information about the occluder a priori.
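The $z \approx a$ argument can be checked numerically. The values below are illustrative assumptions consistent with the text: $z = a = 2$ m, $D = 1$ m, $\lambda _p = 4$ cm, $\lambda _0 = 500$ nm, and a 2 cm focused spot.

```python
import math

# z ~ a check: ToF localization of the knife edge combined with the (doubled,
# unfolded) penumbra width beats direct ToF reconstruction of the source.
z = a = 2.0
D, lam_p, lam0, ds = 1.0, 0.04, 500e-9, 0.02

dr_tk = 1.22 * lam_p * a / D                   # ToF uncertainty of the edge
dx_o = math.sqrt(4 * lam0 * z + ds**2)         # one-sided penumbra, z ~ a
dx_to = math.sqrt(dr_tk**2 + (2 * dx_o) ** 2)  # combined, penumbra doubled
dr_t = 1.22 * lam_p * (z + a) / D              # direct ToF of the source

assert dx_to < dr_t   # shadows tighten the reconstruction (~10.6 vs ~19.5 cm)
```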

6. Conclusion

In this work, we conduct experiments demonstrating the validity of two theoretical analyses conducted by Dove et al in [29,36]. We show that an optical lens can be used to focus and project the $\mathcal {P}$-field through intervening diffusers or scattering relay surfaces. We also show that the TFSWD [29] framework can successfully model occlusions experimentally, and we develop visualizations with the aim of generating intuitive understanding. Finally, we leverage the TFSWD framework to quantify uncertainty in our modeling of light transport for different NLOS imaging scenarios and demonstrate that occluders of unknown geometry can help improve NLOS reconstructions. In our analysis, we do not consider the impact of noise on NLOS measurements or develop an inverse model that combines ToF-NLOS and Shadow-NLOS using the TFSWD. We hope this can be the subject of future work.

Appendices

A. Propagation primitives for the two-frequency spatial Wigner distribution

In this section, we outline the propagation primitives that form the fundamental 'building blocks' in our experiments. The idea is to assemble these propagation primitives to describe $\mathcal {P}$-field propagation for different experimental scenarios.

A.0.1. Light ray

Let’s start with the smallest building block for our light transport model: a single light ray with fixed carrier frequency $\omega ^o_{+}$ and modulation frequency $\omega ^o_{-}$. If our light ray starts at position $\rho ^o$ and propagates at an angle $\theta ^o$, the TFSWD becomes a delta function:

$$W_{\mathcal{E}_{0}}(\mathbf{\rho}_{+},\mathbf{k},{\omega}_{+},{\omega}_{-}) = \delta(\rho_{+} - \rho^o, k - k^o, {\omega}_{+} - {\omega}^{o}_{+}, {\omega}_{-} - {\omega}^{o}_{-}).$$
where $k^o = \sin(\theta ^o)\,{\omega}^{o}_{+}/c \approx \theta ^o\,{\omega}^{o}_{+}/c$ encodes the direction of the ray under the paraxial approximation.

A.0.2. Diffuser

A diffuser takes incoming light and spreads it over a spatial cone angle $\theta _0$. In the context of NLOS imaging, a diffuser is the transmissive analog of a relay wall that spatially scatters the light in reflection. This allows us to unfold the NLOS setup and sequentially build the light transport step by step. For an 'ideal' diffuser, the cone angle is $\theta _0 = 2\pi$, and the exiting spatial frequencies $k^{\prime }$ are independent of the incident spatial frequencies $k$. In classical optics, this is known as a Lambertian diffuser.

$$W_{\mathcal{E}_{0}^{\prime}}(\mathbf{\rho}_{+},\mathbf{k},{\omega}_{+},{\omega}_{-}) = \lambda_{0}^{2}{\int{\frac{\text{d}^{2}\mathbf{k}^{\prime}}{\left( {2\pi} \right)^{2}}W_{\mathcal{E}_{0}}\left( {\mathbf{\rho}_{+},\mathbf{k}^{\prime},{\omega}_{+},{\omega}_{-}} \right).}}$$
where $\mathcal {E}_0$ and $\mathcal {E}_{0}^{\prime }$ denote the input and output electric fields of the diffuser, respectively. For any other diffuser with a finite cone angle $\theta _0$, the outgoing TFSWD is the angular convolution of the Wigner distribution of the diffuser, $W_{D}(\mathbf {\rho }_{+},\mathbf {k})$, with the incoming TFSWD:
$$W_{\mathcal{E}_{0}^{\prime}}(\mathbf{\rho}_{+},\mathbf{k},{\omega}_{+},{\omega}_{-}) = {\int{\frac{\text{d}^{2}\mathbf{k}^{\prime}}{\left( \theta_0 \right)^{2}}W_{\mathcal{E}_{0}}(\mathbf{\rho}_{+},\mathbf{k}^{\prime},{\omega}_{+},{\omega}_{-})W_{D}(\mathbf{\rho}_{+},\mathbf{k} - \mathbf{k}^{\prime}).}}$$
where $W_{D}$ contains information about the diffuser cone angle.
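As a concrete illustration, the ideal (Lambertian) diffuser primitive can be sketched on a discretized slice of the TFSWD at fixed $(\omega _{+}, \omega _{-})$. This is a minimal numerical sketch, not the authors' implementation; the grid sizes and the unit value of $\lambda _0$ are arbitrary assumptions:

```python
import numpy as np

# Discretized slice of the TFSWD at fixed (omega_+, omega_-):
# rows index transverse position rho_+, columns index spatial frequency k.
n_rho, n_k = 64, 32
W_in = np.zeros((n_rho, n_k))
W_in[20, 5] = 1.0  # a single incoming ray occupying one (rho_+, k) cell

def ideal_diffuser(W, lam0=1.0):
    """Ideal (Lambertian) diffuser: the outgoing distribution at each
    position rho_+ is the average over all incident k', independent of
    the incoming direction (discrete analog of the k' integral)."""
    W_avg = W.mean(axis=1, keepdims=True)
    return lam0**2 * np.broadcast_to(W_avg, W.shape).copy()

W_out = ideal_diffuser(W_in)
```

After the diffuser, the energy that arrived at a single $(\rho _{+}, k)$ cell stays at the same position but is spread uniformly over every outgoing direction, while positions that received no light remain dark.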

A.0.3. Occluder

The outgoing TFSWD after an occluder can be obtained by doing an angular convolution of the incoming TFSWD with the Wigner distribution of the occluder, $W_{P}(\mathbf {\rho }_{+},\mathbf {k})$, so that:

$$W_{\mathcal{E}_{0}^{\prime}}(\mathbf{\rho}_{+},\mathbf{k},{\omega}_{+},{\omega}_{-}) = {\int{\frac{\text{d}^{2}\mathbf{k}^{\prime}}{\left( {2\pi} \right)^{2}}W_{\mathcal{E}_{0}}(\mathbf{\rho}_{+},\mathbf{k}^{\prime},{\omega}_{+},{\omega}_{-})W_{P}(\mathbf{\rho}_{+},\mathbf{k} - \mathbf{k}^{\prime}).}}$$
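In the geometric limit where diffraction at the occluder edge is neglected, the occluder's Wigner distribution reduces to a delta function in $\mathbf {k}$ over the transmitting region and vanishes behind the mask, so the angular convolution collapses to multiplication by the binary amplitude mask $M(\mathbf {\rho }_{+})$. A minimal sketch of this limit (mask placement and grid sizes are arbitrary assumptions):

```python
import numpy as np

n_rho, n_k = 64, 32
W_in = np.ones((n_rho, n_k))      # diffuse light filling every position/angle

mask = np.ones(n_rho)             # binary amplitude mask M(rho_+)
mask[24:40] = 0.0                 # opaque occluder blocking the middle cells

def occlude(W, mask):
    """Geometric-limit occluder: W_P(rho_+, k) ~ delta(k) where M = 1 and
    vanishes where M = 0, so the angular convolution in k reduces to
    W_out(rho_+, k) = M(rho_+) * W_in(rho_+, k)."""
    return W * mask[:, None]

W_out = occlude(W_in, mask)
```

The blocked rows cast a hard shadow at every angle, while the unobstructed rows pass through unchanged; modeling the soft, diffracted edge requires the full $W_{P}$ rather than this delta-function limit.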

A.0.4. Propagation

Under the Fresnel approximation, propagation becomes a shearing operation in the light field and Wigner domains [54,58,59]. Rays propagating at positive angles move up, rays at zero angle remain at the same transverse location, and rays at negative angles move down.

With the TFSWD, we also keep track of the $\mathcal {P}$-field modulation phase for each modulation frequency $\omega _{-}$. Under the paraxial approximation, Fresnel Diffraction can be used to propagate individual frequency components, $\omega _{-}$, of the $\mathcal {P}$-field wavefront independently from a plane $z_1 = 0$ to a plane $z_2 = L$ [29,34]:

$$W_{\mathcal{E}_z}(\mathbf{\rho}_{+},\mathbf{k},{\omega}_{+},{\omega}_{-}) = W_{\mathcal{E}_{0}^{\prime}}(\mathbf{\rho}_{+} - cL\,\mathbf{k}/{\omega}_{0},\mathbf{k},{\omega}_{+},{\omega}_{-})e^{i({\omega}_{-}L/c)(1 + c^{2}{|\mathbf{k}|}^{2}/2{\omega}_{0}^{2})}.$$
where $\rho _0$ describes the spatial coordinate at the initial $z_1 = 0$ plane. To see that this is the familiar Fresnel Diffraction Kernel, substitute $|\mathbf {k}| = \omega _0 |\mathbf {\rho }_{+} - \mathbf {\rho }_0|/cL$, then
$$W_{\mathcal{E}_z}(\mathbf{\rho}_{+},\mathbf{k},{\omega}_{+},{\omega}_{-}) = W_{\mathcal{E}_{0}^{\prime}}(\mathbf{\rho}_{+} - cL\,\mathbf{k}/{\omega}_{0},\mathbf{k},{\omega}_{+},{\omega}_{-})e^{i\omega_{-}L/c}e^{i\omega_{-}{|{\mathbf{\rho}_{+} - \mathbf{\rho}_{0}}|}^{2}/2cL}.$$
where we have sheared the spatial axes ($\rho _{+}$) by an amount proportional to $k$, and the Fresnel diffraction kernel keeps track of the phase shift for each phasor field frequency given by $\omega _{-}$.
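Ignoring the overall phase factor (we track a single $\omega _{-}$ component here), Fresnel propagation acts on a discretized TFSWD slice as a pure shear: each $k$ column shifts in $\rho _{+}$ by $cLk/\omega _0 \approx L\theta$. A minimal numerical sketch, with assumed grid spacing and carrier wavelength:

```python
import numpy as np

c = 3e8
lam0 = 500e-9                          # optical carrier wavelength (assumed)
omega0 = 2 * np.pi * c / lam0
n_rho, n_k = 128, 9
drho = 1e-3                            # 1 mm transverse grid spacing (assumed)
theta = np.linspace(-0.04, 0.04, n_k)  # paraxial ray angles [rad]
k_axis = 2 * np.pi * theta / lam0      # k = (2*pi/lambda_0) * theta

def propagate(W, L):
    """Fresnel propagation as a shear of the Wigner/light-field slice:
    the column at spatial frequency k shifts in rho_+ by c*L*k/omega_0.
    np.roll wraps at the edges, so keep features away from the boundary."""
    W_out = np.zeros_like(W)
    for j, k in enumerate(k_axis):
        shift = int(round(c * L * k / omega0 / drho))
        W_out[:, j] = np.roll(W[:, j], shift)
    return W_out

# A single ray with positive angle moves to larger rho_+ as it propagates.
W = np.zeros((n_rho, n_k))
W[64, 8] = 1.0                         # ray at row 64, theta = +0.04 rad
W1 = propagate(W, L=1.0)               # expect a shift of L*theta = 40 mm
```

With the parameters above, the ray's transverse displacement is $L\theta = 40$ mm, i.e. 40 grid cells.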

A.0.5. Thin lens

A thin lens with focal length $f$ shears the light field horizontally [54,58,59]. Each ray at a positive spatial location acquires a negative angle, while a ray at a negative location acquires a positive angle. Furthermore, at each spatial location $\rho _+$, the lens delays the modulation phase of the input TFSWD by an amount that grows quadratically with $\rho _+$ and inversely with the focal length $f$ [36,60]:

$$W_{\mathcal{E}_z}(\mathbf{\rho}_{+},\mathbf{k},{\omega}_{+},{\omega}_{-}) = W_{\mathcal{E}_{0}^{\prime}}(\mathbf{\rho}_{+} ,\mathbf{k} + (2\pi/\lambda_0)\mathbf{\rho}_{+}/f ,{\omega}_{+},{\omega}_{-})e^{{-}i(\omega_{-}/c)(\rho_+^2/2f)}.$$
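Composing the lens shear with the propagation shear reproduces the textbook behavior: a collimated beam focuses to a point at $z = f$. The following sketch (our illustration, not the authors' code) uses grids deliberately chosen so that both shears are exact integer numbers of cells; all parameter values are assumptions:

```python
import numpy as np

c = 3e8
lam0 = 500e-9                                   # carrier wavelength (assumed)
omega0 = 2 * np.pi * c / lam0
f = 0.5                                         # lens focal length [m] (assumed)
n_rho, n_k = 65, 65
drho = 1e-3                                     # 1 mm position grid
dtheta = drho / f                               # angle grid: one cell per shear step
rho_axis = (np.arange(n_rho) - n_rho // 2) * drho
k_axis = (np.arange(n_k) - n_k // 2) * (2 * np.pi * dtheta / lam0)
dk = k_axis[1] - k_axis[0]

def lens(W):
    """Thin lens: W_out(rho_+, k) = W_in(rho_+, k + (2*pi/lam0)*rho_+/f),
    i.e. a ray at positive rho_+ acquires a negative angle."""
    out = np.zeros_like(W)
    for i, rho in enumerate(rho_axis):
        s = int(round((2 * np.pi / lam0) * rho / f / dk))
        out[i, :] = np.roll(W[i, :], -s)
    return out

def propagate(W, L):
    """Fresnel shear: the column at spatial frequency k shifts in rho_+
    by c*L*k/omega_0."""
    out = np.zeros_like(W)
    for j, k in enumerate(k_axis):
        s = int(round(c * L * k / omega0 / drho))
        out[:, j] = np.roll(W[:, j], s)
    return out

# Collimated input (every position, k = 0) focuses to a point after distance f.
W = np.zeros((n_rho, n_k))
W[:, n_k // 2] = 1.0
profile = propagate(lens(W), f).sum(axis=1)     # marginal over k = spatial profile
```

Every input ray at position $\rho _+$ picks up angle $-\rho _+/f$ at the lens and travels back to the axis over the distance $f$, so the marginal spatial profile collapses onto the central cell.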

B. Focusing phasor fields using an optical lens through an intervening diffuser

For the sake of brevity, we restate the key results from [36] that demonstrate that an optical lens can be used to focus the $\mathcal {P}$-field through a diffuser. We use the setup and notation depicted in Fig. 7 to conduct our analysis. The thin lens at $z = L_{\text {in}_1}$ is assumed to have a Gaussian field-transmission pupil given by $e^{-|\rho _\mathrm {in}|^2/2D^2}$. The input $\mathcal {P}$-field is defined as the autocorrelation of the electric fields, as stated in Eq. (4):

$$\mathcal{P}_{\text{in}}(\mathbf{\rho}_{\text{in}},{\omega}_{-}) = \int\!\frac{\text{d}{\omega}_{+}}{2\pi}\,\mathcal{E}_{\text{in}}(\mathbf{\rho}_{\text{in}},{\omega}_{+} + {\omega}_{-}/2)\mathcal{E}_{\text{in}}^{{\ast}}(\mathbf{\rho}_{\text{in}},{\omega}_{+} - {\omega}_{-}/2),$$

We can show that, after a series of propagation steps, the phasor field signal ${\mathcal {P}_{1}(\mathbf {\rho }_{1},{\omega }_{-}) }$ at the imaging plane, $z=L_1$, is given by:

$$\begin{aligned} {\mathcal{P}_{1}(\mathbf{\rho}_{1},{\omega}_{-})} =&{\pi\left( \frac{D}{L_{1}L_{\text{in}_{1}}} \right)^{2}e^{i{\omega}_{-}(L_{\text{in}_{1}} + L_{\text{prj}})/c}e^{i{\omega}_{-}|\mathbf{\rho}_{1}|^{2}/2cL_{\text{prj}}}}\\ & {\times \int\text{d}^{2}\mathbf{\rho}_{\text{in}}\,\mathcal{P}_{\text{in}}(\mathbf{\rho}_{\text{in}},{\omega}_{-})e^{i{\omega}_{-}|\mathbf{\rho}_{\text{in}}|^{2}/2cL_{\text{in}_{1}}}e^{- ({\omega}_{-}D/2cL_{\text{prj}})^{2}{|\mathbf{\rho}_{1} + M\mathbf{\rho}_{\text{in}}|}^{2}}.} \end{aligned}$$
where $L_\text {prj} = L_{\text {in}_{2}} + L_1$ and $M = L_\text {prj}/L_{\text {in}_1}$ is the magnification factor. For a more detailed derivation of this result, please refer to the original paper [36]. For an input point light source at $\mathbf {\rho }_{\text {in}}= \mathbf {\rho }_{\text {in}}'$, we have:
$$\mathcal{P}_{\text{in}}(\mathbf{\rho}_{\text{in}},{\omega}_{-}) = \delta(\mathbf{\rho}_{\text{in}} - \mathbf{\rho}_{\text{in}}')$$

By the sifting property of the delta function, we have that:

$$\begin{aligned}{\mathcal{P}_{1}(\mathbf{\rho}_{1},{\omega}_{-})} =&{\pi\left( \frac{D}{L_{1}L_{\text{in}_{1}}} \right)^{2}e^{i{\omega}_{-}(L_{\text{in}_{1}} + L_{\text{prj}})/c}e^{i{\omega}_{-}|\mathbf{\rho}_{1}|^{2}/2cL_{\text{prj}}}}\\ & {\times \int\text{d}^{2}\mathbf{\rho}_{\text{in}}\,\delta(\mathbf{\rho}_{\text{in}} - \mathbf{\rho}_{\text{in}}')\, e^{i{\omega}_{-}|\mathbf{\rho}_{\text{in}}|^{2}/2cL_{\text{in}_{1}}}e^{- ({\omega}_{-}D/2cL_{\text{prj}})^{2}{|\mathbf{\rho}_{1} + M\mathbf{\rho}_{\text{in}}|}^{2}}} \end{aligned}$$
$$\begin{aligned}=&{\pi\left( \frac{D}{L_{1}L_{\text{in}_{1}}} \right)^{2}e^{i{\omega}_{-}(L_{\text{in}_{1}} + L_{\text{prj}})/c}e^{i{\omega}_{-}|\mathbf{\rho}_{1}|^{2}/2cL_{\text{prj}}} e^{i{\omega}_{-}|\mathbf{\rho}_{\text{in}}'|^{2}/2cL_{\text{in}_{1}}}}\\ & {\times \, e^{- ({\omega}_{-}D/2cL_{\text{prj}})^{2}{|\mathbf{\rho}_{1} + M\mathbf{\rho}_{\text{in}}'|}^{2}}.} \end{aligned}$$

Taking the absolute value gives:

$$|{\mathcal{P}_{1}(\mathbf{\rho}_{1},{\omega}_{-})}| ={\pi\left( \frac{D}{L_{1}L_{\text{in}_{1}}} \right)^{2} } { e^{- ({\omega}_{-}D/2cL_{\text{prj}})^{2}{|\mathbf{\rho}_{1} + M\mathbf{\rho}_{\text{in}}'|}^{2}}.}$$

The Gaussian is appreciable only over a region of characteristic ($1/e$) radius $({\omega }_{-}D/2cL_{\text {prj}})^{-1} = \lambda _{-} L_{\text {prj}}/\pi D$, where we have used $\omega _{-} = 2\pi c/\lambda _-$. The output image is therefore diffraction limited at the modulation wavelength $\lambda _-$ and scaled by the appropriate magnification factor $M$.
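As a numerical sanity check, the $1/e$ radius $({\omega }_{-}D/2cL_{\text {prj}})^{-1}$ of this Gaussian can be evaluated directly and compared against its closed form in terms of the modulation wavelength; the parameter values below are arbitrary assumptions:

```python
import numpy as np

c = 3e8
lam_minus = 0.05                      # P-field modulation wavelength [m] (assumed)
omega_minus = 2 * np.pi * c / lam_minus
D = 0.05                              # Gaussian pupil scale [m] (assumed)
L_prj = 2.0                           # projection distance [m] (assumed)

# 1/e radius of exp(-(omega_- * D / (2*c*L_prj))**2 * |rho|**2):
w = 2 * c * L_prj / (omega_minus * D)

# Equivalent closed form after substituting omega_- = 2*pi*c/lambda_-:
w_closed = lam_minus * L_prj / (np.pi * D)
```

Both expressions agree, confirming that the focused $\mathcal {P}$-field spot scales linearly with the modulation wavelength and the projection distance, and inversely with the pupil size.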

C. Table of notations

Table 1. Summary of mathematical notation used in the manuscript

D. Intensity Variation without any timing or occluders

We want to measure the spatial variation in intensity due to a diffuse point source. Suppose the point source is located at $(\rho _0 = 0, z_0 = 0)$ and the detector plane is at $z = L$. The intensity then varies with both the inverse-square law ($1/r^2$) and Lambert's cosine law, so we can write it as a function of space:

$$I(\rho, z) = \frac{1}{r^2} \cos \theta = \frac{z}{r^3}$$
$$= \frac{z}{ \sqrt{(\rho - \rho_0)^2 + (z - z_0)^2}^3}$$
$$= \frac{z}{ \left( \sqrt{\rho ^2 + z^2} \right)^3}.$$
where $\theta$ is the angle between the optical axis and the light ray, and we have used $\cos \theta = z/r$. We will fit a Gaussian and subsequently find the FWHM. The on-axis maximum is:
$$I_{\text{max}}(\rho, z) = I(0, z) = \frac{1}{z^2}.$$

Solving for $\rho$ at the half-maximum point to compute the FWHM:

$$\begin{aligned}I(\rho, z) = \frac{I_{\text{max}}(\rho, z)}{2} \end{aligned}$$
$$\begin{aligned}\frac{z}{ (\sqrt{\rho^2 + z^2})^3} = \frac{1}{2z^2} \end{aligned}$$
$$\begin{aligned} \rho = (3)^{\frac{1}{6}} z \end{aligned}$$

The FWHM for the case with no occluders or timing, $\Delta x_n$, is then twice this value:

$$\Delta x_n = 2(3)^{\frac{1}{6}} z.$$
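The intensity profile above can be checked numerically; the sketch below (distances in arbitrary units, values assumed for illustration) evaluates the combined inverse-square and cosine falloff:

```python
import numpy as np

def intensity(rho, z):
    """Detector-plane intensity from a Lambertian point source at the
    origin: inverse-square falloff times Lambert's cosine factor,
    I = (1/r^2) * cos(theta) = z / r^3, with r = sqrt(rho^2 + z^2)."""
    r = np.sqrt(rho**2 + z**2)
    return z / r**3

z = 1.0                               # source-to-detector distance (assumed)
rho = np.linspace(0.0, 3.0, 301)
I = intensity(rho, z)
```

The on-axis value reproduces $I_{\text {max}} = 1/z^2$, and the profile falls off monotonically away from the axis, which is the smooth, low-frequency variation this appendix quantifies.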

Funding

Air Force Office of Scientific Research (FA9550-21-1-0341).

Acknowledgements

The authors would like to acknowledge Justin Dove, Jeffrey Shapiro, and Trevor Seets for valuable discussions. Talha Sultan would like to acknowledge Nate Zerrien for help with data acquisition.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are available in Ref. [61].

References

1. A. Torralba and W. T. Freeman, “Accidental Pinhole and Pinspeck Cameras,” Int. J. Comput. Vis. 110(2), 92–112 (2014). [CrossRef]  

2. K. L. Bouman, V. Ye, A. B. Yedidia, et al., “Turning Corners into Cameras: Principles and Methods,” in 2017 IEEE International Conference on Computer Vision (ICCV), (IEEE, Venice, 2017), pp. 2289–2297.

3. S. W. Seidel, Y. Ma, J. Murray-Bruce, et al., “Corner Occluder Computational Periscopy: Estimating a Hidden Scene from a Single Photograph,” in 2019 IEEE International Conference on Computational Photography (ICCP), (2019), pp. 1–9.

4. W. Krska, S. W. Seidel, C. Saunders, et al., “Double Your Corners, Double Your Fun: The Doorway Camera,” in 2022 IEEE International Conference on Computational Photography (ICCP), (2022), pp. 1–12.

5. S. W. Seidel, J. Murray-Bruce, Y. Ma, et al., “Two-Dimensional Non-Line-of-Sight Scene Estimation From a Single Edge Occluder,” IEEE Trans. Comput. Imaging 7, 58–72 (2021). [CrossRef]  

6. D. Lin, C. Hashemi, and J. R. Leger, “Passive non-line-of-sight imaging using plenoptic information,” J. Opt. Soc. Am. A 37(4), 540–551 (2020). [CrossRef]  

7. A. Kirmani, T. Hutchison, J. Davis, et al., “Looking around the corner using transient imaging,” in 2009 IEEE 12th International Conference on Computer Vision, (IEEE, Kyoto, 2009), pp. 159–166.

8. A. Kirmani, T. Hutchison, J. Davis, et al., “Looking Around the Corner using Ultrafast Transient Imaging,” International Journal of Computer Vision 95(1), 13–28 (2011). [CrossRef]  

9. A. Velten, T. Willwacher, O. Gupta, et al., “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3(1), 745 (2012). [CrossRef]  

10. F. Heide, L. Xiao, W. Heidrich, et al., “Diffuse Mirrors: 3D Reconstruction from Diffuse Indirect Illumination Using Inexpensive Time-of-Flight Sensors,” in 2014 IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, Columbus, OH, USA, 2014), pp. 3222–3229.

11. M. Buttafava, G. Boso, A. Ruggeri, et al., “Time-gated single-photon detection module with 110 ps transition time and up to 80 MHz repetition rate,” Rev. Sci. Instrum. 85(8), 083114 (2014). [CrossRef]  

12. M. Buttafava, J. Zeman, A. Tosi, et al., “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23(16), 20997–21011 (2015). [CrossRef]  

13. S. Riccardo, E. Conca, V. Sesta, et al., “Fast-Gated 16 × 16 SPAD Array with 16 on-chip 6 ps Time-to-Digital Converters for Non-Line-of-Sight Imaging,” IEEE Sensors Journal (2022).

14. J. Zhao, F. Gramuglia, P. Keshavarzian, et al., “A Gradient-Gated SPAD Array for Non-Line-of-Sight Imaging,” IEEE Journal of Selected Topics in Quantum Electronics pp. 1–10 (2023).

15. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555(7696), 338–341 (2018). [CrossRef]  

16. X. Liu, I. Guillén, M. La Manna, et al., “Non-line-of-sight imaging using phasor-field virtual wave optics,” Nature 572(7771), 620–623 (2019). [CrossRef]  

17. D. B. Lindell, G. Wetzstein, and M. O’Toole, “Wave-based non-line-of-sight imaging using fast f-k migration,” ACM Trans. Graph. 38(4), 1–13 (2019). [CrossRef]  

18. B. Ahn, A. Dave, A. Veeraraghavan, et al., “Convolutional Approximations to the General Non-Line-of-Sight Imaging Operator,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), (IEEE, Seoul, Korea (South), 2019), pp. 7888–7898.

19. J. H. Nam, E. Brandt, S. Bauer, et al., “Low-latency time-of-flight non-line-of-sight imaging at 5 frames per second,” Nat. Commun. 12(1), 6526 (2021). [CrossRef]  

20. M. La Manna, F. Kine, E. Breitbach, et al., “Error Backprojection Algorithms for Non-Line-of-Sight Imaging,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2018).

21. J. Iseringhausen and M. B. Hullin, “Non-line-of-sight Reconstruction Using Efficient Transient Rendering,” ACM Trans. Graph. 39(1), 1–14 (2020). [CrossRef]  

22. X. Liu, J. Wang, Z. Li, et al., “Non-line-of-sight reconstruction with signal–object collaborative regularization,” Light: Sci. Appl. 10(1), 198 (2021). [CrossRef]  

23. X. Liu, J. Wang, L. Xiao, et al., “Non-line-of-sight imaging with arbitrary illumination and detection pattern,” Nat. Commun. 14(1), 3230 (2023). [CrossRef]  

24. D. Royo, T. Sultan, A. Muñoz, et al., “Virtual Mirrors: Non-Line-of-Sight Imaging Beyond the Third Bounce,” ACM Trans. Graph. 42(4), 1–15 (2023). [CrossRef]  

25. J. Klein, C. Peters, J. Martín, et al., “Tracking objects outside the line of sight using 2d intensity images,” Sci. Rep. 6(1), 32491 (2016). [CrossRef]  

26. C. Thrampoulidis, G. Shulkind, F. Xu, et al., “Exploiting Occlusion in Non-Line-of-Sight Active Imaging,” IEEE Trans. Comput. Imaging 4(3), 419–431 (2018). [CrossRef]  

27. F. Xu, G. Shulkind, C. Thrampoulidis, et al., “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26(8), 9945–9962 (2018). [CrossRef]  

28. C. Saunders, J. Murray-Bruce, and V. K. Goyal, “Computational periscopy with an ordinary digital camera,” Nature 565(7740), 472–475 (2019). [CrossRef]  

29. J. Dove and J. H. Shapiro, “Paraxial theory of phasor-field imaging,” Opt. Express 27(13), 18016–18037 (2019). [CrossRef]  

30. J. Rapp, C. Saunders, J. Tachella, et al., “Seeing around corners with edge-resolved transient imaging,” Nat. Commun. 11(1), 5929 (2020). [CrossRef]  

31. C. Saunders, W. Krska, J. Tachella, et al., “Edge-resolved transient imaging: Performance analyses, optimizations, and simulations,” in 2021 IEEE International Conference on Image Processing (ICIP), (IEEE, 2021), pp. 2858–2862.

32. S. Seidel, H. Rueda-Chacón, I. Cusini, et al., “Non-line-of-sight snapshots and background mapping with an active corner camera,” Nat. Commun. 14(1), 3677 (2023). [CrossRef]  

33. F. Heide, M. O’Toole, K. Zang, et al., “Non-line-of-sight imaging with partial occluders and surface normals,” ACM Trans. Graph. 38(3), 1–10 (2019). [CrossRef]  

34. S. A. Reza, M. La Manna, S. Bauer, et al., “Phasor field waves: A Huygens-like light transport model for non-line-of-sight imaging applications,” Opt. Express 27(20), 29380–29400 (2019). [CrossRef]  

35. J. A. Teichman, “Phasor field waves: a mathematical treatment,” Opt. Express 27(20), 27500–27506 (2019). [CrossRef]  

36. J. Dove and J. H. Shapiro, “Paraxial phasor-field physical optics,” Opt. Express 28(14), 21095–21109 (2020). [CrossRef]  

37. J. Dove and J. H. Shapiro, “Nonparaxial phasor-field propagation,” Opt. Express 28(20), 29212–29229 (2020). [CrossRef]  

38. M. J. Bastiaans, “The Wigner distribution function applied to optical signals and systems,” Opt. Commun. 25(1), 26–30 (1978). [CrossRef]  

39. S. A. Reza, M. La Manna, S. Bauer, et al., “Phasor field waves: experimental demonstrations of wave-like properties,” Opt. Express 27(22), 32587 (2019). [CrossRef]  

40. X. Liu, S. Bauer, and A. Velten, “Phasor field diffraction based reconstruction for fast non-line-of-sight imaging systems,” Nat. Commun. 11(1), 1–13 (2020). [CrossRef]  

41. S. A. Reza, M. La Manna, and A. Velten, “Imaging with Phasor Fields for Non-Line-of Sight Applications,” in Imaging and Applied Optics 2018 (3D, AO, AIO, COSI, DH, IS, LACSEA, LS&C, MATH, pcAOP), (OSA, Orlando, Florida, 2018), p. CM2E.7.

42. X. Liu, S. Bauer, and A. Velten, “Analysis of Feature Visibility in Non-Line-Of-Sight Measurements,” in 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), (IEEE, Long Beach, CA, USA, 2019), pp. 10132–10140.

43. C. Gu, T. Sultan, K. Masumnia-Bisheh, et al., “Fast Non-line-of-sight Imaging with Non-planar Relay Surfaces,” in 2023 IEEE International Conference on Computational Photography (ICCP), (2023), pp. 1–12.

44. R. Nityananda, “Diffraction at a straight edge: A gem from sommerfeld’s work in classical physics,” Resonance 20(5), 389–400 (2015). [CrossRef]  

45. G. Franceschetti, A. Iodice, A. Natale, et al., “Stochastic theory of edge diffraction,” IEEE Trans. Antennas Propag. 56(2), 437–449 (2008). [CrossRef]  

46. G. Franceschetti, A. Iodice, A. Natale, et al., “Stochastic theory of edge diffraction: its physical reading,” IEEE Trans. Antennas Propag. 58(12), 4078–4081 (2010). [CrossRef]  

47. S. A. Reza, S. Bauer, and A. Velten, “A Statistical Treatment of Phasor Fields for a Partially-Coherent Optical Carrier,” in Imaging and Applied Optics Congress (2020), paper CTh4C.4, (Optica Publishing Group, 2020), p. CTh4C.4.

48. M. J. Bastiaans, “Wigner distribution function and its application to first-order optics,” J. Opt. Soc. Am. 69(12), 1710–1716 (1979). [CrossRef]  

49. M. J. Bastiaans, “Application of the Wigner distribution function to partially coherent light,” J. Opt. Soc. Am. A 3(8), 1227 (1986). [CrossRef]  

50. X. Liu and A. Velten, “The role of Wigner Distribution Function in Non-Line-of-Sight Imaging,” in 2020 IEEE International Conference on Computational Photography (ICCP), (2020), pp. 1–12.

51. M. Levoy and P. Hanrahan, “Light field rendering,” in Proceedings of the 23rd annual conference on Computer graphics and interactive techniques, (Association for Computing Machinery, New York, NY, USA, 1996), SIGGRAPH ’96, pp. 31–42.

52. F. Durand, N. Holzschuch, C. Soler, et al., “A frequency analysis of light transport,” ACM Trans. Graph. 24(3), 1115–1126 (2005). [CrossRef]  

53. J. Dove, “Theory of Phasor-Field Imaging.”

54. Z. Zhang and M. Levoy, “Wigner distributions and how they relate to the light field,” in 2009 IEEE International Conference on Computational Photography (ICCP), (IEEE, San Francisco, CA, USA, 2009), pp. 1–10.

55. P. Bromiley, “Products and Convolutions of Gaussian Probability Density Functions,” (2013).

56. D. Faccio, A. Velten, and G. Wetzstein, “Non-line-of-sight imaging,” Nat. Rev. Phys. 2(6), 318–327 (2020). [CrossRef]  

57. R. Wood, Physical Optics (Macmillan, 1905).

58. T. Cuypers, R. Horstmeyer, S. B. Oh, et al., “Validity of Wigner Distribution Function for ray-based imaging,” in 2011 IEEE International Conference on Computational Photography (ICCP), (IEEE, Pittsburgh, PA, USA, 2011), pp. 1–9.

59. S. B. Oh, S. Kashyap, R. Garg, et al., “Rendering wave effects with augmented light field,” in Computer Graphics Forum, vol. 29 (Wiley Online Library, 2010), pp. 507–516.

60. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, Englewood, CO, 2005).

61. T. Sultan, “NLOS Light Transport Using TFSWD,” Github (2024), https://github.com/tmsultan/2023_NLOS_Light_Transport_TFSWD.

