Abstract

The phasor field has been shown to be a valuable tool for non-line-of-sight imaging. We present a formal analysis of phasor-field imaging using paraxial wave optics. Then, we derive a set of propagation primitives—using the two-frequency, spatial Wigner distribution—that extend the purview of phasor-field imaging. We use these primitives to analyze a set of simple imaging scenarios involving occluded and unoccluded geometries with modulated and unmodulated light. These scenarios demonstrate how to apply the primitives in practice and reveal what kind of insights can be expected from them.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Non-line-of-sight (NLoS) imaging, colloquially known as imaging around corners, is an important and growing area of research in the imaging community. Kirmani et al. [1] introduced the concept of transient NLoS imaging by using short pulses and time-resolved detection together with multipath analysis to recover the geometry of simple, occluded scenes. Their approach was independent of bidirectional reflectance distribution function (BRDF) and albedo, and they demonstrated its experimental feasibility. Velten et al. [2] revisited the problem, focusing on the case of diffuse reflection, using ultrafast streak cameras and computational backprojection. With these more powerful and developed tools, they were able to demonstrate human-identifiable reconstructions of relatively detailed geometry from around a corner. A major obstacle to applying Velten et al.’s approach in practice is the relative expense of their advanced equipment. This barrier was addressed by Heide et al. [3] who applied similar techniques with success to data collected by relatively inexpensive photonic-mixer-device (PMD) time-of-flight sensors. Buttafava et al. [4] also improved upon the practical feasibility—bearing in mind cost, power, size, etc.— of implementing these approaches by demonstrating NLoS imaging with single-photon avalanche diode (SPAD) detectors. Whereas all of this work had focused on static geometry reconstruction, Gariepy et al. [5] extended these techniques using SPAD detectors to detect motion and track moving objects around corners. With an awareness of the depth of the preceding work, Kadambi et al. [6] provided a unified theoretical framework for the problem of occluded geometry reconstruction and motion tracking, including an analysis of expected performance and a consideration of commercially available equipment. 
They also generalized their theory to deal with imaging through diffusers, in addition to the around-the-corner scenario, and offered experimental demonstration of the effectiveness of their framework. Pointing out that the experimentally collected data in the previous literature had quality and resource issues owing to experimental practicalities, Klein et al. [7] developed a simulation engine fit for thinking more broadly about NLoS imaging tasks without the limitations of real data. Additionally, leveraging their newfound ability to quickly simulate NLoS scenarios, they developed and demonstrated a new simulation-based inversion technique as an alternative to the computational backprojection methods that had been used in most of the prior work. Making further improvements in the area of reconstruction techniques and coping with practical resource limitations, O’Toole et al. [8] demonstrated a confocal NLoS imaging system which facilitated the development and use of a closed-form inversion formula.

With the goal of further advancing the field of NLoS imaging, Reza et al. [9] recently introduced the phasor-field (𝒫-field) representation for light transport that involves diffuse reflection (such as occurs in NLoS imaging) or diffuse transmission. Attempting to apply their light transport model to NLoS geometries that include intermediate occluding objects or non-Lambertian reflections will reveal that the 𝒫 field is an insufficient representation of the underlying field at the site of such features. Nevertheless, Liu et al. [10] used the 𝒫-field approach to propose and demonstrate that line-of-sight imaging techniques can be fruitfully applied, in a computational manner, to NLoS operation, even in the presence of intermediate occluders and non-Lambertian reflections. In doing so, they presented what may be the most robust and detailed reconstructions of NLoS scenes to date. Their success in this endeavor is due to their development of reconstruction techniques that obviate the need for a full light transport model by relying on there being initial and final Lambertian reflections. These techniques are fortunately, and somewhat surprisingly, not burdened by the limitations inherent in applying 𝒫-field propagation to scenarios more general than purely Lambertian reflections. Very recently, Reza et al. [11] reported an elegant series of experiments that verify the 𝒫 field’s legitimacy. These experiments clearly demonstrate the 𝒫 field’s wave-like properties, which offer the possibility of NLoS imaging without the need for computational reconstructions by using a 𝒫-field lens instead.

The success of Liu et al.’s experiments is impressive, and Reza et al.’s 𝒫-field lens is quite promising. However, we believe that even greater performance might be possible if afforded a complete transport model that can account for all features that might be encountered in NLoS imaging. At the very least, such a transport model would facilitate anticipatory preparation and analysis for particular scenarios of interest. The argument could be made that the propagation rules for the optical-frequency field—not those for the 𝒫 field—already provide such a transport model, but the aforementioned works have demonstrated the intuitive utility of the 𝒫-field approach. Consequently, we believe it is worthwhile to pursue propagation primitives that can readily establish the 𝒫-field input-output relation for the initial and final Lambertian reflections when occluders and non-Lambertian reflectors are present in the intervening space.

In this paper, we develop a set of propagation primitives that extend the 𝒫-field formalism to scenarios that go beyond what was considered by Reza et al. [9]. For convenience, we assume a transmissive geometry (without reflections) that is an unfolded proxy for occlusion-aided, three-bounce NLoS imaging [12,13] and use scalar-wave, paraxial optics, although these restrictions are not essential. In Sec. 2 we present our own development and analysis of the 𝒫-field notion. We begin by tracing light propagation through an example transmissive geometry wherein a natural definition for the 𝒫 field presents itself. Continuing this analysis, we arrive at a paraxial 𝒫-field propagator analogous to that reported by Reza et al. [9]. Using this result, we analyze the performance of 𝒫-field imaging for unoccluded transmissive geometries. Next, moving beyond the 𝒫 field, in Sec. 3 we introduce the two-frequency spatial Wigner distribution and present primitives for its propagation through a diffuser, through a deterministic occluder, through a specular-plus-diffuser mask, and through Fresnel diffraction. With these primitives, we then derive the 𝒫-field input-output relation for occlusion-aided, diffuse-object, transmissive imaging. With that analysis in hand, we compare the 𝒫-field point-spread function for diffuse-object imaging using modulated light in the absence of an occluder with those for diffuse-object imaging using unmodulated light that is aided by the presence of either a Gaussian-pinhole occluder or a Gaussian-pinspeck occluder. Finally, in Sec. 4 we summarize our results and consider directions for further research.

2. 𝒫-field propagation and imaging

In this section we consider electromagnetic field propagation through a paraxial, transmissive geometry that serves as a surrogate for an around-the-corner imaging configuration. As was done by Reza et al. [9], we define the 𝒫 field as the Fourier transform of the short-time average irradiance. Using this definition, we derive a formula for paraxial propagation of the 𝒫 field, which we find to be similar to the traditional Fresnel-diffraction formula for the propagation of the electromagnetic field, as reported by Reza et al. in [9]. We then apply this understanding of the 𝒫 field to the task of imaging through diffusers and analyze the associated performance.

2.1. Setup for paraxial propagation through multiple diffusers

Figure 1 shows the transmissive geometry we shall address in this paper for 𝒫-field propagation within the paraxial regime, i.e., wherein Fresnel diffraction applies. Here, E0(ρ0, t) is the baseband, complex-field envelope for a quasimonochromatic, scalar-wave, modulated laser field entering the z = 0 plane, expressed as a function of the transverse spatial coordinates, ρ0 = (x0, y0), and time, t. This field has center frequency ω0 and bandwidth Δω ≪ ω0, so that the optical-frequency field is Re[E0(ρ0, t)e^{−iω0t}]. Its units are √W/m, making I0(ρ0, t) = |E0(ρ0, t)|² the short-time average irradiance [14], in W/m², illuminating the z = 0 plane. It will be assumed, in all that follows, that Δω is such that available photodetectors can fully resolve the time dependence of I0(ρ0, t). As soon will be seen, it will be valuable to employ the time-domain Fourier transform of E0(ρ0, t), viz. [15],

$$\mathcal{E}_0(\rho_0,\omega) \equiv \int dt\, E_0(\rho_0,t)\, e^{i\omega t},$$
for use in analyzing the Fig. 1 configuration.


Fig. 1 Unfolded geometry for three-bounce NLoS active imaging. Scalar, paraxial diffraction theory is assumed, with {Ek(ρk, t) : 0 ≤ k ≤ 2} being the baseband complex-field envelopes illuminating the z = 0, z = L1, and z = L1 + L2 planes, respectively, written as functions of the transverse spatial coordinates, {ρk = (xk, yk) : 0 ≤ k ≤ 2}, in those planes and time, t. The blue rectangles represent thin transmissive diffusers, and the black line represents a thin transmission screen whose intensity transmission pattern, T(ρ1), is to be imaged using the light that emerges from the z = L1 + L2 plane.


After propagating through the thin diffuser h0(ρ0), the Fourier-domain field at z = 0+ is

$$\mathcal{E}'_0(\rho_0,\omega) = \mathcal{E}_0(\rho_0,\omega)\, e^{i(\omega_0+\omega)h_0(\rho_0)/c},$$
where c is light speed and we have normalized away the diffuser’s refractive index. Physically, we are modeling this diffuser as a space-dependent h0(ρ0)/c time delay. Because it is unreasonable to presume we can accurately account for this delay as a deterministic quantity, we shall suppress its average value—across an ensemble of statistically identical diffusers—and consider h0(ρ0) to be a zero-mean, homogeneous, isotropic, Gaussian random function of ρ0, with covariance function Kh(|Δρ|) = 〈h0(ρ0 + Δρ)h0(ρ0)〉, where angle brackets denote ensemble average. Moreover, in keeping with h0(ρ0)’s being a diffuser, we shall take its standard deviation, σh = √Kh(0), to be much greater than the center wavelength, λ0 = 2πc/ω0, and its coherence length ρc—the transverse distance beyond which Kh(|Δρ|) vanishes—to be at most a few λ0. Furthermore—and this condition is essential to there being a useful 𝒫-field propagator—we shall assume that σh is much smaller than the modulation wavelength, Δλ = 2πc/Δω.
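To put these scale separations in concrete terms, the following Python sketch evaluates the diffuser-averaged field-correlation factor, exp{−ω0²[σh² − Kh(|Δρ|)]/c²}, which appears in the derivation below. The Gaussian form assumed for Kh and all parameter values are illustrative assumptions, chosen only to show that, when σh ≫ λ0, the factor equals unity at zero separation and collapses to zero within a few ρc.

```python
import math

# Field-correlation factor of a Gaussian-random diffuser:
#   <exp{i*w0*[h0(r) - h0(r')]/c}> = exp{-(w0/c)^2 * [sigma_h^2 - K_h(dr)]}.
# The Gaussian covariance K_h(dr) = sigma_h^2 * exp(-dr^2/rho_c^2) and the
# parameter values below are illustrative assumptions.

lam0 = 1.064e-6                 # center wavelength [m] (assumed)
w0_over_c = 2 * math.pi / lam0  # w0/c = 2*pi/lam0
sigma_h = 10 * lam0             # height standard deviation >> lam0
rho_c = 2 * lam0                # coherence length: a few wavelengths

def coherence(dr):
    # correlation of the diffuser's output field at transverse separation dr
    K = sigma_h**2 * math.exp(-(dr / rho_c)**2)
    return math.exp(-(w0_over_c**2) * (sigma_h**2 - K))

print(coherence(0.0))           # 1.0: perfectly correlated at zero separation
print(coherence(10 * rho_c))    # ~0: decorrelated within a few rho_c
```

The near-delta behavior exhibited here is what licenses the impulse approximation invoked later in this section.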

Within the paraxial (Fresnel-diffraction) propagation regime we have that

$$\mathcal{E}_1(\rho_1,\omega) = \int d^2\rho_0\, \mathcal{E}'_0(\rho_0,\omega)\, \frac{\omega_0+\omega}{i2\pi cL_1}\, e^{i(\omega_0+\omega)L_1/c\,+\,i(\omega_0+\omega)|\rho_1-\rho_0|^2/2cL_1},$$
is the time-domain Fourier transform of E1(ρ1, t), the field illuminating the z = L1 plane. This illumination results in
$$\mathcal{E}'_1(\rho_1,\omega) = \mathcal{E}_1(\rho_1,\omega)\, \sqrt{T(\rho_1)}\, e^{i(\omega_0+\omega)h_1(\rho_1)/c},$$
being the time-domain Fourier transform of E′1(ρ1, t), the field that emerges at z = L1+, after propagation through a deterministic thin transmission screen with intensity transmission pattern T(ρ1), and a thin diffuser, h1(ρ1), that we will take to be statistically independent of, but identically distributed as, h0(ρ0).

Paraxial propagation to z = L1 + L2 now gives us

$$\mathcal{E}_2(\rho_2,\omega) = \int d^2\rho_1\, \mathcal{E}'_1(\rho_1,\omega)\, \frac{\omega_0+\omega}{i2\pi cL_2}\, e^{i(\omega_0+\omega)L_2/c\,+\,i(\omega_0+\omega)|\rho_2-\rho_1|^2/2cL_2},$$
and propagation through the thin diffuser at z = L1 + L2 results in
$$\mathcal{E}'_2(\rho_2,\omega) = \mathcal{E}_2(\rho_2,\omega)\, e^{i(\omega_0+\omega)h_2(\rho_2)/c},$$
being the time-domain Fourier transform of E′2(ρ2, t), the field that emerges at z = (L1 + L2)+. We will assume that h2(ρ2) is statistically independent of, but identically distributed as, h0(ρ0) and h1(ρ1).

Before proceeding further, let us briefly comment on how the Fig. 1 geometry relates to three-bounce NLoS active imaging. The z = 0 diffuser, which is illuminated by modulated laser light, represents a Lambertian-reflecting visible wall with a uniform albedo. The combination of the intensity transmission pattern T(ρ1) and the z = L1 diffuser represent a Lambertian-reflecting hidden wall with spatially-varying albedo T(ρ1). The z = L1 + L2 diffuser represents a second Lambertian reflection at the visible wall, where statistical independence from the first visible-wall reflection can be ensured by the NLoS imaging sensor’s viewing a different section of that wall than what the laser illuminates. The goal of three-bounce NLoS active imaging in this setting is to use the third-bounce light returned from the visible wall to reconstruct the hidden wall’s albedo T(ρ1). In the next section, we will derive the 𝒫-field propagator for the preceding transmission geometry.

2.2. 𝒫-field propagator in the paraxial regime

To start our derivation, consider 〈I1(ρ1, t)〉, where I1(ρ1, t) ≡ |E1(ρ1, t)|² is the short-time average irradiance illuminating the z = L1 plane and angle brackets denote averaging over the statistics of h0(ρ0). Going to the temporal-frequency domain, we have that

$$\langle I_1(\rho_1,t)\rangle = \int\frac{d\omega}{2\pi}\int\frac{d\omega'}{2\pi}\, \langle\mathcal{E}_1(\rho_1,\omega)\,\mathcal{E}_1^*(\rho_1,\omega')\rangle\, e^{-i(\omega-\omega')t}$$
$$= \int\frac{d\omega_-}{2\pi}\left[\int\frac{d\omega_+}{2\pi}\, \langle\mathcal{E}_1(\rho_1,\omega_++\omega_-/2)\,\mathcal{E}_1^*(\rho_1,\omega_+-\omega_-/2)\rangle\right] e^{-i\omega_- t}$$
$$= \int\frac{d\omega_-}{2\pi}\, \mathcal{P}_1(\rho_1,\omega_-)\, e^{-i\omega_- t},$$
where * denotes complex conjugate, ω+ ≡ (ω + ω′)/2, ω− ≡ ω − ω′, and we have introduced the 𝒫 field at the z = L1 plane as the Fourier transform of 〈I1(ρ1, t)〉. Next, employing Eqs. (2) and (3), we get
$$\mathcal{P}_1(\rho_1,\omega_-) = \int d^2\rho_0\int d^2\rho'_0\int\frac{d\omega_+}{2\pi}\, \mathcal{E}_0(\rho_0,\omega)\,\mathcal{E}_0^*(\rho'_0,\omega')\, \langle e^{i[(\omega_0+\omega)h_0(\rho_0)-(\omega_0+\omega')h_0(\rho'_0)]/c}\rangle \times \frac{(\omega_0+\omega)(\omega_0+\omega')}{(2\pi cL_1)^2}\, e^{i(\omega-\omega')L_1/c\,+\,i[(\omega_0+\omega)|\rho_1-\rho_0|^2-(\omega_0+\omega')|\rho_1-\rho'_0|^2]/2cL_1},$$
where, as before, ω+ ≡ (ω + ω′)/2 and ω− ≡ ω − ω′. Because Δω ≪ ω0 and σh ≪ Δλ, the preceding result can be reduced to
$$\mathcal{P}_1(\rho_1,\omega_-) = \int d^2\rho_0\int d^2\rho'_0\int\frac{d\omega_+}{2\pi}\, \mathcal{E}_0(\rho_0,\omega)\,\mathcal{E}_0^*(\rho'_0,\omega')\, \langle e^{i\omega_0[h_0(\rho_0)-h_0(\rho'_0)]/c}\rangle\, \frac{\omega_0^2}{(2\pi cL_1)^2} \times e^{i(\omega-\omega')L_1/c\,+\,i[(\omega_0+\omega)|\rho_1-\rho_0|^2-(\omega_0+\omega')|\rho_1-\rho'_0|^2]/2cL_1}.$$

A standard result for Gaussian random functions gives us [16]

$$\langle e^{i\omega_0[h_0(\rho_0)-h_0(\rho'_0)]/c}\rangle = \exp\{-\omega_0^2[\sigma_h^2 - K_h(|\rho_0-\rho'_0|)]/c^2\}.$$
Then, because σh ≫ λ0 and ρc is at most a few λ0, we can use an impulse approximation, viz.,
$$\langle e^{i\omega_0[h_0(\rho_0)-h_0(\rho'_0)]/c}\rangle \approx \lambda_0^2\, \delta(\rho_0-\rho'_0),$$
in Eq. (11) to obtain
$$\mathcal{P}_1(\rho_1,\omega_-) = \int d^2\rho_0\int\frac{d\omega_+}{2\pi}\, \mathcal{E}_0(\rho_0,\omega)\,\mathcal{E}_0^*(\rho_0,\omega')\, e^{i(\omega-\omega')L_1/c\,+\,i(\omega-\omega')|\rho_1-\rho_0|^2/2cL_1}/L_1^2$$
$$= \int d^2\rho_0\, \mathcal{P}_0(\rho_0,\omega_-)\, e^{i\omega_- L_1/c\,+\,i\omega_-|\rho_1-\rho_0|^2/2cL_1}/L_1^2.$$
Here, the 𝒫 field at z = 0 is
$$\mathcal{P}_0(\rho_0,\omega_-) = \int\frac{d\omega_+}{2\pi}\, \mathcal{E}_0(\rho_0,\omega_++\omega_-/2)\,\mathcal{E}_0^*(\rho_0,\omega_+-\omega_-/2),$$
with no averaging brackets required, because the laser illumination of the z = 0 plane is deterministic.

Equation (15)—which coincides with the result of applying the Fresnel approximation to Reza et al.’s Rayleigh-Sommerfeld 𝒫-field propagator [9]—is our essential result for paraxial 𝒫-field propagation over a distance L1. It shows that the field emerging from a diffuser that imposes complete spatial incoherence at the optical frequency, but is smooth at the modulation frequency, leads to paraxial 𝒫-field propagation at frequency ω− over a distance L1 that is governed by a modified version of the E field’s Fresnel-diffraction formula, viz., one in which the exponent’s optical frequency in the E-field Fresnel formula is replaced by the 𝒫 field’s modulation frequency, and the E-field formula’s ω0/i2πcL1 factor is replaced by the 𝒫 field’s 1/L1² factor. By inverse Fourier transformation of Eq. (15), we see that irradiance propagation from the diffuser at z = 0 to the z = L1 plane is governed by

$$\langle I_1(\rho_1,t)\rangle = \int d^2\rho_0\, I_0(\rho_0,\, t - L_1/c - |\rho_1-\rho_0|^2/2cL_1)/L_1^2,$$
which has the following pleasing physical interpretation: Paraxial propagation of the short-time average irradiance from the diffuser’s output to the z = L1 plane presumes that
$$\frac{e^{i\omega\sqrt{L_1^2+|\rho_1-\rho_0|^2}/c}}{\sqrt{L_1^2+|\rho_1-\rho_0|^2}} \approx \frac{e^{i\omega L_1/c\,+\,i\omega|\rho_1-\rho_0|^2/2cL_1}}{L_1}, \quad \text{for } |\omega| \le \Delta\omega,$$
can be employed, and results in 〈I1(ρ1, t)〉 being governed by the paraxial form of geometric optics, viz., the differential contribution of I0(ρ0, t) to 〈I1(ρ1, t)〉 is time delayed by L1/c + |ρ1 − ρ0|²/2cL1 and attenuated by the inverse-square-law factor 1/L1².
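This propagation rule is easy to exercise numerically. The following Python sketch discretizes the paraxial 𝒫-field propagator of Eq. (15) in one transverse dimension; the Gaussian source profile and grid values are illustrative assumptions. It also makes a point used later in this section: at ω− = 0 every source patch contributes with unit phase, so the propagated |𝒫1| is the same at every observation point.

```python
import cmath

# Discrete, one-transverse-dimension sketch of the paraxial P-field
# propagator: each source patch contributes with the geometric-optics
# time delay L1/c + |r1 - r0|^2/(2*c*L1) and inverse-square attenuation.
# The Gaussian source profile and grid values are illustrative assumptions.

c = 3e8                                  # light speed [m/s]
L1 = 1.0                                 # propagation distance [m]
dA = 1e-6                                # source patch area [m^2]
x0s = [i * 1e-3 for i in range(-5, 6)]   # source coordinates [m]

def P0(x0):
    # hypothetical source P-field profile at the modulation frequency of interest
    return cmath.exp(-(x0 / 2e-3) ** 2)

def P1(x1, w):
    # P1(r1, w) = sum_r0 P0(r0) exp{i*w*[L1/c + |r1-r0|^2/(2cL1)]} dA / L1^2
    total = 0j
    for x0 in x0s:
        delay = L1 / c + (x1 - x0) ** 2 / (2 * c * L1)
        total += P0(x0) * cmath.exp(1j * w * delay) * dA / L1 ** 2
    return total

# At w = 0 all phases are unity, so |P1| carries no transverse information,
# anticipating the conditioning discussion later in this section.
print(abs(P1(0.0, 0.0)), abs(P1(0.005, 0.0)))
```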

Paralleling the previous development, it is now easy to show that

$$\mathcal{P}_2(\rho_2,\omega_-) \equiv \int\frac{d\omega_+}{2\pi}\, \langle\mathcal{E}'_2(\rho_2,\omega_++\omega_-/2)\,\mathcal{E}_2^{\prime *}(\rho_2,\omega_+-\omega_-/2)\rangle$$
$$= \int d^2\rho_1\, \mathcal{P}_1(\rho_1,\omega_-)\, T(\rho_1)\, e^{i\omega_- L_2/c\,+\,i\omega_-|\rho_2-\rho_1|^2/2cL_2}/L_2^2,$$
where the averaging brackets in Eq. (19) represent averaging over the h0(ρ0) and the h1(ρ1) ensembles. Combining this result with what we have already obtained for relating 𝒫1(ρ1, ω−) to 𝒫0(ρ0, ω−) we get
$$\mathcal{P}_2(\rho_2,\omega_-) = \int d^2\rho_1\left(\int d^2\rho_0\, \mathcal{P}_0(\rho_0,\omega_-)\, e^{i\omega_- L_1/c\,+\,i\omega_-|\rho_1-\rho_0|^2/2cL_1}/L_1^2\right) \times T(\rho_1)\, e^{i\omega_- L_2/c\,+\,i\omega_-|\rho_2-\rho_1|^2/2cL_2}/L_2^2.$$

Before continuing, it is crucial to note the behavior of 𝒫2(ρ2, 0). From Eq. (21) we immediately find that

$$\mathcal{P}_2(\rho_2,0) = \int d^2\rho_1\, T(\rho_1)\int d^2\rho_0\, \mathcal{P}_0(\rho_0,0)/(L_1L_2)^2,$$
indicating that there is no spatial information about T(ρ1) available in 𝒫2(ρ2, 0). This behavior is a consequence of using the paraxial approximation. Going beyond the paraxial-propagation regime—to Rayleigh-Sommerfeld diffraction—will yield a 𝒫2(ρ2, 0) containing some spatial information about T(ρ1), but the inverse problem for recovering T(ρ1) from 𝒫2(ρ2, 0) will still be poorly conditioned in the Fig. 1 configuration. This behavior has been seen by Xu et al. [12] and Thrampoulidis et al. [13] in their work on NLoS active imaging with pulsed illumination, in which occlusion-aided operation was needed to obtain useful albedo reconstructions when transient behavior was ignored.

2.3. T(ρ1) reconstruction in the paraxial regime 𝒫-field formalism

Equation (21) shows that the intensity transmission pattern, T(ρ1), we wish to reconstruct is illuminated by 𝒫1(ρ1, ω−), the 𝒫 field that results from propagation of the laser illumination’s 𝒫0(ρ0, ω−) from z = 0 to z = L1. After transmission through T(ρ1) and the diffuser h1(ρ1), 𝒫-field propagation from z = L1 to z = L1 + L2 results in 𝒫2(ρ2, ω−), which encounters another diffuser. Because that last diffuser will render the field emerging from it spatially incoherent, we will use the conventional thin-lens imaging system, shown in Fig. 2, to gather the data needed to reconstruct T(ρ1).


Fig. 2 Thin-lens imaging setup. A focal-length f thin lens casts an inverted image of the intensity pattern that emerges from the diffuser at z = L1 + L2. The image is located in the plane—shown as a black dashed line—a distance Lim behind the lens, where 1/f = 1/L3 + 1/Lim.


Let E′2(ρ2, t) be the baseband, complex-field envelope emerging from the diffuser in the z = L1 + L2 plane, and let ℰ′2(ρ2, ω) be its time-domain Fourier transform. After Fresnel propagation from z = L1 + L2 to z = L1 + L2 + L3, propagation through the diameter-D circular-pupil, focal-length-f, thin lens, and Fresnel propagation over an additional Lim distance where 1/f = 1/L3 + 1/Lim, the resulting image-plane field Eim(ρim, t) has time-domain Fourier transform given by

$$\mathcal{E}_{\rm im}(\rho_{\rm im},\omega) = \int_{|\rho_3|\le D/2} d^2\rho_3\, \frac{e^{i(\omega_0+\omega)L_{\rm im}/c\,+\,i(\omega_0+\omega)|\rho_{\rm im}-\rho_3|^2/2cL_{\rm im}\,-\,i(\omega_0+\omega)|\rho_3|^2/2cf}}{i\lambda_0 L_{\rm im}} \times \int d^2\rho_2\, \mathcal{E}'_2(\rho_2,\omega)\, \frac{e^{i(\omega_0+\omega)L_3/c\,+\,i(\omega_0+\omega)|\rho_3-\rho_2|^2/2cL_3}}{i\lambda_0 L_3}$$
$$= e^{i(\omega_0+\omega)|\rho_{\rm im}|^2/2cL_{\rm im}} \int d^2\rho_2\, \mathcal{E}'_2(\rho_2,\omega)\, \frac{e^{i(\omega_0+\omega)(L_3+L_{\rm im})/c\,+\,i(\omega_0+\omega)|\rho_2|^2/2cL_3}}{i\lambda_0 L_3} \times \int_{|\rho_3|\le D/2} d^2\rho_3\, \frac{e^{-i(\omega_0+\omega)\rho_3\cdot(\rho_2/L_3+\rho_{\rm im}/L_{\rm im})/c}}{i\lambda_0 L_{\rm im}}.$$
Performing the integration over ρ3 results in
$$\mathcal{E}_{\rm im}(\rho_{\rm im},\omega) = -\,e^{i(\omega_0+\omega)|\rho_{\rm im}|^2/2cL_{\rm im}} \int d^2\rho_2\, \mathcal{E}'_2(\rho_2,\omega)\, \frac{e^{i(\omega_0+\omega)(L_3+L_{\rm im})/c\,+\,i(\omega_0+\omega)|\rho_2|^2/2cL_3}}{\lambda_0^2 L_3 L_{\rm im}}\, \frac{\pi D^2}{4}\, \frac{J_1\!\big(\frac{\pi D}{\lambda_0}\big|\frac{\rho_2}{L_3}+\frac{\rho_{\rm im}}{L_{\rm im}}\big|\big)}{\frac{\pi D}{2\lambda_0}\big|\frac{\rho_2}{L_3}+\frac{\rho_{\rm im}}{L_{\rm im}}\big|},$$
where J1(·) is the first-order Bessel function of the first kind, and we have used πD/λ0 in lieu of (ω0 + ω)D/2c in the Airy pattern because Δωω0.

The presence of the diffuser h2(ρ2) makes

$$\langle\mathcal{E}'_2(\rho_2,\omega)\,\mathcal{E}_2^{\prime *}(\rho'_2,\omega')\rangle \approx \lambda_0^2\, \langle\mathcal{E}_2(\rho_2,\omega)\,\mathcal{E}_2^*(\rho'_2,\omega')\rangle\, \delta(\rho_2-\rho'_2),$$
which together with Eq. (25) yields
$$\mathcal{P}_{\rm im}(\rho_{\rm im},\omega_-) = \int d^2\rho_2\, \mathcal{P}_2(\rho_2,\omega_-) \times e^{i\omega_-(L_3+L_{\rm im})/c\,+\,i\omega_-|\rho_2|^2/2cL_3\,+\,i\omega_-|\rho_{\rm im}|^2/2cL_{\rm im}} \left[\frac{\pi D^2}{4\lambda_0 L_3L_{\rm im}}\, \frac{J_1\!\big(\frac{\pi D}{\lambda_0}\big|\frac{\rho_2}{L_3}+\frac{\rho_{\rm im}}{L_{\rm im}}\big|\big)}{\frac{\pi D}{2\lambda_0}\big|\frac{\rho_2}{L_3}+\frac{\rho_{\rm im}}{L_{\rm im}}\big|}\right]^2,$$
and hence
$$\langle I_{\rm im}(\rho_{\rm im},t)\rangle = \int d^2\rho_2\, \langle I_2(\rho_2,\, t-(L_3+L_{\rm im})/c - |\rho_2|^2/2cL_3 - |\rho_{\rm im}|^2/2cL_{\rm im})\rangle \times \left[\frac{\pi D^2}{4\lambda_0 L_3L_{\rm im}}\, \frac{J_1\!\big(\frac{\pi D}{\lambda_0}\big|\frac{\rho_2}{L_3}+\frac{\rho_{\rm im}}{L_{\rm im}}\big|\big)}{\frac{\pi D}{2\lambda_0}\big|\frac{\rho_2}{L_3}+\frac{\rho_{\rm im}}{L_{\rm im}}\big|}\right]^2.$$
So, by measuring 〈Iim(ρim, t)〉, i.e., the diffuser-averaged, short-time average, image-plane irradiance, we obtain a 1.22λ0/D-angular-resolution image of 〈I2(ρ2, t − (L3 + Lim)/c − |ρ2|²/2cL3)〉. From that irradiance image we can then compute a 1.22λ0/D-angular-resolution image of 𝒫2(ρ2, ω−) at any modulation frequency of interest.
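The arithmetic behind that angular resolution is elementary; a minimal sketch, with illustrative (assumed) imager parameters, is:

```python
# Diffraction-limited angular resolution 1.22*lam0/D of the Fig. 2 imager.
# All parameter values below are illustrative assumptions.
lam0 = 532e-9          # optical center wavelength [m]
D = 25e-3              # lens-pupil diameter [m]
L3 = 1.0               # distance from the z = L1 + L2 plane to the lens [m]

theta_res = 1.22 * lam0 / D    # angular resolution [rad]
spot = theta_res * L3          # resolved spot size on the visible plane [m]
print(theta_res, spot)         # ~2.6e-5 rad, ~26 micrometers
```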

For reconstructing T(ρ1), let us suppose that the z = 0 illumination is a duration-t0, cosinusoidally-modulated, collimated Gaussian-beam laser field with Δωt0 ≫ 1, i.e.,

$$E_0(\rho_0,t) = \begin{cases} \sqrt{\dfrac{8P_0}{\pi d^2}}\, e^{-4|\rho_0|^2/d^2}\cos(\Delta\omega t/2), & \text{for } |t|\le t_0/2,\\[4pt] 0, & \text{otherwise},\end{cases}$$
with P0t0/2 being the energy illuminating the z = 0 plane. This field’s short-time average irradiance is then
$$I_0(\rho_0,t) = \begin{cases} \dfrac{8P_0}{\pi d^2}\, e^{-8|\rho_0|^2/d^2}\cos^2(\Delta\omega t/2) = \dfrac{4P_0}{\pi d^2}\, e^{-8|\rho_0|^2/d^2}\,[1+\cos(\Delta\omega t)], & \text{for } |t|\le t_0/2,\\[4pt] 0, & \text{otherwise},\end{cases}$$
which leads to
$$\mathcal{P}_0(\rho_0,\omega_-) = \frac{4P_0t_0}{\pi d^2}\, e^{-8|\rho_0|^2/d^2}\left[\frac{\sin(\omega_- t_0/2)}{\omega_- t_0/2} + \frac{\sin[(\omega_-+\Delta\omega)t_0/2]}{(\omega_-+\Delta\omega)t_0} + \frac{\sin[(\omega_--\Delta\omega)t_0/2]}{(\omega_--\Delta\omega)t_0}\right],$$
and hence
$$\mathcal{P}_1(\rho_1,\Delta\omega) \approx \int d^2\rho_0\, \frac{2P_0t_0}{\pi d^2}\, e^{-8|\rho_0|^2/d^2}\, e^{i\Delta\omega L_1/c\,+\,i\Delta\omega|\rho_1-\rho_0|^2/2cL_1}/L_1^2,$$
because Δωt0 ≫ 1. Although this expression can be evaluated analytically, we shall not bother. We just note that with Δω/2π ∼ 1 GHz, d ∼ 1 mm, and L1 ∼ 1 m, we have cL1/Δωd² ≫ 1, from which it follows that the spatial extent of 𝒫1(ρ1, Δω) will be ∼cL1/Δωd ≫ d. In other words, the effect of the diffuser h0(ρ0) is to ensure that a finite, but much larger than diameter-d, region of the z = L1 plane is illuminated by the frequency-Δω 𝒫 field.
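An order-of-magnitude check of those numbers, using the parameter values just quoted, can be sketched as:

```python
import math

# Order-of-magnitude check that the frequency-dw P field illuminates a
# region of extent ~ c*L1/(dw*d) >> d, for the text's example values:
# dw/2pi ~ 1 GHz, d ~ 1 mm, L1 ~ 1 m.
c = 3e8                     # light speed [m/s]
dw = 2 * math.pi * 1e9      # modulation frequency [rad/s]
d = 1e-3                    # beam diameter [m]
L1 = 1.0                    # propagation distance [m]

extent = c * L1 / (dw * d)  # P-field spot extent [m]
print(c * L1 / (dw * d**2)) # ~4.8e4: the cL1/(dw*d^2) >> 1 condition
print(extent / d)           # ~4.8e4: spot is vastly larger than d
```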

To proceed further, assume we have generated the computed image,

$$\tilde{\mathcal{P}}_2(\rho_2,\Delta\omega) \equiv (L_{\rm im}/L_3)^2\, \mathcal{P}_{\rm im}(-\rho_2 L_{\rm im}/L_3,\,\Delta\omega)\, e^{-i\Delta\omega(L_3+L_{\rm im})/c\,-\,i\Delta\omega|\rho_2|^2/2cL_3\,-\,i\Delta\omega|\rho_2|^2 L_{\rm im}/2cL_3^2},$$
of 𝒫2(ρ2, Δω) from the 〈Iim(ρim, t)〉 measurement. We can computationally invert Eq. (20) to obtain a reconstruction of T(ρ1)𝒫1(ρ1, Δω) and use our knowledge of 𝒫1(ρ1, Δω) to obtain a T(ρ1) image. In particular, suppose we measure 〈Iim(ρim, t)〉 for |ρim| ≤ dim/2, and then define T̃(ρ̃1) by
$$\tilde{T}(\tilde\rho_1)\,|\mathcal{P}_1(\tilde\rho_1,\Delta\omega)| \equiv \left|\int_{|\rho_2|\le D'/2} d^2\rho_2\, \tilde{\mathcal{P}}_2(\rho_2,\Delta\omega)\, e^{-i\Delta\omega|\rho_2|^2/2cL_2\,+\,i\Delta\omega\rho_2\cdot\tilde\rho_1/cL_2}/\Delta\lambda^2\right|,$$
where D′ ≡ dimL3/Lim. Neglecting noise, and assuming that the 1.22λ0/D angular resolution is sufficient to make
$$\tilde{\mathcal{P}}_2(\rho_2,\Delta\omega) \approx \mathcal{P}_2(\rho_2,\Delta\omega),$$
for |ρ2| ≤ D′/2, Eq. (34) leads to
$$\tilde{T}(\tilde\rho_1)\,|\mathcal{P}_1(\tilde\rho_1,\Delta\omega)| = \left|\int d^2\rho_1\, \mathcal{P}_1(\rho_1,\Delta\omega)\, T(\rho_1)\, e^{i\Delta\omega|\rho_1|^2/2cL_2} \times \frac{\pi}{4}\left(\frac{D'}{\Delta\lambda L_2}\right)^2 \frac{J_1(\pi D'|\tilde\rho_1-\rho_1|/\Delta\lambda L_2)}{\pi D'|\tilde\rho_1-\rho_1|/2\Delta\lambda L_2}\right|.$$
Thus, over the region in the z = L1 plane wherein |𝒫1(ρ1, Δω)| has an appreciable value, the 𝒫-field imager using cosinusoidal E-field modulation at frequency Δω/2 achieves a spatial resolution of 1.22ΔλL2/D′, where: Δλ = 2πc/Δω; L2 is the distance from the transparency-containing plane to the plane visible to the sensor; and D′ = dimL3/Lim, with L3 being the distance from the plane visible to the sensor to the sensor’s entrance pupil, Lim being the distance from that entrance pupil to the image plane where irradiance measurements are made, and dim being the diameter of the image-plane region over which those measurements are made.
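Evaluating that resolution formula for a representative geometry can be sketched as follows; only Δω/2π = 1 GHz echoes the text, while the remaining parameter values are illustrative assumptions.

```python
import math

# P-field imaging resolution 1.22 * dlam * L2 / Dprime for an assumed geometry.
c = 3e8                          # light speed [m/s]
dw = 2 * math.pi * 1e9           # modulation frequency [rad/s]
dlam = 2 * math.pi * c / dw      # modulation wavelength [m]: 0.3 m here
L2 = 1.0                         # hidden plane to visible plane [m] (assumed)
L3, Lim, dim = 1.0, 0.1, 0.5     # standoff, image distance, dim [m] (assumed)

Dprime = dim * L3 / Lim          # effective aperture D' = dim*L3/Lim [m]
res = 1.22 * dlam * L2 / Dprime  # P-field spatial resolution [m]
print(Dprime, res)               # 5.0 m aperture, ~0.073 m resolution
```

Note that the resolution is set by the modulation wavelength, not the optical wavelength, which is why it is centimeters rather than micrometers.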

3. Two-frequency spatial Wigner distribution and occlusion-aided imaging

In this section, we consider a generalized version of our paraxial, transmissive geometry which allows for the presence of deterministic occluders in the light’s path and a more general target transmissivity mask. The 𝒫 field alone does not suffice to track the evolution of the light through all intermediate planes of this geometry, so we go beyond this quantity to define a more comprehensive one: the two-frequency spatial Wigner distribution. We demonstrate how the two-frequency spatial Wigner distribution relates to other better-known quantities for characterizing propagation through random media and present a set of propagation primitives for it, relevant to our transmissive geometry. Finally, we use these propagation primitives to analyze occlusion-aided imaging scenarios and demonstrate that the presence of intermediate occluders has the potential to improve performance, as seen previously in Xu et al. [12] and Thrampoulidis et al. [13].

3.1. Setup for paraxial propagation through multiple diffusers with occlusion

Figure 3 shows a generalized setup for transmissive 𝒫-field imaging. Here, two occluders, having field-transmission functions P(ρd) and P′(ρ′d), have been introduced in the z = L1 − Ld and z = L1 + L′d planes, and the z = L1 plane contains a field-transmission mask F(ρ1) that has both specular and diffuse components. In the NLoS analogy, the two occluders represent objects in the hidden space—encountered by the light as it propagates towards and returns from the hidden wall, respectively—and the generalized field-transmission mask accounts for more general, non-Lambertian hidden walls. This configuration—if F(ρ1) is purely diffuse with a space-varying albedo that is to be imaged, i.e., equivalent to the stacked intensity-transmission mask and thin diffuser from Fig. 1—is our unfolded proxy for Xu et al.’s experiments [12].


Fig. 3 Unfolded geometry for three-bounce, occlusion-aided NLoS active imaging. Scalar, paraxial diffraction theory is assumed, with E0(ρ0, t) being the baseband complex-field envelope illuminating the z = 0 plane and E′2(ρ2, t) being the baseband complex-field envelope emerging from the z = L1 + L2 plane. These fields are written as functions of their transverse spatial coordinates, {ρk = (xk, yk) : k = 0, 2}, in their respective planes and time, t. The blue rectangles represent thin transmissive diffusers, and the black line at z = L1 represents a thin specular-plus-diffuser transmission mask with field-transmission function F(ρ1), whose associated intensity-transmission pattern is to be imaged using the light that emerges from the z = L1 + L2 plane. That imaging process is aided by the presence of occluders in the z = L1 − Ld and z = L1 + L′d planes, whose field-transmission functions are P(ρd) and P′(ρ′d), respectively.


The ultimate goal of a phasor-field transport model is to provide the short-time average irradiance at the output of some system—or equivalently, its Fourier transform: the 𝒫 field—given the short-time average irradiance, or its associated 𝒫 field, at the input of the system. This is possible in NLoS or diffuse transmissive-imaging scenarios—provided that the system can be summarized by a linear transformation of the underlying electromagnetic field—when the input and output facets of the systems in question are Lambertian walls (NLoS case) or diffusers (transmissive case). Such facets destroy all directionality information, viz., all spatial coherence, so that 𝒫 fields fully characterize the light they reflect (NLoS case) or transmit (transmissive case). Free-space propagation increases spatial coherence, but provided we only care about the short-time average irradiances at input and output planes containing pure diffusers, a 𝒫-field input-output propagation model is possible, as those diffusers will, respectively, destroy the initial and propagation-created coherence. If, however, as at z = L1 − Ld, z = L1, or z = L1 + L′d in Fig. 3, we are interested in planes that do not contain pure diffusers, the 𝒫 field is insufficient to fully characterize the electromagnetic field emerging from them. Thus, owing to what can be viewed as a lack of directionality information, the 𝒫 field at those output planes fails to provide enough information to determine the increased spatial coherence that will accrue from subsequent free-space diffraction. Accordingly, we find the 𝒫 field insufficient for the task of building a complete light-transport model for scenarios including occluders and specular-plus-diffuser masks. Indeed, although omitted for brevity, carrying out a Fig. 3 propagation analysis—like that done for Fig. 1—confirms that a 𝒫-field input-output relation built up by propagating the 𝒫 field from each plane containing an optical element to the next such plane is impossible.

To tackle these scenarios, we start from the beginning, and instead of considering the short-time average irradiance we consider a variant with directionality information—the time-dependent specific irradiance from small-angle-approximation linear transport theory [17]:

$$I_z(\rho_+,\mathbf{s},t) \equiv \int\frac{d^2\rho_-}{\lambda_0^2}\, E_z(\rho_++\rho_-/2,t)\, E_z^*(\rho_+-\rho_-/2,t)\, e^{-i2\pi\mathbf{s}\cdot\rho_-/\lambda_0}.$$
In computer vision, this quantity is known as the 5D light field [18–20]. By replacing 2πs/λ0 with k, the time-dependent specific irradiance can be seen to be a time-indexed spatial Wigner distribution, cf. the spatial Wigner distribution of a monochromatic scalar wave, viz.,
$$W(\rho_+,\mathbf{k}) \equiv \int d^2\rho_-\, E_z(\rho_++\rho_-/2)\, E_z^*(\rho_+-\rho_-/2)\, e^{-i\mathbf{k}\cdot\rho_-},$$
which has long been recognized as a useful tool in optics, see, e.g., [21–23]. The short-time average irradiance is obtained from Iz(ρ+, s, t) by integrating out its directionality information,
$$I_z(\rho_+,t) = \int d^2\mathbf{s}\, I_z(\rho_+,\mathbf{s},t),$$
and the 𝒫 field is then obtained by time-domain Fourier transformation.

As before, we find it convenient to carry out our analysis in the temporal-frequency domain. Paralleling the development in Eqs. (7)–(9), we have:

$$I_z(\rho_+,\mathbf{s},t) = \int\frac{d\omega}{2\pi}\int\frac{d\omega'}{2\pi}\int\frac{d^2\rho_-}{\lambda_0^2}\, \mathcal{E}_z(\rho_++\rho_-/2,\omega)\,\mathcal{E}_z^*(\rho_+-\rho_-/2,\omega')\, e^{-i2\pi\mathbf{s}\cdot\rho_-/\lambda_0}\, e^{-i(\omega-\omega')t}$$
$$= \int\frac{d\omega_-}{2\pi}\left[\int\frac{d\omega_+}{2\pi}\left(\int\frac{d^2\rho_-}{\lambda_0^2}\, \mathcal{E}_z(\rho_++\rho_-/2,\omega)\,\mathcal{E}_z^*(\rho_+-\rho_-/2,\omega')\, e^{-i2\pi\mathbf{s}\cdot\rho_-/\lambda_0}\right)\right] e^{-i\omega_- t},$$
where ω+ ≡ (ω + ω′)/2 and ω− ≡ ω − ω′, as we employed in Sec. 2. The bracketed quantity in Eq. (41) is the Fourier transform of the time-dependent specific irradiance, so it contains equivalent information. Compared with our Sec. 2 analysis, this quantity is the directionality-augmented analog of the 𝒫 field, and, as it turns out, it would be sufficient to build a transport model for the Fig. 3 scenario. Out of prudence though, having learned from the insufficient generality of the 𝒫 field, we feel it is wise to build our Fig. 3 analysis on the quantity in parentheses within Eq. (41), the two-frequency spatial Wigner distribution (TFSWD):
$$W_z(\rho_+,\mathbf{k},\omega_+,\omega_-) \equiv \int d^2\rho_-\, \mathcal{E}_z(\rho_++\rho_-/2,\,\omega_++\omega_-/2)\, \mathcal{E}_z^*(\rho_+-\rho_-/2,\,\omega_+-\omega_-/2)\, e^{-i\mathbf{k}\cdot\rho_-},$$
from which the time-dependent specific irradiance can be obtained via
$$I_z(\rho_+,\mathbf{s},t) = \frac{1}{\lambda_0^2}\int\frac{d\omega_-}{2\pi}\int\frac{d\omega_+}{2\pi}\, W_z(\rho_+,2\pi\mathbf{s}/\lambda_0,\omega_+,\omega_-)\, e^{-i\omega_- t}.$$
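The bookkeeping in these definitions can be checked numerically. The following Python sketch, in one transverse dimension and with the frequency arguments suppressed, builds the spatial Wigner distribution of an assumed Gaussian field on a discrete grid and confirms that integrating over the wavevector argument returns the irradiance |E(ρ+)|², the discrete analog of recovering the short-time average irradiance from the specific irradiance above. All grid values are illustrative.

```python
import cmath, math

# 1D-transverse spatial Wigner distribution and its irradiance marginal:
# integrating W(x+, k) over k should return |E(x+)|^2. The Gaussian test
# field and grid parameters are illustrative assumptions.

N, dx = 64, 0.05
xs = [(i - N // 2) * dx for i in range(N)]   # difference-coordinate grid

def E(x):
    # hypothetical Gaussian field with a linear phase tilt
    return cmath.exp(-x**2 + 1j * 3 * x)

def W(xp, k):
    # W(x+, k) = sum_xm E(x+ + xm/2) E*(x+ - xm/2) exp(-i k xm) dx
    return sum(E(xp + xm / 2) * E(xp - xm / 2).conjugate()
               * cmath.exp(-1j * k * xm) * dx for xm in xs)

# marginal over a matched k-grid vs. the direct irradiance
dk = 2 * math.pi / (N * dx)
ks = [(i - N // 2) * dk for i in range(N)]
marg = sum(W(0.3, k) for k in ks) * dk / (2 * math.pi)
print(marg.real, abs(E(0.3))**2)   # the two values agree
```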

The merit of the TFSWD’s added generality can be seen by considering the space-time autocorrelation function,

$$\Gamma_z(\rho_1,\rho_2,t_1,t_2) \equiv \langle E_z(\rho_1,t_1)\, E_z^*(\rho_2,t_2)\rangle,$$
that is used in parabolic-approximation propagation theory through random media [24]. The time-dependent specific irradiance can be found from the space-time autocorrelation function, viz., we have that
$$I_z(\rho_+,\mathbf{s},t) = \int\frac{d^2\rho_-}{\lambda_0^2}\, \Gamma_z(\rho_++\rho_-/2,\,\rho_+-\rho_-/2,\,t,\,t)\, e^{-i2\pi\mathbf{s}\cdot\rho_-/\lambda_0},$$
but the converse is not true, i.e., the space-time autocorrelation function cannot in general be found from knowledge of the time-dependent specific irradiance alone. However, the space-time autocorrelation function is equivalent to the TFSWD because we have that
$$W_z(\rho_+,\mathbf{k},\omega_+,\omega_-) = \int d^2\rho_-\int dt_1\int dt_2\, \Gamma_z(\rho_++\rho_-/2,\,\rho_+-\rho_-/2,\,t_++t_-/2,\,t_+-t_-/2) \times e^{i(\omega_+t_-\,+\,\omega_- t_+)\,-\,i\mathbf{k}\cdot\rho_-},$$
where t+ ≡ (t1 + t2)/2, t− ≡ t1 − t2, and
$$\Gamma_z(\rho_++\rho_-/2,\,\rho_+-\rho_-/2,\,t_++t_-/2,\,t_+-t_-/2) = \int\frac{d^2\mathbf{k}}{(2\pi)^2}\int\frac{d\omega_+}{2\pi}\int\frac{d\omega_-}{2\pi}\, W_z(\rho_+,\mathbf{k},\omega_+,\omega_-) \times e^{-i(\omega_+t_-\,+\,\omega_- t_+)\,+\,i\mathbf{k}\cdot\rho_-}.$$

For E-field propagation through an arbitrary linear transformation of the form

$$E_z(\rho,t) = \int d\tau\int d^2\rho'\, E_{z'}(\rho',\tau)\, h(\rho,\rho';t,\tau),$$
the input’s space-time autocorrelation function suffices to determine the output’s space-time autocorrelation function, and hence the output-plane 𝒫 field. Moreover, the same must be true for the TFSWD. Because knowledge of the time-dependent specific irradiance alone does not in general determine the space-time autocorrelation function, it does not suffice to characterize second-moment propagation through an arbitrary linear transformation of the form given in Eq. (48), i.e., it cannot determine the output 𝒫 field. For example, the time-dependent specific irradiance cannot account for propagation that involves a linear time-invariant filtering in time, e.g., through a transparency that has a frequency-dependent transmissivity. So, although this capability is not fully exploited in this paper, by building our theory around the TFSWD we are prepared to handle arbitrary linear transformations of the E field, rather than just those that can be characterized by the time-dependent specific irradiance. Note that the 6D light field,
$$I_z(\rho_+,\mathbf{s},\omega_+,t) \equiv \frac{1}{\lambda_0^2}\int\frac{d\omega_-}{2\pi}\, W_z(\rho_+,2\pi\mathbf{s}/\lambda_0,\omega_+,\omega_-)\, e^{-i\omega_- t},$$
would also suffice in this regard, as it is the time-domain inverse Fourier transform of the TFSWD.

The z-plane 𝒫 field can be found from that plane’s TFSWD as follows:

$$\mathcal{P}_z(\rho_+,\omega_-) = \int\frac{d\omega_+}{2\pi}\int\frac{d^2\mathbf{k}}{(2\pi)^2}\, W_z(\rho_+,\mathbf{k},\omega_+,\omega_-).$$
From this result we see that the TFSWD allows us to realize the goal of analyzing occluded phasor-field imaging if we can: (1) propagate Wz(ρ+, k, ω+, ω−) through a z-plane field-transmission mask, whether that be a diffuser, deterministic occluder, or specular-plus-diffuser mask; and (2) propagate Wz(ρ+, k, ω+, ω−) through a distance L of Fresnel diffraction. All of these propagation calculations are done in Appendix A. For convenience, we summarize these results below:
  • Propagation through a diffuser:

    For propagation through a diffuser characterized by the impulse approximation in Eq. (13), we have

    $$W'_0(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \lambda_0^2\int\frac{d^2k'}{(2\pi)^2}\,W_0(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-).$$

  • Propagation through a deterministic occluder:

    With the mask’s spatial Wigner distribution defined as
    $$W_P(\boldsymbol\rho_+,\mathbf k) \equiv \int d^2\rho_-\,P(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,P^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\,e^{-i\mathbf k\cdot\boldsymbol\rho_-},$$
    we have

    $$W'_{L_1 - L_d}(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int\frac{d^2k'}{(2\pi)^2}\,W_{L_1 - L_d}(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-)\,W_P(\boldsymbol\rho_+,\mathbf k - \mathbf k').$$

  • Propagation through a specular-plus-diffuser mask:

    With F(ρ1) having nonzero mean 〈F(ρ1)〉 ≠ 0 and covariance
    $$\langle\Delta F(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,\Delta F^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\rangle = \lambda_0^2\,\mathcal T(\boldsymbol\rho_+)\,\delta(\boldsymbol\rho_-),$$
    where 0 ≤ 𝒯(ρ+) ≤ 1, we get

    $$W'_{L_1}(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int\frac{d^2k'}{(2\pi)^2}\,W_{L_1}(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-)\,W_{\langle F\rangle}(\boldsymbol\rho_+,\mathbf k - \mathbf k') + \lambda_0^2\,\mathcal T(\boldsymbol\rho_+)\int\frac{d^2k'}{(2\pi)^2}\,W_{L_1}(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-).$$

  • Fresnel diffraction:

    For Fresnel diffraction from the z = 0+ plane to the z = L1 − Ld plane, we get

    $$W_{L_1 - L_d}(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_0\!\left(\boldsymbol\rho_+ - \frac{c(L_1 - L_d)\mathbf k}{\omega_0},\mathbf k,\omega_+,\omega_-\right) e^{\,i[\omega_-(L_1 - L_d)/c]\left(1 + c^2|\mathbf k|^2/2\omega_0^2\right)}.$$
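To build intuition for the Fresnel-diffraction primitive, here is a minimal 1D numerical sketch of ours (not from the text): on a sampled Wigner function, unmodulated paraxial propagation acts as the shear W(ρ, k) → W(ρ − sk, k) with s = cL/ω0, and the resulting ρ-marginal spreads exactly as a diffracting beam should. All numerical values are illustrative choices.

```python
import numpy as np

# 1D analog of the Fresnel-diffraction primitive: W_out(rho, k) = W_in(rho - s*k, k),
# with s = c*L/omega0. The omega_-dependent phase factor is omitted, i.e., we
# look at unmodulated light. All parameter values below are illustrative.

rho = np.linspace(-12.0, 12.0, 2001)      # transverse-coordinate grid
k = np.linspace(-8.0, 8.0, 801)           # spatial-frequency grid
s = 0.5                                   # shear c*L/omega0 (arbitrary units)

sigma_rho, sigma_k = 1.0, 2.0             # widths of an illustrative beam
W0 = np.exp(-rho[:, None]**2 / (2 * sigma_rho**2)
            - k[None, :]**2 / (2 * sigma_k**2))

def fresnel_primitive(W, rho, k, s):
    """Apply W_out(rho, k) = W_in(rho - s*k, k) column by column."""
    out = np.empty_like(W)
    for j, kj in enumerate(k):
        out[:, j] = np.interp(rho - s * kj, rho, W[:, j], left=0.0, right=0.0)
    return out

WL = fresnel_primitive(W0, rho, k, s)

def rho_variance(W, rho):
    """Variance of the rho marginal (the beam's irradiance profile)."""
    I = W.sum(axis=1)
    return np.sum(I * rho**2) / np.sum(I)

# Free-space spreading: var_out = var_in + s^2 * var_k for this Gaussian model.
var_in, var_out = rho_variance(W0, rho), rho_variance(WL, rho)
```

The shear leaves each k column untouched (directionality is preserved), while the irradiance profile broadens from σρ² to σρ² + s²σk², the expected diffractive spreading.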

3.2. Occlusion-aided imaging

In Sec. 2 we noted that, in the paraxial limit, unoccluded imaging configurations without modulated light are unconditioned with respect to reconstructing the target mask’s albedo. Moreover, we showed that the addition of modulation enabled reconstruction of the target mask’s albedo at a resolution limited by the bandwidth of that modulation. What remains then is to examine the unmodulated and modulated cases for occluded geometries. For clarity and convenience, we will consider a simplified version of Fig. 3 in which the first occluder is absent and the screen at z = L1 is purely diffuse. In the NLoS analogy, this corresponds to a geometry in which a single occluding object is encountered in the hidden space only on the light’s return trip from a Lambertian hidden wall. Further convenience, without appreciable loss of generality, is afforded by our assuming that the laser light incident on the z = 0 plane is a +z-going plane wave of short-time average irradiance I0(t), and that the distances in Fig. 3 satisfy L1 = L2 = L, and Ld = L/2.

The TFSWD of the plane-wave laser light is easily shown to be

$$W_0(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_{\rm in}(\omega_+,\omega_-)\,(2\pi/\lambda_0)^2\,\delta(\mathbf k),$$
where
$$W_{\rm in}(\omega_+,\omega_-) = \lambda_0^2\int dt\,\sqrt{I_0(t)}\,e^{\,i(\omega_+ + \omega_-/2)t}\int du\,\sqrt{I_0(u)}\,e^{-i(\omega_+ - \omega_-/2)u}.$$
After the diffuser in the z = 0 plane we get
$$W'_0(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_{\rm in}(\omega_+,\omega_-),$$
and after propagation to the z = L plane, we find
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_{\rm in}(\omega_+,\omega_-)\,e^{\,i(\omega_- L/c)(1 + c^2|\mathbf k|^2/2\omega_0^2)}.$$
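As a sanity check of ours (not part of the original derivation), the ω+ marginal of Win reproduces λ0² times the input 𝒫 field, consistent with the 𝒫-field definition in Eq. (50):

$$\int\frac{d\omega_+}{2\pi}\,W_{\rm in}(\omega_+,\omega_-) = \lambda_0^2\int dt\int du\,\sqrt{I_0(t)\,I_0(u)}\,e^{\,i\omega_-(t+u)/2}\,\delta(t-u) = \lambda_0^2\int dt\,I_0(t)\,e^{\,i\omega_- t} = \lambda_0^2\,\mathcal P_0(\omega_-),$$

where the delta function arises from carrying out the ω+ integral first.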

At z = L1 this Wigner distribution encounters a diffuse target mask, i.e., one whose field-transmission function F(ρ1) has zero mean and covariance 〈ΔF(ρ1)ΔF*(ρ2)〉 = λ0² 𝒯[(ρ1 + ρ2)/2] δ(ρ1 − ρ2), which results in

$$W'_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \mathcal T(\boldsymbol\rho_+)\,W_{\rm in}(\omega_+,\omega_-)\,e^{\,i\omega_- L/c}\,\frac{2\pi i c}{\omega_- L}.$$
Fresnel propagation to z = 3L/2 now gives us
$$W_{3L/2}(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \mathcal T(\boldsymbol\rho_+ - cL\mathbf k/2\omega_0)\,W_{\rm in}(\omega_+,\omega_-)\,e^{\,i3\omega_- L/2c}\,e^{\,i\omega_- cL|\mathbf k|^2/4\omega_0^2}\,\frac{2\pi i c}{\omega_- L},$$
and passage through the occluder in that plane leads to
$$W'_{3L/2}(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_{\rm in}(\omega_+,\omega_-)\int\frac{d^2k'}{(2\pi)^2}\,\mathcal T(\boldsymbol\rho_+ - cL\mathbf k'/2\omega_0)\,e^{\,i3\omega_- L/2c}\,e^{\,i\omega_- cL|\mathbf k'|^2/4\omega_0^2}\,W_P(\boldsymbol\rho_+,\mathbf k - \mathbf k')\,\frac{2\pi i c}{\omega_- L}.$$
Fresnel propagation over another L/2 distance then gives
$$W_{2L}(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_{\rm in}(\omega_+,\omega_-)\int\frac{d^2k'}{(2\pi)^2}\,\mathcal T\big(\boldsymbol\rho_+ - cL(\mathbf k + \mathbf k')/2\omega_0\big)\,e^{\,i2\omega_- L/c}\,e^{\,i\omega_- cL(|\mathbf k|^2 + |\mathbf k'|^2)/4\omega_0^2}\,W_P(\boldsymbol\rho_+ - cL\mathbf k/2\omega_0,\mathbf k - \mathbf k')\,\frac{2\pi i c}{\omega_- L},$$
from which we get
$$\mathcal P_{2L}(\boldsymbol\rho_+,\omega_-) = \int\frac{d\omega_+}{2\pi}\,W_{\rm in}(\omega_+,\omega_-)\int\frac{d^2k}{(2\pi)^2}\int\frac{d^2k'}{(2\pi)^2}\,\mathcal T\big(\boldsymbol\rho_+ - cL(\mathbf k + \mathbf k')/2\omega_0\big)\,e^{\,i2\omega_- L/c}\,e^{\,i\omega_- cL(|\mathbf k|^2 + |\mathbf k'|^2)/4\omega_0^2}\,W_P(\boldsymbol\rho_+ - cL\mathbf k/2\omega_0,\mathbf k - \mathbf k')\,\frac{2\pi i c}{\omega_- L}.$$

Now, using

$$\mathcal P_0(\boldsymbol\rho_+,\omega_-) = \int\frac{d\omega_+}{2\pi}\int\frac{d^2k}{(2\pi)^2}\,W_0(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int dt\,I_0(t)\,e^{\,i\omega_- t},$$
and changing variables to k− = k − k′ and k+ = (k + k′)/2 we have
$$\mathcal P_{2L}(\boldsymbol\rho_+,\omega_-) = \lambda_0^2\,\mathcal P_0(\omega_-)\,e^{\,i2\omega_- L/c}\int\frac{d^2k_+}{(2\pi)^2}\int\frac{d^2k_-}{(2\pi)^2}\,\mathcal T(\boldsymbol\rho_+ - cL\mathbf k_+/\omega_0)\,e^{\,i\omega_- cL(2|\mathbf k_+|^2 + |\mathbf k_-|^2/2)/4\omega_0^2}\,W_P\big(\boldsymbol\rho_+ - cL(\mathbf k_+/2 + \mathbf k_-/4)/\omega_0,\mathbf k_-\big)\,\frac{2\pi i c}{\omega_- L},$$
where we have suppressed the ρ+ argument of 𝒫0(ρ+, ω−) because that field has no such dependence for the plane-wave source we have assumed. We define a new function
$$G(\boldsymbol\rho,\omega_-) \equiv \int\frac{d^2k_-}{(2\pi)^2}\,e^{\,i\omega_- cL|\mathbf k_-|^2/8\omega_0^2}\,W_P(\boldsymbol\rho/2 - cL\mathbf k_-/4\omega_0,\mathbf k_-)\,\frac{2\pi i c}{\omega_- L}.$$
With this definition we have
$$\mathcal P_{2L}(\boldsymbol\rho_+,\omega_-) = \lambda_0^2\,\mathcal P_0(\omega_-)\,e^{\,i2\omega_- L/c}\int\frac{d^2k_+}{(2\pi)^2}\,\mathcal T(\boldsymbol\rho_+ - cL\mathbf k_+/\omega_0)\,G(2\boldsymbol\rho_+ - cL\mathbf k_+/\omega_0,\omega_-)\,e^{\,i\omega_- cL|\mathbf k_+|^2/2\omega_0^2}.$$
Changing variables again, ρ̃ = ρ+ − cLk+/ω0, we get our final result
$$\mathcal P_{2L}(\boldsymbol\rho_+,\omega_-) = \mathcal P_0(\omega_-)\,e^{\,i2\omega_- L/c}\int d^2\tilde\rho\;\mathcal T(\tilde{\boldsymbol\rho})\,G(\boldsymbol\rho_+ + \tilde{\boldsymbol\rho},\omega_-)\,\frac{e^{\,i\omega_-|\boldsymbol\rho_+ - \tilde{\boldsymbol\rho}|^2/2cL}}{L^2}.$$
Owing to the Fresnel-propagation kernel in Eq. (68), which depends on ρ+ − ρ̃ whereas G depends on ρ+ + ρ̃, this result is a superposition integral with image inversion, rather than a convolution integral with image inversion.

To get to a simpler result that will afford us insight into the advantage of occlusion-aided imaging, we shall assume that the initial laser illumination is monochromatic, i.e., the optical-frequency field that illuminates the z = 0 plane is Re[E0(ρ0)e^{−iω0t}]. In this unmodulated case we can use the usual spatial Wigner distribution, i.e.,

$$W_{E_0}(\boldsymbol\rho_+,\mathbf k) \equiv \int d^2\rho_-\,E_0(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,E_0^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\,e^{-i\mathbf k\cdot\boldsymbol\rho_-},$$
of the z = 0-plane field, in lieu of the TFSWD. The propagation primitives given earlier for the TFSWD all apply to the spatial Wigner distribution function for the unmodulated case, with the only difference being that we set ω− = 0 in the Fresnel-diffraction primitive. Paralleling the development that led to Eq. (68), and assuming that E0(ρ0) = √I0 is a constant, we get
$$I_{2L}(\boldsymbol\rho_+) \equiv \langle|E_{2L}(\boldsymbol\rho_+)|^2\rangle = I_0\int d^2\tilde\rho\;\mathcal T(\tilde{\boldsymbol\rho})\,G(\boldsymbol\rho_+ + \tilde{\boldsymbol\rho}),$$
where
$$G(\boldsymbol\rho) \equiv \frac{\pi}{L^2}\int\frac{d^2k}{(2\pi)^2}\,W_P(\boldsymbol\rho/2 - cL\mathbf k/4\omega_0,\mathbf k),$$
and we have used the evanescence cutoff, |k| ≤ 2π/λ0, to justify replacing ∫ d²k I0/(2π)² with πI0/λ0².
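The image-formation equation above can be made concrete with a minimal 1D discrete sketch of ours (illustrative numbers, not from the text): a point-like albedo pattern at ρ̃ = +p, imaged through the superposition I(ρ+) ∝ ∫dρ̃ 𝒯(ρ̃)G(ρ+ + ρ̃) with a Gaussian-pinhole-shaped psf, produces an inverted image peaked near ρ+ = −p.

```python
import numpy as np

# 1D discrete analog of I_2L(rho) = I0 * integral d rho~ T(rho~) G(rho + rho~):
# a point-like target at rho~ = +p images, inverted, near rho = -p.
# The Gaussian psf shape follows Eq. (74); all numbers are illustrative.

rho = np.linspace(-5.0, 5.0, 1001)
d_rho = rho[1] - rho[0]

p = 2.0                                    # target location (illustrative)
T = np.exp(-(rho - p)**2 / (2 * 0.05**2))  # narrow, point-like albedo pattern

rho_res = 0.5                              # psf resolution (illustrative)
G = lambda r: np.exp(-np.pi * r**2 / rho_res**2)  # Gaussian-pinhole psf shape

I0 = 1.0
image = np.array([I0 * np.sum(T * G(r + rho)) * d_rho for r in rho])

peak = rho[np.argmax(image)]               # expect a peak near -p
```

The psf width rho_res sets the blur of the reconstructed albedo, mirroring the resolution discussion that follows.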

Equations (70) and (71) show that this unmodulated case offers no spatial information about 𝒯(ρ) in the absence of an occluder, i.e., we get G(ρ) = π/L² when P(ρ) = 1, as seen previously in Eq. (22). To quantify the spatial information afforded by the presence of an occluder in the unmodulated scenario, we consider two simple cases: the Gaussian pinhole

$$P_{\rm ph}(\boldsymbol\rho) = e^{-|\boldsymbol\rho|^2/2\rho_0^2},$$
and the Gaussian pinspeck,
$$P_{\rm ps}(\boldsymbol\rho) = 1 - e^{-|\boldsymbol\rho|^2/2\rho_0^2},$$
where ρ0 is the e−1/2-attenuation radius of the Gaussian functions. The Gaussian-pinhole camera can be analyzed with far less complication than our approach to obtaining Eqs. (70) and (71), but (after accounting for image inversion) its point-spread function (psf) Gph(ρ) is revealing. The Gaussian-pinspeck camera, on the other hand, is more relevant to the experiments of Xu et al. [12], but its psf Gps(ρ) is more complicated. In both cases, however, the Gaussian functions involved enable us to get closed-form psf results.

For the Gaussian pinhole, we find that

$$G_{\rm ph}(\boldsymbol\rho) = \frac{\pi\,\Omega^2}{L^2(1+\Omega^2)}\,\exp\!\left[-\frac{\Omega^2}{1+\Omega^2}\,\frac{|\boldsymbol\rho|^2}{4\rho_0^2}\right],$$
where k0 ≡ ω0/c = 2π/λ0 is the wave number at the optical frequency and Ω ≡ 4k0ρ0²/L is the Fresnel number for the pinhole’s propagation geometry. The spatial resolution of Gph(ρ) improves with decreasing ρ0 when Ω > 1, and degrades with decreasing ρ0 when Ω < 1. Thus the Gaussian pinhole’s resolution-optimized psf,
$$G_{\rm ph}^{\rm opt}(\boldsymbol\rho) = \frac{\pi\,\exp(-\pi|\boldsymbol\rho|^2/\lambda_0 L)}{2L^2},$$
is obtained when ρ0 = √(L/4k0) = √(λ0L/8π), i.e., when Ω = 1. The optimized psf’s spatial resolution—taken to be its e^{−π}-attenuation radius—is then √(λ0L), which is far superior to the 1.22ΔλL/D′ for the unoccluded, modulated case governed by Eq. (36). For example, with λ0 = 1 μm and L = 1 m the optimum spatial resolution of occlusion-aided unmodulated imaging is 1 mm, while that of unoccluded modulated imaging, with Δλ = 3 cm (Δω/2π = 10 GHz) and D′ = 10 cm, is 37 cm at L = 1 m. For comparison with the Gaussian pinspeck’s psf, it is worth noting that the Gaussian pinhole’s psf maintains its Gaussian shape for all values of its Fresnel number Ω, with only its overall amplitude Gph(0) and its spatial resolution ρres(Ω) ≡ √(4π(1+Ω²)) ρ0/Ω changing, i.e., we have that
$$G_{\rm ph}(\boldsymbol\rho)/G_{\rm ph}(0) = \exp\!\big[-\pi|\boldsymbol\rho|^2/\rho_{\rm res}^2(\Omega)\big],$$
for the Gaussian pinhole.
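The numbers quoted above can be verified directly; the short script below (ours, with the text's stated parameter values) also checks that the optimum pinhole radius indeed gives Ω = 1 and that the general resolution formula collapses to √(λ0L) there.

```python
import numpy as np

# Numerical check of the resolution comparison quoted in the text:
# occlusion-aided unmodulated imaging resolves sqrt(lambda0*L); the
# unoccluded modulated case resolves 1.22*Dlambda*L/D'.

lambda0 = 1e-6            # optical wavelength: 1 um
L = 1.0                   # standoff distance: 1 m
res_occluded = np.sqrt(lambda0 * L)           # -> 1 mm

Dlambda = 0.03            # modulation wavelength: 3 cm (10 GHz)
D_prime = 0.1             # aperture: 10 cm
res_modulated = 1.22 * Dlambda * L / D_prime  # -> ~37 cm

# Optimum pinhole: rho0_opt = sqrt(lambda0*L/(8*pi)) gives Fresnel number
# Omega = 4*k0*rho0^2/L = 1, and the resolution formula
# rho_res(Omega) = sqrt(4*pi*(1+Omega^2))*rho0/Omega reduces to sqrt(lambda0*L).
k0 = 2 * np.pi / lambda0
rho0_opt = np.sqrt(lambda0 * L / (8 * np.pi))
Omega = 4 * k0 * rho0_opt**2 / L
rho_res_opt = np.sqrt(4 * np.pi * (1 + Omega**2)) * rho0_opt / Omega
```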

For the Gaussian pinspeck, we get

$$G_{\rm ps}(\boldsymbol\rho) = \frac{\pi}{L^2}\left|\,1 - \frac{\Omega}{\sqrt{1+\Omega^2}}\,\exp\!\left[-\frac{\Omega}{1+\Omega^2}\,\frac{|\boldsymbol\rho|^2}{8\rho_0^2}\,(\Omega - i) - i\tan^{-1}(1/\Omega)\right]\right|^2.$$
This psf is a bit more complicated than what we found for the Gaussian pinhole. Nevertheless, it shows the expected result for a pinspeck camera, viz., that the image-bearing part of the psf is embedded in a uniform background term whose presence creates photodetection shot noise that degrades signal-to-noise ratio. As was the case for the Gaussian pinhole, we see that optimum spatial resolution occurs when Ω = 1, in which case we get
$$G_{\rm ps}^{\rm opt}(\boldsymbol\rho) = \frac{\pi}{L^2}\left|\,1 - \frac{\exp\!\big(-\pi|\boldsymbol\rho|^2(1-i)/2\lambda_0 L - i\pi/4\big)}{\sqrt 2}\,\right|^2.$$
On the other hand, unlike the Gaussian pinhole’s psf, the Gaussian pinspeck’s psf does not preserve its shape as the Fresnel number is varied. This is illustrated in Fig. 4, where we have plotted Gps(ρ)/Gps(∞) versus ρ/ρres(Ω) for ρ = (x, 0) and Ω = 0.1, 1, and 10, where ρres(Ω) is the Gaussian pinhole’s spatial resolution.

Fig. 4 Plots of Gps(ρ)/Gps(∞) for the Gaussian pinspeck versus ρ/ρres(Ω) for ρ = (x, 0) and Ω = 0.1, 1, and 10.


Note that in the near-field region, wherein Ω ≫ 1, Eq. (77) reduces to the geometric optics result,

$$G_{\rm ps}(\boldsymbol\rho)/G_{\rm ps}(\infty) = \big[1 - \exp(-|\boldsymbol\rho|^2/8\rho_0^2)\big]^2,$$
which is analogous to the geometric optics treatment used by Xu et al. [12] and Thrampoulidis et al. [13] for the hard-aperture, circular occluder
$$P(\boldsymbol\rho) = {\rm circ}(2|\boldsymbol\rho|/d) \equiv \begin{cases}1, & {\rm for\ } |\boldsymbol\rho| \le d/2,\\ 0, & {\rm otherwise}.\end{cases}$$
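The near-field limit claimed above is easy to verify numerically. The sketch below (ours; parameter values illustrative) evaluates the closed-form pinspeck psf ratio of Eq. (77) at a large Fresnel number and compares it with the geometric-optics form of Eq. (79).

```python
import numpy as np

# Check that the Gaussian-pinspeck psf ratio of Eq. (77) approaches the
# geometric-optics form [1 - exp(-|rho|^2/8 rho0^2)]^2 when the Fresnel
# number Omega is large. Here x = |rho|^2/(8*rho0^2); values illustrative.

def gps_ratio(x, Omega):
    """G_ps(rho)/G_ps(inf) from Eq. (77)."""
    amp = Omega / np.sqrt(1 + Omega**2)
    phase = -Omega / (1 + Omega**2) * x * (Omega - 1j) - 1j * np.arctan(1 / Omega)
    return np.abs(1 - amp * np.exp(phase))**2

def gps_geometric(x):
    """Geometric-optics (near-field) limit."""
    return (1 - np.exp(-x))**2

x = np.linspace(0.0, 3.0, 301)
err = np.max(np.abs(gps_ratio(x, Omega=100.0) - gps_geometric(x)))
```

At Ω = 100 the two curves agree to better than a percent over the plotted range, while at Ω ≲ 1 they differ markedly, consistent with Fig. 4's shape changes.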

4. Discussion

In summary, we have presented a complete light transport model, in phasor-field terms, capable of describing propagation through a transmissive, paraxial geometry—including intermediate occluders and a specular-plus-diffuser mask—that serves as an unfolded proxy for occlusion-aided, three-bounce NLoS imaging. For imaging purely diffuse objects without intermediate occluders, we phrased our analysis in terms of the 𝒫 field and provided a straightforward derivation of its behavior, analogous to that reported by Reza et al. [9]. To handle more general scenarios, we introduced and presented propagation primitives for the two-frequency spatial Wigner distribution (TFSWD). With these in hand, we turned our attention to the task of diffuse-object, occlusion-aided imaging and arrived at closed-form results for occlusion-aided imaging with unmodulated light using either a Gaussian-pinhole occluder or a Gaussian-pinspeck occluder. Our results show that imaging unoccluded diffuse objects with unmodulated light is not possible in the paraxial regime, but phasor-field imaging provides techniques for image formation if modulated light is used or object occlusion can be exploited. For imaging non-occluded diffuse objects with modulated light, spatial resolution is set by the diffraction limit at the modulation frequency. For occlusion-aided imaging of the same object with unmodulated light, spatial resolution is set by the optical-frequency diffraction limit of the occluder. Although the latter can be far superior to the former, blind determination of the occluder’s characteristics poses a challenge for exploiting its presence, and even with a known occluder, imaging performance will be limited by its size and shape.

There are many avenues for future research that build upon the work we have reported. Here we shall list just a few of the possibilities. First, because diffuse transmission (and, for the NLoS case, diffuse reflection) creates laser speckle, our assumption that we can measure the speckle-averaged, short-time average irradiance needs to be examined. Toward that end, it is worth noting that Liu et al.’s experiments [10] did not suffer any obvious ill effects of laser speckle. Second, it remains to be seen how occlusion-aided imaging with modulated light might benefit from synergy between the approaches we have examined. A third avenue to pursue is evaluating 𝒫-field imaging of specular objects. Next, because Liu et al. [10] used ps-duration pulsed illumination to obtain three-dimensional scene reconstructions—and such illumination violates our quasimonochromatic-light assumption—a fourth item on our plate would be to treat the pulsed case, including the value of synthesizing desirable input 𝒫 fields. Fifth on our list is to extend our propagation primitives beyond the paraxial regime, i.e., to replace Fresnel diffraction with Rayleigh-Sommerfeld diffraction. Moreover, we need to address NLoS imaging explicitly, rather than its transmissive proxy, and include more than just three-bounce returns. It is also possible—and potentially interesting—to extend our TFSWD transport model to account for arbitrary linear transformations of the E field of the type given by Eq. (48). Finally, the work we have presented could be fruitfully specialized to sinusoidal E-field modulation and wedded to the 𝒫-field optics introduced and demonstrated in Reza et al. [11].

A. Propagation calculations

In this appendix we provide derivations for the TFSWD’s propagation primitives given earlier in Eqs. (51)–(54).

Propagation through a diffuser:

Consider propagation through one of our diffusers: assume that we know Wz(ρ+, k, ω+, ω−) and we want to find W′z(ρ+, k, ω+, ω−), where

$$\mathcal E'_z(\boldsymbol\rho,\omega) = \mathcal E_z(\boldsymbol\rho,\omega)\,e^{\,i(\omega_0+\omega)h_z(\boldsymbol\rho)/c} \approx \mathcal E_z(\boldsymbol\rho,\omega)\,e^{\,i\omega_0 h_z(\boldsymbol\rho)/c},$$
with
$$\big\langle e^{\,i\omega_0[h_z(\boldsymbol\rho) - h_z(\boldsymbol\rho')]/c}\big\rangle = \lambda_0^2\,\delta(\boldsymbol\rho - \boldsymbol\rho').$$
In this case we immediately get
$$W'_z(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_-\,\langle\mathcal E'_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2,\omega_+ + \omega_-/2)\,\mathcal E_z'^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2,\omega_+ - \omega_-/2)\rangle\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}$$
$$= \int d^2\rho_-\,\langle\mathcal E_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2,\omega_+ + \omega_-/2)\,\mathcal E_z^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2,\omega_+ - \omega_-/2)\rangle\,\big\langle e^{\,i\omega_0[h_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2) - h_z(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)]/c}\big\rangle\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}$$
$$= \lambda_0^2\,\langle\mathcal E_z(\boldsymbol\rho_+,\omega_+ + \omega_-/2)\,\mathcal E_z^*(\boldsymbol\rho_+,\omega_+ - \omega_-/2)\rangle$$
$$= \lambda_0^2\int\frac{d^2k'}{(2\pi)^2}\,W_z(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-).$$
Physically, the k dependence of the TFSWD carries the field’s spatial-frequency information, i.e., its directionality. The result we have just obtained shows that the diffuser has completely destroyed the directionality of ℰz(ρ, ω), because W′z(ρ+, k, ω+, ω−) is independent of k.

Propagation through a deterministic occluder:

Now consider propagation through a deterministic transmission mask. Here we want to find W′z(ρ+, k, ω+, ω−) given Wz(ρ+, k, ω+, ω−) and a deterministic P(ρ), where

$$\mathcal E'_z(\boldsymbol\rho,\omega) = \mathcal E_z(\boldsymbol\rho,\omega)\,P(\boldsymbol\rho).$$
For this case we have that
$$W'_z(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_-\,\langle\mathcal E'_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2,\omega_+ + \omega_-/2)\,\mathcal E_z'^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2,\omega_+ - \omega_-/2)\rangle\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}$$
$$= \int d^2\rho_-\,\langle\mathcal E_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2,\omega_+ + \omega_-/2)\,\mathcal E_z^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2,\omega_+ - \omega_-/2)\rangle\,P(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,P^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}$$
$$= \int\frac{d^2k'}{(2\pi)^2}\,W_z(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-)\int d^2\rho_-\,P(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,P^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\,e^{-i(\mathbf k - \mathbf k')\cdot\boldsymbol\rho_-}$$
$$= \int\frac{d^2k'}{(2\pi)^2}\,W_z(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-)\,W_P(\boldsymbol\rho_+,\mathbf k - \mathbf k'),$$
where
$$W_P(\boldsymbol\rho_+,\mathbf k) \equiv \int d^2\rho_-\,P(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,P^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}$$
is the spatial Wigner distribution of P(ρ). In words, Eq. (91) shows that multiplying ℰz(ρ, ω) by a deterministic field-transmission mask implies that W′z(ρ+, k, ω+, ω−) is obtained from a k-space convolution of Wz(ρ+, k, ω+, ω−) with the field-transmission mask’s spatial Wigner distribution. Moreover, Eq. (92), together with Eq. (50), immediately leads to
$$\mathcal P'_z(\boldsymbol\rho_+,\omega_-) = \int\frac{d\omega_+}{2\pi}\int\frac{d^2k}{(2\pi)^2}\,W'_z(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-)$$
$$= \int\frac{d\omega_+}{2\pi}\int\frac{d^2k}{(2\pi)^2}\int\frac{d^2k'}{(2\pi)^2}\,W_z(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-)\,W_P(\boldsymbol\rho_+,\mathbf k - \mathbf k')$$
$$= \int\frac{d\omega_+}{2\pi}\int\frac{d^2k'}{(2\pi)^2}\,W_z(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-)\,|P(\boldsymbol\rho_+)|^2 = \mathcal P_z(\boldsymbol\rho_+,\omega_-)\,|P(\boldsymbol\rho_+)|^2,$$
as could have been directly obtained from Eq. (87) and the 𝒫-field’s definition.
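The key identity used in the last step—integrating the mask's spatial Wigner distribution over k recovers the intensity transmission |P(ρ+)|²—can be illustrated numerically. The sketch below (ours, a 1D analog with an illustrative Gaussian pinhole) computes W_P row by row with an FFT and checks the k marginal.

```python
import numpy as np

# 1D numerical illustration of: integral dk/(2*pi) W_P(rho_+, k) = |P(rho_+)|^2,
# with an illustrative Gaussian pinhole P(rho) = exp(-rho^2/2).

P = lambda rho: np.exp(-rho**2 / 2)

N = 512
d = 16.0 / N                               # rho_- grid spacing
rho_minus = (np.arange(N) - N // 2) * d    # symmetric grid, 0 at index N//2
rho_plus = np.linspace(-3.0, 3.0, 61)

dk = 2 * np.pi / (N * d)                   # conjugate k-grid spacing
marginal = np.empty_like(rho_plus)
for i, rp in enumerate(rho_plus):
    g = P(rp + rho_minus / 2) * np.conj(P(rp - rho_minus / 2))
    # W_P(rho_+, k) = FFT of g over rho_-; ifftshift puts rho_- = 0 at
    # index 0 so the DFT matches the continuous transform convention.
    W_row = np.fft.fft(np.fft.ifftshift(g)) * d
    marginal[i] = np.real(W_row.sum() * dk / (2 * np.pi))

target = np.abs(P(rho_plus))**2            # marginal should equal |P|^2
```

The agreement is exact up to floating-point error, because summing a DFT over all frequencies returns the ρ− = 0 sample, i.e., P(ρ+)P*(ρ+).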

Propagation through a specular-plus-diffuser mask:

Combining the approaches for the diffuser and deterministic transmission mask allows us to model the propagation through a specular-plus-diffuser mask. We take such a mask to be a multiplicative random process F(ρ1) having nonzero mean 〈F(ρ1)〉 ≠ 0, and covariance 〈ΔF(ρ+ + ρ−/2)ΔF*(ρ+ − ρ−/2)〉 = λ0² 𝒯(ρ+) δ(ρ−), where 0 ≤ 𝒯(ρ+) ≤ 1 and ΔF(ρ) ≡ F(ρ) − 〈F(ρ)〉. The propagation analysis follows from combining the two previous analyses:

$$W'_z(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_-\,\langle\mathcal E'_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2,\omega_+ + \omega_-/2)\,\mathcal E_z'^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2,\omega_+ - \omega_-/2)\rangle\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}$$
$$= \int d^2\rho_-\,\langle\mathcal E_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2,\omega_+ + \omega_-/2)\,\mathcal E_z^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2,\omega_+ - \omega_-/2)\rangle\,\langle F(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,F^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\rangle\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}.$$
From expanding F(ρ) into a sum of its (deterministic) mean and zero-mean random portions, it follows that
$$W'_z(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_-\,\langle\mathcal E_z(\boldsymbol\rho_+ + \boldsymbol\rho_-/2,\omega_+ + \omega_-/2)\,\mathcal E_z^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2,\omega_+ - \omega_-/2)\rangle\,\Big(\langle F(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\rangle\,\langle F^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\rangle + \langle\Delta F(\boldsymbol\rho_+ + \boldsymbol\rho_-/2)\,\Delta F^*(\boldsymbol\rho_+ - \boldsymbol\rho_-/2)\rangle\Big)\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}$$
$$= \int\frac{d^2k'}{(2\pi)^2}\,W_z(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-)\,W_{\langle F\rangle}(\boldsymbol\rho_+,\mathbf k - \mathbf k') + \lambda_0^2\,\mathcal T(\boldsymbol\rho_+)\int\frac{d^2k'}{(2\pi)^2}\,W_z(\boldsymbol\rho_+,\mathbf k',\omega_+,\omega_-).$$
Fresnel diffraction:

Our final task is to find WL(ρ+, k, ω+, ω−) when

$$\mathcal E_L(\boldsymbol\rho_L,\omega) = \int d^2\rho_0\,\mathcal E_0(\boldsymbol\rho_0,\omega)\,\frac{(\omega_0+\omega)\,e^{\,i(\omega_0+\omega)(L/c + |\boldsymbol\rho_L - \boldsymbol\rho_0|^2/2cL)}}{i2\pi cL},$$
i.e., for Fresnel diffraction over a distance L [25]. This calculation turns out to be more complicated than its predecessors in this section. We start from
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_-\int d^2\rho_0\int d^2\rho'_0\,\langle\mathcal E_0(\boldsymbol\rho_0,\omega_+ + \omega_-/2)\,\mathcal E_0^*(\boldsymbol\rho'_0,\omega_+ - \omega_-/2)\rangle\,e^{\,i\omega_- L/c}\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}\,\frac{(\omega_0 + \omega_+ + \omega_-/2)\,e^{\,i(\omega_0+\omega_++\omega_-/2)|\boldsymbol\rho_+ + \boldsymbol\rho_-/2 - \boldsymbol\rho_0|^2/2cL}}{i2\pi cL}\,\frac{(\omega_0 + \omega_+ - \omega_-/2)\,e^{-i(\omega_0+\omega_+-\omega_-/2)|\boldsymbol\rho_+ - \boldsymbol\rho_-/2 - \boldsymbol\rho'_0|^2/2cL}}{-i2\pi cL}.$$
Exploiting Δω ≪ ω0, and making the coordinate transformation from ρ0 and ρ′0 to ρ0+ ≡ (ρ0 + ρ′0)/2 and ρ0− ≡ ρ0 − ρ′0, we can reduce Eq. (101) to
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_-\int d^2\rho_{0_+}\int d^2\rho_{0_-}\,\langle\mathcal E_0(\boldsymbol\rho_{0_+} + \boldsymbol\rho_{0_-}/2,\omega_+ + \omega_-/2)\,\mathcal E_0^*(\boldsymbol\rho_{0_+} - \boldsymbol\rho_{0_-}/2,\omega_+ - \omega_-/2)\rangle\,\frac{e^{\,i\omega_- L/c}}{(\lambda_0 L)^2}\,e^{\,i(\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})\cdot(\boldsymbol\rho_- - \boldsymbol\rho_{0_-})/cL}\,e^{\,i\omega_-(|\boldsymbol\rho_+ - \boldsymbol\rho_{0_+}|^2 + |\boldsymbol\rho_- - \boldsymbol\rho_{0_-}|^2/4)/2cL}\,e^{-i\mathbf k\cdot\boldsymbol\rho_-}.$$
Rearranging terms allows us to put the ρ− integral inside the ρ0+ and ρ0− integrals, i.e.,
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_{0_+}\int d^2\rho_{0_-}\,\langle\mathcal E_0(\boldsymbol\rho_{0_+} + \boldsymbol\rho_{0_-}/2,\omega_+ + \omega_-/2)\,\mathcal E_0^*(\boldsymbol\rho_{0_+} - \boldsymbol\rho_{0_-}/2,\omega_+ - \omega_-/2)\rangle\,\frac{e^{\,i\omega_- L/c}}{(\lambda_0 L)^2}\,e^{-i(\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})\cdot\boldsymbol\rho_{0_-}/cL}\,e^{\,i\omega_-(|\boldsymbol\rho_+ - \boldsymbol\rho_{0_+}|^2/2cL + |\boldsymbol\rho_{0_-}|^2/8cL)}\int d^2\rho_-\,e^{\,i\omega_-|\boldsymbol\rho_-|^2/8cL}\,e^{-i[\mathbf k - (\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})/cL + \omega_-\boldsymbol\rho_{0_-}/4cL]\cdot\boldsymbol\rho_-}.$$
Performing the ρ− integral then yields
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_{0_+}\int d^2\rho_{0_-}\,\langle\mathcal E_0(\boldsymbol\rho_{0_+} + \boldsymbol\rho_{0_-}/2,\omega_+ + \omega_-/2)\,\mathcal E_0^*(\boldsymbol\rho_{0_+} - \boldsymbol\rho_{0_-}/2,\omega_+ - \omega_-/2)\rangle\,\frac{e^{\,i\omega_- L/c}}{(\lambda_0 L)^2}\,e^{-i(\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})\cdot\boldsymbol\rho_{0_-}/cL}\,e^{\,i\omega_-|\boldsymbol\rho_+ - \boldsymbol\rho_{0_+}|^2/2cL}\,e^{\,i\omega_-|\boldsymbol\rho_{0_-}|^2/8cL}\,\frac{i8\pi cL}{\omega_-}\,e^{-i2cL|\mathbf k - (\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})/cL + \omega_-\boldsymbol\rho_{0_-}/4cL|^2/\omega_-},$$
which, after some terms cancel, gives
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = \int d^2\rho_{0_+}\int d^2\rho_{0_-}\,\langle\mathcal E_0(\boldsymbol\rho_{0_+} + \boldsymbol\rho_{0_-}/2,\omega_+ + \omega_-/2)\,\mathcal E_0^*(\boldsymbol\rho_{0_+} - \boldsymbol\rho_{0_-}/2,\omega_+ - \omega_-/2)\rangle\,\frac{e^{\,i\omega_- L/c}}{(\lambda_0 L)^2}\,e^{\,i\omega_-|\boldsymbol\rho_+ - \boldsymbol\rho_{0_+}|^2/2cL}\,e^{-i2cL|\mathbf k - (\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})/cL|^2/\omega_-}\,e^{-i\mathbf k\cdot\boldsymbol\rho_{0_-}}\,\frac{i8\pi cL}{\omega_-}$$
$$= \int d^2\rho_{0_+}\,W_0(\boldsymbol\rho_{0_+},\mathbf k,\omega_+,\omega_-)\,\frac{e^{\,i\omega_- L/c}}{(\lambda_0 L)^2}\,e^{\,i\omega_-|\boldsymbol\rho_+ - \boldsymbol\rho_{0_+}|^2/2cL}\,e^{-i2cL|\mathbf k - (\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})/cL|^2/\omega_-}\,\frac{i8\pi cL}{\omega_-}.$$

The term

$$\frac{e^{-i2cL|\mathbf k - (\omega_0+\omega_+)(\boldsymbol\rho_+ - \boldsymbol\rho_{0_+})/cL|^2/\omega_-}\,(i8\pi cL/\omega_-)}{(\lambda_0 L)^2}$$
in Eq. (106)’s integrand behaves like the impulse δ[ρ0+ − ρ+ + kcL/(ω0 + ω+)]. This delta-function behavior follows because: (1) the term in question is a highly oscillatory function outside of a narrow slow-oscillation region that is centered at ρ+ − kcL/(ω0 + ω+) with nominal width √(ω−cL/2)/(ω0 + ω+), and ω0 ≫ max |ω+| implies that it integrates to one; and (2) the other ρ0+-dependent terms in Eq. (106) are the oscillatory term, exp(iω−|ρ+ − ρ0+|²/2cL), which varies much more slowly than its predecessor, because ω0 ≫ max |ω−|, and the Wigner distribution, whose ρ0+ dependence can reasonably be assumed to be nearly constant over regions of diameter √(ω−cL/2)/(ω0 + ω+). So, using the delta-function approximation in Eq. (106), we get
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_0\big(\boldsymbol\rho_+ - cL\mathbf k/(\omega_0+\omega_+),\mathbf k,\omega_+,\omega_-\big)\,e^{\,i(\omega_- L/c)\left(1 + c^2|\mathbf k|^2/2(\omega_0+\omega_+)^2\right)}.$$
Finally, again making use of ω0 ≫ max |ω+|, we have
$$W_L(\boldsymbol\rho_+,\mathbf k,\omega_+,\omega_-) = W_0(\boldsymbol\rho_+ - cL\mathbf k/\omega_0,\mathbf k,\omega_+,\omega_-)\,e^{\,i(\omega_- L/c)(1 + c^2|\mathbf k|^2/2\omega_0^2)}.$$
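The delta-function approximation invoked above can be checked numerically in a 1D analog of ours (illustrative chirp rate and test function, not parameters from the text): a suitably normalized quadratic-phase kernel integrates to one and sifts a slowly varying function at its stationary point.

```python
import numpy as np

# 1D check of the delta-function approximation: for a large chirp rate a, the
# normalized quadratic-phase kernel
#   K(x) = sqrt(a/pi) * exp(i*pi/4) * exp(-i*a*(x - x0)^2)
# integrates to ~1 and sifts a slowly varying g(x) like delta(x - x0).
# Chirp rate and test function below are illustrative choices.

a, x0 = 100.0, 0.3
x = np.linspace(-8.0, 8.0, 200001)
dx = x[1] - x[0]
K = np.sqrt(a / np.pi) * np.exp(1j * np.pi / 4) * np.exp(-1j * a * (x - x0)**2)
g = np.exp(-x**2 / 2)          # smooth on the kernel's 1/sqrt(a) width scale

unit_mass = np.sum(K) * dx     # should be close to 1
sifted = np.sum(K * g) * dx    # should be close to g(x0)
```

The residual errors scale like 1/a, mirroring the ω0 ≫ max |ω±| conditions that justify the approximation in the text.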

As a consistency check on Eq. (108), let us use it to calculate 𝒫L(ρ+, ω) when z = 0 illumination with TFSWD W0(ρ0+, k, ω+, ω) passes through the diffuser specified in Eq. (81) before undergoing Fresnel diffraction over a distance L. We then have that

$$\mathcal P_L(\boldsymbol\rho_+,\omega_-) = \int\frac{d\omega_+}{2\pi}\int\frac{d^2k}{(2\pi)^2}\,W'_0(\boldsymbol\rho_+ - cL\mathbf k/\omega_0,\mathbf k,\omega_+,\omega_-)\,e^{\,i(\omega_- L/c)(1 + c^2|\mathbf k|^2/2\omega_0^2)}.$$
Using Eq. (86) now gives us
$$\mathcal P_L(\boldsymbol\rho_+,\omega_-) = \lambda_0^2\int\frac{d\omega_+}{2\pi}\int\frac{d^2k}{(2\pi)^2}\int\frac{d^2k'}{(2\pi)^2}\,W_0(\boldsymbol\rho_+ - cL\mathbf k/\omega_0,\mathbf k',\omega_+,\omega_-)\,e^{\,i(\omega_- L/c)(1 + c^2|\mathbf k|^2/2\omega_0^2)}.$$
Changing variables so that k = ω0(ρ+ − ρ0)/cL leaves us with
$$\mathcal P_L(\boldsymbol\rho_+,\omega_-) = \int\frac{d\omega_+}{2\pi}\int d^2\rho_0\int\frac{d^2k'}{(2\pi)^2}\,W_0(\boldsymbol\rho_0,\mathbf k',\omega_+,\omega_-)\,\frac{e^{\,i(\omega_- L/c)(1 + |\boldsymbol\rho_+ - \boldsymbol\rho_0|^2/2L^2)}}{L^2},$$
which reduces to the result from Sec. 2,
$$\mathcal P_L(\boldsymbol\rho_+,\omega_-) = \int d^2\rho_0\,\mathcal P_0(\boldsymbol\rho_0,\omega_-)\,\frac{e^{\,i\omega_- L/c}\,e^{\,i\omega_-|\boldsymbol\rho_+ - \boldsymbol\rho_0|^2/2cL}}{L^2},$$
by virtue of Eq. (50).

Funding

DARPA REVEAL program (HR0011-16-C-0030).

References

1. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using ultrafast transient imaging,” Int. J. Comput. Vision 95, 13–28 (2011). [CrossRef]  

2. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012). [CrossRef]   [PubMed]  

3. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., pp. 3222–3229 (2014).

4. M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015). [CrossRef]   [PubMed]  

5. G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–27 (2015). [CrossRef]  

6. A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35, 1–12 (2016). [CrossRef]  

7. J. Klein, M. Laurenzis, and M. Hullin, “Transient imaging for real-time tracking around a corner,” Proc. SPIE 9988, 998802 (2016). [CrossRef]  

8. M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555, 338–341 (2018). [CrossRef]  

9. S. A. Reza, M. La Manna, and A. Velten, “A physical light transport model for non-line-of-sight imaging applications,” arXiv:1802.1823 [physics.optics].

10. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Virtual wave optics for non-line-of-sight imaging,” arXiv:1810.07535 [cs.CV].

11. S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Wave-like properties of phasor fields: experimental demonstrations,” arXiv:1904.01565 [physics.optics].

12. F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018). [CrossRef]   [PubMed]  

13. C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018). [CrossRef]  

14. The short-time average z-plane irradiance is the instantaneous irradiance averaged over a time Ta satisfying ω0Ta ≫ 1 and ΔωTa ≪ 1.

15. In what follows, integrals without explicit limits are over the integration variable’s entire domain.

16. Because h0(ρ) is a zero-mean Gaussian process, its samples at ρ0 and ρ′0 are zero-mean jointly Gaussian random variables whose joint characteristic function is as given in Eq. (12).

17. A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 1: Single Scattering and Transport Theory (Academic, New York, 1978).

18. A. Gershun, “The light field,” J. Math. Phys. 18, 51–151 (1939). [CrossRef]  

19. E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in M. S. Landy and J. A. Movshon, eds., Computational Models of Visual Processing, (MIT Press, 1991), pp. 3–20.

20. M. Levoy and P. Hanrahan, “Light field rendering,” in Proc. SIGGRAPH (ACM, New York, NY, USA, 1996), pp. 31–42.

21. A. Walther, “Radiometry and coherence,” J. Opt. Soc. Am. 58, 1256 (1968). [CrossRef]  

22. M. J. Bastiaans, “Wigner distribution and its application to first-order optics,” J. Opt. Soc. Am. 69, 1710–1716 (1980). [CrossRef]  

23. M. A. Alonso, “Wigner functions in optics: describing beams as ray bundles and pulses as particles,” Adv. Opt. Photon. 3, 272–365 (2011). [CrossRef]  

24. A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 2: Multiple Scattering, Turbulence, Rough Surfaces, and Remote Sensing (Academic, New York, 1978).

25. For notational convenience, we have assumed that the diffraction takes place between the z = 0 and z = L planes, but the result we obtain will apply for +z-going Fresnel diffraction over a distance L starting from an arbitrary z plane.


Lindell, D. B.

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555, 338–341 (2018).
[Crossref]

Liu, X.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Virtual wave optics for non-line-of-sight imaging,” arXiv:1810.07535 [cs.CV].

Nam, J. H.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Virtual wave optics for non-line-of-sight imaging,” arXiv:1810.07535 [cs.CV].

O’Toole, M.

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555, 338–341 (2018).
[Crossref]

Raskar, R.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35, 1–12 (2016).
[Crossref]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using ultrafast transient imaging,” Int. J. Comput. Vision 95, 13–28 (2011).
[Crossref]

Reza, S. A

S. A Reza, M. La Manna, and A. Velten, “A physical light transport model for non-line-of-sight imaging applications,” arXiv:1802.1823 [physics.optics].

Reza, S. A.

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Virtual wave optics for non-line-of-sight imaging,” arXiv:1810.07535 [cs.CV].

S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Wave-like properties of phasor fields: experimental demonstrations,” arXiv:190401565 [physics.optics].

Shapiro, J. H.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018).
[Crossref] [PubMed]

Shi, B.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35, 1–12 (2016).
[Crossref]

Shulkind, G.

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018).
[Crossref] [PubMed]

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

Thrampoulidis, C.

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018).
[Crossref] [PubMed]

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

Tonolini, F.

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–27 (2015).
[Crossref]

Torralba, A.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018).
[Crossref] [PubMed]

Tosi, A.

Veeraraghavan, A.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Velten, A.

M. Buttafava, J. Zeman, A. Tosi, K. Eliceiri, and A. Velten, “Non-line-of-sight imaging using a time-gated single photon avalanche diode,” Opt. Express 23, 20997–21011 (2015).
[Crossref] [PubMed]

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Wave-like properties of phasor fields: experimental demonstrations,” arXiv:190401565 [physics.optics].

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Virtual wave optics for non-line-of-sight imaging,” arXiv:1810.07535 [cs.CV].

S. A Reza, M. La Manna, and A. Velten, “A physical light transport model for non-line-of-sight imaging applications,” arXiv:1802.1823 [physics.optics].

Walther, A.

Wetzstein, G.

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555, 338–341 (2018).
[Crossref]

Willwacher, T.

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Wong, F. N. C.

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018).
[Crossref] [PubMed]

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

Wornell, G. W.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018).
[Crossref] [PubMed]

Xiao, L.

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., pp. 3222–3229 (2014).

Xu, F.

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

F. Xu, G. Shulkind, C. Thrampoulidis, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Revealing hidden scenes by photon-efficient occlusion-based opportunistic active imaging,” Opt. Express 26, 9945 (2018).
[Crossref] [PubMed]

Zeman, J.

Zhao, H.

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35, 1–12 (2016).
[Crossref]

ACM Trans. Graph. (1)

A. Kadambi, H. Zhao, B. Shi, and R. Raskar, “Occluded imaging with time-of-flight sensors,” ACM Trans. Graph. 35, 1–12 (2016).
[Crossref]

Adv. Opt. Photon. (1)

IEEE Trans. Comput. Imag. (1)

C. Thrampoulidis, G. Shulkind, F. Xu, W. T. Freeman, J. H. Shapiro, A. Torralba, F. N. C. Wong, and G. W. Wornell, “Exploiting occlusion in non-line-of-sight active imaging,” IEEE Trans. Comput. Imag. 4, 419 (2018).
[Crossref]

Int. J. Comput. Vision (1)

A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using ultrafast transient imaging,” Int. J. Comput. Vision 95, 13–28 (2011).
[Crossref]

J. Math. Phys. (1)

A. Gershun, “The light field,” J. Math. Phys. 18, 51–151 (1939).
[Crossref]

J. Opt. Soc. Am. (2)

Nat. Commun. (1)

A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, “Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging,” Nat. Commun. 3, 745 (2012).
[Crossref] [PubMed]

Nat. Photonics (1)

G. Gariepy, F. Tonolini, R. Henderson, J. Leach, and D. Faccio, “Detection and tracking of moving objects hidden from view,” Nat. Photonics 10, 23–27 (2015).
[Crossref]

Nature (1)

M. O’Toole, D. B. Lindell, and G. Wetzstein, “Confocal non-line-of-sight imaging based on the light-cone transform,” Nature 555, 338–341 (2018).
[Crossref]

Opt. Express (2)

Proc. SPIE (1)

J. Klein, M. Laurenzis, and M. Hullin, “Transient imaging for real-time tracking around a corner,” Proc. SPIE 9988, 998802 (2016).
[Crossref]

Other (12)

E. H. Adelson and J. R. Bergen, “The plenoptic function and the elements of early vision,” in M. S. Landy and J. A. Movshon, eds., Computational Models of Visual Processing, (MIT Press, 1991), pp. 3–20.

M. Levoy and P. Hanrahan, “Light field rendering,” in Proc. SIGGRAPH (ACM, New York, NY, USA, 1996), pp. 31–42.

The short-time average z-plane irradiance is the instantaneous irradiance averaged over a time Ta satisfying ω0Ta ≫ 1 and ΔωTa ≪ 1.

In what follows, integrals without explicit limits are over the integration variable’s entire domain.

Because h0(ρ) is a zero-mean Gaussian process, its samples at ρ0 and ρ′0 are zero-mean jointly Gaussian random variables whose joint characteristic function is as given in Eq. (12).

A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 1: Single Scattering and Transport Theory (Academic, New York, 1978).

F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, “Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., pp. 3222–3229 (2014).

S. A Reza, M. La Manna, and A. Velten, “A physical light transport model for non-line-of-sight imaging applications,” arXiv:1802.1823 [physics.optics].

X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, “Virtual wave optics for non-line-of-sight imaging,” arXiv:1810.07535 [cs.CV].

S. A. Reza, M. La Manna, S. Bauer, and A. Velten, “Wave-like properties of phasor fields: experimental demonstrations,” arXiv:190401565 [physics.optics].

A. Ishimaru, Wave Propagation and Scattering in Random Media, Vol. 2: Multiple Scattering, Turbulence, Rough Surfaces, and Remote Sensing (Academic, New York, 1978).

For notational convenience, we have assumed that the diffraction takes place between the z = 0 and z = L planes, but the result we obtain will apply for +z-going Fresnel diffraction over a distance L starting from an arbitrary z plane.
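The averaging-time condition in the first note above (ω0Ta ≫ 1 and ΔωTa ≪ 1) can be checked numerically for a candidate Ta. A minimal sketch, assuming an illustrative 532 nm center wavelength and 10 GHz modulation bandwidth (neither value is taken from the paper, and the factors standing in for “≫” and “≪” are arbitrary):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def averaging_time_ok(T_a, lambda_0, delta_f, lo=100.0, hi=0.1):
    """Check omega_0 * T_a >> 1 and Delta_omega * T_a << 1 for a candidate T_a.

    ">>" and "<<" are made concrete via the (arbitrary) thresholds lo and hi.
    """
    omega_0 = 2.0 * math.pi * C / lambda_0  # optical center frequency, rad/s
    delta_omega = 2.0 * math.pi * delta_f   # modulation bandwidth, rad/s
    return omega_0 * T_a > lo and delta_omega * T_a < hi

# Ta = 1 ps comfortably satisfies both inequalities for these assumed values:
# omega_0 * Ta ~ 3.5e3 and Delta_omega * Ta ~ 0.06.
print(averaging_time_ok(1e-12, lambda_0=532e-9, delta_f=10e9))
```

The wide gap between the optical frequency and the modulation bandwidth is what makes such a Ta easy to find in practice.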


Figures (4)

Fig. 1. Unfolded geometry for three-bounce NLoS active imaging. Scalar, paraxial diffraction theory is assumed, with {Ek(ρk, t) : 0 ≤ k ≤ 2} being the baseband complex-field envelopes illuminating the z = 0, z = L1, and z = L1 + L2 planes, respectively, written as functions of the transverse spatial coordinates, {ρk = (xk, yk) : 0 ≤ k ≤ 2}, in those planes and time, t. The blue rectangles represent thin transmissive diffusers, and the black line represents a thin transmission screen whose intensity-transmission pattern, T(ρ1), is to be imaged using the light that emerges from the z = L1 + L2 plane.

Fig. 2. Thin-lens imaging setup. A focal-length-f thin lens casts an inverted image of the intensity pattern that emerges from the diffuser at z = L1 + L2. The image is located in the plane (shown as a black dashed line) a distance Lim behind the lens, where 1/f = 1/L3 + 1/Lim.

Fig. 3. Unfolded geometry for three-bounce, occlusion-aided NLoS active imaging. Scalar, paraxial diffraction theory is assumed, with E0(ρ0, t) being the baseband complex-field envelope illuminating the z = 0 plane and E′2(ρ2, t) being the baseband complex-field envelope emerging from the z = L1 + L2 plane. These fields are written as functions of their transverse spatial coordinates, {ρk = (xk, yk) : k = 0, 2}, in their respective planes and time, t. The blue rectangles represent thin transmissive diffusers, and the black line at z = L1 represents a thin specular-plus-diffuser transmission mask with field-transmission function F(ρ1), whose associated intensity-transmission pattern is to be imaged using the light that emerges from the z = L1 + L2 plane. That imaging process is aided by the presence of occluders in the z = L1 − Ld and z = L1 + L′d planes, whose field-transmission functions are P(ρd) and P′(ρ′d), respectively.

Fig. 4. Plots of Gps(ρ)/Gps(∞) for the Gaussian pinspeck versus ρ/ρres(Ω), for ρ = (x, 0) and Ω = 0.1, 1, and 10.
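The curve in Fig. 4 follows from the article’s closed-form Gaussian-pinspeck result, Gps(ρ)/Gps(∞) = [1 − exp(−|ρ|²/8ρ0²)]². A minimal sketch of that normalized response along the x axis (the ρ/ρres(Ω) horizontal-axis scaling used in Fig. 4 is omitted here, and the transverse scale ρ0 below is an arbitrary illustrative value):

```python
import numpy as np

def pinspeck_response(rho, rho_0):
    """Normalized Gaussian-pinspeck response
    G_ps(rho)/G_ps(inf) = [1 - exp(-|rho|^2 / (8 rho_0^2))]^2,
    evaluated for rho along the x axis."""
    return (1.0 - np.exp(-np.asarray(rho) ** 2 / (8.0 * rho_0 ** 2))) ** 2

rho_0 = 1.0                      # illustrative transverse scale (arbitrary units)
x = np.linspace(0.0, 10.0, 201)  # transverse offsets at which to evaluate
g = pinspeck_response(x, rho_0)
# The response vanishes at rho = 0 and rises monotonically toward 1,
# consistent with the shape plotted in Fig. 4.
```

The vanishing on-axis value reflects the pinspeck’s complementarity to the pinhole: the occluder blocks exactly the light that the corresponding pinhole would pass.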

